The NVMe over Fabrics specification extends the benefits of NVMe beyond the reach and scalability of PCIe to large fabrics, enabling deployments with hundreds or thousands of SSDs over a network interconnect such as RDMA over Ethernet. Thanks to an optimized protocol stack, an end-to-end NVMe solution is expected to reduce access latency and improve performance, particularly when paired with a low-latency, high-efficiency transport such as RDMA. This allows applications to achieve fast storage response times, irrespective of whether the NVMe SSDs are attached locally or accessed remotely across enterprise or data center networks.
Hyperconvergence is the epitome of the software-defined data center (SDDC). The software-based nature of hyperconvergence provides the flexibility required to meet current and future business needs without having to rip and replace infrastructure components. Better yet, as vendors add new features in updated software releases, customers gain the benefits of those features immediately, without having to replace hardware.
USE OF COMMODITY X86 HARDWARE
Commodity hardware equals lower cost. The software layer is designed to accommodate the fact that hardware will eventually fail. Customers get the benefit of these failure-avoidance and availability options without having to break the bank on hardware.
Financial officers really like the opportunity to save a few bucks. The business gets to enjoy better outcomes than legacy data center systems offer, often at a much lower cost.
CENTRALIZED SYSTEMS AND MANAGEMENT
In hyperconvergence, all components — compute, storage, backup to disk, cloud gateway functionality, and so on — are combined in a single shared resource pool with hypervisor technology. This simple, efficient design enables IT to manage aggregated resources across individual nodes as a single federated system.
Mass centralization and integration also happen at the management level. Regardless of how widespread physical resources happen to be, hyperconverged systems handle them as though they were all sitting next to one another. Resources spread across multiple physical data centers are managed from a single, centralized interface. All system and data management functions are handled within this interface, too.
Agility is a big deal in modern IT. Business expects IT to respond quickly as new needs arise, yet legacy environments force IT to employ myriad resources to meet those needs. Hyperconverged infrastructure enables IT to achieve positive outcomes much faster.
Part of being agile is being able to move workloads as necessary. In a hyperconverged world, all resources in all physical data centers reside under a single administrative umbrella. Workload migration in such environments is a breeze, particularly in a solution that enables consistent deduplication as a core part of its offering. Reduced data is far easier to work with than fully expanded data and helps IT get things done faster.
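The benefit of moving reduced data is simple arithmetic. Here's a back-of-the-envelope sketch; all figures (data size, deduplication ratio, link speed) are hypothetical, chosen only to illustrate the point:

```python
# Back-of-the-envelope sketch: migrating a workload between data centers.
# All numbers are hypothetical, for illustration only.

logical_tb = 10          # logical size of the workload's data
dedupe_ratio = 5         # assumed 5:1 data reduction
link_gbps = 1            # assumed inter-site link speed

physical_tb = logical_tb / dedupe_ratio
seconds_per_tb = (8 * 1e12) / (link_gbps * 1e9)   # bits per TB / bits per second

hours_full = logical_tb * seconds_per_tb / 3600
hours_reduced = physical_tb * seconds_per_tb / 3600
print(f"fully expanded: {hours_full:.1f} h, deduplicated: {hours_reduced:.1f} h")
```

Under these assumptions, moving the deduplicated data takes one-fifth the time of moving the fully expanded data, which is why consistent, global deduplication matters so much for workload mobility.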
SCALABILITY AND EFFICIENCY
Hyperconvergence is a scalable building-block approach that allows IT to expand by adding units, just like in a LEGO set. Granular scalability is one of the hallmarks of this infrastructure. Unlike integrated systems products, which often require large investments, hyperconverged solutions have a much smaller step size. Step size is the amount of infrastructure that a company needs to buy to get to the next level of infrastructure. The bigger the step size, the bigger the up-front cost.
The bigger the step size, the longer it takes to fully utilize new resources added through the expansion. A smaller step size results in a far more efficient use of resources. As new resources are required, it’s easy to add nodes to a hyperconverged infrastructure.
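To make the step-size trade-off concrete, here's a minimal sketch with made-up numbers (step sizes, growth rate, and time horizon are all illustrative assumptions) comparing average utilization when capacity is bought in large versus small increments:

```python
# Illustrative sketch: how expansion step size affects average utilization.
# All numbers are hypothetical, chosen only to show the shape of the trade-off.

def average_utilization(step_size, quarterly_growth, quarters):
    """Average utilization of installed capacity, buying one 'step'
    whenever demand would exceed what is already deployed."""
    installed = step_size          # initial purchase
    demand = 0.0
    utilizations = []
    for _ in range(quarters):
        demand += quarterly_growth
        while demand > installed:  # buy another step when we run out
            installed += step_size
        utilizations.append(demand / installed)
    return sum(utilizations) / len(utilizations)

# A big integrated-system step (100 units) vs. a node-sized step (10 units),
# with demand growing 5 units per quarter over three years:
big = average_utilization(step_size=100, quarterly_growth=5, quarters=12)
small = average_utilization(step_size=10, quarterly_growth=5, quarters=12)
print(f"big step:   {big:.0%} average utilization")
print(f"small step: {small:.0%} average utilization")
```

With these illustrative numbers, the small-step buyer keeps installed capacity far closer to actual demand, which is exactly the efficiency argument for node-at-a-time expansion.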
Hyperconverged systems have a low cost of entry compared with their integrated system counterparts and legacy infrastructure.
Automation is a key component of the SDDC and goes hand in hand with hyperconvergence. When all resources are truly combined and when centralized management tools are in place, administrative functionality includes scheduling opportunities as well as scripting options.
Also, IT doesn’t need to worry about trying to create auto- mated structures with hardware from different manufacturers or product lines. Everything is encapsulated in one nice, neat environment.
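Because everything sits behind one management interface, automation can be a short script rather than a multi-vendor integration project. The sketch below is purely illustrative: `HciCluster` and its methods are invented stand-ins for whatever REST API or scripting module a real hyperconverged platform exposes.

```python
# Hypothetical sketch of scripting against a single centralized management
# endpoint. 'HciCluster' and its methods are invented for illustration; real
# platforms expose comparable operations via their own REST or scripting APIs.

class HciCluster:
    """Toy stand-in for a hyperconverged management endpoint."""
    def __init__(self):
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Compute, storage, and capacity come from the shared resource pool.
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb, "protected": False}

    def set_backup_policy(self, name, schedule):
        # Data protection is part of the same interface, not a bolt-on tool.
        self.vms[name]["protected"] = True
        self.vms[name]["backup_schedule"] = schedule

# Provisioning and protecting a small fleet of VMs becomes one short loop:
cluster = HciCluster()
for i in range(3):
    vm_name = f"web-{i:02d}"
    cluster.create_vm(vm_name, cpus=2, ram_gb=8)
    cluster.set_backup_policy(vm_name, schedule="hourly")

print(sorted(cluster.vms))
print(all(v["protected"] for v in cluster.vms.values()))
```

The point isn't the toy class; it's that one API surface covers provisioning and protection, so the same loop that creates a VM also attaches its backup policy.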
FOCUS ON VMS
Virtualization is the foundation of the SDDC. Hyperconverged infrastructure options use virtual machines (VMs) as the most basic constructs of the environment. All other resources — storage, backup, replication, load balancing, and so on — support individual VMs.
As a result, policy in the hyperconverged environment also revolves around VMs, as do all the management options available in the system. Data protection policies, for example, are often defined in third-party tools in legacy environments; with hyperconvergence, integrated data protection policies and control happen right at the VM level.
VM-centricity is also apparent as workloads need to be moved around to different data centers and between services, such as backup and replication. The administrator always works with the virtual machine as the focus, not the data center and not underlying services, such as storage.
Hyperconvergence enables organizations to deploy many kinds of applications in a single shared resource pool without worrying about the dreaded IO blender effect, which wrecks VM performance.
How does hyperconvergence make this type of deployment possible? Hyperconverged systems include multiple kinds of storage — both solid-state storage and spinning-disk — in each appliance. A single appliance can have multiple terabytes of each kind of storage installed. Because multiple appliances are necessary to achieve full environment redundancy and data protection, there’s plenty of both kinds of storage to go around. The focus on the VM in hyperconverged systems also allows the system to see through the IO blender and to optimize based on the IO profile of the individual VM.
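The IO blender effect itself is easy to demonstrate: several VMs each issue perfectly sequential IO, but the hypervisor interleaves their streams, so the shared storage layer sees a jumbled access pattern. A toy sketch (invented block addresses, purely illustrative):

```python
# Toy illustration of the IO blender: each VM reads consecutive block
# addresses from its own region, but the hypervisor interleaves the streams,
# so shared storage receives requests that jump between distant addresses.

def sequential_stream(vm_id, start, count):
    # One VM's perfectly sequential requests, tagged with the VM's identity.
    return [(vm_id, start + i) for i in range(count)]

streams = [sequential_stream(vm, vm * 1000, 4) for vm in range(3)]

# What the storage layer actually receives: a round-robin interleaving.
blended = [io for ios in zip(*streams) for io in ios]

print(blended[:6])
# A VM-centric system can "see through" the blender by grouping requests
# on the vm_id tag and optimizing for each VM's own IO profile.
```

Consecutive requests at the storage layer now hop between far-apart regions, which is what destroys performance on VM-unaware arrays; keeping the per-VM tag is what lets a hyperconverged system optimize each workload individually.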
Hyperconverged infrastructure’s mix of storage enables systems to handle both random and sequential workloads deftly. Even better, with so many solid-state storage devices in a hyperconverged cluster, there are more than enough IO operations per second (IOPS) to support even the most intensive workloads — including virtual desktop infrastructure (VDI) boot and login storms.
The shared resource pool also enables efficient use of resources for improved performance and capacity, just like those very first server consolidation initiatives that you undertook on your initial journey into virtualization. Along the way, though, you may have created new islands thanks to the post-virtualization challenges discussed earlier. Resource islands carry with them the same utilization challenges that your old physical environments featured. With hyperconvergence, you get to move away from the need to create resource islands just to meet IO needs of particular applications. The environment itself handles all of the CPU, RAM, capacity, and IOPS assignments so that administrators can focus on the application and not individual resource needs.
The business benefits as IT spends less while providing improved overall service. On the performance front, the environment handles far more varied workloads than legacy infrastructure can. IT itself performs better, with more focus on the business and less on the technology.
Although it’s not always the most enjoyable task in the world, protecting data is critical. The sad fact is that many organizations do only the bare minimum to protect their critical data. There are two main reasons why: comprehensive data protection can be really expensive, and it can be really complex.
To provide data protection in a legacy system, you have to make many decisions and purchase a wide selection of products. In a hyperconverged environment, however, backup, recovery, and disaster recovery are built in. They’re part of the infrastructure, not third-party afterthoughts to be integrated.