Case for HCI in the Modern Datacenter

In this post I will make my case for taking an honest look at HCI versus 3-tier infrastructure. The aim is to be objective and give a fair, balanced view of each architecture so you can make an honest comparison when deciding how the future of your datacenter will be architected.

Figure 1: A visual view of 3-tier versus HCI

Before I dive head first into the case we need to level set some terms that will be used throughout the article.

Terms

Legacy 3-tier: The legacy 3-tier infrastructure consists of three layers: the compute layer, the storage area network/fabric (SAN), and the storage array(s).

HCI: HCI, or “Hyper-converged Infrastructure”, refers to the convergence of compute, network, storage, and virtualization into a single platform, managed from a single easy-to-use console.

Performance Capacity: This describes the total IOPS/throughput that a system can push.

Storage Capacity: This describes the usable storage capacity, i.e. “How much data can I store?”

CAPEX: Capital Expenditures. “What does it cost to procure new systems?”

OPEX: Operating Expenditures. “What does it cost to run my systems: upgrades, daily administration, migrations, etc.?”

Now that we have an understanding of the terms used in this case, we need to discuss the history of legacy 3-tier and why it became a central foundation in small and large datacenters alike.

The History of 3-Tier

Before the 3-tier infrastructure we had vast farms of siloed servers, each with its own onboard storage. This created some problems – to name a few:

  1. Storage Capacity – Many servers were starved for storage, with only a limited number of available drive slots on a single server.
  2. Performance – Many servers were performance-starved. With a limited number of drive spindles to share the application I/O load, we would quickly hit the wall on storage performance.
  3. Availability – If we lost a drive or a server crashed, we lost all access to that data until the server could be rebuilt, since the data living on the local drives was not shared.

From these issues the idea of large central storage arrays was developed to solve all three problems above. By having a large central pool of storage capacity we could hand out much larger chunks of capacity to each server, providing the much-needed capacity to starved applications. From a performance perspective we could now spread the IOPS/throughput across many drive spindles, allowing applications to access data at a much higher rate than previously possible. Most importantly, large central storage arrays gave us the new ability to share data across servers, whether that meant network-attached home and media share drives or clustering technologies like Microsoft Clustering to provide high availability to our mission-critical applications.

As time moved on and virtualization came on the scene, we gained the ability to easily scale our compute layer and take advantage of workload consolidation into a much smaller physical server footprint. This ability to scale at the virtualization layer and drive better compute utilization began to expose an issue at the storage array level. VM administrators took advantage of workload consolidation on each physical server, which pushed our storage arrays past their limits from a performance capacity perspective. Often the arrays still had spare storage capacity, so we handed it to the virtualization layer with the idea that “We bought it, let’s USE IT!”. This idea is fair: we want to consume what we pay for, because of course “the business can’t afford waste in our datacenter!”.

These pressures from above put storage administrators in a precarious position. Do we 1) hand out all of our storage to make sure we consume all of the assets we paid for, at the risk of overloading the performance capacity of the array, or 2) make the case for buying more arrays to handle the performance capacity required by the virtualization/application layer, while still having wasted space on our existing (and depreciating) arrays? WHAT’S A STORAGE ADMIN TO DO!?!? Either way the storage administrator causes problems. With option 1 he/she harms the consumers of the application, risking slow response times, angry application owners, and possibly lost customers, which hurts the business’s bottom line. With option 2 he/she keeps the applications happy but adds everyday storage administration complexity and OPEX, while the business fumes over the wasted assets permeating its datacenters.

With the storage arrays now the bottleneck, flash storage came into the picture. With flash storage we gained parallel access to the data, which eliminated drive seek time. With each enterprise flash module we gained the ability to give hundreds of thousands of IOPS to our VMs/apps. However, we still had scalability issues: each flash array only has a certain amount of CPU capacity, which creates a bottleneck at the controller CPU layer and prevents full use of the individual flash modules’ performance capacity. Additionally, this potentially moves the performance problem to the SAN fabric, as we see in the chart below.

Controller Connectivity | Available Network BW  | SSDs to Saturate (Read I/O) | SSDs to Saturate (Write I/O)
Dual 4Gb FC             | 8 Gb/s == 1 GB/s      | 2                           | 3
Dual 8Gb FC             | 16 Gb/s == 2 GB/s     | 4                           | 5
Dual 16Gb FC            | 32 Gb/s == 4 GB/s     | 8                           | 9
Dual 1Gb ETH            | 2 Gb/s == 0.25 GB/s   | 1                           | 1
Dual 10Gb ETH           | 20 Gb/s == 2.5 GB/s   | 5                           | 6

Figure 2: Referenced from the Nutanix Bible

As we can see above, it does not take many SSDs to saturate our networks or host HBAs.
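
To make the chart concrete, here is a minimal back-of-the-envelope sketch in Python. The per-SSD throughput figures (roughly 500 MB/s reads and 450 MB/s writes) are illustrative assumptions chosen to line up with the numbers above, not specs for any particular drive.

```python
import math

# Illustrative per-SSD sequential throughput in MB/s. These figures roughly
# reproduce the chart above; they are assumptions, not specs for any drive.
SSD_READ_MBPS = 500
SSD_WRITE_MBPS = 450

def ssds_to_saturate(link_gbps: float, links: int = 2) -> tuple[int, int]:
    """Return (read_ssds, write_ssds) needed to saturate the controller's fabric links.

    link_gbps: speed of a single port in gigabits per second
    links:     number of ports (dual-ported controllers are the common case)
    """
    available_mb_per_s = links * link_gbps * 1000 / 8  # Gb/s -> MB/s
    reads = math.ceil(available_mb_per_s / SSD_READ_MBPS)
    writes = math.ceil(available_mb_per_s / SSD_WRITE_MBPS)
    return reads, writes

for label, gbps in [("4Gb FC", 4), ("8Gb FC", 8), ("16Gb FC", 16),
                    ("1Gb ETH", 1), ("10Gb ETH", 10)]:
    reads, writes = ssds_to_saturate(gbps)
    print(f"Dual {label}: {reads} SSDs (reads) / {writes} SSDs (writes) to saturate")
```

Run against the connectivity options in the table, this reproduces the same SSD counts, which is exactly the point: a handful of flash devices can outrun the fabric.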

The Age of HCI

Before we jump into HCI we need to quickly discuss the public cloud and how it changed our view of how we consume resources in our modern datacenters. With the cloud we gained the ability to quickly spin up VMs/apps with the elasticity to scale incrementally and consume only as much as we need. This allowed us to “pay as we grow” without needing to look 3-5 years out and make an educated guess to balance future performance and capacity needs. However, we soon realized that while the public cloud is good for elastic workloads like dev/test or retail during peak season (essentially workloads that are not well understood), it is expensive. And while the public cloud solves the problems of the enterprise storage array, it does not provide the control that most enterprises want.

Enter HCI. With HCI we gain the following benefits:

  • Scalability at the node level
    • Storage performance, storage capacity, and compute scale linearly as nodes are added (see the sketch below)
  • Data locality (PCIe bus speed)
  • Self-healing capability
  • Single management interface to manage a single cluster or ALL clusters across your datacenters
  • Eliminates server and storage silos
  • Workload distribution with a scale out cluster
  • Hardened self-healing environment at all infrastructure layers
  • Easy to use VM-centric replication and data protection

Note: the above refers to Nutanix HCI. Not all HCI vendors are created equal, so do your homework and make sure these are requirements!
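
To illustrate the linear scaling bullet above, here is a rough sketch of the idea: every node added to the cluster brings compute, memory, storage capacity, and storage performance along with it. The per-node figures are placeholders I made up for illustration, not specs for any particular node model.

```python
from dataclasses import dataclass

@dataclass
class NodeSpec:
    # Placeholder per-node resources; real values depend on the node model.
    cores: int = 24
    ram_gb: int = 512
    usable_tb: float = 20.0
    iops: int = 90_000

def cluster_totals(node_count: int, spec: NodeSpec = NodeSpec()) -> dict:
    """Every node added brings compute, memory, capacity, AND storage
    performance, so the aggregates grow in lockstep with node count."""
    return {
        "cores": node_count * spec.cores,
        "ram_gb": node_count * spec.ram_gb,
        "usable_tb": node_count * spec.usable_tb,
        "iops": node_count * spec.iops,
    }

# Growing from 4 to 8 nodes doubles every dimension at once ("pay as you grow"),
# instead of a forklift upgrade of a storage array for performance alone.
print(cluster_totals(4))
print(cluster_totals(8))
```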

With HCI we now have the ability to lower our OPEX considerably. Because we have a single interface to manage our infrastructure, we can eliminate the tasks listed below. With the time savings we can focus on the needs of the business and get closer to our application owners. This is, in essence, what makes DevOps possible…

  1. No RAID group creation
  2. No SAN zoning
  3. No volume creation
  4. No host creation and volume masking
  5. No storage array upgrades
  6. No SAN switch upgrades
  7. No interoperability checks
  8. No finger pointing between vendors during support issues
  9. No balancing performance and storage capacity
  10. No more large scale storage migrations
  11. No management of datastores or datastore creation
  12. No volume level replication or snapshot management
  13. and many more…

Note – the following is specific to the Nutanix Enterprise Cloud OS

Nutanix makes this possible by combining compute (CPU/MEM), storage, and network into a single x86 or IBM Power server. Each node has onboard storage and can be a hybrid (HDD and SSD) or All Flash (SSD only) configuration. Whether hybrid or All Flash, we have the ability to use data efficiency technologies such as in-line compression/deduplication, post-process compression/deduplication, and erasure coding. VM-centric clones and snapshots are built into the management interface and can be stored locally or at a remote Nutanix cluster.

Each server runs a Controller VM (CVM), which is what provides the storage performance scalability and has direct access to the node’s physical disks through PCIe passthrough. For data availability there is the concept of Replication Factor (RF), either 2 or 3. RF2 means that on any write I/O one copy is stored locally and one is sent to another node in the cluster (RF3 means three copies are stored and distributed). Nutanix uses Cassandra to store the metadata and distributes it among all nodes in the cluster as well (with RF2, three metadata copies are stored; RF3 stores five). Because the infrastructure scales out, ALL NODES participate in any rebuild, there is no concept of “disk groups”, and rebuilds complete very quickly, bringing the cluster back to a fully redundant state. Even with a failed node, once self-healing completes (and assuming there is enough capacity in the cluster to hold the re-replicated data), we are fully redundant before any hardware is replaced.

Nutanix’s patented data locality is a core feature, providing high performance to any application and reducing “noisy neighbor” situations by reducing network utilization. (DO NOT LET ANYONE TELL YOU DATA LOCALITY IS NOT IMPORTANT. THAT IS LIKE SAYING DRIVING FROM PHILLY TO NEW YORK IS NO DIFFERENT THAN DRIVING FROM PHILLY TO LOS ANGELES!) In short, data locality is all about physics: if the data is local it can be retrieved and served to the application much faster than if the CVM has to request and retrieve it from another CVM.
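
As a rough illustration of the replication factor concept described above, here is a small sketch. The node names and capacity numbers are made up, and the real placement, metadata, and rebuild logic in Nutanix AOS is of course far more sophisticated than this.

```python
import random

def usable_capacity_tb(raw_tb_per_node: float, nodes: int, rf: int) -> float:
    """Rough usable capacity before compression/dedup/erasure coding:
    every piece of data is stored rf times, so raw capacity divides by rf."""
    return (raw_tb_per_node * nodes) / rf

def place_write(local_node: str, all_nodes: list[str], rf: int) -> list[str]:
    """RF2: one copy stays on the local node (data locality) and one copy
    goes to a peer node; RF3: one local copy plus two peers."""
    peers = [n for n in all_nodes if n != local_node]
    return [local_node] + random.sample(peers, rf - 1)

nodes = ["node-1", "node-2", "node-3", "node-4"]
print(usable_capacity_tb(raw_tb_per_node=20, nodes=len(nodes), rf=2))  # 40.0 TB usable
print(place_write("node-1", nodes, rf=2))  # e.g. ['node-1', 'node-3']
print(place_write("node-1", nodes, rf=3))  # e.g. ['node-1', 'node-2', 'node-4']
```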

In conclusion, HCI gives us the ability to scale as needed and provides the same “pay as you grow” fractional consumption that businesses need in their datacenters to remain competitive and drive down costs. With the OPEX and time savings, your talented engineers can focus on the needs of the business and become detectives/consultants instead of firefighters. This post only scratches the surface of HCI, and especially of Nutanix HCI, which we will dive into in future posts.

Please provide any comments and feedback below to help me understand your needs for future posts or if you have any questions from this post!

THANK YOU!
