The market is filled with HCI virtual SAN software solutions to choose from, offered by many different vendors in many different flavors, each backed by its own software-defined storage technology. In a really interesting performance benchmark, industry hardware leaders combined forces with StarWind to demonstrate the power and performance of today’s HCI virtual SAN software solutions.
Using the best commodity hardware available at the time of the benchmark, StarWind’s VSAN software solution was able to shine among other solutions in the industry, setting an IOPS benchmark record. In this post, we will take a look at how an HCI solution leveraging StarWind VSAN was able to break industry performance records.
HCI – Coming of Age
Software-defined solutions have completely changed the way we approach solving business challenges with technology today. They make the infrastructure backing business-critical workloads more capable, flexible, scalable, and agile than ever before.
There are many key metrics that organizations look at when deciding on a hyperconverged solution to power their production infrastructure, and there is no doubt that performance is one of the key decision factors when it comes to choosing an HCI solution.
When HCI solutions first appeared on the market, many were skeptical of the deviation from legacy, traditional server infrastructure. However, virtual SAN software-driven HCI has proven itself to be reliable, efficient, and, as we will see, very capable in terms of performance. No doubt, if you are considering a virtualization server cluster upgrade today, you are at least entertaining the thought of an HCI solution.
When some consider HCI and software-defined solutions, however, there can be a misconception that HCI configurations are not suited for ultra-high-performance use cases. Let’s see how StarWind’s recent benchmark using commodity hardware helps to dispel the stereotype that there is a performance penalty to using HCI solutions.
Test Configuration
The StarWind HCI test stages were built on the following base hardware configuration:
12-node StarWind HyperConverged Appliance cluster specifications:
- Platform: Supermicro SuperServer 2029UZ-TR4+
- CPU: 2x Intel® Xeon® Platinum 8268 Processor 2.90 GHz. Intel® Turbo Boost ON, Intel® Hyper-Threading ON
- RAM: 96GB
- Boot Storage: 2x Intel® SSD D3-S4510 Series (240GB, M.2 80mm SATA 6Gb/s, 3D2, TLC)
- Storage Capacity: 2x Intel® Optane™ SSD DC P4800X Series (375GB, 1/2 Height PCIe x4, 3D XPoint™). The latest available firmware installed.
- RAW capacity: 9TB
- Usable capacity: 8.38TB
- Working set capacity: 4.08TB
- Networking: 2x Mellanox ConnectX-5 MCX516A-CCAT 100GbE Dual-Port NIC
- Switch: 2x Mellanox SN2700 32 Spectrum ports 100GbE Ethernet Switch
Below, you can see a diagram of how the hardware used in the performance benchmarks was interconnected.
Three different scenarios were tested using the above hardware. The test bed setups share most of the same hardware and network configuration but use different software components and different configurations of the underlying storage. The three test configurations were as follows:
- iSCSI/iSER (iSCSI Extensions for RDMA) cache-less all-flash hyperconverged cluster
- Slower M.2 SATA flash drives, double the number of Intel Optane cards, and write-back cache
- SPDK NVMe-oF target using StarWind NVMe-oF initiator
Benchmark Tools
StarWind made use of a couple of tools for benchmarking in the test scenarios: VM Fleet and DISKSPD. VM Fleet is a freely available, open-source framework that deploys and orchestrates a fleet of benchmark VMs across a Hyper-V cluster, while DISKSPD is the popular Windows storage micro-benchmark tool that generates the actual I/O load inside each of those VMs, allowing thousands of VMs across the Hyper-V hosts to be benchmarked in aggregate.
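To give a sense of what a single DISKSPD run looks like under the hood, here is a minimal sketch that launches the tool with a 4 kB random, 90% read/10% write pattern similar to the ones in the result tables below. The file path, duration, warm-up, thread count, and queue depth are illustrative assumptions rather than the exact VM Fleet parameters StarWind used; in the actual benchmark, VM Fleet fans runs like this out across many VMs on the 12-node cluster and aggregates the results.

```python
# Minimal sketch: a single DISKSPD run approximating the "4 kB random,
# 90% read / 10% write" pattern from the tables below. The target file,
# duration, warm-up, thread count, and queue depth are assumptions, not
# StarWind's exact VM Fleet settings.
import subprocess

cmd = [
    "diskspd.exe",
    "-b4K",   # 4 kB block size
    "-r",     # random I/O
    "-w10",   # 10% writes, 90% reads
    "-t8",    # 8 worker threads per target (assumed)
    "-o32",   # 32 outstanding I/Os per thread (assumed)
    "-d60",   # 60-second measurement window (assumed)
    "-W15",   # 15-second warm-up (assumed)
    "-Sh",    # disable software caching and hardware write caching
    "-L",     # capture latency statistics
    r"C:\test\diskspd-target.dat",  # hypothetical test file inside a benchmark VM
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)  # DISKSPD prints per-thread and aggregate IOPS/latency here
```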
iSCSI/iSER Cache-less All-Flash Hyperconverged Cluster
In the first test, the performance benchmark was based on a configuration that StarWind described as “built as a traditional 2-node StarWind Hyper-Converged Appliance (HCA) on steroids.” Even in a two-node configuration, the StarWind VSAN solution can be extremely powerful, as you can see in our post here.
In this first performance benchmark, no cache device was used, which made it an interesting test to watch for I/O latency. What were the results? Using the VM Fleet and DISKSPD utilities, StarWind was able to achieve the following results using real-world read/write patterns for disk I/O: almost 7 million IOPS.
| RUN | PARAMETERS | RESULT |
|-----|------------|--------|
| Maximize IOPS, all-read | 4 kB random, 100% read | 6,709,997 IOPS |
| Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 5,139,741 IOPS |
| Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 3,434,870 IOPS |
| Maximize throughput | 2 MB sequential, 100% read | 61.9 GBps |
| Maximize throughput | 2 MB sequential, 100% write | 50.81 GBps |
| Maximize throughput, +2 NVMe SSD | 2 MB sequential, 100% read | 108.38 GBps |
| Maximize throughput, +2 NVMe SSD | 2 MB sequential, 100% write | 100.29 GBps |
What about the second performance test?
M.2 SATA Flash Drives with Intel Optane Configured as Write-Back Cache
In the second performance benchmark test, the same hardware and software configuration was utilized, except for the type of drives and the addition of a cache layer. Comparably slower M.2 SATA flash drives were installed along with double the number of Intel Optane cards, and the Intel Optane devices were configured to be used as write-back cache.
The Intel Optane storage was configured per Intel’s recommended best practices, with Intel SSD D3-S4510 drives installed as the primary storage. This effectively allowed for block-level replication at the primary storage level, while the cache storage was not replicated.
This configuration lends itself very well to database-driven applications such as SQL Server Availability Groups, SAP, and others, due to the way most database systems synchronize between primary storage and cache devices. The results of the configuration are pretty amazing: over 26 million IOPS!
| RUN | PARAMETERS | RESULT |
|-----|------------|--------|
| Maximize IOPS, all-read | 4 kB random, 100% read | 26,834,060 IOPS |
| Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 25,840,684 IOPS |
| Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 16,034,494 IOPS |
| Maximize throughput | 2 MB sequential, 100% read | 116.39 GBps |
| Maximize throughput | 2 MB sequential, 100% write | 101.8 GBps |
With the 12-node all-flash NVMe cluster, StarWind Virtual SAN software was able to deliver some 26.834 million IOPS, or 101.5% of the theoretical 26.4 million IOPS. This was accomplished using the write-back cache (iSCSI without RDMA for client access) and backbone connections running over iSER.
SPDK NVMe-oF Target Using StarWind NVMe-oF Initiator
The third test performed by StarWind involved NVMe-oF. In this configuration, each server node was provisioned with four Intel Optane SSD DC P4800X drives. Windows Server 2019 was used with the StarWind NVMe-oF initiator service, and SPDK NVMe-oF was used as the storage target.
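For context on what the target side of this setup involves, the sketch below shows how an SPDK NVMe-oF target can export a local NVMe drive over RDMA using SPDK’s standard rpc.py management commands against a running nvmf_tgt application. The NQN, PCIe address, IP address, and port are illustrative assumptions and not StarWind’s actual configuration; on the Windows side, the StarWind NVMe-oF initiator would then connect to a subsystem like this over the 100GbE RDMA fabric.

```python
# Minimal sketch: configure an SPDK NVMe-oF target to export one local NVMe
# drive over RDMA. Assumes an SPDK checkout and a running nvmf_tgt process;
# the NQN, PCIe address, IP, and port are hypothetical values.
import subprocess

RPC = "scripts/rpc.py"  # SPDK's JSON-RPC management script (path assumed)

def rpc(*args):
    """Send one JSON-RPC command to the running SPDK target."""
    subprocess.run(["python3", RPC, *args], check=True)

# 1. Create the RDMA transport for the NVMe-oF target.
rpc("nvmf_create_transport", "-t", "RDMA")

# 2. Attach a local PCIe NVMe device as bdev "Nvme0" (hypothetical PCIe address).
rpc("bdev_nvme_attach_controller", "-b", "Nvme0", "-t", "PCIe", "-a", "0000:5e:00.0")

# 3. Create a subsystem, add the drive's namespace, and listen on the RDMA fabric.
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-a", "-s", "SPDK00000000000001")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "Nvme0n1")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "rdma", "-a", "192.168.1.10", "-s", "4420")
```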
This third stage of benchmark testing also produced blistering performance, with over 20 million IOPS delivered using NVMe-oF cluster passthrough to an SPDK NVMe-oF target VM via the StarWind NVMe-oF initiator.
| RUN | PARAMETERS | RESULT |
|-----|------------|--------|
| RAW device, Maximize IOPS, all-read | 4 kB random, 100% read | 22,239,158 IOPS |
| RAW device, Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 21,923,445 IOPS |
| RAW device, Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 21,906,429 IOPS |
| RAW device, Maximize throughput | 2 MB sequential, 100% read | 119.01 GBps |
| RAW device, Maximize throughput | 2 MB sequential, 100% write | 101.93 GBps |
| VM-based, Maximize IOPS, all-read | 4 kB random, 100% read | 20,187,670 IOPS |
| VM-based, Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 19,882,005 IOPS |
| VM-based, Maximize IOPS, read/write | 4 kB random, 70% read, 30% write | 19,229,996 IOPS |
| VM-based, Maximize throughput | 2 MB sequential, 100% read | 118.7 GBps |
| VM-based, Maximize throughput | 2 MB sequential, 100% write | 102.25 GBps |
What StarWind’s HCI Performance Record Proves
Aside from the amazing, record-breaking IOPS numbers that StarWind posted, the HCI performance benchmark helps to prove something important to the industry: HCI solutions driven by software-defined storage like StarWind VSAN are the way of the future.
HCI is not held back by performance bottlenecks that would make it undesirable for performance-sensitive workloads. The real eye-opener of the StarWind benchmarks is that they show “regular” commodity hardware, available to anyone and paired with any hypervisor, can achieve much the same results when powered by StarWind VSAN.
This helps to solidify the place of virtual SAN software HCI in the infrastructure of today as well as tomorrow. Software-defined solutions will only continue to get better and more capable. As the underlying storage devices and technologies continue to evolve, you can rest assured that software-defined storage solutions will deliver even better performance and features than hardware alone.