StarWind has partnered with Intel, Mellanox, and Supermicro on a hyperconvergence performance testing series using the latest hardware from all three vendors. The setup was a 12-node, all-NVMe HCI environment with RAM for compute, Intel® Optane™ SSD DC P4800X Series drives for storage, and two Mellanox ConnectX-5 NICs for interconnect.
StarWind tested their StarWind VSAN solution first without any caching and then with caching via the Intel Optane hardware.
StarWind Virtual SAN used iSER (iSCSI Extensions for RDMA) for the backbone links, delivering the maximum possible performance.
For benchmarking, they used an open-source tool called VM Fleet, available on GitHub. VM Fleet makes it easy to orchestrate DISKSPD, a popular Windows micro-benchmarking tool, across hundreds or even thousands of Hyper-V virtual machines at once.
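To illustrate the idea (VM Fleet itself is a set of PowerShell scripts), here is a minimal Python sketch of the fan-out pattern it automates: run the same DISKSPD job in every guest VM in parallel and sum the reported IOPS. The run_diskspd_in_vm() helper is hypothetical, and the DISKSPD flags shown are just a typical 4K random 70/30 read/write pattern, not the exact parameters StarWind used.

```python
# Conceptual sketch of what VM Fleet automates: run the same DISKSPD job in
# every Hyper-V guest in parallel and aggregate the reported IOPS.
# run_diskspd_in_vm() is a hypothetical placeholder -- VM Fleet does this part
# with PowerShell, not Python.
from concurrent.futures import ThreadPoolExecutor

# Typical DISKSPD parameters (an assumption, not StarWind's exact job):
# 4K blocks, random I/O, 30% writes, 32 outstanding I/Os, 4 threads, 60 s,
# caching disabled, latency statistics enabled.
DISKSPD_ARGS = r"-b4k -r -w30 -o32 -t4 -d60 -Sh -L C:\run\test.dat"

def run_diskspd_in_vm(vm_name: str) -> float:
    """Hypothetical helper: run DISKSPD inside the guest, return its total IOPS."""
    raise NotImplementedError(f"would execute 'diskspd {DISKSPD_ARGS}' inside {vm_name}")

def fleet_total_iops(vm_names: list[str]) -> float:
    """Fan the same job out to every VM at once and sum the per-VM IOPS."""
    with ThreadPoolExecutor(max_workers=len(vm_names)) as pool:
        return sum(pool.map(run_diskspd_in_vm, vm_names))

# Example: 240 VMs spread across the 12 nodes (20 per node -- an assumption).
# fleet_total_iops([f"vm-{i:03d}" for i in range(1, 241)])
```

The point is simply that the load is generated inside the guests, so the aggregated IOPS reflect what the VMs actually see through the full Hyper-V storage stack.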
For the monitoring and management solution, StarWind used another product (still in beta) called StarWind Command Center, as an alternative to Windows Admin Center or System Center Configuration Manager. StarWind Command Center consolidates sophisticated dashboards that provide all the important information about the state of each environment component on a single screen.
The Hyperconvergence Performance Testing
They ran three different tests in total. The StarWind iSCSI Accelerator (Load Balancer) was installed on each cluster node to balance virtualized workloads across all CPU cores of the Hyper-V servers.
Note: we have written about StarWind iSCSI Accelerator FREE here.
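The accelerator exists because the standard initiator's I/O processing is not spread evenly across cores, which becomes the bottleneck long before an NVMe back end does. The snippet below is only a conceptual illustration of spreading sessions round-robin across cores, not StarWind's actual filter-driver logic.

```python
# Purely conceptual illustration of the load-balancing idea: distribute iSCSI
# sessions round-robin over all available CPU cores so no single core becomes
# the bottleneck. This is not StarWind's driver code.
import os
from itertools import cycle

def assign_sessions_to_cores(session_ids: list[int]) -> dict[int, int]:
    """Map each iSCSI session ID to a CPU core in round-robin fashion."""
    cores = cycle(range(os.cpu_count() or 1))
    return {session_id: next(cores) for session_id in session_ids}

# Eight sessions on a 4-core host would land on cores 0,1,2,3,0,1,2,3.
print(assign_sessions_to_cores(list(range(8))))
```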
A quick quote from their testing (you can have a look at the detailed specs of the hardware here):
- Test 1: Just cache-less iSCSI shared storage – Without cache, our cluster delivered 6.7 million IOPS, 51% of the theoretical 13.2 million IOPS, which was breakthrough performance for a production configuration. No RDMA was used for client access; we went with plain iSCSI, while the backbone ran over iSER. No proprietary technologies were used.
- Test 2: Write-back cache + iSCSI shared storage – In the second stage, we fully loaded our all-NVMe cluster. Each server carried Intel Optane NVMe drives configured as write-back cache devices. The 12-node all-NVMe cluster delivered 26.834 million IOPS, 101.5% of the theoretical 26.4 million IOPS.
- Test 3: Linux SPDK/DPDK target + StarWind NVMe-oF Initiator – For the final stage, the same cluster was configured using NVMe-oF. We brought this protocol to Windows with a Linux SPDK NVMe target VM and the StarWind NVMe-oF Initiator. With the protocol enabled, our 12-node all-NVMe setup delivered 20.187 million IOPS, 84% of the theoretical 26.4 million IOPS.
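For context, the "percent of theoretical" figures above are simply the measured cluster-wide IOPS divided by the combined theoretical ceiling of the drives. A quick check reproduces the Test 1 and Test 2 ratios (a trivial worked example, not part of StarWind's tooling):

```python
# Hardware-utilization efficiency as quoted above: measured cluster-wide IOPS
# divided by the theoretical ceiling of the underlying drives.
def utilization_pct(measured_iops: float, theoretical_iops: float) -> float:
    return measured_iops / theoretical_iops * 100

print(f"Test 1: {utilization_pct(6.7e6, 13.2e6):.1f}%")     # ~50.8%, quoted as 51%
print(f"Test 2: {utilization_pct(26.834e6, 26.4e6):.1f}%")  # ~101.6%, quoted as 101.5%
```

Test 2 landing slightly above 100% is presumably the Optane write-back cache layer absorbing I/O faster than the capacity drives' raw ceiling alone would allow.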
Here is the architecture:
The results were monitored with StarWind Command Center, whose Storage Performance Dashboard includes an interactive chart plotting cluster-wide aggregate IOPS measured at the CSV file system layer in Windows.
Other metrics and values can be pulled from this tool as well; you can ask StarWind for a demo.
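If you want to eyeball the same CSV-layer numbers without Command Center, Windows exposes them as performance counters. Below is a rough Python sketch that shells out to the built-in typeperf utility; the "Cluster CSVFS" counter paths are an assumption and may differ between Windows Server versions, so treat it as a starting point rather than a recipe.

```python
# Rough sketch: sample cluster-wide CSV-layer read/write rates via Windows'
# built-in typeperf utility. The "Cluster CSVFS" counter paths below are an
# assumption and may vary by Windows Server version -- verify with
# `typeperf -q` before relying on them.
import subprocess

# Assumed counter paths for CSV file system reads/writes per second.
COUNTERS = [
    r"\Cluster CSVFS(_Total)\Reads/sec",
    r"\Cluster CSVFS(_Total)\Writes/sec",
]

def sample_csv_iops(samples: int = 5, interval_s: int = 1) -> str:
    """Return raw typeperf CSV output for the assumed CSV-layer counters."""
    cmd = ["typeperf", *COUNTERS, "-si", str(interval_s), "-sc", str(samples)]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(sample_csv_iops())
```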
In the end, they were also able to compare the results against Microsoft's native iSCSI initiator, which isn't really optimized, so the benchmarks can potentially be used for comparison with other HCI systems and vendors.
If you don't know StarWind, you're really missing out. Check out one of our posts about their products (they have a lot of free tools too) and let us know whether this was helpful for your environment.
Their flagship product, StarWind Virtual SAN (VSAN), also has a free edition, which we detailed in our post here – StarWind VSAN Free – All you need to know.
Quote from the press release:
By building such an environment, StarWind, Supermicro, Mellanox, and Intel demonstrated that HCI performance high scores should not be about IOPS but about efficient hardware utilization. Every time faster disks appear, others may clinch a higher number of IOPS just because their storage is more powerful by design or because there are more drives in their servers. However, the only thing that matters in this race is hardware utilization efficiency. Moneywise, this means that you can get all the IOPS that you paid for. What’s the use of an all-flash array that provides you only a mere fraction of the performance that you expect? Remember: the number of IOPS that you can squeeze from the underlying storage is minute; hardware utilization efficiency is eternal.
Production-ready StarWind HyperConverged Appliances built on Supermicro SuperServer chassis were used for the test setup. We used Mellanox ConnectX-5 NICs and Mellanox SN2700 Spectrum 100GbE switches for interconnect. The only thing that made the servers in our test environment different from the ones that we spec was the Intel Platinum processors – the top-notch CPUs provided by Intel.
Source: StarWind and HCI Performance High Score
More about StarWind from ESX Virtualization:
- Top 3 Free Software from StarWind
- StarWind VVOLS Support and details of integration with VMware vSphere
- Veeam 3-2-1 Backup Rule Now With Starwind VTL
- What is iSER And Why You Should Care?
- StarWind Virtual SAN and Stretched Cluster Architecture
- How To Create NVMe-Of Target With StarWind VSAN
- StarWind Virtual SAN 2-Node Tips and Requirements
More from ESX Virtualization
- Free Tools
- vSphere 6.7 detailed page and latest How-to articles
- VMware vCenter Server Standard vs Foundation – Differences
- VMware vSphere 6.7 Announced – vCSA 6.7
- What is The Difference between VMware vSphere, ESXi and vCenter
- How to Install the latest ESXi VMware Patch – [Guide]
- VMware Desktop Watermark Free Utility is Similar to BgInfo
Stay tuned through RSS, and social media channels (Twitter, FB, YouTube)