Supermicro SuperStorage SSG-6049P-E1CR45H Review

Going Further: Virtualization

Obviously, with a server of this caliber, one doesn't just run desktop workloads. It's most likely to be used as a NAS, and perhaps also to host virtual machines that need to access that storage locally rather than be bottlenecked by the network. Let's keep using Handbrake as a benchmark tool and run some tests to see how well the server performs when hosting transcode-server virtual machines. We'll do this under two of the most popular virtualization options: Hyper-V and VMware.

The testing method is simple: create one VM with the full vCPU count of the server and a fixed amount of RAM, then scale from one VM to two and then four, each time reducing the vCPUs available per VM so as not to cause core contention, since we intend to run each VM at 100% utilization. So in the case of our 40-thread (40 vCPU) server with 128GB of RAM, we set up one VM with 40 vCPUs and 28GB of RAM, then two VMs with 20 vCPUs and 28GB of RAM each, and finally four VMs with 10 vCPUs and 28GB of RAM each. We'll run our most stressful and lengthy test, the 4K to H.265 MKV 1080p30 transcode, concurrently in every VM at each stage and then show the results. Maybe you already have an idea of what will happen based on what we've seen so far?
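
For reference, here's a minimal Python sketch of that allocation scheme (purely illustrative, using the same numbers as our setup; it doesn't talk to any hypervisor):

    # Illustrative sketch of the vCPU/RAM split used in the scaling tests.
    TOTAL_VCPUS = 40    # 20 cores / 40 threads exposed as vCPUs
    RAM_PER_VM_GB = 28  # fixed per-VM allocation at every stage

    for vm_count in (1, 2, 4):
        vcpus_per_vm = TOTAL_VCPUS // vm_count  # split evenly to avoid core contention
        print(f"{vm_count} VM(s): {vcpus_per_vm} vCPUs, {RAM_PER_VM_GB}GB RAM each")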

Hyper-V

Hyper-V is Microsoft's virtualization platform, available for free with its server operating system. Thanks to the economics of unlimited Windows licenses for virtual machines run under Windows Server Datacenter Edition, and the presence of enterprise features such as failover clustering and VM live migration, it gained quick adoption, especially in Microsoft-heavy environments.

The 4K to H.265 1080p transcode on bare-metal Windows Server 2019 (version 1809) took 1054 seconds, as shown earlier in the Handbrake portion of the benchmark section. I've repeated that result on this chart as a baseline and multiplied it by four, giving a total of 4216 seconds to run four renders back-to-back. Using Hyper-V on the same version of Windows Server, the first VM takes a small 7.7% performance loss, likely due partly to VM overhead and partly to the guest OS being unable to distinguish Hyper-Threaded logical cores from real cores. In our second test, with the reduced 20 vCPUs per VM, both VMs complete their renders at nearly the same time: 1489 seconds to render two videos, or 2978 seconds to complete four renders at that rate. That's a 34.4% performance boost over rendering sequentially in a single VM, or 29.3% faster than doing likewise on bare-metal.

The last test ran four VMs rendering simultaneously, with a core reduction again to 10 vCPUs per VM. The first VM finishes well ahead of the other three, and the second still notably ahead of the remaining two, suggesting a sticky priority for the first-loaded VM, and even the second: the scheduler likely assigns them to real cores, leaving the remaining two VMs to work off Hyper-Threaded cores and unused cycles of the real ones. This is certainly interesting behavior and merits further research, but it is outside the scope of our review today. Suffice it to say, at 3054 seconds for the longest render, the four-VM test comes in just 2.5% slower than the two-VM test, suggesting that concurrency really isn't helped when we heavily load the Hyper-Threaded cores by reducing the cores available to each Handbrake instance (remember the somewhat lackluster core-scaling we noted earlier in the first Handbrake results?).
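
For those following along, the percentages fall straight out of the measured times; here's the arithmetic as a quick Python snippet, using only the figures quoted in this section:

    # Reproducing the Hyper-V percentages from the measured render times (seconds).
    single_bare = 1054               # bare-metal render from the benchmark section
    batch_bare = single_bare * 4     # 4216s for four renders back-to-back

    single_vm = single_bare * 1.077  # ~7.7% virtualization overhead in one VM
    batch_single_vm = single_vm * 4  # four renders back-to-back in that VM

    batch_two_vm = 1489 * 2  # two concurrent VMs, run twice: 2978s for four renders
    batch_four_vm = 3054     # the longest render gates the four-VM batch

    print(f"2 VMs vs one VM, sequential: {1 - batch_two_vm / batch_single_vm:.2%} faster")  # ~34.4%
    print(f"2 VMs vs bare-metal:         {1 - batch_two_vm / batch_bare:.2%} faster")       # ~29.3%
    print(f"4 VMs vs 2 VMs:              {batch_four_vm / batch_two_vm - 1:.2%} slower")    # ~2.5%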

VMware ESXi 6.7u3

VMware has been around for many years as practically the de facto standard for virtualization in the datacenter. Pushing the envelope on enterprise features, and with a large ecosystem of supporting and complementary software, it continues to be the platform of choice for many system administrators.

Once again we see a small performance loss moving to virtual, but only a 6.2% deficit this time. Two concurrent VMs take a bigger hit, coming in 3.2% slower than Hyper-V's time, but still 31.4% faster than the single VM and 27.1% faster than back-to-back renders on bare-metal. The four-VM benchmark is where things get interesting. VMware finishes 12.5% faster than Hyper-V, and not only that, but all four renders finish at roughly the same time. Each VM had its default configuration, so VMware's standard CPU-shares priority works remarkably well at keeping each VM at roughly the same performance level, allocating GHz fairly versus the lopsidedness we saw under Hyper-V. With this impressive turnout, we end up with a 36.6% performance gain over rendering back-to-back on bare-metal, versus Hyper-V's 29.3%, making VMware the clear winner for concurrent Handbrake processing and likely other similarly intensive workloads. However, ESXi is a hypervisor only, whereas Hyper-V gives you a full Windows OS you can use in addition to hosting virtual machines, so it too has advantages.
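
Applying the same arithmetic to VMware, with its times derived from the relative deltas quoted above:

    # Deriving VMware ESXi's times from the deltas reported above (seconds).
    batch_bare = 1054 * 4  # 4216s for four bare-metal renders back-to-back
    hyperv_two_vm = 1489   # Hyper-V's two-VM render time
    hyperv_four_vm = 3054  # Hyper-V's longest four-VM render

    vmware_single = 1054 * 1.062                      # 6.2% overhead vs bare-metal
    vmware_batch_two = (hyperv_two_vm * 1.032) * 2    # 3.2% slower than Hyper-V, run twice
    vmware_batch_four = hyperv_four_vm * (1 - 0.125)  # 12.5% faster than Hyper-V

    print(f"2 VMs vs single VM:  {1 - vmware_batch_two / (vmware_single * 4):.2%} faster")  # ~31.4%
    print(f"2 VMs vs bare-metal: {1 - vmware_batch_two / batch_bare:.2%} faster")           # ~27.1%
    print(f"4 VMs vs bare-metal: {1 - vmware_batch_four / batch_bare:.2%} faster")          # ~36.6%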



Kirk Johnson, Enterprise Hardware Editor, AdoredTV