At the Ottawa Linux Symposium, Benoit des Ligneris and his team from Revolution Linux presented their paper "Virtualization of Linux servers, a comparative study", mostly the work of Fernando L. Camargos in pursuit of his Master's degree in Computer Science.
They looked at VirtualBox, Xen, KVM, OpenVZ, Linux-VServer and KQemu, in 64-bit mode for all tests where possible (hence not for VirtualBox). Their host OS was Ubuntu 7.10 and the VMs ran Ubuntu 6.06.
It's pretty obvious that virtualization creates some overhead; the bigger question is how much. What's the penalty when virtualizing an environment? They focused on several aspects: the first was simply figuring out what impact the addition of a hypervisor has on an environment.
The second was how many virtual machines one could run in a virtualized environment.
They ran their tests multiple times and the results presented were averages of those runs.
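As a side note, the scores below are expressed as a percentage of native performance. Here is a minimal sketch (with made-up numbers, not the paper's) of how such a normalisation works for time-based results like a kernel compile and for throughput-based results:

```python
def percent_of_native(native, virtualized, lower_is_better=True):
    """Express a virtualized result as a percentage of native performance.

    For time-based results (e.g. seconds to compile a kernel) lower is
    better, so the ratio is native/virtualized; for throughput results
    (e.g. MB/s) it is virtualized/native.
    """
    if lower_is_better:
        return 100.0 * native / virtualized
    return 100.0 * virtualized / native

# Hypothetical averaged results -- not the paper's numbers.
print(percent_of_native(600, 620))                        # compile time case
print(percent_of_native(80, 75, lower_is_better=False))   # throughput case
```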
In the first set of tests, measuring the impact of the hypervisor compared to the native machine, they started off with a Linux kernel compilation workload.
Here Linux-VServer lost almost no performance, closely followed by Xen and then OpenVZ. Compared to native machine speed, both VirtualBox and (K)Qemu scored below 50%.
Their second test was file compression. Here most of the environments scored around 85-95% of native speed, except for KQemu and OpenVZ.
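For reference, a compression run of this kind can be timed with something as simple as the sketch below; the choice of compressor (bzip2) and the file size are my assumptions, the paper doesn't state them:

```python
import os
import subprocess
import time

# Hypothetical test data -- the paper does not say which compressor or file size was used.
with open("testdata.bin", "wb") as f:
    block = os.urandom(1024)
    f.write(block * (64 * 1024))     # ~64 MB of repeating data to compress

start = time.time()
subprocess.run(["bzip2", "-k", "-f", "testdata.bin"], check=True)
print(f"compression took {time.time() - start:.1f} seconds")
```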
The Samba team brought us dbench, “dbench is a filesystem benchmark that generates load patterns similar to those of the commercial Netbench benchmark, but without requiring a lab of Windows load generators to run. It is now considered a de-facto standard for generating load on the Linux VFS.”
Here Linux-VServer outscales the rest: it scores well because it uses the host's I/O drivers directly, whereas the others don't. Xen is second best in this test, but the other frameworks really need some work here.
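For those who haven't used it, a dbench run can be driven roughly like this; the client count and runtime below are assumptions, not the parameters from the paper:

```python
import re
import subprocess

# Hypothetical parameters -- the paper does not state the exact invocation.
CLIENTS = 4          # number of simulated clients
RUNTIME = 60         # runtime in seconds

# dbench prints a final summary line like "Throughput 123.45 MB/sec ...".
out = subprocess.run(["dbench", "-t", str(RUNTIME), str(CLIENTS)],
                     capture_output=True, text=True, check=True).stdout
match = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", out)
if match:
    print(f"dbench throughput: {match.group(1)} MB/sec")
```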
If you want to do a low-level data copy on UNIX, dd is obviously your favourite tool. For the same reasons as above Linux-VServer scores well here; the strange thing, however, is that it scores better than native speed. When copying an existing file, Xen and KVM are a good second, but OpenVZ seemed to need some work. Another interesting fact is that KQemu and VirtualBox failed the test. When copying data from /dev/zero, KVM scores better.
During the test the block devices were backed by different technologies: for VServer it was a native disk, for Xen a file. Of course this doesn't give equally good results, and different tuning options are available here. Still, a good piece of advice: do not virtualize your fileserver.
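For reference, the dd workloads described above boil down to something like the following sketch; the block size and file size are assumptions, not the paper's exact values:

```python
import subprocess

# Write test: stream zeros from /dev/zero into a new file.
subprocess.run(["dd", "if=/dev/zero", "of=ddtest.img", "bs=1M", "count=1024"],
               check=True)

# Copy test: duplicate an existing file with dd.
subprocess.run(["dd", "if=ddtest.img", "of=ddcopy.img", "bs=1M"], check=True)

# dd reports its own timing and transfer rate on stderr.
```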
When looking at network I/O performance the team opted to use netperf. VirtualBox, Linux-VServer, Xen and OpenVZ all score well here. The performance of KQemu and KVM was a disaster.
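A netperf TCP_STREAM run against a host running netserver looks roughly like this; the target address, duration and test type are assumptions, the paper doesn't spell them out:

```python
import subprocess

# Hypothetical target -- a machine on the test network running netserver.
SERVER = "192.168.1.10"

out = subprocess.run(["netperf", "-H", SERVER, "-l", "60", "-t", "TCP_STREAM"],
                     capture_output=True, text=True, check=True).stdout
print(out)   # the last column of the result line is throughput in 10^6 bits/sec
```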
When testing rsync with different file sizes, OpenVZ scored best and most of the other tools performed at around 80% of native machine speed, except for KVM, which seemed to have more problems with one big file than with many small ones. The good scores of VirtualBox are thanks to its modified IP stack; the effort spent there was obviously worth the time…
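The "many small files" half of such an rsync test could look like the sketch below; the file counts and sizes are made up, and the single-big-file case would simply replace the loop with one large file:

```python
import os
import subprocess
import time

# Hypothetical file layout -- the paper's exact sizes are not stated.
os.makedirs("src_small", exist_ok=True)
for i in range(1000):                                  # many small files
    with open(f"src_small/file{i}", "wb") as f:
        f.write(os.urandom(64 * 1024))                 # 64 KB each

start = time.time()
subprocess.run(["rsync", "-a", "src_small/", "dest_small/"], check=True)
print(f"small files synced in {time.time() - start:.1f} seconds")
```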
So they covered compiling, disk I/O and network I/O; obviously we want to know a bit about database performance too. Revolution Linux chose SysBench for this test. Again good scores for Linux-VServer and Xen, less so for the rest.
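A SysBench OLTP run from that era (the old 0.4-style syntax) looks roughly like this; the MySQL credentials, database name and thread count are assumptions, not the paper's configuration:

```python
import subprocess

# Hypothetical settings -- the paper's SysBench/MySQL configuration is not spelled out.
common = ["sysbench", "--test=oltp", "--mysql-db=test",
          "--mysql-user=root", "--num-threads=8"]

subprocess.run(common + ["prepare"], check=True)       # create and fill the test table
out = subprocess.run(common + ["run"], capture_output=True, text=True,
                     check=True).stdout
print(out)                                             # reports transactions per second
subprocess.run(common + ["cleanup"], check=True)       # drop the test table
```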
With strange looks from the OpenVZ people in the audience, they concluded that Linux-VServer has excellent performance and presented minimal overhead; of course Linux-VServer and OpenVZ are still chroots on steroids, not full virtualization solutions. According to Revolution Linux, Xen achieved great performance in most of the tests. KVM was fairly good for full virtualization but didn't perform well for applications relying on I/O.
As mentioned earlier, apart from the overhead tests Revolution Linux also set out to test scalability. Only two tests were used here, kernel compilation and SysBench, performed with n instances (n = 1, 2, 4, 8, 16 and 32).
Looking at the number of transactions globally per host (so spread over the different virtual machines), Xen is the best performer: it actually reached a higher total throughput with 32 virtual machines than with 1 VM, peaking at 4-8 VMs.
With their new benchmark, kernels compiled per hour, they only have results for VServer and Xen. With 1 VM both build around 10-11 kernels per hour, and from 2-4 VMs onwards they go up to about 20. Xen keeps pace up to 16 VMs and then slows down.
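The paper's exact definition of the metric isn't restated here, but a plausible reading is the aggregate number of builds per hour across all guests on one host, roughly:

```python
def kernels_per_hour(n_vms, avg_compile_seconds):
    """Aggregate build throughput for one host running n_vms guests,
    assuming each guest compiles kernels back to back and the average
    per-kernel compile time already reflects the contention."""
    return n_vms * 3600.0 / avg_compile_seconds

# Hypothetical numbers -- not the paper's measurements.
print(kernels_per_hour(1, 340))    # a single VM building ~10.6 kernels/hour
print(kernels_per_hour(4, 720))    # ~20 kernels/hour spread over 4 VMs
```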
So obviously there is a very strong correlation between the performance of a machine and the number of instances running on that machine.
Here too Linux-VServer scores better than average, with Xen as a good alternative for bare-metal virtualization.
Their conclusions: it has to be said that Revolution Linux is a Linux-VServer shop, and that's where their preference goes. If they need to be able to run different kernels, they seem to prefer Xen.
Generally speaking, it seems a lot of optimization could be done for the different setups; often something other than the default setup can give a technology a significant boost in performance. Different network setups using specific network stacks, or different disk backends (a real disk vs. a file-based backend): a lot can change with tuning and installation by experienced people.
The tests were also performed about 6 months ago, which means that the results today might well be quite different.