We started this series by looking back at a decade of Open Source virtualization; in this second part we’ll tackle today’s landscape (last updated in March 2008).
The least you can say about the current state of Open Source virtualization is that the field is extremely diverse: every approach is represented, with paravirtualization, OS-level virtualization and hardware-assisted virtualization available in various colors and flavors.
Let’s start with paravirtualization:
Xen master Ian Pratt released the 1.0 version of Xen somewhere in September 2003, but it wasn’t until the Xen 2.0 release that Xen adoption really started to accelerate. Ian announced the 2.0 release in November 2004, with support for Linux 2.4 and 2.6 as well as FreeBSD, and with live migration support.
Xen pioneered paravirtualization, which gave it a giant performance boost but also handed an argument to the naysayers who claimed it was impossible to run Windows on the platform. The fact that the Cambridge lab had access to the Windows source code and even had it running on Xen didn’t really count as a counter-argument, since they were unable to redistribute it.
Different Linux distributions adopted it quickly, making Xen the de facto Linux virtualization solution. The OpenSolaris project was also working on Xen support, first only as a guest and later also as a host operating system.
Then came the hardware virtualization (Intel VT) capabilities, and once again Xen was leading the pack, bringing out a version that supported hardware-assisted virtualization. So the Open Source Xen version was beating the competition on different levels (speed, flexibility, and so on), but had one key element missing: the management layer, a GUI, the part that people actually spend money on …
Meanwhile, XenSource Inc. had been founded by the original developers of Xen and started to work on a set of management tools, and bang, the next thing we knew Citrix announced the acquisition of XenSource for $500 million in the summer of 2007.
While the discussion between Xen and VMware about what infrastructure the kernel needed to support virtualization was still going on, KVM (Kernel-based Virtual Machine) came out of nowhere: a lightweight kernel module that enabled the VT capabilities of the new generation of CPUs and ended up in the mainline kernel in no time. KVM was included in the 2.6.20 release of the Linux kernel after merely a couple of months of development.
KVM enabled Qemu to benefit from the VT features, and a new team was born. KVM is the lean and mean, small virtual machine, and the fact that it was so small only made it easier to adopt into the mainline tree. KVM is maintained by Avi Kivity, who works at Qumranet, a company with Moshe Bar amongst its founders that is about to launch a product called Solid ICE, aimed at the desktop virtualization market. KVM however is not doing all the work: a modified Qemu version acts as the user space part that unlocks the full power of KVM.
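To make that division of labour a bit more concrete, here is a minimal sketch of how a user space program such as the modified Qemu talks to the KVM module through /dev/kvm. It only checks the API version and creates an empty virtual machine; guest memory and vCPU setup are left out, and it assumes a 2.6.20 or later kernel with the kvm module (plus kvm-intel or kvm-amd) loaded.

```c
/* Minimal sketch of the /dev/kvm interface used by the user space side
 * (e.g. the modified Qemu). Not a working VM: it only probes the API
 * version and asks the kernel to create an empty virtual machine. */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) {
        perror("open /dev/kvm");   /* module not loaded, or no VT/SVM support */
        return 1;
    }

    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
    printf("KVM API version: %d\n", version);

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);   /* returns a file descriptor for the new VM */
    if (vm < 0) {
        perror("KVM_CREATE_VM");
        close(kvm);
        return 1;
    }

    /* A real user space part (like Qemu) would now map guest memory,
     * create vCPUs with KVM_CREATE_VCPU and drive them with KVM_RUN. */
    close(vm);
    close(kvm);
    return 0;
}
```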
Today different distributions support both KVM and Xen and are working towards a single tool set to manage them both.
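In practice most distributions are converging on libvirt (the library behind virsh and virt-manager) as that single tool set. As a hedged sketch of what “one API for both hypervisors” looks like, the snippet below connects read-only to a local hypervisor and prints a few facts; it assumes libvirt is installed and a QEMU/KVM host is running, and only the connection URI would change for a Xen host.

```c
/* Sketch of a unified management API across Xen and KVM, using the
 * libvirt C API. Build with: gcc query.c -lvirt
 * "qemu:///system" is an assumed local KVM/QEMU URI; use "xen:///" for Xen. */
#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    /* Hypervisor driver name ("QEMU", "Xen", ...) and running domain count. */
    printf("driver: %s, running domains: %d\n",
           virConnectGetType(conn), virConnectNumOfDomains(conn));

    virConnectClose(conn);
    return 0;
}
```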
Qemu started to pop up everywhere in the virtualization arena in 2007, e.g. within the VirtualBox project from innotek, a German software company located in Stuttgart.
VirtualBox is one of the most important open source solutions if you want to run other operating systems on your desktop. It’s free, it’s open and it has all the features you would expect from its commercial counterparts! Sometimes these commercial counterparts facilitate ‘matchmaking’ events with outcomes that were never intended. For example, at VMworld in New York in September ’07, Achim Hasenmueller, co-founder and kernel wizard at innotek, was introduced to the Sun Microsystems management, and less than four months later they announced their ‘marriage’ (Sun acquired innotek for an undisclosed amount in February 2008). As VirtualBox was already running on a multitude of operating systems such as Windows, Linux and Mac OS X, they evidently also added Sun’s Solaris to this impressive list. VirtualBox also supports a large number of guest platforms, including the common Windows flavors (NT 4.0, 2000, XP, Server 2003, Vista).
We’ve been talking mostly about paravirtualization and hardware-assisted virtualization with KVM, Xen and VirtualBox, but of course there is much more out there. Let’s have a look at the players in operating-system-level virtualization, where a single kernel provides secured containers in which user space programs can run. Today there are two main players in this area: Linux-VServer and OpenVZ. VServer was started by Jacques Gelinas and is currently led by Herbert Pötzl from Austria. The Linux-VServer project started in July 2001 as a reimplementation of BSD Jails.
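To illustrate what “one kernel, many secured containers” means in practice, here is a minimal sketch that uses nothing but the stock Linux clone() namespace flags. This is deliberately not VServer or OpenVZ code, just the bare idea of giving a process group its own view of the hostname and mount table while sharing the host kernel.

```c
/* Illustration of the OS-level virtualization idea: one shared kernel,
 * isolated user space. NOT VServer/OpenVZ code. Run as root.
 * Build: gcc -o nsdemo nsdemo.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg)
{
    /* Inside the new UTS namespace: this hostname is invisible to the host. */
    sethostname("container", strlen("container"));
    execlp("/bin/sh", "sh", (char *)NULL);
    perror("execlp");
    return 1;
}

int main(void)
{
    /* New UTS (hostname) and mount namespaces; same kernel underneath. */
    pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                      CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```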
Not surprisingly, people tend to think that Linux-VServer and OpenVZ have a lot in common, and some even think OpenVZ was once based on a fork of Linux-VServer. According to Herbert Pötzl that isn’t true today: the projects do not share any code, although they provide roughly similar functionality in often quite different ways. In 2003, however, Linux-VServer was forked into FreeVPS by Alex Lyashkov, and soon after that it was integrated into the H-Sphere product, maintained by Positive Software.
SWsoft was founded back in 1999 and released its commercial Virtuozzo product in 2001, a proprietary virtualization solution for Linux that later also supported Windows. When SWsoft acquired Plesk, a proprietary framework for managing hosted solutions, in 2003, virtualization evidently fitted nicely into the picture, since the OS-level virtualization that OpenVZ uses is a perfect match for web hosting.
SWsoft then went on to buy Parallels and managed to keep it a secret for almost 3 years. In late 2007 they finally decided that the Parallels brand was better known than their Virtuozzo or Plesk brands and changed the company name to Parallels altogether.

Having a single kernel shared by every virtual machine that runs in your environment is both the advantage and the disadvantage of OpenVZ and Linux-VServer. The advantage of being a lightweight solution that scales easily to hundreds of machines with no significant penalty is also its biggest weakness: what if something goes wrong with that kernel? Other approaches such as Xen and KVM allow you to run different kernels, or even different operating systems, which of course requires much more memory for each instance.
If you are into hot motorcycles you’ll remember the 1999 Virtual Iron company, which produced a CD that helped people create a customized bike. Fast forward to 2004, when a domain squatter was using the site, and then to February 2005, when a company that looks like the Virtual Iron we know now started using the domain. Virtual Iron had a product called Virtual Iron VFe in store, which they presented at LinuxWorld and later covered more in depth at OLS. They claimed to have developed a Virtual Machine Monitor that was also clustered: the Virtual Iron VFe product transparently created a shared memory multiprocessor out of a number of servers.
Yes, this sounds familiar: it sounds like an SSI implementation, it sounds like openMosix or OpenSSI, and that’s exactly what some people thought it was. Rumors on the net claimed that Virtual Iron was violating the GPL by reusing and modifying openMosix code without redistributing its changes; true or false, we’ll probably never know. In August 2005 Virtual Iron started shifting direction as they announced they were working on having their software manage other platforms too. Today their product is based on an open source hypervisor whose name you can most likely already guess (yes, indeed, they use Xen). What happened to the SSI-like technology is unclear.
The final player in this area we need to point to is Paul ‘Rusty’ Russell’s Lguest, formerly known as Lhype, and almost known as Rustyvisor or Wonkavisor. It is an experimental hypervisor developed by Rusty, intended as a proof of concept for the paravirt_ops interface. Red Hat has been working on it as well, but who knows what the future will bring?
Which brings us to the final part: where to put your money? That kinda depends on your needs:
- If I’m talking to a hoster who needs to run lots and lots of similar machines with easy management, I’ll be pointing him to Linux-VServer
- If someone is looking at bare metal hardware virtualization for his Linux machines, it’s Xen all the way
- If he needs a platform to test different distributions and operating systems on his desktop, I’ll probably be pointing to VirtualBox
- If someone really wants to move his desktops into the data center as virtual machines, KVM would be my bet
What if someone wants to do nothing else but use Linux as a base framework to run Windows virtual machines?
In that case the commercial Xen offerings, such as those from XenSource, SUSE and Red Hat, would be best, as they can also provide you with adapted drivers for the guest operating system. But ask me again in 6 months and I’ll probably tell you otherwise.
Watch out for the third part of this article series, with more on Xen!
The Thin Guy says
Hi Kris,
A very nice and complete list, it’s too bad you don’t mention the completely open source Solaris Containers under OS Virtualization. Or FreeBSD Jails, the grandfather of all the OS Virt technologies.
TG.
Ap.Muthu says
ProxMox VE – http://pve.proxmox.com – is emerging as a stable alternative to established vendors of Virtualisation products and yet remains rooted in the Open Source arena.