Virtualization.com
News and insights from the vibrant world of virtualization and cloud computing


openvz

Open Source Virtualization Updates

January 7, 2009 by Kris Buytaert Leave a Comment

There is a lot going on in the different open source virtualization projects, from KVM and OpenVZ to Xen.

First of all, there is the news that Anthony Liguori has merged KVM support into the main Qemu development tree. In his blog post, Avi Kivity explains that with the KVM code merged into Qemu, development and integration of new and bigger features will move faster.

But now that kvm has been merged, it is possible to make larger modifications to qemu in order to make it fit virtualization roles better. Live migration and virtio have already been merged. Device and cpu hotplug are on the queue. Deeper changes, like modifying how qemu manages memory and performs DMA, are pending.

Then there is some discussion going on about the future of OpenVZ, mainly around licensing. Pete Zaitcev is wondering why Parallels hasn’t officially modified its COPYING.SWsoft file, which mainly tells the world “we’re not serious about going upstream” and which makes him wonder: “Kinda makes LXC more of a fait accompli than it already is or needs to be.” According to the LXC site, the planned Linux kernel version for which LXC should be fully functional is 2.6.29.

And last but not least, Xen has released two minor maintenance releases for the 3.3 and 3.2 branches: Xen 3.3.1 and Xen 3.2.3 are now available for download.

Filed Under: Guest Posts, News Tagged With: anthony liguori, avi kivity, kvm, kvn, lxc, openvz, pete zaitcev, qemu, Xen

Virtualization Workloads, a comparative study in Open Source environments

August 7, 2008 by Kris Buytaert Leave a Comment

At the Ottawa Linux Symposium, Benoit des Ligneris and his team from Revolution Linux presented their paper “Virtualization of Linux servers: a comparative study“, mostly the work of Fernando L. Camargos in pursuit of his Master’s degree in Computer Science.

They looked at VirtualBox, Xen, KVM, OpenVZ, Linux-VServer and KQemu, in 64-bit mode for all tests where possible (hence not for VirtualBox). Their host OS was Ubuntu 7.10 and the VMs ran Ubuntu 6.06.

It’s pretty obvious that virtualization creates some overhead; the bigger question, however, is how much. What’s the penalty when virtualizing an environment? They focused on two aspects: the first was figuring out what impact the addition of a hypervisor had on an environment, the second how many virtual machines one could run on a single host.

They ran their tests multiple times and the results presented were averages of those runs.
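The normalization they used is straightforward: average the runs per platform and express each result relative to the native score. A minimal sketch of that calculation (the run times below are invented placeholders, not numbers from the paper):

```python
# Sketch of the study's normalization: average several runs per platform
# and express each platform relative to the native (bare-metal) result.
# The figures below are invented placeholders, not results from the paper.

def average(runs):
    return sum(runs) / len(runs)

# Example: kernel-compile times in seconds (lower is better), three runs each.
results = {
    "native":        [610, 605, 612],
    "linux-vserver": [615, 618, 611],
    "xen":           [640, 652, 648],
    "kvm":           [700, 690, 710],
}

native_avg = average(results["native"])
for platform, runs in results.items():
    avg = average(runs)
    # For a time-based metric, relative performance = native_time / platform_time.
    relative = native_avg / avg * 100
    print(f"{platform:>14}: {avg:7.1f} s  ({relative:5.1f}% of native)")
```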

In the first set of tests, measuring the impact of the hypervisor compared to the native machine, they started off with a Linux kernel compilation workload.

Here Linux-VServer lost almost no performance, closely followed by Xen and then OpenVZ. Compared to native machine speed, both VirtualBox and (K)Qemu scored below 50%.
Their second test was file compression. Here most of the environments scored around 85-95% of native speed, except for KQemu and OpenVZ.

The Samba team brought us dbench, “dbench is a filesystem benchmark that generates load patterns similar to those of the commercial Netbench benchmark, but without requiring a lab of Windows load generators to run. It is now considered a de-facto standard for generating load on the Linux VFS.”
Here Linux-VServer outperforms the rest; it scores well because it uses the system’s I/O drivers directly, whereas the others don’t. Xen is second best in this test, but the other frameworks really need some work here.

If you want to do a low-level data copy on UNIX, dd is obviously your favourite tool. For the same reasons as above, Linux-VServer scores well here; the strange thing, however, is that it scores better than native speed. When copying an existing file, Xen and KVM are a good second, but OpenVZ seemed to need some work. Another interesting fact is that KQemu and VirtualBox failed the test. When copying data from /dev/zero, KVM scores better.

During the test the block devices were backed by different technologies: for VServer it was a native disk, for Xen a file. Of course this doesn’t give equally good results, and different tuning options are available here. Still, a good piece of advice: do not virtualize your file server.
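For reference, the dd runs boil down to timing a raw block copy and converting it into throughput. A minimal sketch of that kind of test (the path, block size and count here are arbitrary choices, not the parameters used in the study):

```python
# Rough sketch of a dd-style raw copy test: time the copy and report MB/s.
# The destination path, block size and count are arbitrary examples, not the
# parameters Revolution Linux used.
import subprocess
import time

def dd_throughput(src="/dev/zero", dst="/tmp/dd_testfile", bs_mb=1, count=1024):
    """Time a dd copy and return throughput in MB/s."""
    start = time.time()
    subprocess.run(
        ["dd", f"if={src}", f"of={dst}", f"bs={bs_mb}M", f"count={count}"],
        check=True,
        capture_output=True,  # dd prints its own statistics on stderr
    )
    elapsed = time.time() - start
    return bs_mb * count / elapsed

if __name__ == "__main__":
    print(f"~{dd_throughput():.1f} MB/s written")
```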

When looking at network I/O performance the team opted to use netperf. VirtualBox, Linux-VServer, Xen and OpenVZ all score well here; the performance of KQemu and KVM was a disaster.
When testing rsync with different file sizes, OpenVZ scored best and most of the other tools performed at around 80% of native machine speed, except for KVM, which seemed to have more problems with one big file than with many small ones. VirtualBox’s good scores are due to its modified IP stack; the effort there obviously was worth the time…

So they covered compiling, disk I/O and network I/O; obviously we want to know a bit about database performance too. Revolution Linux chose Sysbench for this test. Again good scores for Linux-VServer and Xen, less for the rest.

To strange looks from the OpenVZ people in the audience, they concluded that Linux-VServer has excellent performance and presented minimal overhead; of course, Linux-VServer and OpenVZ are still “chroots on steroids”, not full virtualization solutions. According to Revolution Linux, Xen achieved great performance in most of the tests. KVM was fairly good for full virtualization, but didn’t perform well for applications relying on I/O.

As mentioned earlier, apart from the overhead tests Revolution Linux also set out to test scalability. There were only two tests here: kernel compilation and Sysbench, performed with n instances (n = 1, 2, 4, 8, 16 and 32).

Looking at the number of transactions globally per host (so spread over the different virtual machines), Xen is the best performer: it actually reached a higher total throughput with 32 virtual machines than with 1 VM, peaking at 4-8 VMs.

With their new benchmark, kernels compiled per hour, they only have results for VServer and Xen. With 1 VM both build around 10-11 kernels per hour, and from 2-4 VMs onwards they go up to about 20. Xen keeps pace up to 16 VMs and then slows down.

So obviously there is a very strong correlation between the performance of a machine and the number of instances on that machine. Here too, Linux-VServer scores better than average, with Xen as a good bare-metal virtualization alternative.

Their conclusions: it has to be said that Revolution Linux is a Linux-VServer shop, and that’s where their preference goes. If they need to be able to run different kernels, they seem to prefer Xen.

Generally speaking, it seems a lot of optimization could be done for the different setups; configurations other than the defaults could give a technology a significant performance boost.

With different network setups, specific network stacks, or different disk backends (real disks versus file-based backends), a lot can change with tuning and installation by experienced people. The tests were also performed about six months ago, which means the results might look quite different today.

Filed Under: Guest Posts, News Tagged With: kvm, linuxvserver, ols, openvz, Ottawa Linux Symposium, revolutionlinux, ubuntu, VirtualBox, virtualization, workload, Xen

How to … spend your weekend

August 1, 2008 by Kris Buytaert Leave a Comment

Virtualization is obviously becoming a better and better documented topic.

Over at UbuntuGeek.com there is a fresh HOWTO on installing VirtualBox 1.6 on an Ubuntu 8.04 Hardy Heron setup, including USB support.

Over at the other side of the Linux distribution spectrum, Falko Timme at HowtoForge documents how to install and use OpenVZ on CentOS 5.2.

So if you don’t feel like playing outside this weekend, you know what to do 🙂

Filed Under: Guest Posts Tagged With: CentOS, CentOS 5.2, falko timme, guide, Hardy Heron, How to Forge, howto, openvz, tutorial, ubuntu, Ubuntu 8.04 Hardy Heron, VirtualBox, VirtualBox 1.6, virtualisation, virtualization

OLS Virtualization Minisummit Report

July 24, 2008 by Kris Buytaert 2 Comments

Virtualization.com was present at this week’s Virtualization Minisummit in Ottawa.

The OLS Virtualization Minisummit took place last Tuesday in Les Suites, Ottawa. Aland Adams had put together an interesting lineup with a mixture of kernel-level talks and management framework talks. First up was Andrey Mirkin from OpenVZ, who started with a general overview of different virtualization techniques.
While comparing them, he claimed that Xen has higher virtualization overhead because the hypervisor needs to manage a lot of things itself, whereas “container-based” approaches that let the Linux kernel handle this have less overhead.

We discussed OpenVZ earlier; it uses one kernel for both the host OS and all the guest OSes. Each container has its own files, process tree, network (virtual network device), devices (which can be shared or not), IPC objects, etc. Often that’s an advantage, sometimes it isn’t.
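To make that concrete, here is roughly what creating and inspecting such a container looks like with OpenVZ’s vzctl tool, driven from a small Python script. This is only an illustration: the container ID, OS template name and IP address are arbitrary examples, and it needs an OpenVZ-enabled kernel and root privileges.

```python
# Illustration only: roughly what creating and inspecting an OpenVZ container
# looks like with vzctl. Container ID, template name and IP address are
# arbitrary examples; requires an OpenVZ-enabled kernel and root privileges.
import subprocess

def vzctl(*args):
    """Run a vzctl subcommand, raising if it fails."""
    subprocess.run(["vzctl", *args], check=True)

CTID = "101"                                              # arbitrary container ID
vzctl("create", CTID, "--ostemplate", "centos-5-x86_64")  # template name is an assumption
vzctl("set", CTID, "--ipadd", "10.0.0.101", "--save")     # the container gets its own virtual NIC/IP
vzctl("start", CTID)
vzctl("exec", CTID, "ps", "aux")                          # the container sees only its own process tree
```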

When Andrey talks about containers, he means OpenVZ containers, which often confused the audience, as the Linux Containers minisummit was going on in a different suite at the same time. He went on to discuss the different features of OpenVZ. Currently it includes checkpointing, and there are templates from which new instances can be built quickly.
OpenVZ also supports live migration: basically taking a snapshot and transporting it (rsync-based) to another node, as sketched below. So not the Xen way; there is some downtime for the server, although a minor one.
Interesting to know is that the OpenVZ team is also working on getting OpenVZ into the mainline Linux kernel, and has been contributing a lot of patches and changes in order to get its features in. Andrey also showed us a nice demo of the Pac-Man XScreenSaver being live migrated back and forth.
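The snapshot-and-rsync migration described above boils down to roughly the following sequence. This is only a sketch: the hostname, container ID and paths are invented, and the real vzmigrate script does considerably more, including copying the container configuration and handling failures.

```python
# Very rough sketch of the checkpoint/rsync/restore sequence described above;
# the real vzmigrate tool does far more. Hostname, CTID and paths are invented.
import subprocess

def run(*cmd):
    subprocess.run(list(cmd), check=True)

CTID = "101"
DEST = "node2"                      # destination host (invented name)
PRIVATE = f"/vz/private/{CTID}"     # container filesystem, default OpenVZ layout
DUMP = f"/tmp/ct{CTID}.dump"        # checkpoint dump file

run("rsync", "-a", PRIVATE + "/", f"{DEST}:{PRIVATE}/")   # bulk copy while the container keeps running
run("vzctl", "chkpnt", CTID, "--dumpfile", DUMP)          # freeze the container, dump its state (downtime starts)
run("rsync", "-a", PRIVATE + "/", f"{DEST}:{PRIVATE}/")   # second pass: copy only what changed
run("rsync", "-a", DUMP, f"{DEST}:{DUMP}")                # ship the memory/state dump
run("ssh", DEST, "vzctl", "restore", CTID, "--dumpfile", DUMP)  # resume on the destination node
```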

Still, containers are “chroots on steroids” (dixit Ian Pratt).
Given the recent security fuss, I wondered about the impact of containers. Container-based means you can see the processes of the guest from the host OS, which is an enormous security problem. Imagine a virtual hosting provider using this kind of technique: it has full access to your virtualized platform, whereas in other approaches it would actually need your passwords etc. to access certain parts of the guest.

The next talk was about virtual TPM on Xen/KVM for Trusted Computing, by Kuniyasu Suzaki. He kicked off by explaining the basics of the Trusted Platform Module. The whole problem is to create a full chain of trust from booting to full operation: you need a boot loader that supports TPM (GRUB with IMA) and a patched kernel (IMA, Integrity Measurement Architecture), from which point you can have a trusted binary.

There are two ways to pass a TPM to a virtual machine. First, there is a proprietary module by IBM, presented at the 2006 USENIX symposium, which transfers the physical TPM to a VM. Second, there is emulating the TPM in software; there is an emulator developed by ETH at tpm-emulator.berlios.de. KVM and Xen support the emulated TPM. Of course this doesn’t preserve the hardware trust.

As Qemu is needed to emulate BIOS-related things, you can’t do vTPM on a paravirtualized domain; you need an HVM-based one. A customized KVM by Nguyen Anh Quynh will be released shortly; the patch will be applied to Qemu.

Still, these cases use the TPM emulator and not the real hardware. An additional problem with virtualization and TPM arises when you start thinking about migrating machines around and losing access to the actual TPM module. Kuniyasu then showed a demo using VMKnoppix.

Dan Magenheimer did a rerun of his Xen Summit 2008 talk titled “Memory Overcommit without the Commitment”.

There is a lot of discussion on why you should or should not support memory overcommit. Some claim you should just buy enough memory (after all, memory is cheap), but it isn’t always: as soon as you go for the larger memory modules you’ll still be paying a lot of money.
Overcommitment costs performance; you’ll end up swapping, which is painful. People counter that overcommitting CPU and I/O also costs performance, so sometimes you need to compromise between functionality, cost and performance. Imho, a machine that is low on memory and starts swapping or even OOM-killing processes is much more painful than a machine that slows down because it is reaching its CPU or I/O limits.

So one of the main arguments in favor of supporting overcommit on Xen was: because VMware does it …

Dan outlined the different proposed solutions, such as ballooning, content-based page sharing, VMM-driven demand paging, hotplug memory add/delete, ticketed ballooning or even swapping entire guests, in order to come up with his own proposal, which he titled feedback-directed ballooning.

The idea is that you have a lot of information about the memory status of your guest, that Xen ballooning works like a charm, that Linux actually performs OK when put under memory stress (provided you have configured swap), and that you can use the xenstore tools for two-way communication. So he wrote a set of userland bash scripts that implement ballooning based on local or directed feedback.
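His actual scripts were bash, but the local-feedback idea can be sketched in a few lines of Python: watch the guest’s own memory statistics and feed a new balloon target back through xenstore. The 25% headroom, the floor and the polling interval below are arbitrary assumptions, not Dan’s values.

```python
# Sketch of the "feedback-directed ballooning" idea: a guest-side loop that
# watches its own memory usage and asks the balloon driver for a new target
# via xenstore. Not Dan Magenheimer's actual scripts; headroom, floor and
# polling interval are arbitrary assumptions.
import subprocess
import time

def committed_kib():
    # Committed_AS is a rough estimate of how much memory the guest needs right now.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Committed_AS:"):
                return int(line.split()[1])
    raise RuntimeError("Committed_AS not found in /proc/meminfo")

MIN_KIB = 64 * 1024  # never balloon below 64 MB

while True:
    target = max(int(committed_kib() * 1.25), MIN_KIB)  # keep ~25% headroom
    # The in-guest balloon driver watches this xenstore key (value in KiB)
    # and inflates or deflates to match it.
    subprocess.run(["xenstore-write", "memory/target", str(target)], check=True)
    time.sleep(10)
```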

Conclusion: Xen does do memory overcommit today, so Dan replaced a “critical” VMWare feature with a small shell script 🙂

Filed Under: Guest Posts Tagged With: memory ballooning, Memory Overcommit, ols, openvz, oraclevm, tpm, virtualization minissummit, vmknoppix, vmware, vTPM

The Current State of Open Source Virtualization

March 26, 2008 by Kris Buytaert 6 Comments

We’ve started by looking back at a decade of Open Source virtualization, and in this second part of the series we’ll tackle today’s landscape (last updated in March 2008).

The least you can say about the current state of Open Source virtualization is that the field is extremely diverse: different approaches in the virtualization area are all represented, with paravirtualization, OS virtualization and hardware-assisted virtualization in various colors and flavours.

Let’s start with paravirtualization:

Xenmaster Ian Pratt released the 1.0 version of Xen somewhere in September 2003, but it wasn’t till the Xen 2.0 release that Xen adoption really started to accelerate. Ian announced the 2.0 release in November 2004, with support for Linux 2.4 and 2.6 and FreeBSD, and with live migration support.

Xen pioneered paravirtualization, which gave it a giant performance boost but also handed an argument to the naysayers, who claimed it was impossible to run Windows on the platform. The fact that the Cambridge lab had access to the Windows source code and even had it running on Xen wasn’t really a counter-argument, since they were unable to redistribute it.

Different Linux distributions adopted it quickly, making Xen the de facto Linux virtualization solution. The OpenSolaris project was also working on Xen support, first only as a guest but later also as a host operating system.

Then came the VT capabilities, and once again Xen was leading the pack, bringing out a Xen version that supported hardware-assisted virtualization. So the Open Source Xen version was beating the competition on different levels – speed, flexibility, etc. – but had one key element missing: the management layer, a GUI, the part that people actually spend money on …

Meanwhile, the company XenSource Inc. had been founded by the original developers of Xen and started to work on a set of management tools, and bang, the next thing we knew Citrix announced the acquisition of XenSource for $500 million in the summer of 2007.

While the discussion between Xen and VMware about what infrastructure was needed in the kernel to support virtualization was still going on, KVM (Kernel-based Virtual Machine) came out of nowhere: a lightweight kernel module that enabled the VT capabilities of the new generation of CPUs and that ended up in the mainline kernel in no time. KVM was included in the 2.6.20 release of the Linux kernel after merely a couple of months of development.

KVM enabled Qemu to benefit from the VT features, and a new team was born. KVM is the lean, mean, small virtual machine, and the fact that it was so small only made it easier to adopt into the main tree. KVM is maintained by Avi Kivity, who works at Qumranet, a company with Moshe Bar amongst its founders that is about to launch a product called Solid ICE, aiming for the desktop virtualization market. KVM, however, is not doing all the work: a modified Qemu version acts as the user space part that unlocks the full power of KVM.

Today different distributions support both KVM and Xen and are working towards a single tool set to manage them both.

Qemu started to pop up everywhere in the virtualization arena in 2007, e.g. within the VirtualBox project from innotek, a German software company located in Stuttgart.

VirtualBox is one of the most important open source solutions if you want to run other operating systems on your desktop. It’s free, it’s open and it has all the features you would expect from its commercial counterparts! Sometimes these commercial counterparts facilitate ‘matchmaking’ events with unintended outcomes: at VMworld in New York in September ’07, Achim Hasenmueller, co-founder and kernel wizard at innotek, was introduced to Sun Microsystems’ management, and less than four months later they announced their ‘marriage’ (Sun acquired innotek for an undisclosed amount in February 2008). As VirtualBox was already running on a multitude of operating systems such as Windows, Linux and OS X, they evidently also added Sun’s Solaris to this impressive list. VirtualBox also supports a large number of guest platforms, including common Windows flavors (NT 4.0, 2000, XP, Server 2003, Vista).

We’ve been talking mostly about paravirtualisation and hardware-assisted virtualization with KVM, Xen and VirtualBox, but of course there is much more out there. Let’s have a look at the players in the operating-system-level virtualisation arena, where a single kernel provides secured containers in which user space programs can run. Today there are two main players in this area: VServer and OpenVZ. VServer was started by Jacques Gelinas and is currently led by Herbert Pötzl from Austria. The Linux-VServer project started in July 2001 as a BSD jail reimplementation for Linux; in 2004 it was rewritten from scratch for the 2.6 kernel.

Not much of a surprise, people tend to think that Linux-VServer and OpenVZ have a lot in common, and some even think OpenVZ was once based on a fork of Linux-VServer. According to Herbert Pötzl that isn’t true: the projects do not share any code, although they provide roughly similar functionality in often quite different ways. In 2003, however, Linux-VServer was forked into FreeVPS by Alex Lyashkov, and soon after that it was integrated into the H-Sphere product, maintained by Positive Software.

SWsoft was founded back in 1999 and released its commercial Virtuozzo product in 2001 as a proprietary virtualization solution for Linux, later also supporting Windows. When SWsoft acquired Plesk, a proprietary framework for managing hosted solutions, in 2003, virtualization evidently fitted nicely into the picture, since the OS-level virtualization OpenVZ uses is a perfect match for web hosting.

SWsoft then went on to buy Parallels and managed to keep it a secret for almost three years. In late 2007, they finally decided that their Parallels brand was better known than their Virtuozzo or Plesk brands and changed the company name to Parallels altogether. Having a single kernel shared by every virtual machine that runs in your environment is both the advantage and the disadvantage of OpenVZ and Linux-VServer. The advantage of being a lightweight solution that can scale easily to hundreds of machines with no significant penalty is also its biggest disadvantage: what if something goes wrong with that kernel? Other approaches such as Xen and KVM allow you to run different kernels, or even different operating systems, which of course requires much more memory for each instance.

If you are into hot motorcycles you’ll remember the 1999 Virtual Iron company, which manufactured a CD that helped people create a customized bike. Fast forward to 2004, when a domain squatter was using the site, and to February 2005, when a company that looks like the Virtual Iron we know now started using the domain. Virtual Iron had a product called Virtual Iron VFe in store, which they presented at LinuxWorld and later also more in depth at OLS. They claimed to have developed a virtual machine monitor that was also clustered: the Virtual Iron VFe product transparently created a shared-memory multiprocessor out of a number of servers.

Yes, this sounds familiar: it sounds like an SSI implementation, like openMosix or OpenSSI, and that’s exactly what some people thought it was. Rumors on the net claimed that Virtual Iron was violating the GPL by reusing and modifying openMosix code without redistributing its changes; true or false, we’ll probably never know. In August 2005 Virtual Iron started shifting direction as they announced they were working on having their software manage other platforms too. Today their product is based on an open source hypervisor whose name you can most likely already guess (yes, indeed, they use Xen). What happened to the SSI-like technology is unclear.

The final player in this area we need to point to is Paul ‘Rusty’ Russell’s Lguest, formerly known as Lhype and almost known as Rustyvisor or Wonkavisor. It is an experimental hypervisor developed by Rusty, intended as a proof of concept for paravirt_ops. Red Hat has been working on it as well, but who knows what the future will bring?

Which brings us to the final part: where to put your money? That kinda depends on your needs:

  • If I’m talking to a hoster who needs to run lots and lots of similar machines with easy management, I’ll be pointing him to Linux-VServer
  • If someone is looking at bare-metal hardware virtualisation for his Linux machines, it’s Xen all the way
  • If he needs a platform to test different distributions and operating systems on his desktop, I’ll probably point to VirtualBox
  • If someone really wants to move his desktops, virtualized, into the data center, KVM would be my bet

What if someone wants to do nothing else but use Linux as a base framework to run Windows virtual machines?

In that case the commercial Xen offerings, such as those from XenSource, SUSE and Red Hat, would be best, as they can also provide you with adapted drivers for the guest operating system. But ask me again in six months and I’ll probably tell you otherwise.

Watch out for the third part of this article series, with more on Xen!

Filed Under: Featured, Guest Posts, News Tagged With: citrix, Ian Pratt, kvm, Linux-VServer, open source virtualization, openvz, Parallels, qemu, qumranet, sun microsystems, swsoft, Virtual Iron, VirtualBox, virtualisation, virtualization, Virtuozzo, vmware, VServer, Xen, xensource

Baseline: 10 Free Virtualization Tools You Should Know

February 28, 2008 by Robin Wauters Leave a Comment

Baseline published an interesting list of “10 free virtualization tools you should know” on its website.

The list in full:

  1. OpenVZ (Parallels) – also check out our video interview with Werner Fischer from Thomas-Krenn.AG on OpenVZ
  2. FreeVPS (Positive Software)
  3. Sun xVM (Sun Microsystems, who wants to equip Web 2.0 startups with “SAMP”)
  4. VirtualBox (innotek, recently acquired by Sun)
  5. PlateSpin Power Recon (PlateSpin, recently acquired by Novell)
  6. Vizioncore vOptimizer Free Ware (Vizioncore, recently acquired by Quest)
  7. Virtual Iron Single Server Edition (Virtual Iron)
  8. Enomalism Virtualized Management Dashboard – VMD (Enomaly)
  9. Microsoft Virtual Server Migration Toolkit – VSMT (Microsoft) – also check out our video interview with Mike Neil, Virtual Machine Technologies Product Unit Manager at Microsoft
  10. Moka5 LivePC Engine (Moka5)

Filed Under: Uncategorized Tagged With: Baseline, BaselineMag, Enomalism Virtualized Management Dashboard, Enomalism VMD, Enomaly, free, FreeVPS, freeware, innotek, microsoft, Microsoft Virtual Server Migration Toolkit, Microsoft VSMT, Mike Neil, Moka5, Moka5 LivePC Engine, Novell, openvz, Parallels, PlateSpin, PlateSpin Power Recon, Positive Software, quest, quest software, SAMP, sun, sun microsystems, Sun xVM, Virtual Iron, Virtual Iron Single Server Edition, VirtualBox, virtualisation, virtualization, Vizioncore, Vizioncore vOptimizer, Vizioncore vOptimizer Free Ware, Werner Fisher
