Virtualization.com was present at this week’s Virtualization Minisummit in Ottawa.
The OLS Virtualization Minisummit took place last Tuesday in Les Suites, Ottawa. Aland Adams had put together an interesting lineup, mixing kernel-level talks with management-framework talks. First up was Andrey Mirkin from OpenVZ, who started with a general overview of different virtualization techniques.
While comparing them, he claimed that Xen has a higher virtualization overhead because the hypervisor needs to manage a lot of things itself, whereas "container-based" approaches that rely on the Linux kernel for this have less overhead.
We discussed OpenVZ earlier; it uses a single kernel for both the host OS and all the guest OSes. Each container has its own files, process tree, network (virtual network device), devices (which may or may not be shared), IPC objects, etc. Often that's an advantage, sometimes it isn't.
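For those who haven't played with it: containers are managed from the hardware node with the vzctl tool. A rough sketch of the basics (the container ID, template name and addresses are just examples):

    # create a container from an OS template and give it an address (values are examples)
    vzctl create 101 --ostemplate centos-5-x86_64
    vzctl set 101 --ipadd 10.0.0.101 --hostname ct101 --save
    vzctl start 101
    # run a command inside the container, or drop into a shell in it
    vzctl exec 101 ps aux
    vzctl enter 101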
When Andrey talks about containers, he means OpenVZ containers, which often confused the audience, as the Linux Containers minisummit was going on in a different suite at the same time. He went on to discuss the different features of OpenVZ. It currently includes checkpointing, and there are templates from which new instances can be built quickly.
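The checkpointing he mentioned is exposed through vzctl as well; roughly like this (the dump file path is arbitrary):

    # freeze the container and dump its complete state to a file
    vzctl chkpnt 101 --dumpfile /vz/dump/ct101.dump
    # later, restore it from that dump, on the same or on another node
    vzctl restore 101 --dumpfile /vz/dump/ct101.dump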
OpenVZ also supports live migration, which basically takes a snapshot and transports it (rsync-based) to another node. So it's not done the Xen way: there is some downtime for the server, although a minor one.
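In practice this checkpoint/rsync/restore dance is wrapped in the vzmigrate script; something along these lines (the hostname and container ID are examples):

    # online migration: checkpoint the container, rsync its private area
    # and the dump to the target node, then restore it there
    vzmigrate --online destination.example.com 101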
Interestingly, the OpenVZ team is also working on getting OpenVZ into the mainline Linux kernel, and has been contributing a lot of patches and changes to the kernel in order to get their features in. Andrey also showed us a nice demo of the PacMan xscreensaver being live-migrated back and forth.
Still, containers are “chroots on steroids” (dixit Ian Pratt).
Given the recent security fuss, I wondered about the impact of containers. Container-based means you can see the guest's processes from the host OS, which is an enormous security concern. Imagine a virtual hosting provider using this kind of technique: it has full access to your virtualized platform, whereas with other approaches it would actually need your passwords etc. to access certain parts of the guest.
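You can see this for yourself on an OpenVZ hardware node; something like the following (the container ID is an example):

    # on the hardware node, every container's processes show up in the host's process table
    ps ax
    # OpenVZ ships vzps, which can filter the view down to a single container
    vzps -E 101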
The next talk was about Virtual TPM on Xen/KVM for Trusted Computing, by Kuniyasu Suzaki. He kicked off by explaining the basics of the Trusted Platform Module. The whole problem is creating a full chain of trust from boot until full operation: you need a boot loader that supports the TPM (GRUB-IMA) and a patched kernel (IMA, Integrity Measurement Architecture), and from there you can have a binary that is trusted.
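With an IMA-enabled kernel the measurement list ends up in securityfs, so you can inspect the chain of trust from userland; a rough sketch (the paths are the conventional ones, details vary per kernel):

    # mount securityfs if it isn't mounted already
    mount -t securityfs securityfs /sys/kernel/security
    # the list of measured binaries; each entry is also extended into a TPM PCR
    cat /sys/kernel/security/ima/ascii_runtime_measurements
    # the PCR values as reported by the (physical or virtual) TPM
    cat /sys/class/misc/tpm0/device/pcrs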
There are two ways to give a virtual machine a TPM. First, there is a proprietary module by IBM, presented at the 2006 USENIX symposium, which transfers the physical TPM to a VM. Secondly, the TPM can be emulated in software; there is an emulator developed at ETH (tpm-emulator.berlios.de). KVM and Xen support the emulated TPM. Of course, this doesn't preserve the hardware root of trust.
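In the Xen 3.x tools the emulated TPM is attached to a guest through a vtpm frontend/backend pair, configured with a single line in the domain config; roughly like this (the instance number is an example, the backend lives in domain 0):

    # guest config snippet: attach vTPM instance 1, backend in dom0
    vtpm = [ 'instance=1,backend=0' ]

Inside the guest the virtual TPM then shows up as an ordinary /dev/tpm0 device.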
As QEMU is needed to emulate BIOS-related things, you can't do vTPM on a paravirtualized domain; you need an HVM-based one. A customized KVM by Nguyen Anh Quynh will be released shortly; the patch will be applied to QEMU.
Still, these cases use the TPM emulator and not the real hardware. An additional problem with virtualization and TPM arises when you start thinking about migrating machines around … and losing access to the actual TPM module. Kuniyasu then showed a demo using VMKnoppix.
Dan Magenheimer did a rerun of his Xen Summit 2008 talk titled "Memory Overcommit without the Commitment".
There is a lot of discussion on why you should or should not support memory overcommit. Some claim you should just buy enough memory (after all, memory is cheap), but it isn't always: as soon as you go for the larger memory modules you'll still be paying a lot of money.
Overcommitment costs performance: you'll end up swapping, which is painful. However, people point out that overcommitting CPU and I/O also costs performance, so sometimes you need to compromise between functionality, cost and performance. Imho, a machine that is low on memory and starts swapping or even OOM'ing processes is much more painful than a machine that slows down because it is reaching its CPU or I/O limits.
So one of the main arguments in favor of supporting overcommit on Xen was
because VMware does it …
Dan outlined the different proposed solutions, such as ballooning, content-based page sharing, VMM-driven paging, hotplug memory add/delete, ticketed ballooning or even swapping entire guests, before coming up with his own proposal, which he titled feedback-directed ballooning.
The idea is that you have a lot of information about the memory status of your guest, that Xen ballooning works like a charm, that Linux actually performs OK when put under memory stress (provided you have configured swap), and that you can use the xenstore tools for two-way communication. So he wrote a set of userland bash scripts that implement ballooning based on local or directed feedback.
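A minimal sketch of the guest-side ("local feedback") idea, not Dan's actual script: estimate how much memory the kernel thinks it needs from /proc/meminfo and feed that back as the balloon target via xenstore, which the guest's balloon driver watches (the 64 MB floor and the 10-second interval are arbitrary choices):

    #!/bin/bash
    # crude self-ballooning loop (a sketch, not Dan's script)
    MINMEM_KB=65536   # never balloon below 64 MB
    while true; do
        # Committed_AS is a rough estimate of the memory the guest really needs (in kB)
        committed=$(awk '/^Committed_AS:/ {print $2}' /proc/meminfo)
        target=$(( committed < MINMEM_KB ? MINMEM_KB : committed ))
        # memory/target is relative to this domain's xenstore home; the balloon
        # driver watches it and inflates/deflates the balloon accordingly (value in kB)
        xenstore-write memory/target $target
        sleep 10
    done

The "directed feedback" variant runs in domain 0 instead, reading each guest's reported memory statistics from xenstore and setting the targets centrally (e.g. with xm mem-set).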
Conclusion: Xen does do memory overcommit today, so Dan replaced a "critical" VMware feature with a small shell script 🙂