Some time ago, we had a chance to sit down (on a bench) together with Xen Guru Ian Pratt, well known for co-founding and ultimately selling XenSource – the company behind the open-source Xen project – to Citrix in October 2007.
This exclusive interview was taken as part of our video coverage at the Fosdem 2008 conference held at the ULB (Brussels Free University, hence the “inspiring” Solbosch campus background). Toon Vanagt, owner and publisher of Virtualization.com, interviewed the rather jet-lagged Ian Pratt on a sunny Sunday morning about Xen, XenServer and the virtualization landscape as a whole.
We cut the interview into digestible pieces which we will publish one at a time. Here’s the first part; the second part can be found here (you can also find a written transcript below for your convenience):
This video is also available on Vimeo and Streamocracy.
Hello Ian Pratt, you are one of the founders of XenSource, whose product was recently renamed XenServer after the company was acquired by Citrix. Could you give an introduction to para-virtualization, hypervisors, or OS enlightenment, as Microsoft likes to market it?
“The work on Xen really started at the University of Cambridge back in 2001, as we were interested in figuring out the best way to build virtualization systems. We realized there were two techniques which, when used together, were going to enable you to do a great job at virtualization.
One is getting facilities into the hardware to make the job of virtualizing the platform easier. This means getting stuff into the CPU, the chipset and in particular into the I/O devices, like the NICs and host bus adapters. The second is working with the operating system vendors to get stuff into the operating system that enables the OS to call down into the hypervisor and work better in a virtualized scenario.
We pushed hard on both of those fronts, working to design network interface adapters that had this special hardware support, and also working to add these extensions into operating systems like Linux, then other free operating systems, and now even an OS like Microsoft Windows. That is how we got to this current generation of virtualization software, which really is able to achieve great performance and security and provide all the benefits of virtualization.”
(1:48) Ian, it is quite remarkable that the Xen project is one of the rare open source software projects that actually managed to get its feature requests into large hardware vendors’ products. How did you achieve this?
“Well, there is a long lead time on getting anything built into hardware. As Xen had been running for quite a while as a university project, we were talking to all the different hardware vendors. You have to remember that in the early days Xen was sponsored by some of those vendors, and we were also working with the operating system vendors. We also did things like build network interfaces that had these facilities in them.
We prototyped them and wrote papers about them. And then companies really began to see that virtualization was important. Let’s be fair: VMware had a great part to play in showing the world that virtualization was important, and then I think Xen has done a great job at showing people how it should really be done.”
(2:47) It is interesting you mention VMware, because Xen is an open source project and VMware remains a closed source product to date. One of the major challenges for people looking at which vendor to select is the specific virtual machine format and how to avoid vendor lock-in. So what is your opinion on the Open Virtualization Format (OVF), and how do you see the evolution in this field?
“OVF actually came about as a collaboration between us and VMware. We had been working on a format we called the Open Virtual Appliance (OVA) and had been putting quite a bit of work into that. We were obviously really concerned about the interoperability issues. We had a discussion with VMware, as they had been working on their next-generation format for their hypervisor, and we actually collaborated and came up with the OVF specifications. Now both sides are implementing that. We will have to see how it works out in practice. You still have to do a certain amount of preparation on the virtual machine to make it able to work on both platforms, and it is really down to the people who produce virtual appliances to follow best practices and make sure their virtual machines are portable. But at least now there is a common file format and metadata format for transferring things between different virtualization solutions. Or at least there will be in the future, when it is implemented and ratified by the DMTF and all that boring stuff is out of the way.”
(4:20) So you think that once this metadata has been defined for virtual machines and has been adopted by Xen, VMware and Microsoft, we will actually be able to do VMotion or virtual machine relocation between those different vendors?
“Doing live virtual machine relocation is kind of like changing the engine on a plane in flight!
That is certainly further down the road. OVF is really about having a format in which you can package a given virtual appliance, which might actually consist of multiple virtual machines, install it onto a given hypervisor and have it run there. And hopefully you will also be able to use it for moving an installed virtual machine between different hypervisors, but there is a way to go before we can do this live relocation. It is a worthy end goal, but there is a lot of stuff that would need to happen to make that work.”
(5:16) It is one of those things that, when you see it happen for real, creates a strong wow-effect in virtualization.
“It certainly is and it would be nice to be able to live relocate a virtual machine from Xen to Hyper-V or to VMware, but there is a lot of work to do.”
(5:34) Will VM mirroring ever be possible?
“It absolutely will be; in fact, it has existed for some time. There is some great work that has been done, and a couple of things to point out here. First, there is a commercial product available on top of XenServer from a company called Marathon Technologies, which does this today: they have two virtual machines running on different physical hardware on top of Xen, and they synchronize the state between the two in real time, to the extent that you can just walk up to one of these machines and yank the power cord straight out of the back. None of the users of the applications or services provided on that server will even notice anything has happened, because it instantaneously (or within milliseconds) fails over to the other VM.
So that was the commercial product. There is also a lot of great work going on in open source. For example, a project at the University of Michigan uses a technique called deterministic replay. That is very cool. There is also work done at the University of British Columbia on a project called Remus, which I think is really cool, because it works for virtual machines that are multi-processor: you can have an SMP guest and be synchronizing that VM image to another machine. It is looking like the two machines do not even necessarily need to be in the same building. You might be able to synchronize over a suitably fat pipe across the wide area network, and you can use it for disaster recovery. We want to get this cool stuff into mainline Xen.”
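The key idea behind Remus-style mirroring can be sketched in a few lines. This is a toy illustration, not the real Remus code: the primary runs in epochs, shipping a delta of its dirty state to the backup and buffering outbound network packets until the backup acknowledges the checkpoint, so the outside world never observes output from state that has not yet been replicated. All class and method names here are invented for the sketch.

```python
# Toy sketch of epoch-based checkpoint replication (Remus-style).
# Names and structure are hypothetical, for illustration only.

class Backup:
    """The standby replica: applies state deltas and acknowledges them."""
    def __init__(self):
        self.state = {}

    def apply_checkpoint(self, delta):
        self.state.update(delta)  # replay the primary's dirty pages
        return True               # acknowledge the checkpoint

class Primary:
    """The running VM: tracks dirty state and buffers outbound packets."""
    def __init__(self, backup):
        self.state = {}
        self.dirty = {}       # state touched since the last checkpoint
        self.out_buffer = []  # packets held back until the epoch commits
        self.backup = backup

    def write(self, key, value):
        self.state[key] = value
        self.dirty[key] = value

    def send(self, packet):
        # Output is buffered, not released: the receiver must never see
        # a reply derived from state the backup does not yet have.
        self.out_buffer.append(packet)

    def checkpoint(self):
        """End the epoch: replicate the delta, then release buffered output."""
        if self.backup.apply_checkpoint(dict(self.dirty)):
            released, self.out_buffer = self.out_buffer, []
            self.dirty = {}
            return released  # now safe to put on the wire
        return []

backup = Backup()
vm = Primary(backup)
vm.write("counter", 1)
vm.send("reply-1")
released = vm.checkpoint()
print(released)       # ['reply-1']
print(backup.state)   # {'counter': 1}
```

If the primary dies mid-epoch, the backup simply resumes from the last acknowledged checkpoint, and no client ever saw output from the lost epoch, which is what makes the failover transparent.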
(7:16) When looking at VM relocation, the typical reasons people use it are to avoid downtime (disaster recovery and high availability), to relocate workloads, or to enforce security policies, either with firewalls inside the VM or by locking the OS at root level. Can you tell us something more about the security policies you can enforce in Xen?
“One of the nice things you can do with virtualization is that you can actually stand outside the OS and look into it, and implement some of the facilities which you would normally provide using software installed within the VM. You can now actually do it from outside, and you do not have to worry about whether the administrator has configured the firewall, virus scanner or back-up correctly within the VM, because we can do all of these tasks from outside now. I think that is going to be a far more common thing in the future, where you will try to take care of all of those things within the virtualization layer, so that the administrator of the VM does not have to worry about them or risk messing them up. You can kind of protect administrators from themselves.
You will see virus scanners running as part of the virtualization stack or platform, and these will scan the contents of all of the VMs running on it. It is like taking the firewall that you might have on the edge of the network, where it connects to the outside world, and pulling it in, putting it closer to the VMs that are actually running applications, and implementing that firewall in a distributed fashion across all of your virtualized platforms.”
Watch the second part of the interview here.