Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Microsoft Opened Up Virtualization For Vista Under Court Pressure

March 11, 2008 by Robin Wauters

Earlier this year, Microsoft surprisingly reversed its earlier decision not to allow users to run Vista Home Basic and Vista Home Premium as guest operating systems in a virtual machine. According to Computerworld, court documents now show that MS did this because of a complaint filed with antitrust regulators.

According to a status report filed with U.S. District Court Judge Colleen Kollar-Kotelly, Microsoft changed the end-user licensing agreements (EULA) of Vista Home Basic and Vista Home Premium under pressure from Phoenix Technologies Ltd. Phoenix, best known for the BIOS, or firmware, that it sells to PC makers, had filed a complaint with regulators sometime after early November 2007, arguing that Microsoft should open the less-expensive versions of Vista to virtualization.


Although the report didn’t name the Phoenix virtualization product, it was referring to HyperSpace, technology that the company unveiled in November 2007. HyperSpace embeds a Linux-based hypervisor in the computer’s BIOS that allows the computer to run open-source software without booting Windows. A little more than two months after Phoenix filed its complaint, Microsoft gave in. “After discussion with the Plaintiff States and the three-person technical committee that assists in monitoring Microsoft’s compliance, Microsoft agreed to remove the EULA restrictions, and has done so,” the status report said.

Unfortunately, Phoenix Technologies and Microsoft declined to comment about the complaint and the changes to virtualization in Vista.

Filed Under: Featured, News Tagged With: Colleen Kollar-Kotelly, complaint, court, EULA, HyperSpace, microsoft, MS, Phoenix, Phoenix HyperSpace, Phoenix Technologies, virtualisation, virtualization, Vista virtualization, windows, windows vista, Windows Vista virtualization

Looking Back At A Decade of Open Source Virtualization

March 10, 2008 by Kris Buytaert

Will 2008 become the “Virtual Year”?

That’s what some people would have us believe, now that the virtualization hype is reaching never-before-seen heights and large acquisitions and mergers are becoming quite common (Citrix bought XenSource, Novell picked up PlateSpin, Sun acquired innotek, Quest Software snapped up Vizioncore, VMware treated itself to Thinstall, and so on).

But few people realize, or care to acknowledge, that the large majority of virtualization techniques and developments started out as, and often remain, Open Source projects.

Where are we coming from?

Even without looking back, we know that IBM was one of the pioneers in the virtualization area; they were talking about Virtual Machines before I was even born. But who remembers one of the first Open Source virtualization takeovers? Back in 1999, Mandrake Software bought Bochs. Yes, that’s nineteen ninety-nine, even before the Y2K hype. Kevin Lawton had been working on the Bochs project together with various other developers since 1994. In 1999 he had also started working on Plex86, also known as FreeMWare.

At the time, Kevin compared Plex86 to other tools such as VMware, Wine, DOSEMU and Win4Lin. Plex86 has since been totally reinvented: while it was initially capable of running almost any operating system, it is now a very lightweight virtual machine designed only to run Linux.

Wine was also a frequently covered topic at various Linux Kongress venues. As its developers themselves insist, Wine is not an emulator, but it most certainly used to be a key player in the virtualization arena. Its attempts to run non-native applications on a different operating system, in this case mostly Windows applications on a Linux platform, didn’t exactly go unnoticed.

However, installing VMware or Qemu became a much easier alternative than trying to get an application running with Wine, and Win4Lin, its commercial sibling, had similar adoption issues. Neither Wine nor Win4Lin saw much corporate adoption, and Win4Lin recently reinvented itself as a Virtual Desktop Server product, where it is bound to face plenty of stiff competition.

People who claim desktop virtualization was ‘born in 2007’ obviously missed part of history. Although most Unix gurus will claim desktop virtualization has been around for several decades in the form of the X11 system, the Open Source alternatives that do the same on other platforms (or cross-platform) have also been around for a while.

Who has never heard of VNC, the most famous product to come out of the Olivetti & Oracle Research Laboratory (ORL) in Cambridge, England? VNC was one of the first tools people used to remotely access Windows machines. System administrators who didn’t feel like running Windows applications on their Unix desktop just hid an old Windows machine under their desk and connected to it using VNC. It was also quickly adopted by desktop users as a tool to take over the desktop of a remote colleague. After the Olivetti & Oracle Research Laboratory closed, various spin-offs of VNC such as RealVNC, TightVNC and UltraVNC popped up, and it is still a pretty actively used tool.

But VNC wasn’t the only contender in the field. Back in 2003 I ran into NX for the very first time, written by the Italian folks at NoMachine, with a FreeNX release co-existing alongside the commercial offering. It was initially dismissed as yet another X reinvention, but NX slightly modified the concept and eliminated the annoying X round trips. Because NX places a proxy on each side of the connection, it can function even on extremely slow links.

In the early days of this century, there was some confusion between UML and UMLinux. While Jeff Dike described his User-mode Linux as the port of Linux to Linux, it was in essence a full-blown Linux kernel running as a process on another Linux machine.

Apart from UML there was UMLinux, also a user-mode Linux project, featuring a virtual Linux machine that booted using LILO and onto which an out-of-the-box Linux distribution could be installed. Two projects, one on each side of the Atlantic, with very similar goals and very similar names, were simply asking for confusion. In 2003 the UMLinux folks decided to rebrand to FAUmachine, hence ending the confusion once and for all.

Research on virtualization wasn’t conducted exclusively in Germany; the Department of Computer Science and Engineering at the University of Washington was working on the lesser-known Denali project. Denali focused on lightweight protection domains, aiming to run hundreds or even thousands of VMs concurrently on a single physical host.

And apparently one project with a confusing name wasn’t enough; the Open Source community seemed desperate for more. Hence the Linux-VServer project and the Linux Virtual Server came around at about the same time. The Linux Virtual Server actually doesn’t have much to do with virtualization at all: in essence, it is a load balancer that spreads TCP/IP connections across a bunch of other servers, thereby acting towards the end user as one big, high-performance, highly available virtual server. (The IPVS patch for Linux has been around since early 1999.)
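To make that concrete, here is a minimal sketch (driven from Python via the ipvsadm tool) of what a Linux Virtual Server setup looks like; the addresses are made-up examples, and the script assumes root privileges on a kernel with IPVS support.

    #!/usr/bin/env python
    # Minimal sketch: configuring a Linux Virtual Server (IPVS) load balancer
    # with ipvsadm. The addresses below are hypothetical examples.
    import subprocess

    VIP = "192.0.2.10:80"                            # virtual service the clients connect to
    REAL_SERVERS = ["10.0.0.11:80", "10.0.0.12:80"]  # back-end servers doing the real work

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Create the virtual TCP service with a round-robin scheduler.
    run(["ipvsadm", "-A", "-t", VIP, "-s", "rr"])

    # Attach the real servers, forwarding via NAT (masquerading).
    for rs in REAL_SERVERS:
        run(["ipvsadm", "-a", "-t", VIP, "-r", rs, "-m"])

    # Show the resulting IPVS table.
    run(["ipvsadm", "-L", "-n"])

To the end user the whole farm answers on the single virtual address, which is exactly the ‘one big virtual server’ illusion described above.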

Linux-VServer (first released in late 2001), on the other hand, provides multiple Virtual Private Servers running in separate security contexts. Linux-VServer creates separate user-space segments, so that each Virtual Private Server looks like a real server and can only ‘see’ its own processes.

By then, Plex86 had a big competitor coming from France, where Fabrice Bellard was working on Qemu. At first, Qemu really was a machine emulator: much like Bochs (anyone still running AmigaOS?), you could create virtual machines for totally different architectures. x86 of course, but also ARM, SPARC, PowerPC, MIPS, m68k and even development versions for Alpha and other 64-bit architectures. Qemu, however, was perceived by a lot of people as slow compared to the alternatives. An Accelerator module was available that provided an enormous performance boost, but it didn’t carry as open a license as the rest of Qemu, which held back its adoption significantly. It was only about a year ago (early 2007) that the Accelerator module also became completely open source.
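For readers who never played with Qemu’s cross-architecture emulation, the sketch below boots an ARM guest from Python. The kernel and root filesystem paths are placeholders, qemu-system-arm is assumed to be installed, and the exact machine flags depend on how the guest image was built.

    #!/usr/bin/env python
    # Minimal sketch: booting a non-x86 guest with Qemu's system emulator.
    # 'zImage' and 'rootfs.img' are placeholder file names.
    import subprocess

    cmd = [
        "qemu-system-arm",                           # full-system emulation of an ARM board
        "-M", "versatilepb",                         # emulated machine type (assumed to match the image)
        "-m", "128",                                 # guest RAM in MB
        "-kernel", "zImage",                         # placeholder guest kernel
        "-hda", "rootfs.img",                        # placeholder root filesystem image
        "-append", "root=/dev/sda console=ttyAMA0",  # kernel command line for the guest
        "-nographic",                                # use the terminal instead of a graphical console
    ]
    subprocess.call(cmd)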

The importance of Qemu, however, should not be underestimated, as most of the current hot virtualization projects are borrowing Qemu knowledge or technology left and right. KVM (Kernel-based Virtual Machine) is the most prominent user of Qemu, but VirtualBox, Xen (in HVM mode) and the earlier-mentioned Win4Lin also use parts of Qemu.

As this is an overview of recent Open Source virtualization history, the focus has been on running virtual machines on Linux, or on connecting to a remote platform from a Linux or Unix desktop, where most of the early developments took place. We shouldn’t fail to mention CoLinux in this regard, however: CoLinux allows you to run Linux as a Windows process, giving people on locked-down desktops an alternative to VMware for running Linux on their desktop.

Xen is without a doubt the most famous open source virtualization solution around, certainly since XenSource’s acquisition by Citrix. Xen was conceived within the XenoServer project at the University of Cambridge, an initiative aiming to build an infrastructure for distributed computing and a place where potentially dangerous code can safely be executed in a distributed environment. Xen was first described in a paper presented at SOSP in 2003, but work on it began sometime in 2001.

Next week, we’ll talk more about virtualization and open source with a detailed look at today’s landscape.

Filed Under: Featured, Guest Posts Tagged With: 64bit, Accelerator, acquisitions, Alpha, ARM, bochs, citrix, CoLinux, denali, DOSEMU, faumachine, FreeMWare, freenx, IBM, Jeff Dike, Kevin Lawton, kvm, linux, linux kernel, Linux Kongress, Linux Virtual Server, Linux-VServer, m68k, Mandrake, Mips, nomachine, nx, Olivetti & Oracle Research Laboratory, open source, ORL, OS, Plex86, PowerPC, qemu, RealVNC, SOSP, sparc, TightVNC, UltraVNC, UML, UMLinux, Unix, User Mode Linux, virtual desktop, virtual machines, Virtual Private Server, VirtualBox, virtualisation, virtualization, vnc, Win4Lin, windows, wine, X11, X86, Xen, xenoserver, xensource

On Virtualization and Server Consolidation

March 10, 2008 by Robin Wauters

Insightful post by Arthur Cole over at ITBusinessEdge about server consolidation and virtualization. Cole argues that server consolidation done the right way remains the primary driver for most data centers.

But as those who have already taken the virtual plunge have no doubt realized, consolidating servers is not just a simple matter of powering up the virtualization layer and then pulling equipment out of racks. There is a long list of factors to consider with any centralization project and a wide range of land mines that need to be avoided to prevent service failures.

Cole refers to four interesting articles about server consolidation:

  • Server Virtualization and Consolidation Require More Resiliency (Bill Hammond, ITJungle)
  • Virtual Management, Virtual Mess (Kurt Westerfield, CTO ManagedObjects)
  • Thoughts on Server Consolidation Methodologies (IT consultant Brad Harris)
  • Opinion: 6 keys to virtualization project success (Jim Damoulakis, Computerworld)

Read the whole article here.

Filed Under: People Tagged With: Arthur Cole, data center, methodology, resiliency, server consolidation, server virtualization, virtualisation, virtualization

Virtutech Looking To Advance Standards for Virtualized Software Development

March 10, 2008 by Robin Wauters

Virtutech, a San Jose-based Virtualized Software Development (VSD) provider, today announced an initiative to accelerate the creation of standards for the VSD industry and to drive mainstream acceptance of VSD throughout the electronic systems business. While continuing its long-standing involvement with Power.org at both the Technical Sub-Committee and Marketing Program levels, Virtutech has also joined organizations in its domain, namely Eclipse.org, OSCI and the Spirit Consortium, with the aim of fostering standards and best practices. Virtutech further announced a collaboration with GreenSocs to promote Open Standards and community development.


Virtutech intends to leverage the expertise it has accumulated with more than 1,000 successful users while deploying its Simics platform since 2001, in order to propose, promote and support best practices, conventions and standards for VSD.

“Virtualized Software Development has the potential to make the same dramatic impact on software development that virtualization has already brought to the data center and business applications. However, the industry needs to stand up and define, promote and drive adoption of virtualization throughout the development community,” said Michel Genard, vice president of marketing at Virtutech. “Virtutech intends to be an agent of change and to actively precipitate the next big virtualization wave.”

[Source: press release]

Filed Under: News, Partnerships Tagged With: Eclipse.org, GreenSocs, Michel Genard, Open Standards, OSCI, Power.org, Spirit Consortium, virtualisation, virtualization, Virtualized Software Development, Virtutech, VSD

Microsoft’s Ray Ozzie On Cloud & Utility Computing

March 10, 2008 by Robin Wauters

Interesting interview up on GigaOM today, featuring Microsoft‘s Chief Software Architect and industry luminary Ray Ozzie talking about MS’s strategy, the economics of cloud computing and the relevance of desktop and infrastructure challenges.


The most interesting bits:

OM: The costs of computing, hardware and bandwidth are dropping quickly. Do you believe that the cost will come down fast enough to make cloud computing actually a profitable business?

RAY OZZIE: Well, it’s unlikely that we would get into it if we didn’t think it was going to be a profitable business. So we’ll just manage it to be profitable. It’s going to have different margins than classic software, or the ad (-supported) business. But, we have every reason to believe that it will be a profitable business. It’s an inevitable business. The higher levels in the app stack require that this infrastructure exists, and the margins are probably going to be higher in the stack than they are down at the bottom.

…

OM: When do you think utility computing can be a profitable business; are we looking at like maybe two years, four years out before it actually starts to become a profitable entity?

RAY OZZIE: (Let’s) take (one company) who is in the market today: Amazon. They chose a price point. There are either customers at that price point or not. They may have priced themselves at expected costs as opposed to actual today costs, but it doesn’t really matter. They could have brought it out at twice the existing price and there still would have been a customer base, and they’d be making money at birth.

I think all of these utility-computing services, as they’re born will either be breaking even or profitable. At the scale that we’re talking about, nobody can afford, (even Microsoft) can’t afford to do it at a loss. We could subsidize it, I suppose. Google could subsidize it by profits in other parts of their business, we could subsidize it, but I don’t think there’s any reason that any of us in this world would bring out that infrastructure like this without charging for what we’re paying, and then trying to make some profit over it. The cost base is so high in terms of building these data centers you do want to kind of make it up.

Read the rest of the (edited) interview here.

Filed Under: Interviews, People Tagged With: cloud computing, computing, Google, hardware, microsoft, MS, Ray Ozzie, utility computing, virtualisation, virtualization

Podcast Tip: Andrea Arcangeli on KVM and hypervisor virtualization

March 7, 2008 by Robin Wauters

Let us join Michael Dolan in pointing you to a great podcast on LinuxCast (LinuxWorld), featuring Don Marti interviewing Andrea Arcangeli on the topic of KVM and the benefits of the kernel taking on the hypervisor role (rather than separating the hypervisor and rewriting all the supporting structures as Xen does).

Listen to the podcast here!
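For readers curious what “the kernel taking on the hypervisor role” looks like in practice: KVM exposes the in-kernel hypervisor through the /dev/kvm character device, which user space drives with ioctls. Here is a minimal sketch, assuming a Linux host with the kvm module loaded; the ioctl number is taken from <linux/kvm.h>.

    #!/usr/bin/env python
    # Minimal sketch: talking to the in-kernel KVM hypervisor via /dev/kvm.
    import fcntl
    import os

    KVM_GET_API_VERSION = 0xAE00  # _IO(0xAE, 0x00), from <linux/kvm.h>

    fd = os.open("/dev/kvm", os.O_RDWR)
    try:
        # The kernel answers the ioctl directly; 12 is the stable KVM API version.
        print("KVM API version:", fcntl.ioctl(fd, KVM_GET_API_VERSION))
    finally:
        os.close(fd)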

Filed Under: Interviews, People Tagged With: Andrea Arcangeli, Don Marti, Hypervisor, kernel, kvm, LinuxCast, LinuxWorld, virtualisation, virtualization, Xen
