Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing

The Gap Between Hardware and Software

April 7, 2008 by Robin Wauters

Interesting read over at EE Times Asia, titled “IC industry addresses multicore, programming software gap”.

An excerpt:

“The semiconductor industry is starting to address what’s being called a software gap between a rising tide of multicore processors and a lack of parallel programming tools and techniques to make use of them.

The gap came into stark focus in the embedded world at the Multicore Expo, where chipmakers Freescale Semiconductor, Intel Corp., MIPS and a handful of silicon startups sketched out directions for their multicore products. Others warned that the industry has its work cut out for it delivering the software that will harness the next-generation chips.”

“There is a major gap between the hardware and the software,” said Eric Heikkila, director of embedded hardware research at Venture Development Corp. (VDC).

About 55 % of embedded system developers surveyed by VDC said they are using or will use multicore processors in the next 12 months. That fact is fueling the company’s projections that the market for embedded multicore processors will grow from about $372 million in 2007 to $2.47 billion in 2011, a compound annual growth rate of roughly 60 %.

In the PC market, the figures are even more dramatic. About 40 % of all processors Intel shipped in 2007 used multiple cores, but that will rise to 95 % in 2011, said Doug Davis, general manager of Intel’s embedded group.

But on the software side, vendors reported that only about 6 % of their tools were ready for parallel chips in 2007, a figure that will only rise to 40 % in 2011, VDC said. As much as 85 % of all embedded programming is now done in C or C++, languages that are “difficult to optimize for multicore,” said Heikkila.
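
To make “difficult to optimize for multicore” concrete, here is a minimal C sketch (not from the article) of the classic hazard lurking in parallel C code: two threads update shared state without synchronization, and increments are silently lost.

    /* Two threads increment a shared counter without synchronization,
     * so read-modify-write cycles collide and updates are lost.
     * Compile with: gcc -std=c99 -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;              /* shared, unprotected state */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                    /* the data race */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000; on a multicore machine the printed value is
         * usually lower because increments from the two cores collide. */
        printf("counter = %ld\n", counter);
        return 0;
    }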

Standardization

The Multicore Association announced at the Multicore Expo it has completed work on an application programming interface for communications between cores, and is now working to define a standard for embedded virtualization.
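
The standard in question became known as MCAPI. As a rough illustration only (the names below are invented, not the real MCAPI calls), the toy C model captures the style such an API standardizes: cores, here modeled as threads, hand each other messages through a small mailbox instead of sharing state directly.

    /* Toy mailbox model of core-to-core message passing; "cores" are
     * modeled as threads. Names are hypothetical, not the MCAPI spec.
     * Compile with: gcc -std=c99 -pthread mbox.c */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  ready;
        int             has_msg;
        int             msg;
    } mailbox_t;

    static mailbox_t box = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
    };

    static void mbox_send(mailbox_t *m, int msg)
    {
        pthread_mutex_lock(&m->lock);
        m->msg = msg;                     /* deliver the payload */
        m->has_msg = 1;
        pthread_cond_signal(&m->ready);   /* wake the receiving "core" */
        pthread_mutex_unlock(&m->lock);
    }

    static int mbox_recv(mailbox_t *m)
    {
        pthread_mutex_lock(&m->lock);
        while (!m->has_msg)
            pthread_cond_wait(&m->ready, &m->lock);
        m->has_msg = 0;
        int msg = m->msg;
        pthread_mutex_unlock(&m->lock);
        return msg;
    }

    static void *core1(void *arg)
    {
        (void)arg;
        printf("core 1 received: %d\n", mbox_recv(&box));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, core1, NULL);
        mbox_send(&box, 42);              /* "core 0" hands off work */
        pthread_join(t, NULL);
        return 0;
    }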

“The ultimate goal of every computer scientist is to create a new language, but my personal view is we should not do it this time around,” said Wen-mei Hwu, a veteran researcher in parallel programming and professor of engineering at the University of Illinois at Urbana-Champaign, referring to a flowering of languages developed for big parallel computers two decades ago, many of which never gained traction. “I believe there will be new language constructs in C/C++ to support some of the new frameworks people will develop, but even these constructs, if we are not careful, will not be widely adopted,” Hwu said. “Ultimately, I think we will make a small amount of extensions to C, but I think it’s too early.”
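
An existing example of the kind of small C extension Hwu describes is OpenMP: a single pragma, ignored by compilers that don’t support it, parallelizes a loop across cores.

    /* OpenMP as an example of a "small extension to C": one pragma
     * spreads the loop over all cores.
     * Compile with: gcc -std=c99 -fopenmp sum.c */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        /* reduction(+:sum) gives each core a private partial sum and
         * combines them at the end, avoiding a data race by construction. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;
        printf("harmonic(1000000) = %f\n", sum);
        return 0;
    }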

On-chip fabric

For their part, Freescale and Intel sketched out design trends they see on the horizon for their multicore chips.

“Freescale is now sampling the first dual-core versions of its PowerQuicc processors, aimed at telecom OEMs. The chips are part of a family that will eventually scale to 32-core devices”, said Dan Cronin, VP of R&D for Freescale’s networking division.

The processors will use a new on-chip interconnect fabric. They will also embed in hardware a hypervisor, a kind of low-level scheduling unit, co-developed with IBM according to specs set in the Power.org group. “Freescale will release an open source reference design for companies that want to build virtualization software that taps into the hypervisor”, Cronin said.

[Source: VMBlog]

Filed Under: News Tagged With: embedded hypervisors, Freescale, gap, hardware, intel, Mips, Multicore Expo, software, virtualisation, virtualization

Looking Back At A Decade of Open Source Virtualization

March 10, 2008 by Kris Buytaert

Will 2008 become the “Virtual Year”?

That’s what some people would have us believe now that the virtualization hype is reaching never-before-seen heights and large acquisitions and mergers are becoming quite common (Citrix bought XenSource, Novell picked up PlateSpin, Sun acquired innotek, Quest Software snapped up Vizioncore, while VMware treated itself to Thinstall, and so on).

But few people realize, or many fail to acknowledge, that the large majority of virtualization techniques and developments started as, or remain, Open Source projects.

Where are we coming from?

Even without looking back, we know that IBM was one of the pioneers in the virtualization area; they were talking about Virtual Machines before I was even born. But who remembers one of the first Open Source virtualization takeovers? Back in 1999, Mandrake Software bought Bochs. Yes, that’s nineteen ninety-nine, even before the Y2K hype. Kevin Lawton had been working on the Bochs project together with various other developers since 1994. In 1999, he had also started working on Plex86, also known as FreeMWare.

Back then, Kevin compared Plex86 to other tools such as VMWare, Wine, DOSEMU and Win4Lin. Plex86 has in the meantime been totally reinvented: while at first it was capable of running almost all operating systems, it is now a very lightweight virtual machine designed only to run Linux.

Wine was also a frequently covered topic at different Linux Kongress venues. As its initiators themselves claim, Wine Is Not an Emulator, but it most certainly used to be a key player in the virtualization area. Its attempts to run non-native applications on a different operating system, in this case mostly Windows applications on a Linux platform, didn’t exactly go unnoticed.

However, installing VMWare or Qemu became a much easier alternative to getting an application to run with Wine, and Win4Lin, Wine’s commercial brother, had similar adoption issues. Neither saw successful corporate adoption, and Win4Lin recently reinvented itself as a Virtual Desktop Server product, where it is bound to face a lot of stiff competition.

People who claim desktop virtualization was ‘born in 2007’ obviously missed part of history. Although most Unix gurus claim desktop virtualization has been around for several decades in the form of the X11 system, Open Source alternatives that do the same on other platforms (or cross-platform) have also been around for a while.

Who has never heard of VNC, the most famous product to come out of the Olivetti & Oracle Research Laboratory (ORL) in Cambridge, England? VNC was one of the first tools people used to remotely access Windows machines. System administrators who didn’t feel like running Windows applications on their Unix desktop just hid an old Windows machine under their desk and connected to it using VNC. It was also quickly adopted by desktop users as a tool to take over the desktop of a remote colleague. After the Olivetti & Oracle Research Laboratory closed, various spin-offs of VNC such as RealVNC, TightVNC and UltraVNC popped up, and it is still a pretty actively used tool.
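
Under the hood, VNC speaks the simple RFB (“remote framebuffer”) protocol: display N listens on TCP port 5900 + N, and the server opens every session by sending a 12-byte version banner. A minimal C sketch of that first step (error handling trimmed for brevity):

    /* Connect to a local VNC server (display :0 = port 5900) and read
     * the 12-byte RFB ProtocolVersion banner, e.g. "RFB 003.008\n". */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv;
        memset(&srv, 0, sizeof srv);
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5900);              /* display :0 */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == 0) {
            char banner[13] = { 0 };
            read(fd, banner, 12);                  /* "RFB xxx.yyy\n" */
            printf("server speaks: %s", banner);
        }
        close(fd);
        return 0;
    }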

But VNC wasn’t the only contender in the field. Back in 2003, I ran into NX for the very first time, written by the Italian folks at NoMachine, with a FreeNX release co-existing alongside a commercial offering. At first it was claimed to be yet another X reinvention; however, NX slightly modified the concept and eliminated the annoying X roundtrips. The fact that NX uses proxies on each side of the connection guarantees that it can function even on extremely slow connections.

In the early days of this century, there was some confusion between UML and UMLinux. While Jeff Dike called his User-mode Linux “the port of Linux to Linux”, it was in essence a full-blown Linux kernel running as a process on another Linux machine.

Apart from UML, there was UMLinux, also a user-mode Linux project, featuring a UMLinux machine which booted using LILO and onto which an out-of-the-box Linux distribution could be installed. Two projects, one on each side of the Atlantic, with really similar goals and really similar names: that was simply asking for confusion. In 2003, the UMLinux folks decided to rebrand to FAUmachine, hence ending the confusion once and for all.

Research on virtualization wasn’t conducted exclusively in Germany; the Department of Computer Science and Engineering of the University of Washington was working on the lesser-known Denali project. The focus of the Denali project is on lightweight protection domains; it aims at running hundreds or even thousands of VMs concurrently on one single physical host.

And apparently, one project with a confusing name wasn’t enough; the Open Source community seemed desperate for more. Hence, the Linux-VServer project and the Linux Virtual Server came around at roughly the same time. The Linux Virtual Server actually hasn’t got much to do with virtualization at all: in essence, it is a load balancer that spreads TCP/IP connections over a bunch of other servers, hence acting to the end user as one big, high-performance, highly available virtual server. (The IPVS patch for Linux has been around since early 1999.)
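
The scheduling idea at the heart of such a balancer is easy to sketch. The real IPVS patch does this per connection inside the kernel and also rewrites the packets, but the round-robin core (backend addresses below are made up) boils down to:

    /* Round-robin load balancing in miniature: each new connection is
     * handed to the next backend in the pool. Backend IPs are made up. */
    #include <stdio.h>

    static const char *backends[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
    static const int   nbackends  = 3;
    static int         next;                /* index of the next backend */

    static const char *pick_backend(void)
    {
        const char *b = backends[next];
        next = (next + 1) % nbackends;      /* cycle through the pool */
        return b;
    }

    int main(void)
    {
        for (int conn = 1; conn <= 6; conn++)
            printf("connection %d -> %s\n", conn, pick_backend());
        return 0;
    }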

Linux-VServer (released for the first time in late 2001), on the other hand, provides us with different Virtual Private Servers running in different security contexts. Linux-VServer creates separate user-space segments, so that each Virtual Private Server looks like a real server and can only ‘see’ its own processes.

By then, Plex86 had a big competitor coming from France, where Fabrice Bellard was working on Qemu. At first, Qemu really was a machine emulator: much like Bochs (anyone still running AmigaOS?), you could create virtual machines of totally different architectures. Evidently X86, but also ARM, Sparc, PowerPC, Mips, m68k, and even development versions for Alpha and other 64bit architectures. Qemu, however, was perceived by a lot of people as slow compared to the alternatives. There was an Accelerator module available that provided an enormous performance boost; however, it didn’t have as open a license as the rest of Qemu, which held back its adoption significantly. It was only about a year ago (early 2007) that the Accelerator module also became completely open source.

The importance of Qemu, however, should not be underestimated, as most of the current hot virtualization projects borrow Qemu knowledge or technology left and right. KVM (Kernel-based Virtual Machine) is the most prominent user of Qemu, but even VirtualBox, Xen (in HVM mode) and the earlier-mentioned Win4Lin use parts of Qemu.
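
KVM’s division of labor with Qemu is visible in its design: the kernel module exposes a /dev/kvm device node, and a userspace VMM such as Qemu opens it and drives it with ioctls while supplying all the device emulation itself. A minimal sketch of that first handshake (requires a loaded kvm module and the kernel headers):

    /* Open /dev/kvm and perform the first ioctls a VMM issues:
     * query the API version, then create an empty virtual machine. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);   /* fd for a new, empty VM */
        printf("created VM fd: %d\n", vm);
        close(vm);
        close(kvm);
        return 0;
    }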

As this is an overview of recent Open Source virtualisation history, the focus has been on running virtual machines on Linux, or on connecting to a remote platform from a Linux or Unix desktop, where most of the early developments took place. We shouldn’t fail to mention CoLinux in this regard, however: CoLinux allows you to run Linux as a Windows process, giving people on locked-down desktops an alternative to VMWare for running Linux on their desktop.

Xen is without doubt the most famous open source virtualization solution around, certainly since its acquisition by Citrix. Xen was conceived within the XenoServer project at the University of Cambridge, an initiative aiming to build an infrastructure for distributed computing and a place where one can safely execute potentially dangerous code in a distributed environment. Xen was first described in a paper presented at SOSP in 2003, but work on it began sometime in 2001.

Next week, we’ll talk more about virtualization and open source with a detailed look at today’s landscape.

Filed Under: Featured, Guest Posts Tagged With: 64bit, Accelerator, acquisitions, Alpha, ARM, bochs, citrix, CoLinux, denali, DOSEMU, faumachine, FreeMWare, freenx, IBM, Jeff Dike, Kevin Lawton, kvm, linux, linux kernel, Linux Kongress, Linux Virtual Server, Linux-VServer, m68k, Mandrake, Mips, nomachine, nx, Olivetti & Oracle Research Laboratory, open source, ORL, OS, Plex86, PowerPC, qemu, RealVNC, SOSP, sparc, TightVNC, UltraVNC, UML, UMLinux, Unix, User Mode Linux, virtual desktop, virtual machines, Virtual Private Server, VirtualBox, virtualisation, virtualization, vnc, Win4Lin, windows, wine, X11, X86, Xen, xenoserver, xensource
