Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


hardware

Wind River Releases Proprietary Hypervisor For Hardware Virtualization

June 16, 2008 by Robin Wauters Leave a Comment

Wind River Systems, the California, US-based provider of Device Software Optimization (DSO), today introduced a multicore software solution for device development, which aims to help companies solve complex business challenges by taking advantage of multicore processing and virtualization.


Alongside today’s announcement, Wind River said it will introduce a scalable hypervisor that aims to enable virtualization for devices across a broad range of vertical markets, including networking, industrial and consumer devices. Virtualizing the hardware allows multiple operating environments to share the underlying processing cores, memory and other hardware resources.

Wind River’s hypervisor will incorporate the same design practices and technology that Wind River uses for its products, such as Multiple Independent Levels of Security (MILS). The hypervisor will be tightly integrated with VxWorks and Wind River Linux, and can support a variety of other operating systems. It will be available for early access in August 2008.

Filed Under: News Tagged With: Device Software Optimization, device virtualization, hardware, hardware virtualization, Hypervisor, MILS, multicore software, Multiple Independent Levels of Security, virtualisation, virtualization, VxWorks, Wind River, Wind River Linux, Wind River Systems

The Gap Between Hardware and Software

April 7, 2008 by Robin Wauters 1 Comment

Interesting read over at EE Times Asia, titled “IC industry addresses multicore, programming software gap”.

An excerpt:

“The semiconductor industry is starting to address what’s being called a software gap between a rising tide of multicore processors and a lack of parallel programming tools and techniques to make use of them.

The gap came into stark focus in the embedded world at the Multicore Expo, where chipmakers Freescale Semiconductor, Intel Corp., MIPS and a handful of silicon startups sketched out directions for their multicore products. Others warned that the industry has its work cut out for it delivering the software that will harness the next-generation chips.”

“There is a major gap between the hardware and the software,” said Eric Heikkila, director of embedded hardware research at Venture Development Corp. (VDC).

About 55 % of embedded system developers surveyed by VDC said they are using or will use multicore processors in the next 12 months. That fact is fueling the company’s projections that the market for embedded multicore processors will grow from about $372 million in 2007 to $2.47 billion in 2011.

In the PC market, the figures are even more dramatic. About 40 % of all processors Intel shipped in 2007 used multiple cores, but that will rise to 95 % in 2011, said Doug Davis, general manager of Intel’s embedded group.

But on the software side, vendors reported that only about 6 % of their tools were ready for parallel chips in 2007, a figure that will only rise to 40 % in 2011, VDC said. As much as 85 % of all embedded programming is now done in C or C++, languages that are “difficult to optimize for multicore,” said Heikkila.

Standardization

The Multicore Association announced at the Multicore Expo it has completed work on an applications programming interface for communications between cores, and is now working to define a standard for embedded virtualization.

“The ultimate goal of every computer scientist is to create a new language, but my personal view is we should not do it this time around,” said Wen-mei Hwu, a veteran researcher in parallel programming and professor of engineering at the University of Illinois at Urbana-Champaign, referring to a flowering of languages developed for big parallel computers two decades ago, many of which never gained traction. “I believe there will be new language constructs in C/C++ to support some of the new frameworks people will develop, but even these constructs, if we are not careful, will not be widely adopted,” Hwu said. “Ultimately, I think we will make a small amount of extensions to C, but I think it’s too early.”

On-chip fabric

For their part, Freescale and Intel sketched out design trends they see on the horizon for their multicore chips.

“Freescale is now sampling the first dual-core versions of its PowerQuicc processors, aimed at telecom OEMs. The chips are part of a family that will eventually scale to 32-core devices”, said Dan Cronin, VP of R&D for Freescale’s networking division.

The processors will use a new on-chip interconnect fabric. They will also embed in hardware a hypervisor, a kind of low-level scheduling unit, co-developed with IBM according to specs set in the Power.org group. “Freescale will release an open source reference design for companies that want to build virtualization software that taps into the hypervisor”, Cronin said.

[Source: VMBlog]

Filed Under: News Tagged With: embedded hypervisors, Freescale, gap, hardware, intel, Mips, Multicore Expo, software, virtualisation, virtualization

Dell Reportedly Plans To Give Away VMware ESX Server 3i For Free, World Keeps Turning

March 15, 2008 by Robin Wauters 4 Comments

According to The Inquirer, Dell is considering dropping the VMware ESX Server 3i licensing fees it charges customers on its PowerEdge servers. This was reportedly said by VMware Senior Product Marketing Manager Martin Niemer and comes two weeks after the virtualization vendor announced it would start embedding the 32 MB hypervisor across Dell, Fujitsu-Siemens, HP and IBM servers.

This doesn’t come as a big surprise: VMware had noted in the earlier announcement that hardware vendors would be free to choose what premium, if any, they would charge end customers. If Dell comes through, expect the other hardware vendors to follow suit and cut the price of including the hypervisor significantly (or even to zero), especially with Microsoft’s aggressively priced Hyper-V hypervisor on its way.

But don’t expect this to have a serious impact on the bottom line of the whole VMware reseller channel, as some blogs are already proclaiming. The real money is in the enterprise offering and upgrades anyway, and the smaller distributors and resellers have advantages beyond pricing when it comes to SMB offerings.

Filed Under: Rumors Tagged With: Dell, Fujitsu-Siemens, hardware, HP, Hyper-V, Hypervisor, IBM, microsoft, Microsoft Hyper-V, MS, OEM, servers, virtualisation, virtualization, vmware, VMware ESX, VMware ESX Server 3i

Microsoft’s Ray Ozzie On Cloud & Utility Computing

March 10, 2008 by Robin Wauters Leave a Comment

Interesting interview up on GigaOM today, featuring Microsoft‘s Chief Software Architect and industry luminary Ray Ozzie talking about MS’s strategy, the economics of cloud computing and the relevance of desktop and infrastructure challenges.


The most interesting bits:

OM: The costs of computing, hardware and bandwidth are dropping quickly. Do you believe that the cost will come down fast enough to make cloud computing actually a profitable business?

RAY OZZIE: Well, it’s unlikely that we would get into it if we didn’t think it was going to be a profitable business. So we’ll just manage it to be profitable. It’s going to have different margins than classic software, or the ad (-supported) business. But, we have every reason to believe that it will be a profitable business. It’s an inevitable business. The higher levels in the app stack require that this infrastructure exists, and the margins are probably going to be higher in the stack than they are down at the bottom.

…

OM: When do you think utility computing can be a profitable business; are we looking at maybe two years, four years out before it actually starts to become a profitable entity?

RAY OZZIE: (Let’s) take (one company) who is in the market today: Amazon. They chose a price point. There are either customers at that price point or not. They may have priced themselves at expected costs as opposed to actual today costs, but it doesn’t really matter. They could have brought it out at twice the existing price and there still would have been a customer base, and they’d be making money at birth.

I think all of these utility-computing services, as they’re born, will either be breaking even or profitable. At the scale that we’re talking about, nobody can afford, (even Microsoft) can’t afford to do it at a loss. We could subsidize it, I suppose. Google could subsidize it by profits in other parts of their business, we could subsidize it, but I don’t think there’s any reason that any of us in this world would bring out infrastructure like this without charging for what we’re paying, and then trying to make some profit over it. The cost base is so high in terms of building these data centers that you do want to kind of make it up.

Read the rest of the (edited) interview here.

Filed Under: Interviews, People Tagged With: cloud computing, computing, Google, hardware, microsoft, MS, Ray Ozzie, utility computing, virtualisation, virtualization

BT Global Services To Virtualize Its Data Centre Network

February 27, 2008 by Robin Wauters 2 Comments

Interesting case study from BT, delivered today by Stefan Overtveldt, BT Global Services Vice President and Head of IT Transformation Practice, at VMworld Europe (check our video coverage): the company is using virtualization technologies to run its network of data centres around the globe more effectively, improve customer service delivery and save no less than 50% on its overall running costs.


Having already deployed its virtual data centre (VDC) concept to 11 of its 58 data centres around the world, Stefan Overtveldt said the ongoing work and savings would also ultimately help it serve its business customers better. He said VDC first became attractive when BT began to run out of data centre space at the same time as the management of its 3,500 internal and 1,400 customer platforms – made up of applications, operating systems and an average of 10 servers each – became increasingly complex.

BT responded with a classic server consolidation strategy, deploying virtualization technologies from VMware and its competitors in the data centres to reduce its reliance on multiple physical servers. In doing so, it was also able to take advantage of the dynamic provisioning capabilities that a virtual infrastructure running in a standardized, distributed environment can offer.

Overtveldt concluded:

“We managed to reduce our overall costs by 50 % and the time it takes to provision new server, storage and network capacity down to hours from days. And our model of extreme standardization means any new internal application requirements outside of the virtual data centres have to have a strong business case first.”

[Source: ITPro]

Filed Under: News, People Tagged With: British Telecom, BT, data centre, data centres, hardware, servers, Stefan Overtveldt, virtualisation, virtualization, vmware, VMWorld, VMWorld 2008, VMWorld Europe, VMWorld Europe 2008

VMWare To Embed VMware ESX 3i Hypervisor Across Dell, Fujitsu-Siemens, HP and IBM Servers

February 27, 2008 by Robin Wauters 2 Comments

Today at VMWorld Europe 2008 (watch our video reports), VMware announced agreements to embed the VMware ESX 3i hypervisor in servers from Dell, Fujitsu Siemens Computers, HP and IBM. System providers are expected to begin shipping servers embedded with the VMware ESX 3i hypervisor within the next 60 days.


From the release:

“We are very excited to be partnering with Dell, Fujitsu-Siemens, HP and IBM to proliferate virtualization and fast-track customers on the path to running a self-managing virtual datacenter,” said Diane Greene, president and chief executive officer of VMware. “Customers can now get VMware pre-integrated and pre-configured for the hardware platform of their choice for immediate standalone server consolidation. As customers want to expand their adoption and get more value from virtualization, they can upgrade from the ESX 3i hypervisor to VMware’s complete datacenter virtualization and management suite, VMware Infrastructure 3 (VI3).”

“VI3 provides automatic load balancing, business continuity, power management and the ability to move a virtual machine across physical machines with no service interruption. In addition, customers who are already using VI3 can plug-and-play virtualization-enabled servers into their datacenters to dynamically and automatically expand the pool of resources (CPU, memory, and networking) available to meet their changing business requirements.”

Greene and other VMware executives highlighted the new VMware ESX 3i agreements with representatives from Dell, HP and IBM during today’s VMworld Europe general session keynote presentations. The presentations will be available via a recorded webcast (http://www.vmware.com/go/europe-webcast) by 3:00 p.m. Central European Time and 9:00 a.m. Eastern Standard Time today.

Filed Under: Featured, News, Partnerships Tagged With: Dell, Fujitsu-Siemens, hardware, HP, IBM, server, servers, virtualisation, virtualization, vmware, VMware ESX, VMware ESX 3i, VMware ESX 3i Hypervisor, VMWare ESX Server
