Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Guest Posts

Roger Baskerville Leaves Citrix / XenSource

November 2, 2008 by Kris Buytaert

Roger Baskerville has left Citrix, where he started out as Sales Director of XenSource EMEA and, after the Citrix acquisition, became Regional Director Northern Europe Server Virtualization.

During his years at XenSource, Roger was one of the first to push Xen, and later XenEnterprise, commercially.

Roger has now joined Vizioncore as Vice President for EMEA, responsible for EMEA operations; he leads the sales, marketing and systems engineering teams based across the region. Baskerville has held a variety of senior, channel-focused EMEA sales leadership positions with both mature high-tech organizations and start-up operations; previous companies include LightPointe, Palm, Compaq and NCR. A seasoned industry speaker who is both technically astute and sales-focused, Baskerville brings with him a wealth of experience in virtualization and international sales.

Earlier this year Quest fully acquired Vizioncore as part of its push into virtualization, having already owned a minority stake in the company. Vizioncore was then described as the leading provider of disaster recovery and other products for virtual infrastructure management.

Today their website reads:
“VizionCore Inc. provides software that helps organizations safeguard and optimize their virtualized environments and allows them to extract the maximum return on their investment in the VMware platform. Vizioncore’s software products support essential IT strategies, including business continuity, high availability and disaster recovery.”

With this move Roger stays in the virtualization world, where he has worked for the past couple of years, though he moves from a fully open-source technology back to a proprietary environment.

The bigger question, however, is who else will be leaving Citrix/XenSource, and when? XenSource has been part of Citrix for about a year now, and there may be other people jumping ship.

Filed Under: Featured, Guest Posts, People Tagged With: citrix, citrix xenserver, Citrix XenSource, industry moves, recruitment, Roger Baskerville, virtualisation, virtualization, Vizioncore, XenEnterprise, xensource

The Future of Xen At Red Hat

October 28, 2008 by Kris Buytaert

As you might know, most of the development for the upcoming Red Hat releases happens in the Fedora project, so if you want to keep an eye on what’s going to happen in future Red Hat releases, Fedora is a good place to look.

Reuven pointed out that the next Fedora release (Fedora 10) won’t have Dom0 support.

This, however, is not yet a strategic decision by Red Hat following its purchase of Qumranet, and thus KVM, but merely a matter of the work not being ready before the release has to ship.

Xen is typically developed against an older 2.6.18 Linux kernel release, and forward-porting those features is a time-consuming effort; the kernel-xen package in Fedora has always lagged a bit behind the regular kernel package. So the slow introduction of paravirt_ops is keeping Dom0 support out of Fedora 10.

The idea behind paravirt_ops is to build a kernel structure that provides an interface to a virtualization layer, any virtualization layer, so that the same kernel can run both on a hypervisor and on the actual hardware. Initial support for Xen, VMware and KVM is available.
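
Conceptually, paravirt_ops boils down to a table of function pointers for privileged operations that gets filled in at boot, either with native instructions or with hypercalls. A deliberately simplified, hypothetical C sketch of the idea (the real kernel structure is far larger and gets patched inline for performance):

```c
#include <stdio.h>

/* Hypothetical, much-simplified illustration of the paravirt_ops idea:
 * a table of function pointers for privileged operations, filled in at
 * boot depending on whether the kernel finds itself on bare metal or
 * on a hypervisor. */
struct pv_ops {
    void (*write_cr3)(unsigned long pfn); /* switch page tables */
    void (*halt)(void);                   /* idle the CPU       */
};

static void native_write_cr3(unsigned long pfn) {
    printf("native: mov %%cr3, %lx\n", pfn);          /* real privileged instruction */
}
static void native_halt(void) { printf("native: hlt\n"); }

static void xen_write_cr3(unsigned long pfn) {
    printf("xen: hypercall mmu_update(%lx)\n", pfn);  /* ask the hypervisor instead */
}
static void xen_halt(void) { printf("xen: hypercall sched_op(yield)\n"); }

static struct pv_ops pv;

int main(void) {
    int on_hypervisor = 1; /* in the real kernel this is detected at boot */

    if (on_hypervisor)
        pv = (struct pv_ops){ xen_write_cr3, xen_halt };
    else
        pv = (struct pv_ops){ native_write_cr3, native_halt };

    /* The rest of the kernel calls through the table and never needs
     * to know where it is running. */
    pv.write_cr3(0x1234);
    pv.halt();
    return 0;
}
```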

Today, running Fedora 10 as a host for your virtual machines will give you KVM as your only option; Fedora 11 should solve that problem again. But then again, you probably don’t want to be running Fedora in a production environment anyway; Red Hat Enterprise Linux or CentOS 5 are much more viable alternatives.
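
For the curious, libvirt (the virtualization API that Fedora ships) can tell you which hypervisor driver a host exposes. A minimal sketch, assuming the libvirt development headers are installed; on a stock Fedora 10 host this should report the QEMU driver, i.e. KVM, while a Xen Dom0 would report Xen:

```c
#include <stdio.h>
#include <libvirt/libvirt.h>

/* Ask libvirt which hypervisor driver backs the local host connection.
 * Build with: gcc check.c -lvirt */
int main(void) {
    virConnectPtr conn = virConnectOpenReadOnly(NULL); /* default local URI */
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the local hypervisor\n");
        return 1;
    }
    printf("hypervisor driver: %s\n", virConnectGetType(conn));
    virConnectClose(conn);
    return 0;
}
```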

Filed Under: Guest Posts Tagged With: Fedora, kvm, RedHat, Xen

Guest Post: Clouds, Networks and Recessions

October 13, 2008 by Robin Wauters

This is a cross-post of a blog article written by Gregory Ness, former VP of Marketing at Blue Lane Technologies, who currently works for Infoblox.

Over the last three decades we’ve watched a meteoric rise in processing power and intelligence in network endpoints and systems drive an incredible series of network innovations; and those innovations have led to the creation of multi-billion dollar network hardware markets.  As we watch the global economy shiver and shake we now see signs of the next technology boom: Infrastructure2.0.

Infrastructure1.0: The Multi-billion Dollar Static Network

From the expansion of TCP/IP in the 80s and 90s and the emergence of network security in the mid-to-late 90s to the evolution of performance and traffic optimization in the late 90s and early 00s, we’ve watched the net effects of ever-changing software and system demands colliding with static infrastructure. The result has been a renaissance of sorts in the network hardware industry, as enterprises installed successive foundations of specialized gear dedicated to the secure and efficient transport of an ever-increasing population of packets, protocols and services. That was and is Infrastructure1.0.

Infrastructure1.0 made companies like Cisco, Juniper/NetScreen, F5 Networks and, more recently, Riverbed very successful. It established and maintained connectivity between ever-increasing global populations of increasingly powerful network-attached devices. Its impact on productivity and commerce is comparable to the advent of oceanic shipping, paved roads and railroads, electricity and air travel. It has shifted wealth and accelerated activity on a level that perhaps has no historical precedent.

I talked about the similar potential economic impact of cloud computing in June, comparing its future role to the shipment of spices across Asia and the Middle East before the rise of oceanic shipping. One of the key enablers of cloud computing is virtualization, and our early experiences with data center virtualization have taught us plenty about the potential impact of clouds on static infrastructure. Some of these impacts will be felt on the network and others within the cloudplexes.

The market caps of Cisco, Juniper, F5, Riverbed and others will be impacted by how well they can adapt to the new dynamic demands challenging the static network.

Virtualization: The Beginning of the End of Static Infrastructure

The biggest threat to the world of multi-billion dollar Infrastructure1.0 players is neither the prospect of a protracted global recession nor the emergence of a robust population of hackers threatening increasingly lucrative endpoints. The biggest threat to the static world of Infrastructure1.0 is the promise of even higher factors of change and complexity on the way as systems and endpoints continue to evolve.

More fluid and powerful systems and endpoints will require either more network intelligence or even higher enterprise spending on network management.

This became especially apparent when VMware, Microsoft, Citrix and others in virtualization announced their plans to move their offerings into production data centers and endpoints. At that point the static infrastructure world was put on notice that its habitat of static endpoints was on its way into the history books. I blogged about this (sort of) at Always On in February 2007, making a point about the difficulty static network security has keeping up with mobile VMs.

The sudden emergence of virtualization security marked the beginning of an even greater realization that the static infrastructure built over three decades was unprepared for supporting dynamic systems.  The worlds of systems and networks were colliding again and driving new demands that would enable new solution categories.

The new chasm between static infrastructure and software, now disconnected from hardware, is much broader than virtsec, and will ultimately drive the emergence of a more dynamic and resilient network, empowered by continued application-layer innovations and the integration of static infrastructure with enhanced management and connectivity intelligence.

As Google, Microsoft, Amazon and others push the envelope with massive virtualization-enabled cloudplexes revitalizing small-town economies, and whoever else rides the clouds, they will continue to pressure the world of Infrastructure1.0. More sophisticated systems will require more intelligent networks. That simple premise is the biggest threat today to network infrastructure players.

The market capitalizations of Cisco, Juniper, F5 and Riverbed will ultimately be tied to their ability to service more dynamic endpoints, from mobile PCs to virtualized data centers and cloudplexes.  Thus far, the jury is still out about the nature and implications of various partnership announcements between 1.0 players and virtualization players.

As enterprises scale their networks to new heights, they are already seeing evidence of the stresses and strains between static infrastructure and more dynamic endpoint requirements. A recent Computerworld research report on core network services already shows larger networks paying a higher price (per IP address) for management. Back in grad school we called that a diseconomy of scale; today, in the networked world, I think it is one of the four horsemen of Infrastructure1.0 obsolescence. Those who cannot adapt will lose.

Virtsec as Metaphor for the New Age

Earlier this year VMware announced VMsafe at VMworld in Cannes, yet at the recent VMworld conference mere months later the virtsec buzz was noticeably absent. The inability of the VMsafe partners to deliver on the promise of virtualization security was a major buzz killer, and I think it may be yet another harbinger of things to come for all network infrastructure players. The issue is infinitely larger than virtsec.

I suspect that the VMsafe gap between expectations and reality drove production virtualization into small hypervisor VLAN pockets, limiting the payoff of production virtualization and, I think, impacting VMware’s data center growth expectations. That gap was based on the technical limitations of Infrastructure1.0 more than on any other factor. It also didn’t help the 1.0 players grow their markets by addressing these new demands. The result was a slowdown in production virtualization, a huge potential catalyst for IT with new economies of scale and potential.

The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks.

Enterprises already incurring diseconomies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow.  Rising management costs will further compromise the economics of static network infrastructure.

The virtsec dilemma was clearly a case of static netsec meeting dynamic software capable of moving across security zones or changing states. There are more dilemmas on the way: picture the demand-collision chart of the last three decades, add cloud and virtualization in the upper right, and kink the demands line up even higher.

If you take a step back and look at the last thirty years, you’ll see a series of big-bang effects from collisions between TCP/IP and application demands. As we look forward five years into a haze of economic uncertainty, maybe it’s a proper time to take heed that the new demands of movement and change posed by virtualization and cloud computing need to be addressed sooner rather than later.

If these demands are not addressed, more enterprise networks will face diseconomies of scale as TCP/IP proliferates. They’ll experience additional availability and security challenges and, when the haze clears, will emerge at a competitive disadvantage after years of overpaying for fundamental things like IP address management (IPAM). Most enterprises today still manage IP addresses with manual updates and spreadsheets, and pay the price, according to Computerworld research. How will that support increasing rates of change?

The Emergence of Connectivity Intelligence

As I mentioned, one of the biggest challenges of virtsec was the inability of network appliances to see VMs and keep track of them as they move around inside a virtualized blade-server environment: racks and stacks of powerful commodity servers deployed as a fluid pool that can add or remove servers/VMs on short notice, and which therefore operate with less power than a conventional data center, where each server runs a unique application or OS and has to be powered 24/7.

The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale.

A dynamic infrastructure would empower a new level of synergy between new endpoint and system initiatives (consolidation, compliance, mobility, virtualization, cloud) and open new markets for existing and emerging infrastructure players.  Cisco, Juniper, F5 Networks, Riverbed and others who benefited from the evolving collisions between TCP/IP and applications could then benefit from the rise of virtualization and enterprise and service provider versions of cloud, versus watching it from the sidelines.

The Rise of Core Net Service Automation

That connectivity intelligence requirement will make core network service automation (DNS, DHCP and IPAM, for example) strategic to Infrastructure2.0. Most of these services are managed manually today, which means network and systems are connected and adjusted by hand. More change will mean more cost, more downtime and less budget for static infrastructure.

These networks need dynamic reachability (addressing and naming) and visibility (status and location) capabilities. In essence, I’m advocating the evolution of a central nervous system for the network, capable of delivering commands and feedback between endpoints, systems and infrastructure; at its core it would be a kind of digital positioning system (DPS) enabling access, policy, enforcement and flexibility without ongoing, tedious manual intervention.
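
To make that concrete, here is a deliberately toy, hypothetical C sketch of what such a registry might look like; none of the names or structures come from a real product, it only illustrates the publish-and-query pattern behind the DPS idea:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the "digital positioning system" idea (not any
 * real product's API): endpoints publish their reachability (name,
 * address) and visibility (status, location), and infrastructure devices
 * query the registry instead of relying on static configuration or
 * manually maintained spreadsheets. */
struct endpoint {
    const char *name;     /* DNS name                  */
    const char *address;  /* IP address (IPAM)         */
    const char *status;   /* up, down, migrating, ...  */
    const char *location; /* which hypervisor/rack now */
};

static struct endpoint registry[64];
static int count;

/* A hypervisor announces an endpoint's current state, e.g. after a
 * live migration moves a VM and its address changes. */
static void publish(struct endpoint e) {
    for (int i = 0; i < count; i++)
        if (strcmp(registry[i].name, e.name) == 0) { registry[i] = e; return; }
    registry[count++] = e;
}

/* A firewall or load balancer asks: where is this endpoint right now? */
static const struct endpoint *lookup(const char *name) {
    for (int i = 0; i < count; i++)
        if (strcmp(registry[i].name, name) == 0) return &registry[i];
    return NULL;
}

int main(void) {
    publish((struct endpoint){"crm-vm", "10.0.1.15", "up", "hypervisor-a"});
    /* the VM migrates; the registry, not a spreadsheet, tracks it */
    publish((struct endpoint){"crm-vm", "10.0.2.40", "up", "hypervisor-b"});

    const struct endpoint *e = lookup("crm-vm");
    if (e != NULL)
        printf("%s is %s at %s on %s\n", e->name, e->status, e->address, e->location);
    return 0;
}
```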

In recent email exchanges with Rick Kagan and Stuart Bailey (both also at Infoblox), Stuart recommended Morville’s “Ambient Findability”. I soon found out why. The following is from the online Amazon review:

“The book’s central thesis is that information literacy, information architecture, and usability are all critical components of this new world order. Hand in hand with that is the contention that only by planning and designing the best possible software, devices, and Internet, will we be able to maintain this connectivity in the future.”

In a recessionary scenario these labor-intensive strains will only get worse as budgets and resources are trimmed. Rising infrastructure TCO will impact the success of the infrastructure players as well as VMware, Microsoft and others, just as virtsec friction has already impacted VMware. The virtualization players will be forced to build or acquire application-layer and connectivity intelligence as a means of survival; they may not wait for the static team to convert to a more fluid vision.

That is why the fates of the static infrastructure players (and of IT) will be increasingly tied to their ability to make their solutions more intelligent, dynamic and resilient. Without added intelligence, today’s network players will benefit less and less from ongoing innovations that show no sign of slowing, and the impact of a recession would be made even more severe.

Filed Under: Guest Posts Tagged With: cloud computing, Greg Ness, Gregory Ness, guest post, networks, recession, virtualisation, virtualization

KVM Lives On At Red Hat, So Now What?

September 27, 2008 by Kris Buytaert

Over a year after the first big open-source virtualization acquisition, Citrix acquiring XenSource, the next industry-shaking acquisition is a fact: Red Hat has reeled in virtualization startup Qumranet. While Red Hat had already announced that it was going to support both KVM and Xen in its product range, taking over Qumranet sounds like a really strange thing to do to some people; after all, apart from its work on KVM as the underlying open-source component of its product, Qumranet is a pretty proprietary software company.

Qumranet understood that the bare-metal, low-level virtualization layer was not going to bring them any money any time soon. There were going to be plenty of free and Free alternative virtualization layers out there anyhow, so why keep theirs secret rather than let it flourish as a community product, and contribute back to the Linux kernel community while at it?

On the other hand, Qumranet’s products were closed; although based on Linux, its business was in selling a VDI solution to bigger customers. The question now becomes how this kind of product range will fit into Red Hat’s traditional open-source offering. Red Hat has a long history of open-sourcing everything it does: obviously there are Red Hat Linux and JBoss, but also the proprietary directory server it bought from Netscape and later open-sourced. Sometimes it takes a while, as with the Satellite product, but the track record is good. So most parts of the SolidICE product line will probably be open-sourced, but will they grow a community?

Lots of people ask themselves whether Red Hat was interested in the VDI infrastructure or just wanted the KVM kernel developers on board. The fact is that Red Hat now has a direct entry into managing Windows desktops, a market previously closed to it, and that makes it an interesting move. As of now, managing a Windows box is just managing a file on a Linux server: easy to copy, easy to replace.

With Red Hat clearly preferring KVM over Xen in the future, what’s going to happen with Xen in the other distributions?
The 451 Group reports that
“Novell insists Xen is its hypervisor of choice and it remains committed to the virtualization software and project,”
but as we all know, Novell will be working on other interoperability challenges too.

With Oracle’s Unbreakable Linux being a Red Hat derivative, the future of virtualization in Unbreakable becomes an interesting topic.
Oracle clearly chose Xen as its favourite virtualization technology earlier, and given that it will be hosting the next North American Xen Summit, Oracle seems to plan on continuing to build its platform on Xen.

To close off, there’s also the question of the people at Qumranet. Qumranet was co-founded by serial virtualization entrepreneur Moshe Bar, who previously also co-founded Qlusters and XenSource. What’s he going to do: will he stay around at Red Hat, or will he refocus on his other startup, Sullego?

In a couple of weeks XenSource will celebrate its first anniversary at Citrix; let’s see what happens then.

Filed Under: Acquisitions, Guest Posts, People, Rumors Tagged With: citrix, Oracle VM, qumranet, RedHat, SolidICE, xensource

openQRM 4.1 Released With Support for KVM

September 16, 2008 by Kris Buytaert

Matt just sent us a mail to let us know that the openQRM team has released a fresh openQRM 4.1.

After the initial 4.0 release of the “next generation” of openQRM, rewritten in PHP, the new release comes with some nice new features, the most important ones being support for KVM virtualization and a new image-shelf plugin that provides ready-made, ready-to-deploy server images to get started easily.
KVM was added on top of the already supported virtualization platforms such as Xen, Linux-VServer and VMware.

The 4.1 version also provides lots of usability enhancements (shorter GUI sequences, meaning fewer mouse clicks) and some security and other bug fixes, as documented in the project’s bug tracker.

Binary packages (RPM and DEB) for CentOS 5, openSUSE 10.3, Debian 4.0 and Ubuntu 8.04 are available here.

No cloud rebranding here, however, although openQRM fits perfectly under the cloud umbrella.

Filed Under: Guest Posts, News Tagged With: cloud, kvm, openqrm, vmware, Xen

Citrix To Jump On Cloud Wagon, But How?

September 10, 2008 by Kris Buytaert

Tarry is hinting at a “big” announcement that Citrix will make on September 15th. He reveals nothing, apparently having signed an NDA, but hints that the news concerns his recent topic of focus.

Tarry’s blog recently shifted from pure virtualization news to reports on virtualization and cloud computing. So our bet is that Citrix will be jumping on the “Cloud Wagon”, or should we say “Cloud Hype”, sometime next week. And why shouldn’t they?

(Update: one of our commenters suspects an acquisition of some sort, and that’s not unlikely.)

Citrix has been in the business of remotely accessing applications and managing such environments since it started out, so it would make perfect sense for them to rebrand their whole product line from Citrix to Xen … and then to “XenCloud”.

Oh, and Intel obviously will announce a new chip, called the CloudCore, no more need to buy an octocore CPU, Intel will instead host them for you. 🙂

On the other hand: given next week’s VMWorld event, Citrix and Intel might also be announcing some real news to steal some of VMware’s thunder.

What’s your guess?

Filed Under: Featured, Guest Posts, Rumors Tagged With: acquisition, announcement, citrix, Citrix Xen, cloud, cloud computing, cloud wagon, cloup hype, rumor, Rumors, Tarry Singh, virtualisation, virtualization, Xen, XenCloud

