This article is part of a series of guest posts by investor, Open Source pioneer, and creator of the Concurrent Versions System (CVS), Brian Berliner. The original posts, recapping much of what was said at the IDC Virtualization Forum West in San Francisco, have also been published on Brian’s blog.
At the IDC Virtualization Forum West conference, John Humphreys, Program Vice President of the Enterprise Platform Group at IDC, shared some more detailed thoughts on the virtualization market (great presentation, BTW).
Further takeaways include:
- Of the customers already doing virtualization in their data centers today, IDC says that 22% of servers have been virtualized, with an expected rise to 45% in 12 months. Note that Gartner puts the overall virtualized share of the server market at 5%. To me, that means there is a whole lot of headroom for virtualized server growth.
- Power and cooling account for $0.50 of every $1 spent on servers, or about $29 billion annually.
- Roughly $8 is spent on maintenance for every $1 spent on new infrastructure.
- “Server consolidation” already appears to be old news. Now “desktop consolidation” is hot – serving up the desktop client image from a central location, with all the centralized admin goodness that comes from that. IDC notes a number of challenges: moving the desktop client images into the data center increases storage costs by 20-30% (I would think it would be much more, personally) due to the additional network storage requirements; there are still challenges with running the virtualized OS legally (if you are not already a Software Assurance volume-pricing customer, that is – who wants to buy another retail copy of Windows just to serve it up from the central data center?); and the performance of the remote desktop protocols can be poor for some client workloads. IDC specifically mentioned Qumranet and their SPICE remote connection protocol as potentially addressing some of these performance issues.
- Virtualization appears to be solving the complexity problems that surround the deployment of “clusters” in the data center. And I completely agree. I’ve set up many clusters, and they are way too complicated – and virtualizing is way too easy by comparison. Death to clusters!
My thought on the Virtual Desktop Infrastructure topic: today’s desktop computers are extremely powerful and should not be used as dumb terminals that just do “Remote Desktop” access. You need a hybrid approach that lets you use the power of the desktop client (and all that lovely disk drive space on the client). Once clients start being delivered with a built-in hypervisor (which is not too far away), you could argue that the client can be treated like a server. Then it is just a matter of managing the Virtual Hard Disk images. Using a CacheFS would be one very easy way to do that: transparent local storage that can be taken offline, with automatic server-based backing I/O.
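To make that CacheFS idea a little more concrete, here is a minimal sketch in Python of the read-through caching pattern I have in mind for Virtual Hard Disk images: reads are served from local client storage when possible, and fall back to (and populate from) the server-backed copy otherwise. The paths, class, and method names are my own illustrative assumptions, not part of any product mentioned in this post.

```python
import shutil
from pathlib import Path


class VhdCache:
    """Read-through cache for VHD images: local disk first, server share as backing store.
    (Illustrative sketch only; paths and names are hypothetical.)"""

    def __init__(self, cache_dir: str, server_dir: str):
        self.cache_dir = Path(cache_dir)    # fast local disk on the client
        self.server_dir = Path(server_dir)  # mounted data-center share (may be unreachable offline)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def open_image(self, name: str) -> Path:
        """Return a local path for the named VHD, pulling it from the server on a cache miss."""
        cached = self.cache_dir / name
        if cached.exists():
            return cached                    # cache hit: works even when offline
        remote = self.server_dir / name
        if remote.exists():
            shutil.copy2(remote, cached)     # cache miss: copy the image down once
            return cached
        raise FileNotFoundError(f"{name} is not cached locally and the server copy is unreachable")

    def write_back(self, name: str) -> None:
        """Push a locally modified image back to the server when connectivity returns."""
        shutil.copy2(self.cache_dir / name, self.server_dir / name)


# Hypothetical usage: desktop images live on a central share and are cached on the client.
# cache = VhdCache("/var/cache/vhd", "/mnt/datacenter/vhd-images")
# image_path = cache.open_image("engineering-desktop.vhd")
```

A real CacheFS does all of this transparently at the filesystem layer, of course; the point is simply that the client’s local disk absorbs most of the I/O while the data center remains the system of record.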
The Citrix folks have an interesting approach to this, including both the ability to “stream” an application load to a diskful and stateful Windows client, OR to deliver a server-hosted virtual machine through a remote protocol connection. Choice. Choice is good, as one size will not fit all customer environments for client desktop management. Check out the Citrix Delivery Center.
[Original post can be found here]