Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


cloud computing

3Tera Aims To Bring XenApp and XenDesktop Into The Cloud

October 29, 2008 by Robin Wauters

3Tera, a provider of cloud computing technology and utility computing services, announced today that it will demonstrate Citrix XenApp and Citrix XenDesktop deployed and running in a cloud environment for the first time, using 3Tera’s AppLogic Cloud Computing Platform. The demonstration takes place during Citrix Summit in Orlando, an invitation-only event where Citrix partners learn about the latest products and technologies for application delivery infrastructure.

Customers of all sizes will be able to take advantage of this technology, both in public Internet-based clouds and as a commercial, enterprise-strength platform for clouds deployed behind the corporate datacenter firewall. The initial focus will be collaboration with Citrix and its solution partners to deliver application virtualization and desktop management and virtualization, with the added portability and scalability that cloud computing enables.

Filed Under: News Tagged With: 3 Tera, 3Tera, 3Tera AppLogic, 3Tera AppLogic Cloud Computing Platform, AppLogic, AppLogic Cloud Computing Platform, citrix, Citrix XenApp, Citrix XenDesktop, cloud computing, cloud services, virtualisation, virtualization, XenApp, XenDesktop

Microsoft Launches Cloud Platform, Dubs It Windows Azure

October 27, 2008 by Robin Wauters

Ray Ozzie opened Microsoft PDC ’08 this morning with a keynote speech announcing Windows Azure, Microsoft’s “Windows in the cloud” (press release here). It is a new service-based operating environment, which he described as a massive, highly scalable service platform. What is being released today is just a fraction of what it will become: Microsoft’s most scalable system, enabling people and companies to create services on the Web.

Ozzie described how this platform combines cloud-based developer capabilities with storage, computational and networking infrastructure services, all hosted on servers operating within Microsoft’s global datacenter network. This provides developers with the ability to deploy applications in the cloud or on-premises and enables experiences across a broad range of business and consumer scenarios.

Mary-Jo Foley offers a ‘guide for the perplexed’.

Microsoft did not disclose pricing, licensing or timing details for Azure. The company is planning to release a Community Technology Preview (CTP) test build of Azure to PDC attendees on October 27.

More details later.

Filed Under: Featured, News Tagged With: Azure, cloud, cloud computing, cloud platform, microsoft, Microsoft Azure, Microsoft PDC, Ray Ozzie, virtualisation, virtualization, Windows Azure, Windows cloud platform, Windows in the cloud

rPath Introduces Cloud Computing Adoption Model

October 15, 2008 by Robin Wauters

rPath today announced its Cloud Computing Adoption Model, which defines a pragmatic, five-step approach to the graduated adoption of cloud computing. The Cloud Computing Adoption Model will be rolled out on October 23, as part of an rPath webinar featuring guests from Amazon Web Services, Forrester Research and MomentumSI.

Cloud computing holds real promise for enterprises and SMBs, as well as the ISVs and individual developers serving them. Among other benefits, cloud computing can help organizations:

  • reduce capital and operating expenses by increasing infrastructure utilization and reducing server sprawl;
  • reduce the cost of software consumption by allowing business lines to consume application functionality on demand and to align cost with value received; and
  • dramatically improve business agility and responsiveness by compressing deployment cycles and time-to-value for application functionality.

To help organizations realize the promise and avoid the perils of cloud computing, the Cloud Computing Adoption Model provides a pragmatic, actionable, step-by-step framework for achieving measurable benefits now, while laying the foundation for the strategic benefits of a cloud infrastructure over time.

For each level, the Cloud Computing Adoption Model outlines strategic goals, investment requirements, expected returns, risk factors, and readiness criteria for advancement.

Filed Under: Uncategorized Tagged With: cloud computing, Cloud Computing Adoption Model, rPath, rPath Cloud Computing Adoption Model, virtualisation, virtualization, webinar

Gartner Identifies the Top 10 Strategic Technologies for 2009, Virtualization and Cloud Computing Rule

October 15, 2008 by Robin Wauters

Gartner analysts today highlighted the top 10 technologies and trends that will be strategic for most organizations. The analysts presented their findings during Gartner Symposium/ITxpo, being held through October 16.

Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt. These technologies affect the organization’s long-term plans, programs and initiatives. They may be strategic because they have matured to broad market use or because they enable strategic advantage from early adoption.

The top 10 strategic technologies for 2009 include:

Virtualization

“Much of the current buzz is focused on server virtualization, but virtualization in storage and client devices is also moving rapidly. Virtualization to eliminate duplicate copies of data on the real storage devices while maintaining the illusion to the accessing systems that the files are as originally stored (data deduplication) can significantly decrease the cost of storage devices and media to hold information. Hosted virtual images deliver a near-identical result to blade-based PCs. But, instead of the motherboard function being located in the data center as hardware, it is located there as a virtual machine bubble. However, despite ambitious deployment plans from many organizations, deployments of hosted virtual desktop capabilities will be adopted by fewer than 40 percent of target users by 2010.”

Cloud Computing

“Cloud computing is a style of computing that characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. The key characteristics of cloud computing are 1) delivery of capabilities “as a service,” 2) delivery of services in a highly scalable and elastic fashion, 3) using Internet technologies and techniques to develop and deliver the services, and 4) designing for delivery to external customers. Although cost is a potential benefit for small companies, the biggest benefits are the built-in elasticity and scalability, which not only reduce barriers to entry, but also enable these companies to grow quickly. As certain IT functions are industrializing and becoming less customized, there are more possibilities for larger organizations to benefit from cloud computing.”

Filed Under: Featured Tagged With: cloud computing, gartner, Gartner top 10 strategic technologies for 2009, research, strategy, top 10 strategic technologies for 2009, virtualisation, virtualization

Guest Post: Clouds, Networks and Recessions

October 13, 2008 by Robin Wauters

This is a cross-post of a blog article written by Gregory Ness, former VP of Marketing for Blue Lane Technologies, who is currently working for Infoblox.

Over the last three decades we’ve watched a meteoric rise in processing power and intelligence in network endpoints and systems drive an incredible series of network innovations, and those innovations have led to the creation of multi-billion dollar network hardware markets. As we watch the global economy shiver and shake, we now see signs of the next technology boom: Infrastructure2.0.

Infrastructure1.0: The Multi-billion Dollar Static Network

From the expansion of TCP/IP in the 80s and 90s and the emergence of network security in the mid-to-late 90s to the evolution of performance and traffic optimization in the late 90s and early 00s, we’ve watched the net effects of ever-changing software and system demands colliding with static infrastructure. The result has been a renaissance of sorts in the network hardware industry, as enterprises installed successive foundations of specialized gear dedicated to the secure and efficient transport of an ever-increasing population of packets, protocols and services. That was and is Infrastructure1.0.

Infrastructure1.0 made companies like Cisco, Juniper/NetScreen, F5 Networks and, more recently, Riverbed very successful. It established and maintained the connectivity between ever-increasing global populations of increasingly powerful network-attached devices. Its impact on productivity and commerce is comparable to that of oceanic shipping, paved roads and railroads, electricity and air travel. It has shifted wealth and accelerated activity on a level that perhaps has no historical precedent.

I talked about the similar potential economic impact of cloud computing in June, comparing its future role to the shipment of spices across Asia and the Middle East before the rise of oceanic shipping. One of the key enablers of cloud computing is virtualization, and our early experiences with data center virtualization have taught us plenty about the potential impact of clouds on static infrastructure. Some of these impacts will be felt on the network, others within the cloudplexes.

The market caps of Cisco, Juniper, F5, Riverbed and others will be impacted by how well they can adapt to the new dynamic demands challenging the static network.

Virtualization: The Beginning of the End of Static Infrastructure

The biggest threat to the world of multi-billion dollar Infrastructure1.0 players is neither a protracted global recession nor the emergence of a robust population of hackers threatening increasingly lucrative endpoints. The biggest threat to the static world of Infrastructure1.0 is the promise of even higher factors of change and complexity on the way as systems and endpoints continue to evolve.

More fluid and powerful systems and endpoints will require either more network intelligence or even higher enterprise spending on network management.

This became especially apparent when VMware, Microsoft, Citrix and others in virtualization announced their plans to move their offerings into production data centers and endpoints. At that point the static infrastructure world was put on notice that its habitat of static endpoints was on its way into the history books. I blogged about this (sort of) at Always On in February 2007, making a point about the difficulties inherent in static network security keeping up with mobile VMs.

The sudden emergence of virtualization security marked the beginning of an even greater realization that the static infrastructure built over three decades was unprepared for supporting dynamic systems.  The worlds of systems and networks were colliding again and driving new demands that would enable new solution categories.

The new chasm between static infrastructure and software now disconnected from hardware is much broader than virtsec, and will ultimately drive the emergence of a more dynamic and resilient network, empowered by continued application-layer innovations and the integration of static infrastructure with enhanced management and connectivity intelligence.

As Google, Microsoft, Amazon and others (and whoever else rides the clouds) push the envelope with massive virtualization-enabled cloudplexes revitalizing small-town economies, they will continue to pressure the world of Infrastructure1.0. More sophisticated systems will require more intelligent networks. That simple premise is the biggest threat today to network infrastructure players.

The market capitalizations of Cisco, Juniper, F5 and Riverbed will ultimately be tied to their ability to service more dynamic endpoints, from mobile PCs to virtualized data centers and cloudplexes. Thus far, the jury is still out on the nature and implications of the various partnership announcements between 1.0 players and virtualization players.

As enterprises scale their networks to new heights, they are already seeing evidence of the stresses and strains between static infrastructure and more dynamic endpoint requirements. A recent Computerworld Research Report on core network services already shows larger networks paying a higher price (per IP address) for management. Back in grad school we called that a diseconomy of scale; today, in the networked world, I think it would be one of the four horsemen of Infrastructure1.0 obsolescence. Those who cannot adapt will lose.

Virtsec as Metaphor for the New Age

Earlier this year VMware announced VMsafe at VMworld in Cannes.  Yet at the recent VMworld conference mere months later the virtsec buzz was noticeably absent.  The inability of the VMsafe partners to deliver on the promise of virtualization security was a major buzz killer and I think it may be yet another harbinger of things to come for all network infrastructure players.  This issue is infinitely larger than virtsec.

I suspect that the VMsafe gap between expectations and reality drove production virtualization into small hypervisor VLAN pockets, limiting the payoff of production virtualization and, I think, impacting VMware’s data center growth expectations. That gap was based on the technical limitations of Infrastructure1.0 more than on any other factor. It also didn’t help the 1.0 players grow their markets by addressing these new demands. The result was a slowdown in production virtualization, a huge potential catalyst for IT with new economies of scale and potential.

The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks.

Enterprises already incurring diseconomies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow.  Rising management costs will further compromise the economics of static network infrastructure.

The virtsec dilemma was clearly a case of static netsec meeting dynamic software capable of moving across security zones or changing states.  There are more dilemmas on the way.  Take the following chart and simply add cloud and virtualization in the upper right and kink the demands line up even higher:

If you take a step back and look at the last thirty years you’ll see a series of big bang effects from TCP/IP and application demand collisions.  As we look forward five years into a haze of economic uncertainty, maybe it’s a proper time to take heed that the new demands of movement and change posed by virtualization and cloud computing need to be addressed sooner rather than later.

If these demands are not addressed, more enterprise networks will face diseconomies of scale as TCP/IP proliferates.  They’ll experience additional availability and security challenges and will emerge when the haze clears at a competitive disadvantage after years of overpaying for fundamental things like IP address management (or IPAM).  Most enterprises today are still managing IP addresses with manual updates and spreadsheets and paying the price, according to Computerworld research.  How will that support increasing rates of change?

The Emergence of Connectivity Intelligence

As I mentioned, one of the biggest challenges of virtsec was the inability of network appliances to see VMs and keep track of them as they move around inside a virtualized blade server environment: racks and stacks of powerful commodity servers deployed as a fluid pool that can add or remove servers/VMs on short notice, and therefore operate with less power than a conventional data center, where each server runs a unique application or OS and must be powered 24/7.

The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends.  Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale.
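
To make that idea more concrete, here is a purely illustrative sketch of such a feedback loop. Everything in it (the event structure, the FirewallStub and SwitchStub classes, the names and addresses) is invented for illustration and does not correspond to any real vendor API; the point is simply that enforcement points update themselves when the hypervisor layer reports that a workload has started or moved.

```python
# Hypothetical sketch of "connectivity intelligence": static enforcement
# points (switch mappings, firewall zones) are updated automatically
# whenever a VM starts or migrates, instead of via manual change tickets.
# The VMEvent structure and the FirewallStub/SwitchStub classes are
# invented for illustration and do not model any real vendor API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VMEvent:
    vm_name: str
    ip_address: str
    new_host: str       # physical host the VM now runs on
    security_zone: str  # zone/VLAN the VM's policy belongs to


class FirewallStub:
    """Stands in for a firewall management interface (hypothetical)."""

    def bind_address_to_zone(self, ip: str, zone: str) -> None:
        print(f"firewall: {ip} now enforced under zone '{zone}'")


class SwitchStub:
    """Stands in for a switch/VLAN management interface (hypothetical)."""

    def move_port_mapping(self, ip: str, host: str) -> None:
        print(f"switch: traffic for {ip} now expected on host {host}")


def make_handler(firewall: FirewallStub, switch: SwitchStub) -> Callable[[VMEvent], None]:
    """Build a callback that lets policy follow the workload automatically."""

    def on_vm_event(event: VMEvent) -> None:
        switch.move_port_mapping(event.ip_address, event.new_host)
        firewall.bind_address_to_zone(event.ip_address, event.security_zone)

    return on_vm_event


if __name__ == "__main__":
    handler = make_handler(FirewallStub(), SwitchStub())
    # Simulated lifecycle feed: the VM boots on blade-03, then live-migrates.
    events: List[VMEvent] = [
        VMEvent("vm-web01", "192.0.2.17", "blade-03", "dmz"),
        VMEvent("vm-web01", "192.0.2.17", "blade-07", "dmz"),
    ]
    for event in events:
        handler(event)
```

Run against the two simulated events, the same policy simply follows vm-web01 from blade-03 to blade-07, which is exactly the behavior a statically configured appliance cannot deliver on its own.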

A dynamic infrastructure would empower a new level of synergy between new endpoint and system initiatives (consolidation, compliance, mobility, virtualization, cloud) and open new markets for existing and emerging infrastructure players.  Cisco, Juniper, F5 Networks, Riverbed and others who benefited from the evolving collisions between TCP/IP and applications could then benefit from the rise of virtualization and enterprise and service provider versions of cloud, versus watching it from the sidelines.

The Rise of Core Net Service Automation

That connectivity intelligence requirement will make core network service automation (DNS, DHCP and IPAM, for example) strategic to Infrastructure2.0. Most of these services are manually managed today, which means that networks and systems are connected and adjusted by hand. More changes will mean more cost, more downtime and less budget for static infrastructure.

These networks need dynamic reachability (addressing and naming) and visibility (status and location) capabilities.  In essence, I’m advocating the evolution of a central nervous system for the network capable of delivering commands and feedback between endpoints, systems and infrastructure; at the core it would be a kind of digital positioning system (DPS) that would enable access, policy, enforcement and flexibility without the need for ongoing and tedious manual intervention.
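
On the reachability side, the same principle applies to the core network services themselves. As a minimal sketch (assuming a Python environment with the dnspython library, a DNS server that accepts TSIG-signed RFC 2136 dynamic updates, and purely illustrative zone, key and address values), a provisioning system could register a freshly started VM in DNS automatically rather than waiting for someone to edit a spreadsheet and file a change request.

```python
# Minimal sketch: automated DNS registration for a newly provisioned VM,
# sent as an RFC 2136 dynamic update instead of a manual change.
# Assumes the dnspython library and a DNS server that accepts TSIG-signed
# updates; the zone, key name, secret and IP addresses are illustrative only.
import dns.query
import dns.tsigkeyring
import dns.update

# Shared TSIG key authorizing the update (hypothetical name and secret)
keyring = dns.tsigkeyring.from_text(
    {"provisioning-key.": "aW52YWxpZC1rZXktZm9yLWRlbW8="}
)


def register_vm(hostname: str, ip_address: str) -> None:
    """Create or replace the A record for a VM the moment it comes online."""
    update = dns.update.Update("cloud.example.com", keyring=keyring)
    update.replace(hostname, 300, "A", ip_address)  # 300-second TTL
    response = dns.query.tcp(update, "192.0.2.53")  # authoritative server
    print(f"DNS update for {hostname} returned rcode {response.rcode()}")


if __name__ == "__main__":
    # e.g. called by the orchestration layer right after a VM boots
    register_vm("vm-web01", "192.0.2.17")
```

A DHCP lease event or an IPAM API could trigger the same call, which is exactly the kind of automated feedback between systems and core network services argued for here.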

In a recent email exchange with Rick Kagan and Stuart Bailey (both also at Infoblox), Stuart recommended Morville’s “Ambient Findability”. I soon found out why. The following is from the online Amazon review:

“The book’s central thesis is that information literacy, information architecture, and usability are all critical components of this new world order. Hand in hand with that is the contention that only by planning and designing the best possible software, devices, and Internet, will we be able to maintain this connectivity in the future.”

In a recessionary scenario these labor-intensive strains will get worse as budgets and resources are trimmed.  Rising TCO for infrastructure will impact the success of the infrastructure players as well as VMware, Microsoft and others, as virtsec friction has already impacted VMware.  The virtualization players will be forced to build or acquire application layer and connectivity intelligence as a means of survival.  They may not wait for the static team to convert to a more fluid vision.

That is why the fates of the static infrastructure players (and IT) will be increasingly tied to their ability to make their solutions more intelligent, dynamic and resilient.  Without added intelligence today’s network players will benefit less and less from ongoing innovations that show no sign of slowing; the impacts of a recession would be made even more severe.

Filed Under: Guest Posts Tagged With: cloud computing, Greg Ness, Gregory Ness, guest post, networks, recession, virtualisation, virtualization

CloudCamp Sailing Into Brussels On A Boat (30 October)

October 9, 2008 by Robin Wauters

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

There was one held in London previously, and now there’s one being held in our home town of Brussels, aboard the Biouel boat, on the 30th of October.

Tarry Singh, Founder & CEO at Avastu, alerted us to the news and will be speaking at the event, as will Q-layer’s Tom Leyden.

Registration is free of charge; you can sign up for the event right here.

Filed Under: Uncategorized Tagged With: Avastu, cloud computing, CloudCamp, CloudCamp Brussels, event, Q-layer, Tarry Singh, Tom Leyden, unconference, virtualisation, virtualization

