Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Search Results for: EC2

CohesiveFT Adds KVM Format To Its Automated Elastic Server Platform

January 11, 2009 by Robin Wauters

CohesiveFT today announced support for automating the deployment of Kernel-based Virtual Machine (KVM) servers via the Elastic Server web-based factory. Elastic Server is an automated “factory” that allows IT professionals to assemble, deploy, and manage virtual servers using a simple point-and-click interface. Beginning today, customers can assemble custom servers for deployment in the KVM format.

KVM is a Linux kernel virtualization infrastructure licensed under the GNU GPL.  It provides a mechanism for splitting a single physical computer into multiple virtual machines.  KVM’s approach differs from other virtualization formats in that it requires no patching of the kernel and takes advantage of performance improvements available on hardware containing virtualization extensions (Intel VT or AMD-V).
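KVM will not run without those hardware extensions, and on Linux the simplest way to check for them is to look for the `vmx` (Intel VT) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A minimal illustrative sketch in Python (the function name and structure are our own, not part of any CohesiveFT tooling):

```python
def virtualization_extension(cpuinfo_text):
    """Return 'Intel VT' or 'AMD-V' if the /proc/cpuinfo text advertises
    hardware virtualization support, otherwise None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT"
            if "svm" in flags:
                return "AMD-V"
    return None

# On a real Linux host you would feed it the actual file:
#   with open("/proc/cpuinfo") as f:
#       print(virtualization_extension(f.read()))
```

If neither flag shows up, the `kvm-intel`/`kvm-amd` kernel modules will refuse to load and a KVM-format server image cannot be booted on that host.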

The Elastic Server platform is a complement to virtualization and cloud offerings. Users assemble custom servers by choosing from a library of popular components. Once assembled, these custom application stacks can be configured to a variety of virtualization and cloud-ready formats, downloaded and deployed in real-time. Completed server stacks can be distributed through the Elastic Server platform. There are more than two thousand community users contributing nearly five thousand Elastic Servers to the market. The addition of KVM follows CohesiveFT’s recent addition of Virtual Iron, support for Amazon EC2 in Europe, the Ubuntu operating system, and the industry’s first commercial cloud security solution, VPN-Cubed.

Filed Under: News Tagged With: CohesiveFT, CohesiveFT Elastic Server, CohesiveFT KVM, Elastic Server, Elastic Server Platform, Kernel-based Virtual Machine, kvm, virtualisation, virtualization, VPN-Cubed

Can We Stop Hyping The Cloud Yet?

November 5, 2008 by Kris Buytaert

Over the past six to nine months we’ve seen the rapid invasion of the Cloud, Cloud Computing, or some variant including Cloud. We’ve had different Barcamp-style CloudCamps, there are bloggers rebranding their virtualization blogs as cloud blogs, and new aggregators are popping up that gather all the cloudy news.

Now let’s face it, there is absolutely nothing new on the horizon.
The cloud terminology was coined by the marketing people, you know, the weird folks in suits who are a bit uncomfortable at camp-style events. Yep, those guys. Oh well.. not all of them are like that 🙂

When Amazon had an overstock of machines in the summer of 2002, it launched Amazon Web Services, and for a lot of people that was the start of what today they call Cloud Computing. Their server-as-a-service offering, the Elastic Compute Cloud, also known as “EC2”, introduced the idea that you can launch a virtual machine somewhere remotely, manage it via an API, and pay as you use.

So in came the abbreviations: SaaS, Software as a Service, the new business model for a lot of software vendors; PaaS, Platform as a Service, the new service for the ISPs; and SOSAAS, Same Old Software as a Service.

But the strange thing is that the idea wasn’t Amazon’s in the first place.

If you read the following project description:
“The project is building a public infrastructure for wide-area distributed computing. We envisage a world in which execution platforms will be scattered across the globe and available for any member of the public to submit code for execution. The sponsor of the code will be billed for all the resources used or reserved during the course of execution. This will serve to encourage load balancing, limit congestion, and hopefully even make the platform self-financing.”

You’d think Amazon, wouldn’t you? Wrong bet. The text above comes straight from the Xenoservers project at the University of Cambridge. Yes, the project that eventually led to the development of the Xen Virtual Machine Monitor, on which, coincidentally, Amazon EC2 is based.

But was this the first form of distributed deployment of user resources?
Reuven, Mr. Cloud, thinks not:

Even way back then, the criminal syndicates had developed “service oriented architectures” and federated ID systems, including advanced encryption. It has taken more than 10 years before we actually started to see this type of sophisticated decentralization being adopted by traditional enterprises.

So the script kiddies had a whole cloud of dynamically, on-demand deployable instances of hosts where they could deploy their malware. No Pay As You Go, and certainly no fuss about which licenses needed to be bought.

Just as in today’s clouds, one of the reasons the cloud is getting so popular is that the people using it don’t have to think about how many extra software licenses to buy: the biggest part of its underlying technology is Open Source, not a non-scalable, proprietary platform.

The cloud to me is the mix of Virtualization, Scalability, Automation, Open Source, Large Scale Deployment, playing the puppetmaster, and High Availability. And it’s the Virtualization part and the Management of Virtual Environments that I cover for Virtualization.com.

So yes, you’ll be reading more cloud news here, as after all part of it is just plain old Virtualization, or SaaS, or Thin Client.

Thin Cloud Computing

Filed Under: Guest Posts Tagged With: Amazon EC2, cloud, PaaS, SaaS, sosaas, virtualization, Xen, xenoservers

RightScale Supports The Smell Of Saunas

November 4, 2008 by Kris Buytaert

Today RightScale Inc. announced it will team up with the Eucalyptus team to make its platform available with Eucalyptus, so the two can deliver an easy-to-manage Open Source cloud computing platform.

They announced that starting today, November 4, 2008, the RightScale cloud computing management platform is ready for use with the Eucalyptus Public Cloud (EPC).


“We are honored to collaborate with the talented UCSB Eucalyptus Project Team to accelerate the advancement of cloud computing technology,” said Michael Crandell, CEO at RightScale. “Now anyone — from those just becoming familiar with cloud computing to organizations evaluating a massive application for deployment on Amazon’s EC2 — will be able to easily test their applications on the Eucalyptus EC2-compatible, open source cloud infrastructure using RightScale’s management platform.”

RightScale was already supporting Amazon’s EC2, FlexiScale and now GoGrid; this move sends a big message to the cloud community that Eucalyptus is a valuable platform to support.

Earlier this year Elastra also announced support for Eucalyptus. May we wonder why the Eucalyptus folks went with RightScale and not with Scalr? After all, integrating Scalr with Eucalyptus seems like a good way to achieve a fully featured open source platform.

And on a final note: if RightScale titles their press release “RightScale and the Eucalyptus Team Join Forces to Deliver Easy-to-Manage Open Source Cloud Computing”, when will they show us the code?

Filed Under: Guest Posts, News, Partnerships Tagged With: cloud, ec2, eucalyptus, FlexiScale, GoGrid, rightscale, virtualization

Citrix’s Open Source “Project Kensho” Tech Preview Now Available Under LGPL

October 14, 2008 by Robin Wauters

Citrix recently announced “Project Kensho,” which would deliver Open Virtual Machine Format (OVF) tools that allow independent software vendors (ISVs) and enterprise IT managers to easily create hypervisor-independent, portable enterprise application workloads.

Well, it looks like Citrix has just released the first technical preview of Project Kensho under the LGPL license.

Because the tools are based on an industry-standard schema, customers are assured a rich ecosystem of virtualization options. And because of the open-standard format and special licensing features in OVF, customers can seamlessly move their current virtualized workloads to either XenServer or Windows Server 2008, distributing virtual workloads to the platform of their choice while ensuring compliance with the underlying licensing requirements for each virtual appliance.
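That portability claim rests on OVF descriptors being plain XML against a published schema, so any standard XML library can inspect them. A hedged sketch (the stripped-down descriptor below is illustrative only, not a complete, valid OVF package):

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

# Illustrative, minimal descriptor; a real OVF file also carries
# References, DiskSection, NetworkSection and virtual hardware items.
descriptor = """<?xml version="1.0"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <VirtualSystem ovf:id="web-appliance">
    <Info>A single virtual machine</Info>
  </VirtualSystem>
</Envelope>"""

root = ET.fromstring(descriptor)
system = root.find(f"{{{OVF_NS}}}VirtualSystem")
appliance_id = system.get(f"{{{OVF_NS}}}id")
print(appliance_id)  # web-appliance
```

Any hypervisor or management tool that understands the schema can read the same descriptor, which is exactly what makes the workloads hypervisor-independent.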

Citrix also announced a partnership with rPath to build and deliver new virtual appliances by assembling Linux packages “like Lego bricks”. The two are working together to allow rPath’s rBuilder to inject OVF virtual appliances directly into Xen-based cloud computing environments, like Amazon EC2. This collaboration will allow Linux and Windows based OVF appliances created on XenServer, Windows Server 2008 Hyper-V or Microsoft Hyper-V Server 2008 to be installed and run in the cloud and managed through their entire lifecycle.

Citrix Systems

Filed Under: Featured, News, Partnerships Tagged With: citrix, Distributed Management Task Force, DMTF, LGPL, Open Virtualization Format, ovf, OVF 1.0, Project Kensho, rBuilder, rPath, rPath rBuilder, Tech Preview, Technical Preview, virtual appliance, virtual appliances, virtualisation, virtualization

VMworld 2008 – VMware CEO Paul Maritz Keynote Liveblog

September 16, 2008 by Lode Vermeiren

Welcome to the Virtualization.com liveblog of the VMworld 2008 keynote with CEO Paul Maritz.
[Updated with Q&A and more pictures]

[8.09] The program is starting. A video with testimonials from eBay, Qualcomm and the like is on the big screens. There are 14,000 people in the room ready for the first big keynote of the new CEO.

[8.10] The theme of the conference and the keynote is “Virtually anything is possible”.

[8.11] Paul Maritz takes the stage. Since VMware is now a public company, a disclaimer about forward looking statements is projected on the screen.

[8.12] 2008 is the occasion of two anniversaries: the 10th anniversary of VMware, and Paul Maritz started working 30 years ago. Over the last three decades he’s seen several big “tsunamis” in IT. He’s looking forward to the next waves.

[8.13] Maritz recognizes two big forces, centralized and decentralized IT. Both have their strengths and weaknesses. VMware brings the best of both: central, easy management, and a rich, customized user experience.

[8.14] Maritz spent a big part of the 1990s as an evangelist of client-server computing (while at Microsoft), leading to a big proliferation of x86 servers around the world.

[8.16] At the same time during the mid 1990s, the world wide web emerged, providing a rich yet decentralized user experience.

[8.17] Maritz recognizes the contribution of Diane Greene and Mendel Rosenblum, who founded VMware in 1998 and recently left the company.

[8.18] The roots of VMware are in the first product, VMware Workstation, a client-side product with extremely clever engineering.

As CPUs got more powerful, VMware introduced server products, the first being VMware GSX in 2000.

[8.19] The success of server virtualization made the VMware Infrastructure possible in 2004, to address the management issues that arose from an increasing number of servers, virtualized and physical.

Over the last few years, new application frameworks have emerged merging centralized computing and rich end user experiences. Examples are AJAX, RoR, Python-based frameworks like Django, ..

[8.21] VMware has continued to invest in client-side computing, like VMware Fusion in 2007, lowering the barrier for Macs to enter most corporate environments.

Nowadays, in 2008, the big buzzword is “cloud”. Paul Maritz says everyone’s got “cloud fever”.

We’re moving fundamentally away from a client-centered world. The applications and services get more flexible, and move around between places and devices. This requires the IT environments to be managed as a giant environment – a cloud.

[8.23] VMware is responding to these trends with three initiatives: Virtual Datacenter OS, vCloud Initiative, vClient Initiative.

The driving forces:

  • Internal cloud – IT departments are forced to use their resources and applications ever more efficiently.
  • Scaling outside the firewall – making the connection between the internal cloud and external cloud providers in public datacenters (such as Amazon EC2).
  • Solving the “desktop dilemma”

The first announcement is the formal unveiling of the “virtual datacenter OS”: a new level of abstraction that separates the underlying infrastructure from the application workloads, to create a self-healing, self-managing elastic compute cloud.

[8.27] Several collaborations made these vCompute, vCloud, … initiatives possible. The first one is a collaboration between Intel and VMware.

The Intel Xeon 7400 series provides the next generation of FlexMigration technology, making VMotion between heterogeneous CPUs possible.

Cisco formally announces the first third-party virtual switch for the Virtual Infrastructure.

vStorage: disaster recovery collaboration with lots of storage vendors to enable Site Recovery Manager. Up on the slide now: 3PAR, FalconStor, Compellent, IBM, Dell EqualLogic, EMC, NetApp, HP, LeftHand and Xiotech.

[8.30] vApps: extending the Virtual Appliance approach. Initial support from IBM, SAP and Novell. vApps describe collections of appliances (for multi-tiered applications) and their service-level properties. This metadata enables the VDC-OS to provide service levels to this collection of apps.

[8.33] New management services to help manage this extensive datacenter OS. VirtualCenter becomes vCenter: a framework into which new plugins that extend the services can be integrated. It also interacts with third party management software from BMC, CA, HP, IBM Tivoli, …

[8.36] The strategic, business benefits of a VDC-OS, according to VMware:

  • Increase infrastructure efficiency with standardized management and efficient use of resources
  • Increase application agility with simple provisioning, repurposing and zero downtime
  • Enable business-driven IT, with shorter response times, quick disaster recovery, pay-for-usage models and reduced energy and real estate needs

[8.39] Even though this all sounds quite new, the needs for this cloud are here today. VMware hopes to make the internal infrastructure, the “internal cloud”, more compatible with external cloud providers.

Service provider partners today: more than 100. On the slide: BT, Verizon, Sungard, Savvis and T Systems. They provide compatibility with workloads defined as vApps.

[8.43] Technology preview of the vCloud on stage. Demo: an application has an SLA with a response time under 4 seconds. If the SLA is not met, the app will be pushed to an external cloud provider. The SLA is defined using the AppSpeed component of vCenter.

[8.48] Extra load is being generated. The internal VM can’t meet the SLA anymore, so a VM is being launched in the cloud to which the load is diverted.
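The decision logic in the demo boils down to a threshold check on recent response times. A toy sketch of the idea (the names, the moving window, and the 4-second default are our assumptions for illustration, not VMware’s AppSpeed API):

```python
def should_burst_to_cloud(response_times, sla_seconds=4.0, window=5):
    """Return True when the average of the last `window` response times
    breaks the SLA, i.e. it's time to launch a VM at the external
    cloud provider and divert load to it."""
    if not response_times:
        return False
    recent = response_times[-window:]
    return sum(recent) / len(recent) > sla_seconds

# Load is ramping up: the last three requests are getting slow.
print(should_burst_to_cloud([1.2, 1.5, 3.9, 5.0, 6.1], window=3))  # True
```

In the real product the trigger sits behind vCenter policy rather than a bare function, but the shape of the decision (measure, compare against the SLA, burst) is the same.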

Interesting demo, but a bit hard to grasp from a distance, and not as impressive as continuous availability last year (now called VMware Fault Tolerance). This will be cool nonetheless.

[8.50] Now on to the Desktop Dilemma. IT wants to bring down costs; the users want to be mobile, use Macs, don’t want their info locked up in a single device, want the info anytime, anywhere, and in a rich experience. This will be the VMware View announcement.

Data lifetime spans device lifetime, so it should not be locked up in a single device.

VMware has always continued to invest in client-side products.

[8.53] The first solution to this problem was the VMware VDI concept. This way, the user’s workspace was no longer tied to a single device.

[8.56] Next, a demo of VMware View. First they demo the existing VDM product on a thin client. There are lots of other devices on the stage, though. The VM is saved on a USB key, which is used to boot a laptop. They also demo improved 3D graphics support.

New announcement: codevelopment with Teradici to improve the remote desktop protocol.

The new product family of VDI products is going to be called VMware View.

[9.05] And with that, the keynote comes to an end. Paul Maritz seems to have forgotten the Twitter questions…

So that wraps up our first live keynote blog. Keep watching Virtualization.com for more updates from VMworld. A video overview of the keynote will be following soon.


Update: Some of the questions from the Q&A session afterwards:


Q: What do you see as the two major challenges for VMware in the coming years?

A: Paul Maritz – Several related challenges: Deepen and extend the value proposition of the products, and extend this to the cloud. The organisation needs to mature further to be able to deliver this into actual products. Maintain high quality, interact with the ecosystem around virtualization, mature as a partner. Reach out better to the community. Articulate this all to the customers.

Q: What about the threat posed by Red Hat/Qumranet/KVM?
A: Paul Maritz – Open Source is a great phenomenon. Open-sourcing ESX has been discussed already. The first step is making ESXi free. They’re not religious about it; supporting, for instance, Xen in the VDC framework isn’t ruled out.

Q: How does SRM fit in with the vCloud initiative? Won’t it make SRM redundant?
A: vCloud federation complements SRM in that it facilitates DR for companies with no external disaster site, letting them get up and running faster on an external cloud.

Q: Is VMware becoming an OS company?
A: Paul Maritz – In a way. That’s why it’s explicitly called the Virtual Datacenter Operating System. The name came from customers, who kept calling it a Datacenter OS.

It’s an OS in the sense that it abstracts the application loads from the underlying hardware. Not in a traditional OS/app sense, but it has lots of parallels with traditional OSes, so the name does apply. The services and APIs can be used to deliver OS capabilities to frameworks like Ruby on Rails, so that the underlying [JeOS] gets thinner and thinner in the complete framework.

Q: VMotion today is restricted to a network domain. Are you working on enabling this across networks and geographies?
A: Yes. On the storage side we need replication for this. On the network side you need stretched VLANs or interaction with routing tables. VMware is working with its partners to provide these capabilities.

Q: Are there initiatives coming for non-profits and educational customers?
A: Today there is already academic pricing for US schools. There will also be an academic program to provide software to schools around the world, which will later be extended to non-profits.
No timeframe on this yet.

Q: Are you going to further expand your product line to the Mac? Specifically ESX and ACE.
A: More on this in the Fusion / ACE sessions..

Q: The issue with update 2 (the timebomb bug) was a wake-up call that VMware is becoming a single point of failure for the complete datacenter. How do you address this very real concern?
A: Paul Maritz: We’re continuously maturing and improving our QA and security procedures. The issue with update 2 was amateur hour. We do take this seriously and try to make sure this never happens again.

[NB: There’s a session with Paul Maritz on this issue tomorrow afternoon.]

Q: It seems like every time I dream about a new feature I’d like to see in VMware, you guys come up with it. However, one big issue with the vCloud initiative is time drift in applications. How will you fix this?
A: It is a big dilemma. On the Linux side we’re submitting patches to the kernel to improve this. Paravirtualization also helps OSes communicate with the hypervisor to avoid timing issues.

Q: With the VDI solution, the desktop can reside on different devices. How is the licensing going to work out? Per client? Per device?
A: Not worked out yet.

Q: When are you going to release a VirtualCenter client for Linux?
A: Paul Maritz – I think VirtualCenter, as a modern application, should be OS-independent. It’s not out of the question, but it’s hard work and not on a defined roadmap yet.

Filed Under: Uncategorized Tagged With: vmworld, keynote, vmware

vMAN Over At DMTF Is Immune To Kryptonite And Now Powered by OVF Version 1.0

September 16, 2008 by Toon Vanagt

Like superheroes with a weak spot (remember Superman and green Kryptonite?), large providers of green datacenter technologies and virtualization software had an Achilles’ heel in their vendor lock-in, which scared away quite a few prospects. Today the major players have all agreed to drop their distinct proprietary formats and aim to adopt the Open Virtualization Format 1.0 as soon as possible (most are already compliant upon release). We first learned about OVF during our interview with Ian Pratt, and the release of this open standard is a great step forward. The short lead time of ‘only’ one year proves the industry has understood that open standards are the way to go.

Above is our exclusive video interview recorded at VMworld in Las Vegas, where DMTF president Winston Bumpus revealed the release of OVF 1.0 and their larger Virtualization Management Initiative (vMAN). vMAN provides IT managers the freedom to deploy pre-installed, pre-configured solutions across heterogeneous computing networks and to manage those applications through their entire lifecycle. This Initiative delivers much-needed open industry standards to the management of virtualized environments. Ultimately, the group’s goal is to eliminate the need for IT managers to separately install, configure and manage interdependencies between virtualized operating systems and applications, by enabling automated management of the virtual machine lifecycle.

This new specification created by Dell, HP, IBM, Microsoft, VMware and XenSource is about to become an industry standard and aspires to help ensure portability, integrity and automated installation/configuration of virtual machines. We did not have the time to transcribe the interview yet, but already took a few of Winston Bumpus’ quotes from the DMTF press release.

“With the increasing demand for virtualization in enterprise management, the new spec developed through this industry-wide collaboration dove-tails nicely into existing virtualization management standardization activity within the DMTF…
OVF extends the work we have underway to offer IT managers automation of critical, error-prone activities in the deployment of a virtualized infrastructure.”

By collaborating on the development of the OVF specification, the DMTF group aims to make it easier for IT organizations to pre-package and certify software packaged as virtual machine templates for deployment in their virtualized infrastructure and to facilitate the secure distribution of pre-packaged virtual appliances by ISVs and virtual appliance vendors.

Filed Under: Featured, Interviews, People, Videos Tagged With: 1.0, Bumpus, DMTF, ESX, HP, Hyper-V, IBM, interview, microsoft, Open Virtual Machine Format, ovf, OVF 1.0, OVF release, Dell, release, video, video interview, virtualisation, virtualization, vmware, VMWorld, Winston Bumpus, Xen, xensource


