Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Video Interview: Werner Vogels, CTO Amazon on Virtualization and the VC Threat

July 31, 2008 by Toon Vanagt

At the GigaOM Structure08 conference in San Francisco, we had the opportunity to ask Amazon CTO Werner Vogels about his virtualization experience building the Amazon cloud. He confirmed that Amazon Web Services is still powered by the Xen hypervisor.

It is remarkable to hear the CTO of a multinational openly thank the open source community for its active support of Xen, and to hear him cite that support as the main reason Amazon chose Xen as a crucial cloud-enabling building block.


Werner Vogels CTO Amazon.com from Toon Vanagt on Vimeo.

As we reported earlier, Amazon is also very open about its performance and welcomes independent companies to measure and report on parameters of its public virtual computing facility, such as security, availability, scalability, performance and cost.

Werner finished our video interview by explaining why cloud computing is disruptive even outside the datacenter and transforms unexpected industries. Venture capitalists seem upset about side effects such as start-up funding independence: fast-growing tech companies no longer need to burn large amounts of VC money upfront on hardware platforms and technologies. They can now scale their offering dynamically, driven by organic growth, while generating the revenue needed to cover the extra cloud cost.

At Virtualization.com we like to think that “shift happens” and look forward to the upcoming VC riots on Sand Hill Road against these ungrateful, self-sufficient start-ups 🙂

A full transcript of the interview is below. If you are interested in Amazon Web Services, you might also want to participate in our contest to win a free book, signed by Werner Vogels.


(00:00) Werner Vogels, welcome to Virtualization.com. You are the system administrator of a small bookshop. Could you tell us a bit more about yourself and how you virtualized your infrastructure to such a scale?

“I am the Chief Technology Officer for Amazon.com and I am responsible for the long term vision for technology within Amazon as well as how we can develop radically new technologies to support that business. But also the kind of businesses Amazon could move into, because of the unique technologies that we have developed.”

(00:33) Werner, I am a bit puzzled, because I did an interview with Xen founder Ian Pratt and he told me that Amazon is using Xen extensively. In your keynote here at the GigaOm Structure08 conference you just claimed you no longer use third-party applications. Did you refer to Xen in that respect?

“My remark about third-party applications was more about our enterprise stuff, where you look at databases and middleware… We do use some third-party software, and Xen is one of those. But we use it in the mode everybody in this world is using it. We don’t push these types of technologies to the extreme, because we want to make sure their vendors can support us the way they support any other customer they have. The remark I made this morning was more about when you really start pushing technology to the edge: we cannot blame vendors for not being able to support us.”

(01:30) How hard was it to integrate the Xen hypervisor into your cloud platform?

“I think Xen is a great product. It is easy to use. But most important is the very active community around it. I would not call them ‘issues’ with Xen; the ‘challenges’ that every virtual machine platform has to deal with are addressed there: things such as I/O issues, guaranteed-scheduling issues, domain zero security concerns… The community out there is very helpful. That was a very big reason for us in selecting Xen.”

(02:15) With “security”, you just mentioned one of the big virtualization issues at stake. How do you make absolutely sure that VMs are isolated in a mixed-customer cloud environment? Is Amazon using VLANs to achieve that, or did you design proprietary solutions or techniques you can share with the community?

“It is our policy not to discuss specific security techniques, except to say that we have done extensive software development to make sure that we can audit, maintain and manage the security issues.”

(02:45) Do you see this as one of your competitive advantages?

“I like to believe that security is one of the main concerns, and you have to address it upfront. There is no excuse. In this world of cloud computing the most fundamental promise needs to be that it is secure!”

(03:10) Yesterday CloudStatus was launched, and I imagine you are aware of this? Is Amazon happy about that?

“Absolutely, we love them. But I want to take a step back there. It is very important with things like CloudStatus that they are actually reporting on things that make sense for our customers. So we are looking forward to working with them, to bringing them into contact with our customers, and to making sure that the things they are reporting on are useful to our customers…”

(03:40) Would you like to advise CloudStatus on the set of Amazon parameters they should be reporting on?

“It is up to them, of course. This is not going to be a winner-takes-all business, as there will be many cloud providers in the future. As I mentioned in my talk, we will be measured on security, availability, scalability, performance and cost. So it is very important that we have independent companies measuring these kinds of things.”

(04:18) When you talk about independent companies and open alternatives, one of the general concerns remains vendor lock-in. With Eucalyptus there is an open source equivalent, which sort of reverse engineered your APIs (Application Programming Interfaces) and is compatible with Amazon. Do you think that knowing you could in-source your cloud if needed helps comfort prospective companies in selecting a cloud provider?

“Let’s first start off with the notion of vendor lock-in. As I mentioned in my talk, I like to believe that Amazon works very hard to provide APIs which are so simple that there is hardly any vendor lock-in. We use standard techniques to give people access to our APIs. If you look at Eucalyptus, their need came out of the schools involved in high-performance computing, which on the one hand want to use the public cloud for parallel computing, but on the other hand want to keep a similar interface internally. I think they have been very successful in making sure that all these schools adopt this same model.”

(05:32) One last question on your disruptive cloud platform. Could you explain how this technology also disrupts start-up funding cycles and drives the move from CAPEX to OPEX expense models? [A capital expenditure (CAPEX) is the cost of developing or providing non-consumable parts for a product, service or system. Its counterpart, an operating expense (OPEX), is an ongoing cost of running that product.]

“Last night I was at a reception where a venture capitalist walked up to me and said he hated Amazon, because we killed his business. After we talked for a while, he actually had to confess that they also have to adapt to this new world. In the old world, they could lock themselves into a company and get their hands on a large part of the equity, because those companies had to spend a lot of money on resources upfront. What we see now is that the availability of these services makes companies start to think differently. Before, start-ups maybe had the idea that the only way they could be successful was to have a very big exit, and for that they needed a lot of hardware and lots of investment. Now that these services are available, many companies are moving to a model where they think they can build a sustainable business: maybe we can build great products and charge our customers for them. And if you then attract more customers, you spend more on the development of these services, which is just fine, as your income follows your customers’ needs.”



Release: VMware Fusion 2.0 Beta 2

July 31, 2008 by Robin Wauters

VMware has introduced the latest beta build of its Mac virtualization product, VMware Fusion 2.0 Beta 2, nearly three months after releasing the Beta 1 build. The new release focuses on several key areas, mainly improving the user experience with updated video, data protection and Unity capabilities.

VMware Fusion 2.0 will be made available free to owners of VMware Fusion 1.x.

A rundown of the new features:

  • Multiple Snapshots
    • Save your virtual machine in any number of states, and return to those states at any time
    • Automatically take snapshots at regular intervals with AutoProtect
  • File and URL Sharing
    • Share applications between your Mac and your virtual machines
    • Finder can now open your Mac’s files directly in Windows applications like Microsoft Word and Windows Media Player
    • VMware Fusion can configure virtual machines to open their files in Mac applications like Preview and iTunes
    • Click on a URL in a virtual machine and open it in your favorite Mac browser, or configure your Mac to open its links in a virtual machine
    • Map key folders in Windows Vista and Windows XP (Desktop, My Documents, My Music, My Pictures) to their corresponding Mac folders (Desktop, Documents, Music, and Pictures)
    • Greatly improved reliability of shared folders—now compatible with Microsoft Office and Visual Studio
  • Experimental Support for Mac OS X Server Virtual Machines
    • You can create Mac OS X Server 10.5 virtual machines (experimental support). Due to Apple licensing restrictions, the standard edition of Mac OS X 10.5 is not supported in a virtual machine
  • Display Improvement
    • Improved 3D support
    • Use 1080p full high definition video in Windows XP or Windows Vista
    • Freely resize your virtual machine’s window and enter and exit Full Screen view while playing games
    • Run Linux applications directly on your Mac’s desktop under Unity view
  • UI Improvements
    • The New Virtual Machine Assistant has Linux Easy Install in addition to Windows Easy Install
    • Cut and paste files up to 4 MB, including graphics and styled text
    • Status icons glow when there is activity
    • A screen shot of the last suspended state of a virtual machine is displayed in Quick Look and Cover Flow
    • You can remap keyboard and mouse input
    • Keyboard compatibility between the Mac and the virtual machine is improved
    • The vmrun command line interface is available for scripting
  • Broader Hardware and Software Support
    • VMware Fusion supports Ubuntu 8.04 Hardy Heron
    • VMware Fusion supports 64-bit Vista Boot Camp; handles activation for Microsoft Office 2003 and Office 2007
    • Experimental support for 4-way SMP (note: Windows Vista and Windows XP limit themselves to two CPUs)
  • Support for Virtual Hard Disks
    • You can mount the virtual disk of a powered-off Windows virtual machine using VMDKMounter (Mac OS X 10.5 or higher)
    • You now have the ability to resize virtual disks
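The vmrun line in the list above means Fusion VMs can now be driven from scripts. A minimal sketch of what that might look like, assuming Fusion is installed and vmrun is on the PATH; the .vmx path and snapshot name are hypothetical examples, not defaults:

```shell
#!/bin/sh
# Sketch: driving VMware Fusion through the vmrun CLI.
# The VM bundle path below is a made-up example.
VMX="$HOME/Documents/Virtual Machines/WinXP.vmwarevm/WinXP.vmx"

if command -v vmrun >/dev/null 2>&1; then
    vmrun -T fusion start "$VMX" nogui       # boot the guest headless
    vmrun -T fusion snapshot "$VMX" nightly  # take a snapshot named "nightly"
    vmrun -T fusion suspend "$VMX"           # suspend the guest again
else
    echo "vmrun not found; is VMware Fusion installed?" >&2
fi
```

Combined with cron, a script like this could take snapshots on a custom schedule alongside the AutoProtect feature described above.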

Here’s a demo video the VMware team put out:


[Source: VMBlog]


There We Go Again: EMC Shares Rise On Acquisition Rumors

July 31, 2008 by Robin Wauters

This is one rumor that just keeps coming back: Reuters is reporting that EMC shares rose as much as 6.3 percent yesterday on market speculation that the world’s largest maker of corporate storage equipment could be acquired. The company still holds a majority stake in virtualization juggernaut VMware.

Shares of EMC rose as high as $14.92 in trading on the New York Stock Exchange, before retreating to $14.75 in the afternoon.

EMC spokesman Dave Farmer declined to comment, saying the company never responds to market rumors or speculation. Pacific Growth Equities analyst Kaushik Roy said the most likely company to be interested in buying EMC would be Cisco Systems. Last May, we reported on rumors of a possible merger.

A popular phrase says where there is smoke, there is fire, but we’re getting a bit skeptical. These rumors have been floating around for years now, and although a Cisco-EMC combo would seem like a pretty logical combination, you can ask yourself why a deal would be in the works now, when a merger or full acquisition should already have happened if both companies and their shareholders agreed.



DataSynapse Extends Dynamic Service Management Tools for VMware Infrastructure Support

July 30, 2008 by Robin Wauters

DataSynapse, maker of dynamic application service management software, today announced that it is working with VMware to provide customers with simplified deployment and operational management of application platforms and services, facilitating always-on, always-responsive virtualized applications.

As a member of the VMware Technology Alliance Partner (TAP) program, DataSynapse plans to integrate its FabricServer dynamic application service management software with VMware VirtualCenter. This integration will allow IT organizations to automate deployment and provisioning and to optimize the service levels of server applications in a virtualized infrastructure, helping them reduce capital and operating costs while delivering an order-of-magnitude improvement in time to market for critical business applications.

DataSynapse FabricServer software is an enabling platform for standardizing and automating the configuration, activation and scaling of application services and for aligning their service levels with planned business policies, which minimizes downtime, automates service-level management and improves enterprise application performance. Combined with VMotion, VMware’s live migration technology, FabricServer provides responsive horizontal scalability.


Veeam Releases Backup 2.0 for VMware Infrastructure

July 30, 2008 by Robin Wauters

Veeam Software today announced the general availability of Veeam Backup 2.0, a product offering both backup and replication for virtual environments. Version 2.0 includes added functionality as well as a new optimized backup engine that allows for up to five times faster backup and replication performance than version 1.0.

Major new features in Veeam Backup 2.0 include:

  • Five times faster — Veeam Backup 2.0 has a new optimized backup
    engine, which allows for up to five times faster backup and replication
    performance than the previous version.
  • Windows Volume Shadow Copy Service (VSS) support — Veeam Backup 2.0
    leverages VSS to ensure consistent backup and recovery of VSS-aware
    applications, including Active Directory, Microsoft Exchange and Microsoft
    SQL Server.
  • ESXi support — Now customers can back up ESXi servers using VMware
    Consolidated Backup (VCB). File-level recovery is fully supported for
    guests running on ESXi, and full image restore is supported to ESX 3.x
    servers. These images can then be VMotioned to ESXi as needed.
  • Enhanced reporting and notification — Comprehensive real-time job
    statistics are available, including automated e-mail notification of backup
    job status, activity and performance details.
  • Backup portability — Veeam Backup users can now easily import backups
    made using previous versions of the software, or backups that have been
    archived to tape.
  • Support for third-party tape backup systems — Now users can specify a
    script to automatically run when the VMware backup is finished, initiating
    tape backups to begin.
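
The post-job script hook in the last bullet is just a command Veeam runs when the backup job completes. A minimal sketch of such a hook, assuming a POSIX shell; the directory layout and the tape command are illustrative placeholders, not part of Veeam’s product:

```shell
#!/bin/sh
# Hypothetical post-job hook: start a tape export once the VMware
# backup finishes. All names and paths here are illustrative.

start_tape_job() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "backup directory $dir missing; skipping tape job" >&2
        return 1
    fi
    # Placeholder for the real tape command, e.g.: tar -cf /dev/st0 "$dir"
    echo "tape job started for $dir"
}

# Veeam would invoke this script after the backup job; demonstrate with
# a temporary directory standing in for the backup target.
demo_dir=$(mktemp -d)
start_tape_job "$demo_dir"
rm -rf "$demo_dir"
```

Pointing the post-job setting at a script like this chains the tape stage onto the disk backup without needing a separate scheduler.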

Veeam Backup 2.0 is available immediately, and pricing still begins at $499 USD per socket.


[Source: MarketWire]


A Round Table on Virtualization Security with Industry Experts

July 30, 2008 by Kris Buytaert

Virtualization security, or ‘virtsec’, is one of the hottest topics in virtualization town. But do we need another abbreviation on our streets? Does virtualization require its own security approach, and how would it differ from the physical world?

Different opinions fly around the blogosphere and among vendors. Some security experts claim there is nothing new under the sun and that the virtsec people are just trying to sell products on the back of the virtualization hype. Some see a genuine need to secure new elements in the infrastructure, others claim that virtualization enables new capabilities to build security in from the ground up, and cynics claim it is just a way for the virtualization industry to get a larger piece of the security budget.

So our editors Tarry and Kris set out to clarify the different opinions. With the support of StackSafe, they organized a conference call with some of the most prominent bloggers, industry analysts and vendors in this emerging field.

On the call were Joe Pendry (Director of Marketing at StackSafe), Kris Buytaert (Principal Consultant at Inuits), Tarry Singh (Industry/Market Analyst, Founder & CEO of Avastu), Andreas Antonopoulos (SVP & Founding Partner at Nemertes Research), Allwyn Sequeira (SVP & CTO at Blue Lane), Michael Berman (CTO at Catbird), Chris Hoff (Chief Security Architect, Systems & Technology Division at Unisys, and blogger) and Hezi Moore (President, Founder & CTO at Reflex Security).

During our initial chats with different security experts, their question was simple: “What does virtsec mean?” Depending on our proposed definition, opinions varied.

So obviously the first topic for discussion was the definition of VirtSec:

Allwyn Sequeira from Blue Lane kicked off the discussion by telling us that he defines virtsec as “anything that is not host security or network-based security. If there’s a gap there, I believe that gap, in the context of virtualization, would fall under the realm of virtualization security.” He went on to question who is in charge of inter-VM communication security, and how features such as virtual machine migration and snapshotting add a different complexity to today’s infrastructure.

Andreas Antonopoulos of Nemertes Research takes a different approach and sees two ways of looking at virtsec: “How do you secure a virtualized environment?” and, the more interesting question in his opinion, “How do you virtualize all of the security infrastructure in an organization?” Andreas also wonders what to call the new evolutions: “What do you call something that inspects memory inside a VM, inspects traffic and correlates the results? We don’t really have a definition for that today, because it was impossible, so we never considered it.” He expects virtualization to change the security landscape: “Just like virtualization has blurred the line between physical server, virtual server, network and various other aspects of IT, I see it blurring the lines within security very much and transforming the entire industry.”

Hezi Moore from Reflex Security wants to look for the actual problems. He wants to know what has changed since we started virtualizing our infrastructures: “A lot of the challenges that we faced before we virtualized are still being faced after we virtualized. But a lot of them got really intensified, at a much higher rate and much more serious.”

Michael Berman from Catbird thinks the biggest role of virtsec is still education: “…and the interesting thing I find is the one thing we all know that never changes is human nature.” He is afraid of virtualization changing the way systems are deployed with no eye on security. Virtualization has made it a lot easier to bypass the security officers and the auditors. The speed at which one can deploy a virtual instance, and the number of them, has changed drastically compared to a physical-only environment, and security policies and procedures have yet to catch up. “We can have an argument about whether the vendors are responsible for security, or whether the hypervisors are to blame when servers are attacked. The big deal here is the human factor.”

Chris Hoff summarizes the different interpretations of VirtSec in three bullets:

  • One, there is security in virtualization, which is really about the underlying platforms, the hypervisors. The answer there is a basic level of trust in your vendors, the same as we place in operating systems, and we all know how well that works out.
  • Number two is virtualized security, which is really ‘operationalization’: how we actually go ahead and take policies and deploy them.
  • The third one is really gaining security through virtualization, which is another point.

Over the past decade, different virtualization threats have surfaced, some more real than others. About a decade ago, when Sun introduced its E10K system, it boasted of having 100% isolation between guest and host OS, but malicious minds figured out how to abuse the management framework to go from one partition to another. Joanna Rutkowska’s “Blue Pill” vulnerability theory turned out to be more of a myth than an actual danger. But what is the virtsec industry really worried about?

It seems the market is not worried about these kinds of exploits yet. It is more worried about the total lack of security awareness. Andreas Antonopoulos summarizes this quite well: “I don’t see much point in really thinking too much about five steps ahead, worrying about VM escape, worrying about hypervisor security, etc., when we’re running Windows on top of these systems and they’re sitting there naked.”

Allwyn from Blue Lane, however, thinks this is an issue. Certainly with cloud computing becoming more popular, we suggest thinking seriously about how to tackle the deployment of virtual machines in environments we don’t fully control. Virtual service providers will have to give us a secure way to manage our platforms, and enough guarantees that when multiple services are deployed, they can communicate in a secure and isolated fashion.

Other people think we first have to focus on the human factor: we still aren’t paying enough attention to security in the physical infrastructure, so we had better focus on the easy-to-implement solutions available today rather than worry about exploits that may or may not occur one day.

Michael Berman from Catbird thinks that virtualization vendors are responsible for protecting the security of their guests. A memory breakout seems inevitable, but we need to focus on the basic problems before tackling the more esoteric issues. He is worried about scenarios where old NT setups or other insecure platforms are migrated from one part of the network to another, and about the damage such events can cause.

Part of the discussion was about standardization and whether it could help in the security arena. Chris Hoff reasons that today we see mostly server virtualization, but there is much more to come: client virtualization, network virtualization, etc. As he says: “I don’t think there will be one ring zero to rule them all.” More and more vendors are joining the market: VMware, Oracle, Citrix, Cisco, Qumranet and others offer different virtualization platforms, and some vendors have built their products on top of them.

In the security industry, standardization has typically been seen as a bad thing: the more identical platforms you have, the easier it is for an attacker, since breaking one gives him similar access to the others. Building a multi-vendor or multi-technology security infrastructure is common practice.

Another important change is the shift of responsibilities. Traditionally you had the systems people and the network people, and with some luck an isolated security role. Today the systems people are deploying virtual machines at a much higher rate, and because of virtualization they take charge of part of the network, giving the network people less control and the security folks less visibility.

Allwyn Sequeira from Blue Lane thinks the future will bring us streams of virtualization security: organizations with legacy infrastructure will go for good VLAN segmentation and some tricks left and right, because the way they use virtualization blocks them from doing otherwise. He thinks the real innovation will come from people who can start with an empty drawing board.

Andreas Antonopoulos from Nemertes Research summarized that we all agree the virtualization companies have a responsibility to secure their hypervisors. There is a lot of work to be done in taking that responsibility so that we can implement at least basic security. The next step is to get security onto the management dashboard, because if the platform is secure but the management layer is a wide-open goal, we haven’t gained anything.

Most security experts we talked to still prefer virtualizing their current security infrastructure over products that focus on securing virtualization. There is a thin line between needing a product that secures a virtual platform and changing your architecture and best practices so that a regular security product fits in a virtualized environment.

But all parties seem to agree that much of the need for virtsec comes from the changing scale, and that no matter what tools you throw at it, it’s still a people problem.

The whole virtsec discussion has only just started; it’s obvious that there is a lot of work to be done, and new evolutions will pop up left and right. I’m looking forward to that future. So, as Chris Hoff said, “Security is like bell bottoms: every 10 to 15 years or so it comes back in style”, this time with a virtualization sauce.

Listen to the full audio of the conference call!


