Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Kris Buytaert

A Round Table on Virtualization Security with Industry Experts

July 30, 2008 by Kris Buytaert 3 Comments

Virtualization security, or ‘virtsec’, is one of the hottest topics in virtualization town. But do we need another abbreviation on our streets? Does virtualization require its own security approach, and how would it differ from the physical world?

Different opinions fly around in the blogosphere and among vendors. Some security experts claim there is nothing new under the sun and that the virtsec people are just trying to sell products on the back of the virtualization hype. Some see a genuine need to secure new elements in the infrastructure, others claim that virtualization enables new capabilities to build security in from the ground up, and cynics claim it is just a way for the virtualization industry to grab a larger piece of the security budget.

So our editors Tarry and Kris set out to clarify the different opinions. With the support of StackSafe, they organized a conference call with some of the most prominent bloggers, industry analysts and vendors in this emerging field.

On the call were Joe Pendry (Director of Marketing at StackSafe), Kris Buytaert (Principal Consultant at Inuits), Tarry Singh (Industry/Market Analyst, Founder & CEO of Avastu), Andreas Antonopoulos (SVP & Founding Partner at Nemertes Research), Allwyn Sequeira (SVP & CTO at Blue Lane), Michael Berman (CTO at Catbird), Chris Hoff (Chief Security Architect, Systems & Technology Division, and Blogger at Unisys) and Hezi Moore (President, Founder & CTO at Reflex Security).

During our initial chats with different security experts, their question was simple: “What does virtsec mean?” Depending on our proposed definition, opinions varied.

So obviously the first topic for discussion was the definition of VirtSec:

Allwyn Sequeira from Blue Lane kicked off the discussion by telling us that he defines virtsec as “anything that is not host security or that’s not network-based security. If there’s a gap there, I believe that gap – in the context of virtualization – would fall under the realm of virtualization security.” He went on to question who is in charge of inter-VM communication security, and how features such as virtual machine migration and snapshotting add new complexity to today’s infrastructure.

Andreas Antonopoulos of Nemertes Research takes a different approach and sees two ways of looking at virtsec: “How do you secure a virtualized environment?” and, in his opinion the more interesting question, “How do you virtualize all of the security infrastructure in an organization?” Andreas also wonders what to call the new developments: “What do you call something that inspects memory inside of a VM and inspects traffic and correlates the results? We don’t really have a definition for that today, because it was impossible, so we never considered it.” He expects virtualization to change the security landscape: “Just like virtualization has blurred the line between physical server, virtual server, network and various other aspects of IT, I see it blurring the lines within security very much and transforming the entire industry.”

Hezi Moore from Reflex Security wants to focus on actual problems. He wants to know what has changed since we started virtualizing our infrastructures. “A lot of the challenges that we faced before we virtualized are still being faced after we virtualized. But a lot of them got really intensified, occurring at a much higher rate and becoming much more serious.”

Michael Berman from Catbird thinks the biggest role of virtsec is still education, “..and the interesting thing I find is the one thing we all know that never changes is human nature.” He is afraid of virtualization changing the way systems are deployed with no eye on security. Virtualization has made it a lot easier to bypass the security officers and the auditors. The speed at which one can deploy virtual instances, and the sheer number of them, has changed drastically compared to a physical-only environment, and security policies and procedures have yet to catch up. “We can have an argument about whether the vendors are responsible for security, or whether the hypervisors will be attacked. The big deal here is the human factor.”

Chris Hoff summarizes the different interpretations of VirtSec in three bullets:

  • One, there is security in virtualization, which is really talking about the underlying platforms, the hypervisors. The answer there is a basic level of trust in your vendors – the same trust we place in operating system vendors, and we all know how well that works out.
  • Number two is virtualized security, which is really ‘operationalization’: how we actually go ahead and take policies and deploy them.
  • The third one is gaining security through virtualization: using the virtualization layer itself to improve security.

Over the past decade, different virtualization threats have surfaced, some with more truth to them than others. About a decade ago, when Sun introduced its E10K system, it boasted of having 100% isolation between guest and host OS. But malicious minds figured out how to abuse the management framework to go from one partition to another. Joanna Rutkowska’s “Blue Pill” vulnerability theory turned out to be more of a myth than an actual danger. But what is the virtsec industry really worried about?

It seems the market is not worried about these kinds of exploits yet; it is more worried about the total lack of security awareness. Andreas Antonopoulos summarizes this quite well: “I don’t see much point in really thinking too much about five steps ahead, worrying about VM Escape, worrying about hypervisor security, etc. when we’re running Windows on top of these systems and they’re sitting there naked.”

Allwyn from Blue Lane, however, thinks this is an issue. Certainly with cloud computing becoming more popular, we should think seriously about how to tackle the deployment of virtual machines in environments we don’t fully control. Virtual service providers will have to give us a secure way to manage our platforms, and enough guarantees that, when multiple services are deployed, they can communicate in a secured and isolated fashion.

Other people think we first have to focus on the human factor: we still aren’t paying enough attention to security in the physical infrastructure, so we had better focus on the easy-to-implement solutions that are available today rather than worry about exploits that might or might not occur one day.

Michael Berman from Catbird thinks virtualization vendors are responsible for protecting the security of their guests. A memory breakout seems inevitable, but we need to focus on the basic problems before tackling the more esoteric issues. He is worried about scenarios where old NT setups or other insecure platforms are migrated from one part of the network to another, and about the damage such events can cause.

Part of the discussion was about standardization, and whether standardization could help in the security arena. Chris Hoff reasons that today we see mostly server virtualization, but there is much more to come: client virtualization, network virtualization, and so on. As he says: “I don’t think there will be one ring zero to rule them all.” More and more vendors are joining the market: VMware, Oracle, Citrix, Cisco, Qumranet and various others offer different virtualization platforms, and some vendors have built their products on top of them.

In the security industry, standardization has typically been looked at as a bad thing: the more identical platforms you have, the easier it is for an attacker, since breaking one gives him similar access to the others. Building a multi-vendor or multi-technology security infrastructure is common practice.

Another important change is the shift of responsibilities. Traditionally you had the systems people and the network people, and with some luck an isolated security role. Today the systems people are deploying virtual machines at a much higher rate, and because of virtualization they take charge of part of the network, giving the network people less control and the security folks less visibility.

Allwyn Sequeira from Blue Lane thinks the future will bring us different streams of virtualization security: organizations with legacy infrastructure will go for good VLAN segmentation and some tricks left and right, because the way they use virtualization prevents them from doing otherwise. He thinks the real innovation will come from people who can start with an empty drawing board.

Andreas Antonopoulos from Nemertes Research summarized that we all agree the virtualization companies have a responsibility to secure their hypervisors. There is a lot of work to be done in taking responsibility so that we can implement at least basic security. The next step is to get security onto the management dashboard, because if the platform is secure but the management layer is a wide-open goal, we haven’t gained anything.

Most security experts we talked to still prefer virtualizing their current security infrastructure over the products that focus on securing virtualization. There is a thin line between needing a product that secures a virtual platform and changing your architecture and best practices so that a regular security product fits in a virtualized environment.

But all parties seem to agree that much of the need for virtsec comes from changing scale, and that no matter what tools you throw at it, it’s still a people problem.

The whole virtsec discussion has only just started; it’s obvious that there is a lot of work to be done and that new developments will pop up left and right. I’m looking forward to that future. So, as Chris Hoff said: “Security is like bell bottoms, every 10-15 years or so it comes back in style”, this time with a virtualization sauce.

Listen to the full audio of the conference call!

Filed Under: Featured, Guest Posts, Interviews, People Tagged With: Allwyn Sequeira, Andreas Antonopoulos, Avastu, Blue Lane, Catbird, Chris Hoff, conference call, Hezi Moore, interview, Inuits, Joe Pendry, Kris Buytaert, Michael Berman, Nemertes Research, Reflex Security, round table, StackSafe, Tarry Singh, Unisys, virtsec, virtualisation, virtualization, virtualization security

OLS Virtualization Minisummit Report

July 24, 2008 by Kris Buytaert 2 Comments

Virtualization.com was present at this week’s Virtualization Minisummit in Ottawa.

The OLS Virtualization Minisummit took place last Tuesday in Les Suites, Ottawa. Aland Adams had put together an interesting lineup mixing kernel-level talks and management-framework talks. First up was Andrey Mirkin from OpenVZ, who began with a general overview of different virtualization techniques. Comparing them, he claimed that Xen has a higher virtualization overhead, because the hypervisor needs to manage a lot of stuff, whereas “container-based” approaches that let the Linux kernel do this have less overhead.

We discussed OpenVZ earlier; it uses a single kernel for both the host OS and all the guest OSes. Each container has its own files, process tree, network (virtual network device), devices (which can be shared or not), IPC objects, etc. Often that’s an advantage; sometimes it isn’t.
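
For those who want to poke at it, the container lifecycle is driven by the vzctl tool. A minimal sketch, assuming an OpenVZ-enabled kernel and a pre-downloaded OS template (the container ID, template name and IP address below are placeholders):

    # Create container 101 from an OS template cached on the host
    vzctl create 101 --ostemplate centos-5-x86 --config basic
    # Give it an IP address and a hostname, saving them to its config
    vzctl set 101 --ipadd 192.168.0.101 --hostname guest101 --save
    # Start it and run a command inside: the guest shares the host kernel
    vzctl start 101
    vzctl exec 101 ps ax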

When Andrey talks about containers, he means OpenVZ containers, which occasionally confused the audience, as the Linux Containers minisummit was going on in a different suite at the same time. He went on to discuss the different features of OpenVZ: it currently includes checkpointing, and it has templates from which new instances can be built quickly.

OpenVZ also supports live migration: basically taking a snapshot and transporting it (rsync-based) to another node. So not the Xen way; there is some downtime for the server, although a minor one.

Interesting to know is that the OpenVZ team is also working on getting OpenVZ into the mainline Linux kernel, and has been contributing a lot of patches and changes upstream to get their features in. Andrey also showed us a nice demo of the PacMan Xscreensaver being live-migrated back and forth.
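
The command behind such a demo is a one-liner; a sketch with a hypothetical destination node (--online keeps downtime to the brief freeze while the container's private area is rsynced over and it is restored on the other side):

    # Checkpoint container 101, rsync it to node2 and restore it there
    vzmigrate --online node2.example.com 101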

Still, containers are “chroots on steroids” (dixit Ian Pratt).
Given the recent security fuss, I wondered about the security impact of containers. Container-based means you can see the guest’s processes from the host OS, which is an enormous security problem. Imagine a virtual host provider using this kind of technique: they have full access to your virtualized platform, whereas with other approaches they would actually need your passwords etc. to access certain parts of the guest.
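
It is easy to see this for yourself on any OpenVZ host node; a quick sketch (the PID is a placeholder):

    ps ax                 # on the host: lists host *and* container processes
    vzpid 4321            # maps a host PID to the container (CTID) that owns it
    vzctl exec 101 ps ax  # the same processes, as seen inside container 101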

The next talk, by Kuniyasu Suzaki, was about virtual TPM on Xen/KVM for Trusted Computing. He kicked off by explaining the basics of the Trusted Platform Module. The whole problem is creating a full chain of trust from boot until full operation: you need a boot loader that supports TPM (GRUB-IMA) and a patched kernel (IMA, the Integrity Measurement Architecture), from which you can then run a trusted binary.
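
On a kernel carrying the IMA patches, the measurement list that anchors this chain of trust can be inspected through securityfs; a minimal sketch, assuming the paths the IMA patch set uses:

    # Mount securityfs if the distribution hasn't already done so
    mount -t securityfs securityfs /sys/kernel/security
    # Each line lists a PCR number, a hash and the measured file,
    # extending the chain of trust from the TPM-aware boot loader upward
    head /sys/kernel/security/ima/ascii_runtime_measurements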

There are two ways to provide a TPM to a virtual machine. First, there is a proprietary module by IBM, presented at the 2006 USENIX symposium, that passes the physical TPM through to a VM. Second, the TPM can be emulated in software: there is an emulator, developed at ETH (tpm-emulator.berlios.de), and both KVM and Xen support this emulated TPM. Of course, this doesn’t preserve the hardware root of trust.

As QEMU is needed to emulate BIOS-related things, you can’t do vTPM on a paravirtualized domain; you need an HVM-based one. A customized KVM by Nguyen Anh Quynh will be released shortly; the patch will be applied to QEMU.
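
For Xen 3.x, wiring the emulated vTPM into a guest comes down to one line in an HVM domain config; a sketch with placeholder paths and numbers (backend 0 means dom0 hosts the TPM emulator):

    # /etc/xen/tpm-guest.cfg -- hypothetical HVM guest with an emulated vTPM
    kernel = "/usr/lib/xen/boot/hvmloader"
    builder = "hvm"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    memory = 512
    name = "tpm-guest"
    disk = [ 'file:/var/images/tpm-guest.img,hda,w' ]
    vtpm = [ 'instance=1, backend=0' ]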

Still, these cases use the TPM emulator and not the real hardware. An additional problem with virtualization and TPM arises when you start migrating machines around and thereby lose access to the actual TPM module. Kuniyasu closed with a demo using VMKnoppix.

Dan Magenheimer did a rerun of his Xen Summit 2008 talk titled “Memory Overcommit without the Commitment”.

There is a lot of discussion on why you should or should not support memory overcommit. Some claim you should just buy enough memory (after all, memory is cheap), but it isn’t always: as soon as you go for the bigger memory modules, you’ll still be paying a lot of money.
Overcommitment costs performance: you’ll end up swapping, which is painful. People counter that hitting CPU and I/O limits also costs performance, so sometimes you need to compromise between functionality, cost and performance. Imho, a machine that is low on memory and starts swapping, or even OOM-killing processes, is much more painful than a machine that slows down because it is reaching its CPU or I/O limits.

So one of the main arguments in favor of supporting overcommit on Xen was: because VMware does it …

Dan outlined the different proposed solutions – ballooning, content-based page sharing, VMM-driven demand paging, hotplug memory add/delete, ticketed ballooning, or even swapping out entire guests – before presenting his own proposal, which he calls feedback-directed ballooning.

The idea is that you have a lot of information about the memory status of your guests, that Xen ballooning works like a charm, that Linux actually performs OK when put under memory stress (provided you have configured swap), and that you can use the xenstore tools for two-way communication. So he wrote a set of userland bash scripts that implement ballooning based on local or directed feedback.
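
We haven’t seen Dan’s actual scripts, but the feedback loop is simple enough to sketch: a hypothetical in-guest script publishes its memory pressure to xenstore (the "memory/committed" key below is made up for illustration), and a dom0-side loop adjusts the balloon target accordingly:

    #!/bin/bash
    # Toy feedback-directed ballooning for one guest; not Dan's code.
    DOMID=1
    while sleep 10; do
        # Committed_AS in kB, as published by an in-guest feedback script
        committed=$(xenstore-read /local/domain/$DOMID/memory/committed 2>/dev/null) || continue
        # Target the working set plus 20% headroom, converted to MB
        target_mb=$(( committed * 12 / 10 / 1024 ))
        [ "$target_mb" -lt 64 ] && target_mb=64   # never balloon below 64 MB
        xm mem-set $DOMID $target_mb
    done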

Conclusion: Xen does do memory overcommit today, so Dan replaced a “critical” VMware feature with a small shell script 🙂

Filed Under: Guest Posts Tagged With: memory ballooning, Memory Overcommit, ols, openvz, oraclevm, tpm, virtualization minissummit, vmknoppix, vmware, vTPM

Qlusters Shuts Down

July 1, 2008 by Kris Buytaert 3 Comments

Israeli business site Globes Online got the scoop on Qlusters closing shop after seven years in business. The company’s last 30 employees were informed of the decision earlier this week.

Qlusters, the company formerly behind the openQRM project, shutters only about a year after it raised $10 million in a Series C round. While the market for infrastructure management tools, and particularly virtualization management frameworks, is growing, Qlusters failed to realize its goals and burned through approximately $34 million of capital.

As we reported earlier, Qlusters had no new roadmap after dropping the open source openQRM project, which recently announced the beta of its 4.0 rewrite. It’s always sad to see a company like Qlusters leave the market this way, but fortunately its technology will live on thanks to the open source community.

Filed Under: Featured, Guest Posts, News Tagged With: infrastructure management, openqrm, qlusters, virtualisation, virtualization, virtualization management

openQRM 4.0 (Beta), Going Strong

June 18, 2008 by Kris Buytaert Leave a Comment

Matt Rechenburg has just announced the beta version of openQRM 4.0.

Not even that long after Qlusters decided to set the project free, the openQRM team has changed direction in a way Qlusters never would have allowed.

openQRM 4.0 is a major rewrite of the openQRM functionality in PHP. The openQRM team listened to the community and learned that contributions would be much easier if the tool were rewritten in a scripting language. They decided to keep the platform as simple as possible.

Plugin support is now a lot easier: rather than having to reconfigure and enable plugins from the command line, one can now enable plugins from the web GUI (similar to Drupal modules).

The old openQRM shipped with a lot of proprietary libraries and tools that were already available on a typical Linux distribution. From now on, openQRM will use the tools the distribution provides.

The goal of openQRM hasn’t changed: the focus of the project is still on rapid, appliance-based deployment, virtualization and storage management.

Still no news from the Qlusters side however, and their site obviously needs an update.

Filed Under: Guest Posts, News Tagged With: matt rechenburg, openqrm, openQRM 4.0, openQRM 4.0 beta, qlusters, virtualisation, virtualization, vmware, Xen

A Conversation About Virtualization Security, The Quotes

June 11, 2008 by Kris Buytaert 2 Comments

Last week, an interesting conference call took place with several industry leaders in the virtualization security (virtsec) area, initiated by Virtualization.com. The panel included:

  • Joe Pendry, Director of Marketing – StackSafe
  • Kris Buytaert, Infrastructure Architect; Open Source Expert; Principal Consultant – Inuits; Blogger & editor at Virtualization.com
  • Tarry Singh, Sr. Consultant; Blogger; Industry/Market Analyst; Founder & CEO of Avastu; editor at Virtualization.com
  • Andreas Antonopoulos, SVP & Founding Partner – Nemertes Research
  • Allwyn Sequeira, SVP & CTO – Blue Lane
  • Michael Berman, CTO – Catbird
  • Chris Hoff, Chief Security Architect – Systems & Technology Division and Blogger – Unisys
  • Hezi Moore, President, Founder & CTO – Reflex Security

We’ll publish the highlights from our conversations shortly, but as a teaser, here are some of the most interesting quotes:

“I don’t see much point in really thinking too much about five steps ahead, worrying about VM Escape, worrying about hypervisor security, etc. when we’re running Windows on top of these systems and they’re sitting there naked.”

“We’re dealing with virtualized storage, while nobody will ever raise their hand saying they’re a security expert when it comes to that.”

“More than 75 percent of the people we asked, how are you securing virtualized environments? Their answer was VLANs. That’s where we stand today.”

“This was a network guy and his email went: WTF, you need 30 VLANS on one server? That’s the first time he became aware of virtualization. That team wasn’t even working with him. And the first inkling he had when he got a request that was just so out of the norm he just didn’t know what was going on.”

“To me, security is like bell bottoms, every 10-15 years or so, it comes back into style.”

Watch Virtualization.com for more!

Filed Under: Featured, Interviews, People Tagged With: Allwyn Sequeira, Andreas Antonopoulos, Avastu, Blue Lane, Catbird, conference call, interview, Inuits, Joe Pendry, Kris Buytaert, Michael Berman, Nemertes Research, quotes, StackSafe, Tarry Singh, virtsec, virtualisation, virtualization, virtualization security

Build Your Own Cloud!

June 6, 2008 by Kris Buytaert 2 Comments

Given enough hardware, you can now build your own equivalent of Amazon’s Elastic Compute Cloud or a similar platform. And all of it is open source.

A group of developers from the Department of Computer Science at the University of California, Santa Barbara has recently released a tool that can make your personal Cloud dreams come true!

EUCALYPTUS – Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems – is an open-source software infrastructure for implementing “cloud computing” on clusters. The current interface to EUCALYPTUS is compatible with Amazon’s EC2 interface, but the infrastructure is designed to support multiple client-side interfaces.


Eucalyptus has been developed in the MAYHEM Lab within the Computer Science Department at the University of California, Santa Barbara, primarily as a tool for cloud-computing research. It is distributed as open source under a FreeBSD-style license that places few restrictions on its usage. Eucalyptus 1.0 targets Linux systems that use Xen (versions 3.*) for virtualization.

Eucalyptus is based on the Rocks cluster management platform. In the future, the EUCALYPTUS team will offer a source release along with other methods of deployment.

Being API-compatible with Amazon EC2 means you can reuse the tools you already wrote for Amazon and effectively build your own cloud without having to change your applications. EUCALYPTUS also opens the door for other organizations with spare CPU cycles to offer virtual machine instances at competitive prices.
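
Concretely, pointing Amazon’s standard EC2 command-line tools at a Eucalyptus cloud is mostly a matter of changing the endpoint and credentials; a sketch with a placeholder host, certificate paths and image ID:

    # Endpoint of your own Eucalyptus front end instead of Amazon's
    export EC2_URL=http://cloud.example.com:8773/services/Eucalyptus
    export EC2_PRIVATE_KEY=~/.euca/pk-cloud.pem
    export EC2_CERT=~/.euca/cert-cloud.pem
    # The familiar EC2 tools now talk to the local cloud
    ec2-describe-images
    ec2-run-instances emi-12345678 -k mykey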

Eucalyptus 1.0 was released just last month, and the ISO is available for download.

See also the report on Ostatic.

If you’re interested in this topic, you should check out Structure 08, an upcoming conference on cloud computing, infrastructure and virtualization (we’re a media partner for this event).

Filed Under: Featured, Guest Posts Tagged With: Amazon, Amazon EC, Amazon EC2, Amazon Web Services, cloud, cloud computing, ec2, Elastic Computing, Elastic Utility Computing Architecture for Linking Your, eucalyptus, Eucalyptus 1.0, open source, virtualisation, virtualization
