Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Search Results for: xensource

DMTF Accepts Draft Specification for Open Virtual Machine Format (OVF)

August 26, 2008 by Robin Wauters Leave a Comment

The Distributed Management Task Force (DMTF) today announced the acceptance of a draft specification submitted by leading virtualization companies (VMware, Oracle and CA recently joined the task force) targeting an industry-standard format for portable virtual machines. Virtual machines packaged in this format can be installed on any virtualization platform that supports the standard, simplifying interoperability, security and virtual machine lifecycle management for virtual infrastructures.

The companies behind the collaboration on this specification include Dell, HP, IBM, Microsoft, VMware, and XenSource. This group of virtualization industry leaders has submitted the specification to the DMTF for development into an industry standard. DMTF is the industry organization leading the development, adoption and promotion of interoperable management initiatives and standards. DMTF will continue to develop this technology into a successful, open industry standard and promote it worldwide.

The proposed format, called the Open Virtual Machine Format (OVF), uses existing packaging tools to combine one or more virtual machines together with a standards-based XML wrapper, giving the virtualization platform a portable package containing all required installation and configuration parameters for the virtual machines. This allows any virtualization platform that implements the standard to correctly install and run the virtual machines.
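As a rough illustration of the idea, a descriptor of this kind might look something like the sketch below. This is a hypothetical, heavily simplified example: the element and attribute names are illustrative only and are not taken from the draft specification itself.

```xml
<!-- Hypothetical sketch of an OVF-style XML wrapper; element and
     attribute names are illustrative, not the normative schema. -->
<Envelope>
  <!-- Files shipped alongside the descriptor, e.g. disk images -->
  <References>
    <File id="disk1" href="appliance-disk1.vmdk"/>
  </References>
  <!-- One virtual machine with its installation and configuration parameters -->
  <VirtualSystem id="web-appliance">
    <Info>A pre-configured web server appliance</Info>
    <VirtualHardware>
      <Item kind="cpu" quantity="2"/>
      <Item kind="memory" sizeMB="1024"/>
      <Item kind="disk" ref="disk1"/>
      <Item kind="network" connection="bridged"/>
    </VirtualHardware>
  </VirtualSystem>
</Envelope>
```

Because the wrapper is plain XML plus existing packaging tools, any platform that implements the standard can read the same package and recreate the virtual machines it describes.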

(IBM recently announced its open-ovf project.)

Most importantly, OVF specifies procedures and technologies to permit integrity checking of the virtual machines (VM) to ensure that they have not been modified since the package was produced. This enhances the security of the format and will alleviate security concerns of users who adopt virtual appliances produced by third parties. OVF also provides mechanisms that support license checking for the enclosed VMs, addressing a key concern of both independent software vendors (ISVs) and customers. Finally, OVF allows an installed VM to acquire information about its host virtualization platform and run-time environment, which allows the VM to localize the applications it contains and optimize its performance for the particular virtualization environment.
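The integrity-checking idea described above can be sketched in a few lines of Python. The manifest format used here (lines of the form `SHA256(filename)= hexdigest`) is an assumption for illustration — the draft OVF specification defines its own packaging details — but the mechanism is the same: recompute each file's digest and compare it against the value recorded when the package was produced.

```python
import hashlib
from pathlib import Path

def compute_digest(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: Path) -> bool:
    """Check each 'SHA256(name)= hexdigest' line against the file on disk.

    Returns False as soon as any file's digest no longer matches, i.e.
    the package was modified after the manifest was produced.
    """
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        algo_and_name, expected = line.split("=", 1)
        name = algo_and_name[algo_and_name.index("(") + 1 : algo_and_name.rindex(")")]
        if compute_digest(manifest.parent / name) != expected.strip():
            return False
    return True
```

A consumer of a third-party virtual appliance would run a check like this before installing, rejecting the package if any enclosed file has been tampered with.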

In addition to providing portability, integrity, and configurability for existing virtual hard disk formats, OVF is also extensible to support future virtual hard disk formats whose specifications are openly available.

Filed Under: Featured, News, Partnerships Tagged With: board, Dell, Distributed Management Task Force, DMTF, HP, IBM, industry standard, industry standard format, Microsoft, Open Virtual Machine Format, oracle, ovf, portable virtual machines, standard, virtual machine, virtualisation, virtualization, vmware

SplashTop Maker DeviceVM Raises $15 Million in Series C Funding Round

August 20, 2008 by Robin Wauters 1 Comment

DeviceVM, maker of the Splashtop instant-on environment, announced today a $15 million Series C round of funding led by venture capital firm New Enterprise Associates. Existing investors including Storm Ventures, DFJ Dragon, Tim Draper, and Larry Augustin also participated in this Series C investment, demonstrating continued commitment to DeviceVM’s growth.

“Splashtop is a game-changing product in the personal computing space,” said NEA partner Peter Sonsini, who previously led NEA’s investment in XenSource (acquired by Citrix Systems), and was an executive at VMware prior to joining NEA. “We are excited to invest in a revolutionary product and a team like DeviceVM.”

Warren Lazarow and Paul Sieben of O’Melveny & Myers LLP represented DeviceVM in the transaction.


Filed Under: Funding Tagged With: DeviceVM, Funding, instant-on environment, investment, NEA, New Enterprise Associates, Splashtop, virtualisation, virtualization

DataSheet Proposal for Xen 3.3 Hypervisor Published

August 19, 2008 by Robin Wauters 1 Comment

Stephen Spector published a post yesterday on the Xen blog featuring a proposed data sheet (PDF) for the upcoming Xen 3.3 release, which we reported was in its final testing stage at the beginning of this month.

Update 26 August: Xen 3.3.0 is available for download.

The complete list of new features in Xen 3.3 includes:

Performance and Scalability

  • CPUID Levelling
  • Shadow 3 Page Table Optimizations
  • EPT/NPT 2MB Page Support
  • Virtual Framebuffer Support for HVM Guests
  • PVSCSI — SCSI Support for PV Guests
  • Full 16-bit Emulation on Intel VT
  • Support for memory overcommit allowing more VMs per physical machine for some workloads

Security

  • PVGRUB Secure Replacement for PyGrub
  • IO Emulation “stub domains” for HVM IO

Green Computing

  • Enhanced C & P State Power Management

Graphics Support

  • VT-d Device Pass-Through Support

Miscellaneous

  • Upgrade QEMU Version
  • Multi-Queue Support for Modern NICs
  • Removal of Domain Lock for PV Guests
  • Message Signalled Interrupts
  • Greatly improved precision for time-sensitive SMP VMs

Filed Under: News Tagged With: data sheet, datasheet, datasheet proposal, Hypervisor, open source, Stephen Spector, virtualisation, virtualization, Xen, Xen 3.3, Xen hypervisor

A Round Table on Virtualization Security with Industry Experts

July 30, 2008 by Kris Buytaert 3 Comments

Virtualization security or ‘virtsec’ is one of the hottest topics in virtualization town. But do we need another abbreviation on our streets? Does virtualization require its own security approach and how would it be different from the physical world?

Different opinions fly around in the blogosphere and among vendors. Some security experts claim there is nothing new under the sun and the VirtSec people are just trying to sell products based on the Virtualization Hype. Some see a genuine need to secure new elements in the infrastructure, others claim that Virtualization allows new capabilities to raise security from the ground up and cynics claim it is just a way for the Virtualization industry to get a larger piece from the security budget.

So our editors Tarry and Kris set out to clarify the different opinions. With the support of StackSafe, they organized a conference call with some of the most prominent bloggers, industry analysts and vendors in this emerging field.

On the call were Joe Pendry (Director of Marketing at StackSafe), Kris Buytaert (Principal Consultant at Inuits), Tarry Singh (Industry/Market Analyst, Founder & CEO of Avastu), Andreas Antonopoulos (SVP & Founding Partner at Nemertes Research), Allwyn Sequeira (SVP & CTO at Blue Lane), Michael Berman (CTO at Catbird), Chris Hoff (Chief Security Architect, Systems & Technology Division at Unisys, and blogger) and Hezi Moore (President, Founder & CTO at Reflex Security).

During our initial chats with different security experts, their question was simple: “What does virtsec mean?” Depending on our proposed definition, opinions varied.

So obviously the first topic for discussion was the definition of VirtSec:

Allwyn Sequeira from Blue Lane kicked off the discussion by telling us that he defines VirtSec as “anything that is not host security or that’s not network-based security. If there’s a gap there, I believe that gap – in the context of virtualization – would fall under the realm of virtualization security.” He went on to question who is in charge of inter-VM communication security, and how features such as virtual machine migration and snapshotting add a different complexity to today’s infrastructure.

Andreas Antonopoulos of Nemertes Research takes a different approach and has two ways of looking at VirtSec: “How do you secure a virtualized environment?” and, in his opinion the more interesting question, “How do you virtualize all of the security infrastructure in an organization?” Andreas also wonders what to call the new evolutions: “What do you call something that inspects memory inside of a VM and inspects traffic and correlates the results? We don’t really have a definition for that today, because it was impossible, so we never considered it.” He expects virtualization to change the security landscape: “Just like virtualization has blurred the line between physical server, virtual server, network and various other aspects of IT, I see it blurring the lines within security very much and transforming the entire industry.”

Hezi Moore from Reflex Security wants to search for actual problems. He wants to know what has changed since we started virtualizing our infrastructures: “A lot of the challenges that we faced before we virtualized are still being faced after we virtualized. But a lot of them got really intensified, at a much higher rate and much more serious.”

Michael Berman from Catbird thinks the biggest role of VirtSec is still education, “…and the interesting thing I find is the one thing we all know that never changes is human nature.” He is afraid of virtualization changing the way systems are deployed with no eye on security. Virtualization has made it a lot easier to bypass the security officers and the auditors. The speed at which one can deploy virtual instances, and the sheer number of them, has changed drastically compared to a physical-only environment, and security policies and procedures have yet to catch up. “We can have an argument whether the vendors are responsible for security, or about who attacks servers. The big deal here is the human factor.”

Chris Hoff summarizes the different interpretations of VirtSec in three bullets:

  • One, there is security in virtualization, which is really talking about the underlying platforms, the hypervisors. The answer there is a basic level of trust in your vendors. The same as we do with operating systems, and we all know how well that works out.
  • Number two is virtualized security, which is really ‘operationalization’, which is really how we actually go ahead and take policies and deploy them.
  • The third one is really gaining security through virtualization, which is another point.

Over the past decade different virtualization threats have surfaced, some with more truth to them than others. About a decade ago, when Sun introduced their E10K system, they boasted that they really had 100% isolation between guest and host OS. But malicious minds figured out how to abuse the management framework to go from one partition to another. Joanna Rutkowska’s “Blue Pill” vulnerability theory turned out to be more of a myth than an actual danger. But what is the VirtSec industry really worried about?

It seems the market is not worried about these kinds of exploits yet. It is more worried about the total lack of security awareness. Andreas Antonopoulos summarizes this quite well: “I don’t see much point in really thinking too much about five steps ahead, worrying about VM Escape, worrying about hypervisor security, etc. when we’re running Windows on top of these systems and they’re sitting there naked”.

Allwyn from Blue Lane, however, thinks this is an issue. Certainly with cloud computing becoming more popular, we suggest seriously thinking about how to tackle the deployment of virtual machines in environments we don’t fully control. The virtual service providers will have to provide us with a secure way to manage our platforms, and enough guarantee that upon deployment of multiple services these can communicate in a secured and isolated fashion.

Other people think we first have to focus on the human factor: we still aren’t paying enough attention to security in the physical infrastructure, so we had better focus on the easy-to-implement solutions available today rather than worry about exploits that may or may not occur one day.

Michael Berman from Catbird thinks that virtualization vendors are responsible for protecting the security of their guests. A memory breakout seems inevitable, but we need to focus on the basic problems before tackling the more esoteric issues. He is worried about scenarios where old NT setups, or other insecure platforms, are migrated from one part of the network to another, and what damage can result from such events.

Part of the discussion was about standardization, and whether standardization could help in the security arena. Chris Hoff reasons that today we see mostly server virtualization, but there is much more to come: client virtualization, network virtualization, etc. As he says: “I don’t think there will be one ring zero to rule them all.” There are more and more vendors joining the market – VMware, Oracle, Citrix, Cisco, Qumranet and various others have different virtualization platforms – and some vendors have based their products on top of them.

In the security industry standardization has typically been seen as a bad thing: the more identical platforms you have, the easier it is for an attacker; if he breaks one, he has similar access to the others. Building a multi-vendor or multi-technology security infrastructure is common practice.

Another important change is the shift of responsibilities. Traditionally you had the systems people and the network people, and with some luck an isolated security role. Today the systems people are deploying virtual machines at a much higher rate, and because of virtualization they take charge of part of the network, giving the network people less control and the security folks less visibility.

Allwyn Sequeira from Blue Lane thinks the future will bring us streams of virtualization security: organizations with legacy infrastructure will go for good VLAN segmentation and some tricks left and right, because the way they use virtualization blocks them from doing otherwise. He thinks the real innovation will come from people who can start with an empty drawing board.

Andreas Antonopoulos from Nemertes Research summarized that we all agree the virtualization companies have a responsibility to secure their hypervisors. There is a lot of work to be done in taking responsibility so that we can implement at least basic security. The next step is to get security onto the management dashboard, because if the platform is secure but the management layer is a wide-open goal, we haven’t gained anything.

Most security experts we talked to still prefer to virtualize their current security infrastructure over the products that focus on securing virtualization. There is a thin line between needing a product that secures a virtual platform and changing your architecture and best practices so that a regular security product fits into a virtualized environment.

But all parties seem to agree that much of the need for VirtSec comes from changing scale, and no matter what tools you throw at it, it’s still a people problem.

The whole VirtSec discussion has just started; it’s obvious that there will be a lot of work to be done, and new evolutions will pop up left and right. I’m looking forward to that future. So, as Chris Hoff said, “Security is like bell bottoms, every 10-15 years or so it comes back in style”, this time with a virtualization sauce.

Listen to the full audio of the conference call!

Filed Under: Featured, Guest Posts, Interviews, People Tagged With: Allwyn Sequeira, Andreas Antonopoulos, Avastu, Blue Lane, Catbird, Chris Hoff, conference call, Hezi Moore, interview, Inuits, Joe Pendry, Kris Buytaert, Michael Berman, Nemertes Research, Reflex Security, round table, StackSafe, Tarry Singh, Unisys, virtsec, virtualisation, virtualization, virtualization security

Rich Wolski on Eucalyptus: Open Source Cloud Computing (Video Interview – 2/2)

July 18, 2008 by Toon Vanagt Leave a Comment

In this second part of our video interview with Rich Wolski (see the first part here), recorded at the O’Reilly Velocity conference, we learn how Eucalyptus worked around the Amazon subscription method, where credit cards are the key to authentication. Offering ‘free and open’ clouds in university environments was achieved by introducing a system administrator between the user account request and the issuing of certificates. Upon user request, the Eucalyptus user subscription interface generates an e-mail to an administrator, who then performs a ‘manual’ verification. This can be a phone call or a physical meeting.


Eucalyptus Director Rich Wolski on open source cloud computing, Xen and Amazon’s EC2 (part 2/2) from Toon Vanagt on Vimeo.

Users did not like Rocks (a leading open source cluster management tool); in smaller community deployments they preferred to do this manually. So Eucalyptus 1.1 provides guidance, in the form of a script, for building from scratch by hand.

A ‘build with one button’ remains the goal for future versions.

The full Eucalyptus image is only 55 MB (without the Linux image) and includes the necessary packages to make sure all of the revision levels are fully compatible. Eucalyptus comes under a FreeBSD-style open-source license, with a small disclaimer that the University of California, Santa Barbara explicitly wants to avoid any intellectual property infringements and will take the necessary steps if needed.

Virtualization is provided by Xen 3.1 for security’s sake (3.0 works too, but is discouraged).

Lessons learned in building clouds from open source are quite rare. Here are a few from Rich:

Unlike commercial environments (where one controls the configuration, hardware purchases and networking), the architectural decisions are very different in an open source environment, where one does not know the installation. One of the current challenges is to build a system that adapts to the control you have over your specific installation; you could successfully trade away some of the system’s portability as your needs dictate.

A second lesson is that people do things by hand, and this is an opportunity for automation. Nobody deploys Linux manually; instead sysadmins use distributions. Shouldn’t there be a similar cloud distribution product out there? The people at Puppet were eager to help provide such scripts for cloud deployments. According to Rich, this illustrates how O’Reilly should be credited for creating a good atmosphere at the Velocity 08 conference, where a lot of cross-fertilization happened.

Rich ends the interview by throwing a fundamental question at the cloud community. He classifies current cloud initiatives on a scale based on the ‘closeness’ of the application layer to the cloud API. At one end of this spectrum he puts Google Apps (with Python-oriented function calls), at the other end Amazon EC2 (a set of very simple web service interfaces to the underlying virtualization technology), and all other cloud offerings float in between. This impacts what you can do with virtualization: Google AppEngine becomes your compiler at its end of the scale.

Rich wonders whether this tighter link to Google AppEngine will become a liability or an asset in the future when it comes to virtualization capabilities.

We invite you to provide your answers in the comments below!

Filed Under: Interviews, People, Videos Tagged With: Amazon EC2, cloud computing, ec2, eucalyptus, interview, kvm, LibVert, O'Reilly, O'Reilly Velocity, open source, open source cloud computing, Rich Wolski, VDE, video, video interview, virtualisation, virtualization, vmware, Xen, Xen virtualization

“Benchmarking” The Citrix / XenServer Combo with Ian Pratt (Video Interview – Part 4)

June 15, 2008 by Robin Wauters Leave a Comment

During the Fosdem 2008 conference, we had a chance to sit down (on a bench) with Xen guru Ian Pratt. Below is the fourth and last part (see part 1, part 2 and part 3) of our exclusive interview, where Ian shines his light on Citrix XenServer, relocating virtual machines (VMs), VM mirroring, OVF, page table algorithms, open source community involvement, management frameworks, the Citrix takeover, virtualization marketing with OS enlightenment, FUD tactics by VMware, self-healing servers, Xen embedded in firmware, why Amazon goes with Xen, the Xen GPL license, OracleVM, xVM (Sun), Parallels and the future of virtualization…

We cut the interview into four digestible pieces, which we published one at a time (see part 1, part 2 and part 3). As said, this is the final part (soon you’ll also find a written transcript below for your convenience):

The video is also up on YouTube and Steamocracy.

Filed Under: Featured, Interviews, People, Videos Tagged With: citrix, Citrix Ian Pratt, citrix xenserver, Ian Pratt, interview, Sun xVM, University of Cambridge, video, virtualisation, virtualization, Xen, Xen Ian Pratt, xen.org, XenDesktop, xenserver, xensource, XVM


