Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Search Results for: virtualization security

Symantec Acquires nSuite Technologies, Moves Into Endpoint Virtualization Management

August 5, 2008 by Robin Wauters Leave a Comment

NetworkWorld is reporting that Symantec has agreed to acquire nSuite Technologies, a small firm specializing in virtualization solutions for the healthcare industry, for an undisclosed amount in cash. Symantec aims to build out its portfolio of virtualization security and management technologies.

nSuite makes software called PrivacyShell Virtual Workspace Management, which is primarily used by hospitals for secure desktop management on behalf of physicians and medical staff. The nSuite software works to create a container around an individual’s authorized applications and data. When a user is authenticated, the user’s applications and data can quickly be ported to the Windows-based desktop where the individual is working, and later removed. PrivacyShell Virtual Workspace Management balances centralized control with the flexibility needed to provide tailored user environments. It leverages virtualization and authentication technologies to optimize the way hospitals deliver and manage end user workspaces.

Once the nSuite acquisition closes, which is expected this month, Symantec plans to integrate the nSuite software with its Altiris SVS and AppStream management consoles. The company will announce product updates at ManageFusion, a hands-on lab and training event held in various cities worldwide.

nSuite Technologies

Filed Under: Acquisitions, Featured Tagged With: acquisition, endpoint virtualization, healthcare, nSuite, nSuite Technologies, PrivacyShell, PrivacyShell Virtual Workspace Management, Symantec, Symantec nSuite, thin client computing, Virtual Workspace Management, virtualisation, virtualization, virtualization management, virtualization security

Video Interview: Werner Vogels, CTO Amazon on Virtualization and the VC Threat

July 31, 2008 by Toon Vanagt 8 Comments

At the GigaOM Structure08 conference in San Francisco, we had the opportunity to question Amazon CTO Werner Vogels about his experience with virtualization while building the Amazon cloud. He confirmed that Amazon Web Services is still powered by Xen hypervisors.

It is remarkable to hear the CTO of a multinational openly thank the open source community for its active support of Xen, and to hear him cite that community as the main reason Amazon chose Xen as a crucial cloud-enabling building block.


Werner Vogels CTO Amazon.com from Toon Vanagt on Vimeo.

As we reported earlier, Amazon is also very open about its performance and welcomes independent companies to measure and report on the parameters of its public virtual computing facility, such as security, availability, scalability, performance and cost.

Werner finished our video interview by explaining why cloud computing is disruptive even outside the datacenter and transforms unexpected industries. Venture capitalists seem upset about side effects such as start-up funding independence: these fast-growing tech companies no longer need to burn lots of VC money on hardware platforms and technologies upfront. They can now scale their offering dynamically, driven by organic growth, while generating the revenues needed to cover the extra cloud cost.

At Virtualization.com we like to think that “shift happens” and look forward to the upcoming VC-riots on Sand Hill Road against these unthankful self-sufficient start-ups 🙂

A full transcript of the interview is below. If you are interested in Amazon Web Services, you might also want to participate in our contest to win a free book, dedicated by Werner Vogels.


(00:00) Werner Vogels, welcome to Virtualization.com. You are the system administrator of a small bookshop. Could you tell us a bit more about yourself and how you virtualized your infrastructure to this scale?

“I am the Chief Technology Officer for Amazon.com and I am responsible for the long term vision for technology within Amazon as well as how we can develop radically new technologies to support that business. But also the kind of businesses Amazon could move into, because of the unique technologies that we have developed.”

(00:33) Werner, I am a bit puzzled, because I did an interview with Xen founder Ian Pratt and he told me that Amazon is using Xen extensively. In your keynote here at the GigaOM Structure08 conference you just claimed you’re no longer using third-party applications. Were you referring to Xen in that respect?

“My remark about third-party applications was more about our enterprise stuff, where you look at databases and middleware… We do use some third-party software, and Xen is one of those. But we use it in the mode everybody in this world is using it. We don’t push these types of technologies to the extreme, because we want to make sure their vendors can support us the way they support any other customer. The remark I made this morning was more about the fact that when you really start pushing technology to the edge, we cannot blame vendors for not being able to support us.”

(01:30) How hard was it to integrate the Xen hypervisor into your cloud platform?

“I think Xen is a great product. It is easy to use. But most important is the very active community around it. I would not call them ‘issues’ with using Xen; rather, the ‘challenges’ every virtual machine monitor has to deal with are addressed there. Things such as I/O issues, guaranteed scheduling issues, domain zero security concerns… The community out there is very helpful. That was a very big reason for us in selecting Xen.”

(02:15) With “security”, you just mentioned one of the big virtualization issues at stake. How do you make absolutely sure that VMs are isolated in a mixed-customer cloud environment? Is Amazon using VLANs to achieve that, or did you design proprietary solutions or techniques you can share with the community?

“It is our policy not to discuss specific security techniques, except to say that we have done extensive software development to make sure we can audit, maintain and manage the security issues.”

(02:45) You see this as one of your competitive advantages?

“I like to believe that security is one of the main concerns and you have to address it upfront. There is no excuse. In this world of cloud computing the most fundamental promise needs to be that it is secure!”

(03:10) Yesterday CloudStatus was launched, and I imagine you are aware of this? Is Amazon happy about that?

“Absolutely, we love them. But I want to take a step back there. It is very important with things like CloudStatus that they are actually reporting on things that make sense for our customers. So we are looking forward to working with them, bringing them into contact with our customers, and making sure that the things they report on are useful to our customers…”

(03:40) You would like to advise CloudStatus on the Amazon parameter set they should be reporting on?

“It is up to them, of course. This is not going to be a winner-take-all business, as there will be many cloud providers in the future. As I mentioned in my talk, we will be measured on security, availability, scalability, performance and cost. So it is very important that we have independent companies measuring these kinds of things.”

(04:18) When you talk about independent companies and open alternatives, one of the general concerns remains vendor lock-in. With Eucalyptus there is an open source equivalent, which sort of reverse engineered your APIs (Application Programming Interfaces) and is compatible with Amazon. Do you think that knowing you can in-source your cloud if needed helps reassure prospective companies selecting a cloud provider?

“Let’s first start off with the notion of vendor lock-in. As I mentioned in my talk, I like to believe that Amazon works very hard to provide APIs which are so simple that there is hardly any vendor lock-in. We use standard techniques to give people access to our APIs. If you look at Eucalyptus, their need came out of the schools involved in high performance computing, which on the one hand want to use the public cloud for parallel computing, but on the other hand want to keep a similar interface internally. I think they have been very successful in making sure that all these schools adopt this same model.”

(05:32) A last question on your disruptive cloud platform. Could you explain how this technology also disrupts start-up funding cycles and the move from CAPEX to OPEX expense models? [A capital expenditure (CAPEX) is the cost of developing or providing non-consumable parts for a product, service or system. Its counterpart, an operating expense (OPEX), is an ongoing cost of running that product.]

“Last night I was at a reception where a venture capitalist walked up to me and said he hated Amazon, because we killed his business. After we talked for a while, he actually had to confess that they also have to adapt to this new world. In the old world, they could lock themselves into a company and get their hands on a large part of the equity, because those companies had to spend a lot of money on resources upfront. What we see now is that the availability of these services makes companies start to think differently. Before, start-ups maybe had the idea that the only way they could be successful was to have a very big exit, and for that they needed a lot of hardware and lots of investment. Now that these services are available, many companies are moving to a model where they think they can build a sustainable business. Maybe we can build great products and charge our customers for them. And if you then attract more customers, you spend more on the (development of) these services, which is just fine, as your income follows your customer needs.”
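The funding shift Vogels describes is ultimately arithmetic. As a back-of-the-envelope sketch (all numbers here are hypothetical, purely for illustration), compare buying peak-sized hardware on day one with renting instances as demand grows:

```python
# Hypothetical numbers, purely illustrative: compare an upfront hardware
# build-out (CAPEX) with pay-as-you-go cloud capacity (OPEX) for a
# start-up whose load grows with its customer base.

def capex_cost(months, upfront=250_000, colo_per_month=3_000):
    """Buy peak-sized hardware on day one, then pay hosting each month."""
    return upfront + colo_per_month * months

def opex_cost(months, start_servers=2, growth_per_month=2,
              server_hour=0.10, hours_per_month=730):
    """Rent only the instances needed each month as usage grows."""
    total = 0.0
    for m in range(months):
        servers = start_servers + growth_per_month * m
        total += servers * server_hour * hours_per_month
    return total

for months in (6, 12, 24):
    print(months, capex_cost(months), round(opex_cost(months)))
```

The CAPEX number is sunk before the first customer arrives; the OPEX number grows roughly with revenue, which is exactly why a start-up on the pay-as-you-go model needs far less funding upfront.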

Amazon

Filed Under: Funding, Interviews, People, Videos

Secerno Introduces Database Security Solution for Virtual Environments

July 29, 2008 by Robin Wauters Leave a Comment

Database security provider Secerno today announced the availability of its Secerno.SQL database activity monitoring and blocking solution as a virtualized appliance on the VMware platform. This, according to the company, marks the first availability of its appliance-based database protection in a virtualized environment, allowing enterprises the same database protection afforded by hardware yet with the utilisation, management and cost benefits of a virtualized application.

Secerno’s virtualized database protection product is powered by its patent-pending SynoptiQ technology. The company chose VMware’s technology platform based on its penetration into more than 20,000 corporate customers.

“This unique announcement comes at an exciting time for Secerno. We have just secured funding that allows us to respond to market opportunities and customer demand,” said Steve Hurn, CEO of Secerno. “In this case, we are bringing Secerno to a virtualised environment to offer organisations more variety in their deployments and reduce their overall costs in terms of resources and expenses. Virtualisation is emerging as a key strategy for companies, and we are delighted to be the first to offer this option for database activity monitoring and blocking.”

Secerno.SQL for VMware is set to be available this quarter.

Secerno

Filed Under: News, Partnerships Tagged With: database monitoring, database security, Secerno, Secerno SynoptiQ, Secerno.SQL, security, SynoptiQ, virtual environments, virtualisation, virtualization, vmware

OLS Virtualization Minisummit Report

July 24, 2008 by Kris Buytaert 2 Comments

Virtualization.com was present at this week’s Virtualization Minisummit in Ottawa.

The OLS Virtualization Minisummit took place last Tuesday in Les Suites, Ottawa. Aland Adams had created an interesting lineup with a mixture of kernel-level talks and management-framework talks. First up was Andrey Mirkin from OpenVZ, who gave a general overview of different virtualization techniques. While comparing them, he claimed that Xen has higher virtualization overhead because the hypervisor needs to manage a lot of state itself, whereas “container-based” approaches that rely on the Linux kernel for this have less overhead.

We discussed OpenVZ earlier; it uses a single kernel for both the host OS and all the guest OSes. Each container has its own files, process tree, network (virtual network device), devices (which can be shared or not), IPC objects, and so on. Often that’s an advantage; sometimes it isn’t.

When Andrey talks about containers, he means OpenVZ containers, which often confused the audience, as the Linux Containers minisummit was going on in a different suite at the same time. He went on to discuss the different features of OpenVZ. It currently includes checkpointing, and there are templates from which new instances can be built quickly.
OpenVZ also supports live migration: basically taking a snapshot and transporting it (rsync-based) to another node. This is not the Xen way; there is some downtime for the server, although a minor one.
Interesting to know is that the OpenVZ team is also working on getting OpenVZ into the mainline Linux kernel, and has been contributing a lot of patches and changes to the kernel in order to get their features in. Andrey also showed us a nice demo of the PacMan Xscreensaver being live-migrated back and forth.

Still, containers are “chroots on steroids” (dixit Ian Pratt).
Given the recent security fuss, I wondered about the security impact of containers. Container-based virtualization means you can see the guest’s processes from the host OS, which is an enormous security problem. Imagine a virtual-host provider using this kind of technique: it has full access to your virtualized platform, whereas with other approaches it would actually need your passwords, etc., to access certain parts of the guest.
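To make the visibility point concrete, here is a small illustrative sketch (not OpenVZ code; the container IDs and process table are made up) of what a host-side process listing looks like under the single-kernel model: every guest process appears in the host’s table, tagged with a container ID, with CTID 0 conventionally denoting the host itself:

```python
# Illustrative only: a made-up host-side process table under a
# container model. Each entry is (pid, ctid, command); ctid 0 is the
# host, nonzero ctids are guest containers. Because host and guests
# share one kernel, the host admin sees *all* of these processes.

host_ps = [
    (1,    0,   "init"),
    (2201, 0,   "sshd"),
    (3010, 101, "apache2"),   # runs inside container 101
    (3011, 101, "mysqld"),
    (4105, 102, "postfix"),   # runs inside container 102
]

def processes_in_container(table, ctid):
    """What the host can observe of one guest's process tree."""
    return [(pid, cmd) for pid, c, cmd in table if c == ctid]

print(processes_in_container(host_ps, 101))
# Contrast with Xen: the host sees only an opaque domain, not guest PIDs.
```

Running this prints `[(3010, 'apache2'), (3011, 'mysqld')]`: the host enumerates the guest’s processes directly, which is precisely the exposure discussed above.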

The next talk was about virtual TPM on Xen/KVM for Trusted Computing, by Kuniyasu Suzaki. He kicked off by explaining the basics of the Trusted Platform Module. The whole problem is to create a full chain of trust from boot to full operation: you need a boot loader that supports the TPM (a TPM-aware GRUB), and a patched kernel (IMA, the Integrity Measurement Architecture), from which point you can have a trusted binary.
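The chain of trust rests on the TPM’s extend operation: each measured boot stage folds its digest into a Platform Configuration Register, so the final PCR value commits to the whole boot sequence in order. A minimal sketch of TPM 1.2-style extend semantics (SHA-1, 20-byte PCRs; note the real TPM_Extend takes a precomputed digest, while this sketch hashes the measurement itself for brevity):

```python
import hashlib

def extend(pcr, measurement):
    """TPM 1.2-style PCR extend: new = SHA1(old || SHA1(measurement))."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr + digest).digest()

pcr = b"\x00" * 20                      # PCRs reset to zero at power-on
for stage in (b"bootloader", b"kernel", b"initrd"):
    pcr = extend(pcr, stage)            # each stage measures the next

# Reordering or altering any stage yields a different final PCR value,
# which is what remote attestation compares against an expected value.
```

Because extend is one-way and order-sensitive, a verifier that trusts the TPM can detect any tampering with the measured boot chain, which is why losing the hardware TPM (as with emulation or migration, below) weakens the whole scheme.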

There are two ways to give a virtual machine a TPM. First, there is a proprietary module by IBM, presented at the 2006 USENIX symposium, which passes the physical TPM through to a VM. Secondly, the TPM can be emulated in software; there is an emulator developed by ETH at tpm-emulator.berlios.de. KVM and Xen support the emulated TPM. Of course, this does not preserve the hardware root of trust.

As QEMU is needed to emulate BIOS-related things, you can’t do vTPM on a paravirtualized domain; you need an HVM-based one. A customized KVM by Nguyen Anh Quynh will be released shortly; the patch will be applied to QEMU.

Still, these cases use the TPM emulator and not the real hardware. An additional problem with virtualization and TPM arises when you start thinking about migrating machines around… and losing access to the actual TPM module. Kuniyasu then showed a demo using VMKnoppix.

Dan Magenheimer then did a rerun of his Xen Summit 2008 talk titled “Memory Overcommit without the Commitment”.

There is a lot of discussion on whether you should or should not support memory overcommit. Some claim you should just buy enough memory (after all, memory is cheap), but it isn’t always: as soon as you go for the larger memory configurations you’ll still be paying a lot of money.
Overcommitment costs performance: you’ll end up swapping, which is painful. However, people point out that oversubscribing CPU and I/O also costs performance, so sometimes you need to compromise between functionality, cost and performance. Imho, a machine that is low on memory and starts swapping, or even OOM-killing processes, is much more painful than a machine that slows down because it is reaching its CPU or I/O limits.

So one of the main arguments in favor of supporting overcommit on Xen was: because VMware does it…

Dan outlined the different proposed solutions, such as ballooning, content-based page sharing, VMM-driven demand paging, hotplug memory add/delete, ticketed ballooning, or even swapping entire guests, before arriving at his own proposal, which he titled feedback-directed ballooning.

The idea is that you have a lot of information about the memory status of your guest, that Xen ballooning works like a charm, that Linux actually performs OK when put under memory stress (provided you have configured swap), and that you can use the xenstore tools for two-way communication. So he wrote a set of userland bash scripts that implement ballooning based on local or directed feedback.
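The actual implementation was a set of bash scripts over xenstore; purely as an illustration of the feedback idea (the function name, headroom factor and bounds below are made up, not Dan’s), the policy amounts to: take the memory the guest reports it needs, add some headroom, and clamp the result between configured bounds before setting it as the balloon target:

```python
def balloon_target(committed_kb, headroom=1.25,
                   min_kb=128 * 1024, max_kb=1024 * 1024):
    """Pick a new memory target from guest-reported memory pressure.

    committed_kb: memory the guest currently needs (e.g. Committed_AS
    from the guest's /proc/meminfo, reported back through xenstore).
    """
    want = int(committed_kb * headroom)   # leave some headroom above need
    return max(min_kb, min(want, max_kb)) # clamp to configured bounds

# A guest needing ~400 MB is given ~500 MB; an idle guest never drops
# below the floor; a bloated guest is capped at its configured maximum.
```

The control loop then simply repeats: read the guest’s report from xenstore, compute the target, and apply it via the balloon driver.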

Conclusion: Xen does do memory overcommit today, so Dan replaced a “critical” VMware feature with a small shell script 🙂

Filed Under: Guest Posts Tagged With: memory ballooning, Memory Overcommit, ols, openvz, oraclevm, tpm, virtualization minissummit, vmknoppix, vmware, vTPM

Reflex Security Introduces Reflex Virtual Security Center

July 22, 2008 by Robin Wauters Leave a Comment


Reflex Security, a virtualization security management company, today announced the availability of Reflex Virtual Security Center (VSC), which aims to provide heightened visibility and control for virtualized environments. Reflex VSC provides a single authoritative visual interface to secure the virtual data center.

According to the company, its solution is the industry’s first built on the premise that without the ability to visualize both the logical and physical elements of the virtual infrastructure, effective and efficient security cannot be achieved. Reflex VSC increases the visibility of an organization’s virtual infrastructure so it can be properly secured, enabling organizations to align virtualization security solutions with real business objectives.

Reflex VSC applies a best-practice methodology by first discovering the entire virtual environment to determine the security controls needed to fully secure the infrastructure with Reflex VSA (Virtual Security Appliance). This approach dramatically simplifies and automates the security of VMs while reducing the potential for improper configurations.

Recognizing customers’ need for cross-functional, centralized security controls for their virtual infrastructure, Reflex VSC harnesses the power of virtualization to simplify and automate routine activities such as security deployment, policy configuration, and event correlation and reporting. With Reflex VSC, operations and security teams have a single authoritative visual interface to administer, secure and monitor the dynamic virtual infrastructure. This results in better network and event visibility for a faster and more effective security response. Through extensive real-time and historical visual reporting, Reflex VSC gives administrators the tools they need to efficiently meet stringent compliance requirements.

Through the combination of visibility and revision control provided by Reflex VSC, Reflex VSA has been enhanced, giving administrators the insight required to understand, monitor and secure dynamic virtual environments. The combined functionality provides a broader view of the virtual infrastructure, which can stop threats earlier in the environment. Together, Reflex VSA and VSC provide the ability to view alerts and events in context of the virtual infrastructure.

Reflex VSC is available now and is included in the purchase of Reflex VSA.
[Source: Marketwatch]

Filed Under: News Tagged With: Reflex, Reflex Security, Reflex Virtual Security Appliance, Reflex Virtual Security Center, Reflex Virtual Security Center (VSC), Reflex VSA, Reflex VSC, virtsec, Virtual Security Appliance, Virtual Security Center, virtualisation, virtualization, virtualization security, VSA, VSC

Azure Uses Intel Virtualization Extensions To Counter Malware

July 22, 2008 by Robin Wauters Leave a Comment


Paul Royal, principal researcher at Damballa, has developed a new tool called Azure, which takes advantage of the virtualization extensions in Intel‘s chips to evade the virtual machine and sandbox checks malware authors often include in their ‘work’. Because the extensions exist at the hardware level, below the level of the host OS, the malware doesn’t have the ability to detect Azure, allowing researchers to analyze its behavior unimpeded.

“The whole point is to get out of the guest OS so the malware can’t detect you and attack,” said Royal. “Intel VT doesn’t have the weakness of in-guest approaches because it’s completely external. Others use system emulators, but to get everything exactly right in terms of emulation can be tricky.”

Royal plans to release the source code for Azure at the upcoming Black Hat conference in Las Vegas and will make the tool available for download, as well. Royal said he is still working on features that he plans to add to a future version of Azure, including a precision automated unpacker and a system call tracer.

Intel’s virtualization technology (VT) is a set of extensions added to some of the company’s chipsets that help implement virtualization at the hardware rather than the software level. VT is designed to help enterprises make better use of their hardware resources and save energy.

[Source: SearchSecurity]

Filed Under: News Tagged With: Azure, Black Hat, Black Hat conference, Damballa, Damballa Azure, hardware virtualization, intel, Intel Virtualization, Intel virtualization extensions, Intel virtualization technology, Intel VT, malware, Paul Royal, research, security, virtualisation, virtualization, virtualization extensions

