Virtualization.com

News and insights from the vibrant world of virtualization and cloud computing


Video interview with George Kurian, Vice President and General Manager of the Application Delivery Business Unit at Cisco (Part 1/2)

September 14, 2008 by Toon Vanagt

In this first part of our interview, recorded at Cisco’s headquarters, George Kurian explains how Cisco looks at virtualization in the data center through three different sets of product capabilities: pervasive networking platforms, services, and VFrame (provisioning and orchestration tools).


From his position as vice president and general manager of the Application Delivery Business Unit at Cisco, he sees the need for server virtualization to be complemented with virtualization capabilities in the network, and explains how his teams are engineering the network to be a facilitator for all the virtues virtualization brings. The goal for George’s data center technology group is to make the network aware of the new atomic unit in the data center: the virtual machine, no longer the physical server or port. He goes on to point to the Nexus series and new introductions in the Catalyst series with capabilities to support some of these (r)evolutionary trends in the data center. In essence, Cisco is reducing the number of connections a server has to have from roughly eight today to only two, simplifying power, cooling, cabling and a whole series of transformations in the data center, and, from an operational standpoint, providing a single network to manage.

Read the full transcript below or go to the second part.

0:04 George, welcome to Virtualization.com. Could you tell us a little bit more about yourself and what you’re doing here at Cisco?

George Kurian: Hi Toon. I’m George Kurian and I’m the vice president and general manager of the Application Delivery Business Unit. We’re part of the engineering organization at Cisco, and within Cisco’s engineering team we are part of the data center technology group, the group that builds all our switching and services platforms for the data center.

0:30 Okay and how does Cisco think about the data center and virtualization in particular?

Kurian: First of all, in terms of the data center itself, we look at the data center along the lines of three different sets of product capabilities. The first are pervasive networking interconnect platforms, such as our Catalyst 6500 platforms that provide LAN-to-server connections; platforms such as our MDS platforms, which are for storage interconnects; InfiniBand, which provides inter-process or cluster communication interconnects; and in addition the recently introduced Nexus 5000 family, which provides access interconnects from servers to the network. So in essence a range of interconnect platforms. Layered on top of that are services, such as security services, load balancing and application delivery services, and WAN acceleration services that drive the performance of applications from the data center to the remote branches. And then putting all of that together is a layer of provisioning and orchestration tools that we call VFrame. So: networking platforms, services and provisioning tools.

1:52 Okay. Let’s start with how one can capture all the benefits. One of the major benefits of virtual machines, that you can relocate them, must put a lot of strain on the network. How did you deal with that?

Kurian: In essence, we believe, Toon, that virtualization of the server environment needs to be complemented with virtualization capabilities in the network, because to get the benefits of efficiency plus flexibility that server virtualization tries to create, you need the network to be a facilitator of all of that. Specifically, some of the benefits of virtual machine motion, as well as the different failover scenarios that customers are used to in the physical machine environment: bringing those to the virtual machine environment needs the network to be virtual machine aware, so that you can have what we call transparent virtualization. So a lot of the work we’re doing in the data center technology group is to make the network aware of the new atomic unit in the data center, which is no longer the physical server, and what we have forever called the port, but really the virtual machine itself.
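
To make that concrete, here is a minimal Python sketch, purely illustrative (the class and profile names are hypothetical, not a Cisco API), of what "VM-aware" networking implies: policy is keyed to the virtual machine's identity rather than to a physical switch port, so it follows the VM when it migrates.

```python
# Hypothetical sketch: network policy keyed to the virtual machine rather
# than the physical switch port, so configuration follows a VM when it moves.
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    vlan: int
    qos_class: str
    acl: list[str] = field(default_factory=list)

class VMAwareSwitch:
    def __init__(self):
        self.profiles = {}        # vm_id -> PortProfile
        self.attachments = {}     # vm_id -> physical port

    def define_profile(self, vm_id: str, profile: PortProfile):
        self.profiles[vm_id] = profile

    def attach(self, vm_id: str, port: str):
        # On attach (or re-attach after live migration), the VM's own
        # profile is applied to whatever port it now sits behind.
        self.attachments[vm_id] = port
        p = self.profiles[vm_id]
        print(f"{vm_id} on {port}: VLAN {p.vlan}, QoS {p.qos_class}")

switch = VMAwareSwitch()
switch.define_profile("vm-42", PortProfile(vlan=110, qos_class="gold"))
switch.attach("vm-42", "eth1/7")   # initial placement
switch.attach("vm-42", "eth2/3")   # after migration: same policy, new port
```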

3:12 That’s also a big change if you look at like the Catalyst series as well as the jack-of-all trades where you would just plug in a firewall, routing device or whatever; whereas now, we’re moving to a whole new situation because some people call it the flat layered two-domain mess that is being created by virtual machines and hypervisors. How do you cope with that?

Kurian: The Nexus series of products as well as the Catalyst have important new introductions of capabilities to support some of the evolutionary trends that you see in a data center, right? The first one, which is still the most used by customers, is what we call server consolidation and standardization. This is the movement from a variety of distributed computing environments to a few standardized X86 environments in the data center. What consolidation does, especially with the move to multi-core CPUs, is drive a much higher density and bandwidth per slot. So the Catalyst as well as the Nexus 7000 series, for example, are much denser platforms. In addition, what we see is the movement from client-to-server-oriented applications to some of the more server-to-server communication paradigms introduced by Web 2.0 and other types of new applications. That drives a lot of what we call cross-sectional bandwidth, and so there are new innovations in both the Nexus 5000 and 7000 series that take advantage of those new types of platforms.

Now, one of the new trends that we are seeing, as part of what we announced, which we call the unified fabric, is the consolidation of a variety of currently heterogeneous networking environments in the data center into a single unified networking fabric. The most important of the networking environments in the data center are classically the LAN, which is an Ethernet environment, and then storage, which has historically been a Fiber Channel environment. What we announced at the end of January is what we call the unified fabric, and what the unified fabric essentially does is bring some of the best elements of Ethernet (simplicity, scalability and cost efficiency) together with the needs of Fiber Channel, for example lossless transport, lower latency, and so on. And so we see that really transforming the next-generation data center. In essence, it reduces the number of connections that the server has to have from roughly eight today to two, simplifying power, cooling, cabling, the whole series of transformations in the data center, and then, from an operational standpoint, providing a single network that you need to manage.
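
As a back-of-the-envelope illustration of that eight-to-two consolidation, here is a short Python sketch; the per-server link counts and the server count are assumptions for the sake of the arithmetic, not figures from the interview.

```python
# Back-of-the-envelope sketch of the consolidation Kurian describes:
# a server with separate LAN, SAN and management links collapses onto
# two converged (FCoE) links. Counts are illustrative assumptions.
legacy_links_per_server = {"ethernet_lan": 4, "fiber_channel_san": 2,
                           "management": 2}           # ~8 cables today
unified_links_per_server = 2                          # redundant CNA pair

servers = 500
legacy_cables = servers * sum(legacy_links_per_server.values())
unified_cables = servers * unified_links_per_server
print(f"cables: {legacy_cables} -> {unified_cables} "
      f"({100 * (1 - unified_cables / legacy_cables):.0f}% fewer)")
```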

6:12 Okay. Are you working on standardization, on industry standards, to do this?

Kurian: Yes. We’re working with a combination of an industry partner ecosystem, with players like Intel and IBM and others, as well as the IETF and some standardization bodies… We try to standardize some of these key technologies, such as Fiber Channel over Ethernet.

6:36 What type of bandwidth do you see with Fiber Channel over Ethernet? We’re at 10 Gigabit today.

Kurian: Right.

6:41 How would that evolve? In what timeframe do you think we’ll be able to do this?

Kurian: There’s certainly a movement in the Fiber Channel work to bring out 8 Gigabyte Fiber Channel on the Ethernet side. The two next levels of performance are 40 Gb and 100 Gb Ethernets. There’s a standard work in both of those performance levels that are in process.

7:08 Okay. When we talked about the virtualization capabilities that you want to build into the network, could you maybe tell a little bit about the differences there between the Nexus architecture and the Catalyst architecture?

Kurian: In essence, the benefits of the Nexus and the Catalyst are roughly similar when you consider the interactions between the server and the network, right? I think what the Nexus certainly does is take density and per-slot performance to a whole new level. What we also see in the Nexus is increased intelligence on a per-port basis, because in the Nexus world, where we really have built the platform for the next ten to fifteen years of the data center, the physical NIC on the server now has a lot more traffic behind it and a lot more application environments hosted behind it. So we brought a lot more per-port intelligence into the Nexus, for example. We will see that intelligence also coming into upcoming versions of the Catalyst, but that’s one of the hallmarks that we bring.

8:15 Today, that is really one of the bottlenecks in virtualization, all the I/O virtualization going on?

Kurian: That’s right. In essence, you want to have quality of service at the NIC itself now, right, because you’ve got this disparate application environment sitting behind that single physical interface.

Filed Under: Interviews, People, Videos Tagged With: Catalyst 6500, Cisco, Ethernet, Fiber Channel, General Catalyst, George Kurian, Infiniband, interview, ITF, Kurian, Nexus, Nexus 5000, Nexus 7000, Toon Vanagt, unified fabric, VFrame, video, video interview, virtualisation, virtualization, X86

Video interview with Nick Van Der Zweep, Virtualization Director at HP (Part 4/4)

September 14, 2008 by Toon Vanagt

In this fourth and final part of our interview with Nick Van Der Zweep, we get some numbers (virtualization at HP grew over 80% last year) and the claim that HP is

‘growing with VMware faster than VMware is growing in any industry’.

HP also has about half of the blade market, and Nick adds that:

‘the connect rate of virtualization to Blade Servers is much heavier than just Standalone Rack Servers. Blades are just an absolute natural fit for virtualization’.

With iVirtualization (not aimed at Apple), HP is adding backward-compatible ‘integrated virtualization’ to its ProLiant server range. Another feature unique to HP iVirtualization is the virtual console, which can handle several environments (e.g. VMware, Citrix), each with their multiple virtual machines. The standard Integrated Lights-Out remote console management automatically connects into the overall console or right down into each of the different VMs within the machine.

Read the full transcript below or return to the previous part.

0:11 Could you give us some numbers on how important virtualization is to HP?

Nick Van Der Zweep: To our business, it’s absolutely critically important, and we’re seeing the numbers rolling in from a connect-rate perspective. A few numbers that I know of: virtualization software on our Integrity servers grew at about 120% in the last year, so that’s pretty strong growth. VMware’s numbers I think are public as to how VMware has grown, somewhere in the eighty-some percent range, which is very good. Our VMware connect rates on our X86 servers have grown beyond that. So we’re growing with VMware faster than VMware is growing in any industry.

Other areas that might be of interest in the virtualization space are blades. The connect rate of virtualization to blades is much, much heavier than just standalone rack servers. Blades are just an absolute natural fit for virtualization. It was something that we focused on when we designed our c-Class blade systems, and we’re doing well in the industry because we focused so much on enabling virtualization with that platform. Close to 50% market share in the industry, which is outstanding to say the least. Part of what we put in there was HP Virtual Connect in order to make this really work well together, and that was named product of the year for us by a couple of different institutions. It’s really facilitating growth within HP with our management software, blades and infrastructure virtualization, and we’re taking more and more steps with our Insight management software and VSE products as well.

2:03 Are we going to see a white ProLiant server soon? Because HP launched iVirtualization, and I think Apple will be curious to know what that would look like.

Van Der Zweep: Well, actually, we will custom-paint any of our infrastructure to match the decor that you want to put it into. So we can comply with whatever color codes you want to have within your data center.

2:27 I think Steve Jobs is going to be very jealous of that. Can we order pink ProLiants now?

Van Der Zweep: Right. If you want it, we can make it. iVirtualization definitely is a key point for us, and that goes back to our partnerships with VMware, Citrix and Microsoft. Right out of the box, you get a ProLiant server and, instead of saying boot from disk or boot from the network, it boots up the hypervisor, built right into it.

2:55 You’re actually shipping in with an extra flash card where these are precharged?

Van Der Zweep: Exactly, and the interesting thing is that even before we announced integrated iVirtualization, we had the ability to add those flash cards. We had the USB capability built into our previous models, so we can upgrade existing models to integrated virtualization as well. So what’s inside is a USB key with either the ESXi software or, for instance, Citrix XenServer or that type of virtualization software.

3:29 From a logistical point of view that sounds like quite a challenge, because you’re shipping from the factory… how do you keep up with the release cycles of the hypervisors, to make sure you’ve got the latest available version along with the hardware when you ship to customers?

Van Der Zweep: Yeah, because they’re flash drives, we can upgrade them and flash them out in the field as well if they need upgrades. I think the more important thing is that we’re not just putting a flash drive and some VMware, Citrix or similar software in the machine; we add value around that as well. So, for instance, we introduced iVirtualization with a virtual console, so that when you’re running, for instance, a Citrix environment and you set up multiple virtual machines, our standard Integrated Lights-Out remote console management automatically connects into the overall console or right down into each of the different VMs within the machine, and that again is unique in the industry. We’re working closely with our partners and adding value on top, instead of just putting a CD in a box.

4:34 What about the virtualization services HP is offering? Because this technology is so disruptive, many IT departments seek help to get there.

Van Der Zweep: Yeah. The services that we offer range in spectrum from a macro view (data center consolidation and data center transformation services, architecting the physical data centers, looking at how to consolidate, how to go from eighty data centers to six, similar to some of the initiatives we’ve had even at HP) down to how to deal with the technology. If you’ve never touched virtualization technology before, we can train you to implement it, do capacity-planning kinds of initiatives, and support you after the fact. So we’ve got a full range of services that can help you from design all the way through execution.

5:28 Okay. Nick Van Der Zweep, thanks a lot for the time you’ve given us, and I hope to see you soon.

Van Der Zweep: You’re quite welcome.

Filed Under: Featured, Interviews, People, Videos Tagged With: Hewlett Packard, HP, HP virtualization, interview, Nick Van Der Zweep, video, video interview, virtualisation, virtualization

Video interview with Nick Van Der Zweep, Virtualization Director at HP (Part 3/4)

September 14, 2008 by Toon Vanagt

In this third part of our video interview with Nick Van Der Zweep, Director for Virtualization at HP, he predicts desktop virtualization to be the next big tipping point in our industry. He adds that this is one of the areas where HP differentiates itself from IBM, with a full desktop-to-data-center strategy.

“People like IBM are still struggling to catch up to that because they’ve got management systems for every platform that they have and are trying to pull that together. That’s critically important, to be able to see a holistic view of the entire data center…”

But also when it comes to flexible, usage-based data center pricing models and cloud computing, Nick claims HP is a pioneer, with clients such as DreamWorks rendering their movies on HP’s excess infrastructure.

The Opsware acquisition is referred to often in this interview when it comes to HP offering the full breadth of enterprise management software and configuration management with server automation. Nick also hints at their current investments in virtualization-related security offerings.

The interview was recorded at the HP headquarters in Cupertino, where Nick is often asked by financial analysts: ‘Is virtualization bad for your business?’. His clear answer is ‘No’, as it unlocks the potential for businesses to do more and enables HP to sell a lot more robust configurations, with a larger number of condensed CPUs, much more memory, more I/O capability, etc.

Nick also shines a light on the future of virtualization, in which (mostly free) hypervisors will be a commodity. What really unlocks virtualization, however, is the management software and related automation capabilities. This is why HP bought and integrated a company like Opsware.

Read the full transcript below or read the previous part here or move on to the last episode.

Van Der Zweep: So, we differentiate ourselves from IBM today by covering desktop to data center; they got out of the whole desktop space, and this is going to explode. Desktop virtualization is absolutely going to explode, and that’s the next kind of big tipping point that we’re seeing. And we’re integrated: we’re not afraid to take our technology off our high-end systems, our NonStop and UNIX systems. We’re not afraid to put it on X86, and we put it there early and fast, because that’s where the market needs it. And so we’re proactively pushing that there. For instance, with our latest release, we took a whole bunch of technology that was only on Integrity and UNIX and brought it to Windows and X86. So, desktop to data center, a fully integrated stack of our management software: Systems Insight Manager for a number of years has been able to manage across our entire portfolio of Integrity, ProLiant, et cetera. People like IBM are still struggling to catch up to that, because they’ve got management systems for every platform that they have and are trying to pull that together. That’s critically important, to be able to see a holistic view of the entire data center.

1:19 Virtualization is actually also enabling cloud computing and grid computing and all of these things, which no longer come from expensive mainframe hardware but from virtual power through G4 X86-type servers, and this brings us to usage-based pricing models. HP has been in there. Do you have plans on offering infrastructure as a service or data center as a service?

Van Der Zweep: So today, we already do infrastructure as a service and data center as a service capabilities. We certainly offer our Integrity servers on a usage basis, where you buy the capacity almost like a prepaid mobile card: you buy 30 CPU-days, and as you use it, it counts down in 30-minute increments. We also have adaptive infrastructure as a service. We’re taking all of our capabilities of virtualization, automation, etcetera, and helping customers move to what we call an adaptive infrastructure and next-generation data center. And we’ve implemented that ourselves and provide it as a service to our customers. This goes back many years, four or five years, for instance with DreamWorks. DreamWorks wants this kind of environment to render films: they’ve got a certain amount of capacity themselves, but there are peak times when they really need to get busy, and so we’ve got a whole set of technology, a whole set of data centers, that can handle excess capacity, excess requirements from them to render films. So we’ve worked with DreamWorks and others to render films, and we do this in the manufacturing industry and others. And it’s all paid on that usage kind of pricing.
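
The prepaid model lends itself to a tiny worked example. The Python sketch below is a hypothetical illustration of the units described (CPU-days bought up front, drawn down in 30-minute increments); it is not HP's actual metering logic.

```python
# Sketch of the prepaid-capacity model described above, under assumed
# units: a balance bought in CPU-days, drawn down in 30-minute increments.
INCREMENT_HOURS = 0.5

def draw_down(balance_cpu_days: float, cpus_used: int, hours: float) -> float:
    # Usage is metered per 30-minute increment per active CPU.
    increments = int(hours / INCREMENT_HOURS)
    used_cpu_days = cpus_used * increments * INCREMENT_HOURS / 24
    return balance_cpu_days - used_cpu_days

balance = 30.0                                       # 30 prepaid CPU-days
balance = draw_down(balance, cpus_used=8, hours=6)   # a rendering burst
print(f"remaining balance: {balance:.2f} CPU-days")  # 30 - (8*6)/24 = 28.00
```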

3:09 Okay. What about HP server automation technology? I know you’re a Virtualization Director, but how easy is it for all those administrators to use that and to deploy everything?

Van Der Zweep: It was a very strategic acquisition for us, to get into the whole infrastructure automation space, server automation, with the Opsware products, tying right back into server automation and configuration management. Our infrastructure is very much the best in the industry at providing management software to take advantage of the infrastructure, some of which I’ve been describing. But this all plugs into our full, broad enterprise management software and configuration management, and server automation as well. The nice thing about teaming these together is that you’ve got the ability to change your infrastructure very quickly, and with server automation to change your applications very quickly too: commission or decommission web servers and application servers quickly, and then with our infrastructure redeploy those assets. You have to do those two things in conjunction with each other. It makes a lot of sense to have that in the portfolio.

4:26 Up to now, we have talked a lot about the good new things virtualization can bring, but these new relationships between guest and host systems have also thrown up a lot of security issues. It’s still very new; although it’s been around for a few years, it’s still quite a new technology. How do you think virtualization security issues can be addressed?

Van Der Zweep: Well, I think that’s evolving. We’ve definitely been working with the vendors, the VMwares, the Citrixes, and on our own technologies, to make sure that the software is very hardened. We’re looking at trusted computing models that can work in this industry as well. Certainly, we’ve had those working on bare-metal physical machines; now we’re getting that working more so in the virtualization space. So I think that’s evolving over time. We’ve got many offerings today to be able to help in this space, but that’s another area of investment for us.

Filed Under: Interviews, People, Videos Tagged With: Hewlett Packard, HP, HP virtualization, interview, Nick Van Der Zweep, video, video interview, virtualisation, virtualization

Video interview with Nick Van Der Zweep, Virtualization Director at HP (Part 2/4)

September 14, 2008 by Toon Vanagt

In this second part of our lengthy video interview with Nick Van Der Zweep, Director for Virtualization at HP, we get further introduced to how HP defines virtualization and how it differentiates from its competitors.

Nick also shares what typical Virtualization problems his clients are grappling with and what skill set is needed in IT departments to overcome the pitfalls.

Read the full transcript below, return to part 1, or go ahead to part 3.

0:12 HP has one of the most complete virtualization solutions offerings. How are these portfolios integrated?

Nick Van Der Zweep: That’s really where we started with some of our management software, as I mentioned, in the Integrity space back in 1999-2000. We had high availability and partitioning and pay-as-you-go and instant capacity in management software, and we glued it all together so that we present one user interface to that environment. Just recently, we announced, and started shipping last month, Insight Dynamics, which takes that software and makes it available for Integrity, ProLiant, X86. One management footprint, Systems Insight Manager, which is known across the industry as one management software for ProLiant and Integrity: discovery, fault management, and from there it manages all the hypervisors out there, we can…

1:08 Does it do deployment automatically?

Van Der Zweep: So, we’ve got deployment built into it: through the Rapid Deployment Pack we deploy onto bare metal and deploy to virtual machines. We support Citrix, VMware, Microsoft, and so we took that higher level of management software, Insight Dynamics-VSE, which represents VSE in the Integrity space, and really glued it together. What’s really interesting right now is that we can provide hypervisor-like capabilities even to bare-metal machines, and that interface brings it all together. You can’t even tell if you’re working on a bare-metal machine versus a VMware hypervisor. You can move an application from place to place within the infrastructure, whether it be bare metal or using VMware behind the scenes, so it’s definitely heavily integrated.

2:05 I’m very interested to know how HP views its competitive landscape in the virtualization industry.

Van Der Zweep: I think we are extremely well positioned in the industry to be able to help our customers in the whole virtualization space, and also to help HP and our shareholders as well, because we’ve got the capability of covering this from the desktop to the data center. We have a huge what we call Personal Systems Group, where we sell desktops, thin clients, blade PCs, virtual desktop environments, so we are heavily invested in that side of the technology as well as the server-side technology. Storage virtualization, server virtualization: we have a huge multi-billion-dollar software organization within HP to deliver infrastructure management, and our Opsware/Mercury capabilities are layered on top of that as well. So we’ve got a technology portfolio that I think sets the number one bar in the industry for anybody who looks at it. And then there is the services portfolio to be able to help customers: architecting data centers, data center transformations, looking at everything from the power, cooling and environmental pieces of the puzzle (because that comes into virtualization very quickly, as you know, since we design the data centers) to helping customers with installation, support and ITIL practices. Because as you go into a shared environment, your employees are now sharing resources, so you had better standardize the jobs across the data centers, so the people doing server administration are all doing it the same way, not one way for the SAP environment and another way for their Exchange environment, because it’s all shared infrastructure.

3:59 Do you think we need a new skill set out there? The tasks of the networking people and security people and storage people are merging: they really have to talk together now?

Van Der Zweep: They absolutely do. So, there are two things happening that we focus on, and I think it’s really happening across the industry. One is standardizing their roles and responsibilities, and their interlocks, so that they can talk to each other. But then we also do things that simplify and automate the processes. If you look at the likes of Opsware, or even our blade environment: we added something called Virtual Connect to our blade environment, putting in a virtual fabric, a virtual backplane. Now, what we’re able to do with Virtual Connect blades and Insight Dynamics is move a Microsoft Exchange environment running on one blade, through a point and click, to another blade in another enclosure. If you try to do that today in a typical data center, you’ve got to call up the server guy to install that on the new blade. You have to call the network guy and have him move the VLAN information from that node to the other, and you have to call up the SAN storage guy to say, “I’m going to reroute all the SAN in order to make that movement happen”. Whether you have hypervisors or not, you’ve got to set all of this up across three people, which means a week’s worth of work. We can do it point and click: everything is automated, all the steps happen, and it’s done. So it’s a matter of working better together from a people perspective, but also delivering technologies that bust through the processes of the past and automate them as well.
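
As an illustration of that point-and-click move, here is a hedged Python sketch of the orchestration idea: the three manual handoffs (server, network, storage) collapse into one scripted workflow. The step names and function are hypothetical, not HP's actual API.

```python
# Illustrative sketch (not HP's actual API) of the point-and-click move:
# the three manual handoffs (server, network, storage teams) become one
# automated workflow executed in order.
def move_workload(workload: str, src_blade: str, dst_blade: str) -> None:
    steps = [
        ("server",  f"quiesce {workload} on {src_blade}"),
        ("network", f"move VLAN/MAC identity from {src_blade} to {dst_blade}"),
        ("storage", f"re-present SAN LUNs (WWNs) to {dst_blade}"),
        ("server",  f"boot {workload} on {dst_blade}"),
    ]
    for team, action in steps:
        # Formerly a ticket to the server, network or SAN team;
        # now a scripted step that runs in seconds.
        print(f"[{team}] {action}")

move_workload("exchange-01", "enclosure1/blade4", "enclosure3/blade2")
```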

5:43 What are the typical issues that your customers are grappling with today?

Van Der Zweep: The typical issues they grapple with today are, certainly at a macro level, cost: how do we drive down cost? Agility: how do I become more responsive to the business, so that when they say, “Hey, I want to deploy infrastructure or a new application, or scale up, I want to do that today, not in a month”? And then service levels. People are constantly saying: I’m moving toward an environment where, instead of just one mission-critical system on one server that keeps the business running, everything is connected together. With the applications we have today, service-oriented architectures and such, you’ve got tens or hundreds of pieces of infrastructure working in concert with each other, and you have to have high availability for everything, so they want to get that built in without complex clustering.

6:45 What about the greener side of IT? There is also a lot of buzz around green data centers. Do you find that your customers, due to rising energy costs, are looking for cheaper electricity bills and renewable energy sources? Are they actually looking at what a server is consuming, and, if it’s underutilized, at having less power consumed?

Van Der Zweep: Absolutely. So, we have customers, especially in the enterprise space, whose data centers within the next three, four, five years will not have enough power to handle the growth they’re seeing in that environment. They’d have to build entirely new data centers, and that’s costly. And they don’t feel good about it from an environmental perspective, or about the cost they pay to the utilities. So they’re concerned about that, and virtualization can help there. With the software that I described, Insight Dynamics, what we built into this release is the ability to do consolidation analysis, and it will automatically come back to you with scenarios. Here is your current scenario, and this is exactly how many kilowatts you’re using per month. You enter your rates into it, so it says you’re paying three hundred and fifty dollars a month in energy for these systems. And then it comes back and says, “Well, here is the new environment, consolidated using virtualization”, and we can actually tell you that a hundred and fifteen dollars a month is the exact energy cost you would be paying, versus today, versus option A or maybe option B or C. That was exciting when we were demonstrating this in Barcelona: we had people four deep and four wide sitting in front of one screen looking at this, and one person saying, “I need you to put in my rates for Sweden and share this, because I’ve got to bring this home.” It’s a hot area.
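
The scenario report he describes is, at its core, simple arithmetic. Here is a hedged Python sketch of the before/after energy comparison; all wattages, server counts and the tariff are illustrative assumptions, not HP's model.

```python
# Hedged sketch of a consolidation scenario report: given assumed loads
# and a user-entered tariff, compare the monthly energy bill before and
# after consolidation. All numbers are illustrative.
HOURS_PER_MONTH = 730

def monthly_cost(watts_per_server: float, servers: int,
                 rate_per_kwh: float) -> float:
    kwh = watts_per_server * servers * HOURS_PER_MONTH / 1000
    return kwh * rate_per_kwh

rate = 0.12                                               # user-entered tariff
before = monthly_cost(400, servers=20, rate_per_kwh=rate)
after = monthly_cost(600, servers=4, rate_per_kwh=rate)   # denser hosts
print(f"before: ${before:.0f}/month, after: ${after:.0f}/month")
# before: 400W*20*730h = 5840 kWh -> ~$701; after: 600W*4*730h = 1752 kWh -> ~$210
```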

Filed Under: Interviews, People, Videos Tagged With: citrix, Discovery, Hewlett Packard, HP, HP virtualization, Inside Dynamics, Integrity, interview, Nick Van Der Zweep, Proliant, Toon Vanagt, Van Der Zweep, video, video interview, virtualisation, virtualization, virtualization management software, vmware

Video interview with Nick Van Der Zweep, Virtualization Director at HP (Part 1/4)

September 2, 2008 by Toon Vanagt

In this first part of our lengthy video interview (4 parts) with Nick Van Der Zweep, Director for Virtualization at HP, we get introduced to how HP defines virtualization as flowing computing resources around, and how this drops your costs and increases agility, from desktop virtualization to data center virtualization and storage.

The interview was recorded at the HP headquarters in Cupertino, where Nick is often asked by financial analysts: ‘Is virtualization bad for your business?’. His clear answer is ‘No’, as it unlocks the potential for businesses to do more and enables HP to sell a lot more robust configurations, with a larger number of condensed CPUs, much more memory, more I/O capability, etc.

Nick also shines a light on the future of virtualization, in which (mostly free) hypervisors will be a commodity. What really unlocks virtualization, however, is the management software and related automation capabilities. This is why HP bought and integrated a company like Opsware.

Apart from its top-range Integrity platform with the HP-UX operating system (deeply virtualized since 1999), HP is absolutely not entering the X86 market with a proprietary hypervisor. With products like Insight Dynamics, HP reaches into third-party hypervisor software and manipulates those virtualization layers agnostically across multiple vendors. Nick is very happy with the excellent responsiveness of the X86 virtualization leaders and claims HP is the number one partner for VMware, Citrix and Microsoft.

Read the full transcript below.

0:12 Nick Van Der Zweep, welcome on Virtualization.com. You are the Director for Virtualization at HP. We are at your Cupertino headquarters, and you’ve got the longest job title I’ve come across in a while. I think that illustrates how disruptive this virtualization technology is to the industry. Could you tell us something more about that?

Van Der Zweep: So virtualization for HP is all about pooling and sharing of resources, so that the supply of resources can meet the demand from the business. The idea is to move away from silos of resources (servers, networking, software and storage) dedicated on an application-by-application basis, toward a pooled set of resources that can ebb and flow to the application on demand. You want to be able to do that automatically, so that when one application needs more resources, they automatically flow to it, although that’s scary for a large number of IT organizations out there, having automatic reallocation of resources. So at a minimum, you want the ability to just type in a command to reroute resources very, very quickly, instantaneously even, from one place to another. So virtualization to us is everything from desktop virtualization to data center virtualization, storage, etc. But ultimately, it’s all about flowing those resources around, dropping your costs, increasing your agility.

1:43 What types of virtualization does HP support?

Van Der Zweep:  Well, we’re investing heavily in all aspects of virtualization.  Like I said, desktop to data center, desktop virtualization, thin clients, storage virtualization, that started years ago and it’s back into a renaissance again with some of the capabilities that are out there.  Server raid virtualization absolutely top of mind, to folks as well, the software, software virtualization, management software around it.  So, all the technology aspects for sure and then services because this is new to a large amount of companies.  So services, plan for it, plan consolidation, data center transformations, implement the technologies, help people through cultural changes as they move to a shared environment as well. Because that’s another probably one of the biggest sticky factor as well is you’ve got to move to a mode where you’re sharing with your co-workers, your infrastructure instead of having dedicated and that’s a bit of a wall sometimes.

02:51 I’m interested to know: virtualization was expected to lower hardware sales, because people are finally going to better utilize their hardware. But it turns out that it’s actually quite good for hardware sales, and HP is one of the companies that has benefited from this movement. Which elements does one need in better-performing hardware to do virtualization the right way?

Van Der Zweep: Yeah, a classic question that we hear all the time. Usually the question is not from technical people but from the financial analysts, and it goes: “Is this bad for your business?” But it absolutely is good, and this goes back even to ’98, when I was doing the consolidation program. People would ask: is this bad for your business? It isn’t. It’s good, because it unlocks the potential for businesses to do more. They are frustrated because they have a hundred projects to do, but they can only afford a certain amount of infrastructure and a certain number of projects, so this really allows them to do a lot more. And then, net-net for HP, we see a lot more robust configurations going out the door: a larger number of CPUs, much, much more memory within the systems, more I/O capability. So these are much richer systems that they can run many applications on top of.

4:11 It’s more condensed, more cores.

Van Der Zweep: More cores and more memory. Memory is a big one; more I/O is a big one. And then virtualization causes a lot of sprawl as well, virtualization sprawl. While you might have had a hundred servers before you installed virtualization, you go to twenty servers but, pretty quickly, you’ve got 200 images of OSes running, so you need better management software to manage that ecosystem, whereas you might have done it manually before. You’ve got to put in management software, virtualization management, and then automation comes into play. Hence things like our investment in automation, in buying companies like Opsware as well.
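
The arithmetic behind that sprawl point fits in a few lines of Python; the VM density below is an illustrative assumption chosen to reproduce the 100-servers-to-200-images figure from the interview.

```python
# Quick sketch of the sprawl effect described above: physical hosts drop,
# but the number of managed OS images grows past the original count.
physical_before = 100
hosts_after = 20
vms_per_host = 10                      # illustrative density

images_after = hosts_after * vms_per_host
print(f"boxes to rack and power: {physical_before} -> {hosts_after}")
print(f"OS images to patch and monitor: {physical_before} -> {images_after}")
```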

4:56  Where do you think virtualization is headed?

Van Der Zweep:  You know that’s an interesting one.  I think it’s going to move fast.  It’s been moving fast.  I don’t think it’s going to slow down.  To a large extent the hypervisors are going to commoditize.  People are seeing a lot of that moving on.

5:13 Prices are dropping or even free.

Van Der Zweep: Prices are dropping, free, open source; a lot of activity in that space. Management software for virtualization, or management software automation, is what really unlocks virtualization. Those core hypervisors give some basic functionality, but that software really unlocks the power to deliver reduced cost, better agility and high availability, those types of things. That is where the value is showing up. So we’re going to see a lot more of that. To be honest, I don’t think there’s anybody in the industry who can really predict what it’s going to look like in five or six years, because this thing is moving so fast. If anybody says, “I can tell you exactly where virtualization is going,” I just walk away, because it’s going to change dramatically again over the next number of years as well.

6:10  HP hasn’t built its own hypervisor.  You chose to offer your clients the choice between VMware, Xen, and Hyper-V.  You ship them with the hardware?

Van Der Zweep: It’s actually a combination. We do have our own hypervisor for our Integrity platform, so on that platform we have the HP-UX operating system, the partitioning, hypervisor and management software, and we have deeply virtualized there since 1999-2000. In the X86 space, we absolutely are not entering the market with a hypervisor. VMware is out there, and Microsoft is out there with Virtual Server, while Hyper-V, if it’s not generally available today, soon will be. Citrix acquired XenSource, there are the other Xen open-source environments, and Linux with KVM. There is plenty of work going on in the hypervisor space. We are trying to enable on top of that, to add management on top of that. Our products like HP Insight Dynamics-VSE reach into, use and manipulate VMware’s software, that virtualization layer.
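
The vendor-agnostic management idea can be sketched as a classic adapter pattern. The Python below is purely illustrative (the class and method names are hypothetical, not the Insight Dynamics API): one management operation, several hypervisor back ends.

```python
# Illustrative sketch of hypervisor-agnostic management: one interface,
# per-vendor adapters underneath. Class and method names are hypothetical.
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    @abstractmethod
    def migrate(self, vm: str, dst_host: str) -> None: ...

class VMwareAdapter(HypervisorAdapter):
    def migrate(self, vm: str, dst_host: str) -> None:
        print(f"vMotion {vm} -> {dst_host}")

class XenAdapter(HypervisorAdapter):
    def migrate(self, vm: str, dst_host: str) -> None:
        print(f"XenMotion {vm} -> {dst_host}")

def rebalance(adapter: HypervisorAdapter, vm: str, dst: str) -> None:
    # The management layer stays the same regardless of the hypervisor.
    adapter.migrate(vm, dst)

rebalance(VMwareAdapter(), "vm-7", "esx-host-2")
rebalance(XenAdapter(), "vm-9", "xen-host-1")
```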

7:23 How happy are you with the support of these partners, VMware and Xen, as service or technology partners?

Van Der Zweep: They’re very responsive to us, and we’ve got a very good relationship with them. We’re the number one partner of VMware in the industry, the number one partner of Microsoft in the industry, and the number one partner of Citrix in the industry. So they tend to jump when we give them a call saying, “Hey, we’re looking at integrated hypervisors or building management software around them.” They know they get a huge addressable market by working very, very closely with us. So they’ve been very responsive.

Filed Under: Featured, Interviews, People, Videos Tagged With: Hewlett Packard, HP, HP virtualization, interview, Nick Van Der Zweep, video, video interview, virtualisation, virtualization

A Round Table on Virtualization Security with Industry Experts

July 30, 2008 by Kris Buytaert

Virtualization security or ‘virtsec’ is one of the hottest topics in virtualization town. But do we need another abbreviation on our streets? Does virtualization require its own security approach and how would it be different from the physical world?

Different opinions fly around in the blogosphere and among vendors. Some security experts claim there is nothing new under the sun and that the VirtSec people are just trying to sell products based on the virtualization hype. Some see a genuine need to secure new elements in the infrastructure; others claim that virtualization offers new capabilities to raise security from the ground up; and cynics claim it is just a way for the virtualization industry to get a larger piece of the security budget.

So our editors Tarry and Kris set out to clarify the different opinions. With the support of StackSafe, they organized a conference call with some of the most prominent bloggers, industry analysts and vendors in this emerging field.

On the call were Joe Pendry (Director of Marketing at StackSafe), Kris Buytaert (Principal Consultant at Inuits), Tarry Singh (Industry/Market Analyst, Founder & CEO of Avastu), Andreas Antonopoulos (SVP & Founding Partner at Nemertes Research), Allwyn Sequeira (SVP & CTO at Blue Lane), Michael Berman (CTO at Catbird), Chris Hoff (Chief Security Architect, Systems & Technology Division, and Blogger at Unisys) and Hezi Moore (President, Founder & CTO at Reflex Security).

During our initial chats with different security experts, their question was simple: “What does VirtSec mean?” Depending on our proposed definition, opinions varied.

So obviously the first topic for discussion was the definition of VirtSec:

Allwyn Sequeira from Blue Lane kicked off the discussion by telling us that he defines VirtSec as “anything that is not host security or network-based security. If there’s a gap there, I believe that gap, in the context of virtualization, would fall under the realm of virtualization security.” He went on to question who is in charge of inter-VM communication security, and how features such as virtual machine migration and snapshotting add a different complexity to today’s infrastructure.

Andreas Antonopoulos of Nemertes Research takes a different approach and has two ways of looking at VirtSec: “How do you secure a virtualized environment?” and, in his opinion the more interesting question, “How do you virtualize all of the security infrastructure in an organization?” Andreas also wonders what to call the new evolutions: “What do you call something that inspects memory inside a VM and inspects traffic and correlates the results? We don’t really have a definition for that today, because it was impossible, so we never considered it.” He expects virtualization to change the security landscape: “Just like virtualization has blurred the line between physical server, virtual server, network and various other aspects of IT, I see it blurring the lines within security very much and transforming the entire industry.”

Hezi Moore from Reflex Security wants to search for actual problems. He wants to know what has changed since we started virtualizing our infrastructures. “A lot of the challenges that we faced before we virtualized are still being faced after we virtualized. But a lot of them got really intensified, at a much higher rate and much more serious.”

Michael Berman from Catbird thinks the biggest role of VirtSec is still education: “…and the interesting thing I find is the one thing we all know that never changes is human nature.” He is afraid of virtualization changing the way systems are deployed with no eye on security. Virtualization has made it a lot easier to bypass the security officers and the auditors. The speed at which one can deploy a virtual instance, and the number of them, has changed drastically compared to a physical-only environment, and security policies and procedures have yet to catch up. “We can have an argument about whether the vendors are responsible for security, or about who attacks the hypervisors and servers. The big deal here is the human factor.”

Chris Hoff summarizes the different interpretations of VirtSec in three bullets:

  • One, there is security in virtualization, which is really talking about the underlying platforms, the hypervisors. The answer there is a basic level of trust in your vendors. The same as we do with operating systems, and we all know how well that works out.
  • Number two is virtualized security, which is really ‘operationalization’, which is really how we actually go ahead and take policies and deploy them.
  • The third one is really gaining security through virtualization, which is another point.

Over the past decade different virtualization threats have surfaced, some with more truth to them than others. About a decade ago, when Sun introduced their E10K system, they boasted that they really had 100% isolation between guest and host OS, but malicious minds figured out how to abuse the management framework to go from one partition to another. Joanna Rutkowska’s “Blue Pill” vulnerability theory turned out to be more of a myth than an actual danger. But what is the VirtSec industry really worried about?

It seems the market is not worried about these kinds of exploits yet. It is more worried about the total lack of security awareness. Andreas Antonopoulos summarizes this quite well: “I don’t see much point in really thinking too much about five steps ahead, worrying about VM Escape, worrying about hypervisor security, etc., when we’re running Windows on top of these systems and they’re sitting there naked.”

Allwyn from Blue Lane, however, thinks this is an issue. Certainly with cloud computing becoming more popular, we suggest seriously thinking about how to tackle deployment of virtual machines in environments we don’t fully control. The virtual service providers will have to provide us with a secure way to manage our platforms, and enough guarantees that, upon deployment of multiple services, these can communicate in a secured and isolated fashion.

Other people think we first have to focus on the human factor: we still aren’t paying enough attention to security in the physical infrastructure, so we had better focus on the easy-to-implement solutions that are available today rather than worry about exploits that might or might not occur one day.

Michael Berman from Catbird thinks that virtualization vendors are responsible for protecting the security of their guests. A memory breakout seems inevitable, but we need to focus on the basic problems before tackling the more esoteric issues. He is worried about scenarios where old NT setups, or other insecure platforms, are migrated from one part of the network to another, and about the damage that can result from such events.

Part of the discussion was about standardization, and whether standardization could help in the security arena. Chris Hoff reasons that today we see mostly server virtualization, but there is much more to come: client virtualization, network virtualization, etc. As he says: “I don’t think there will be one ring zero to rule them all.” There are more and more vendors joining the market; VMware, Oracle, Citrix, Cisco, Qumranet and various others have different virtualization platforms, and some vendors have based their products on top of them.

In the security industry, standardization has typically been looked at as a bad thing: the more identical platforms you have, the easier it is for an attacker, because if he breaks one, he has similar access to the others. Building a multi-vendor or multi-technology security infrastructure is common practice.

Another important change is the shift of responsibilities. Traditionally you had the systems people and the network people, and with some luck an isolated security role. Today the systems people are deploying virtual machines at a much higher rate, and because of virtualization they take charge of part of the network, giving the network people less control and the security folks less visibility.

Allwyn Sequeira from Blue Lane thinks the future will bring us streams of virtualization security: organizations with legacy will go for good VLAN segmentation and some tricks left and right, because the way they use virtualization blocks them from doing otherwise. He thinks the real innovation will come from people who can start with an empty drawing board.

Andreas Antonopoulos from Nemertes Research summarized that we all agree the virtualization companies have a responsibility to secure their hypervisors. There is a lot of work to be done in taking that responsibility, so that we can implement at least basic security. The next step is to get security onto the management dashboard, because if the platform is secure but the management layer is a wide-open goal, we haven’t gained anything.

Most security experts we talked to still prefer to virtualize their current security infrastructure over the products that focus on securing virtualization. There is a thin line between needing a product that secures a virtual platform and changing your architecture and best practices so that a regular security product fits in a virtualized environment.

But all parties seem to agree that much of the need for VirtSec comes from changing scale, and that no matter what tools you throw at it, it’s still a people problem.

The whole VirtSec discussion has just started; it’s obvious that there is a lot of work to be done, and new evolutions will pop up left and right. I’m looking forward to that future. As Chris Hoff said, “Security is like bell bottoms, every 10-15 years or so it comes back in style”, this time with a virtualization sauce.

Listen to the full audio of the conference call!

Filed Under: Featured, Guest Posts, Interviews, People Tagged With: Allwyn Sequeira, Andreas Antonopoulos, Avastu, Blue Lane, Catbird, Chris Hoff, conference call, Hezi Moore, interview, Inuits, Joe Pendry, Kris Buytaert, Michael Berman, Nemertes Research, Reflex Security, round table, StackSafe, Tarry Singh, Unisys, virtsec, virtualisation, virtualization, virtualization security
