[Jun 24, 2008] VMware's CEO talks Microsoft, security, EMC and cloud computing By Jon Brodkin

06/24/2008 | Network World

Diane Greene is the president, CEO and co-founder of VMware, a pioneer of x86 server virtualization and one of the most innovative companies to hit the IT world in the past decade. Greene was in Boston last week with her VMware team, briefing analysts on new technologies that haven't been made public yet. She took some time out to speak with Network World's Jon Brodkin about a range of topics.

Microsoft is entering the market with Hyper-V. How are you preparing for that?

We've got our hypervisor, which is the world's best, the most reliable, the most secure, the most functional, with the smallest footprint. Then we have this broad portfolio of 21 products that make this hypervisor powerful. We've been expecting competition for years. We got ready for it. We knew what they would do: they would come in and say 'the hypervisor is free.' And we have shifted our revenue, we have shifted our value to the software that makes that hypervisor so valuable.

VMware does charge a lot more than its competitors. Are you feeling any pressure to lower your prices?

We're the only company with a price point for every kind of use of virtualization starting with just the hypervisor. ESXi is available from our Web site for $495. We have a free VMware Server that is very actively used, if you look at the discussion groups.

The portfolio of software for managing and automating the applications running in virtual machines, giving them quality of service, is where we increasingly charge, but that's completely separate from the base platform, the hypervisor layer.

What really differentiates the VMware hypervisor from Microsoft and Xen server virtualization software?

VMware's hypervisor is incredibly robust. We have a [big pharmaceuticals] customer that has run one with no reboots for over four years. No restarts. It's the only hypervisor that has no dependence on an operating system. It can be much more secure, because a hypervisor is only as secure as its weakest link. We have this architecture that can be embedded in the hardware with this small, very secure footprint [under 32MB]. And the functionality our hypervisor supports, such as memory overcommit and so forth, is the broadest. Not to mention that it's in use in production by over 100,000 customers.
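Memory overcommit means the hypervisor can promise the guests more RAM in total than the host physically has, counting on idle pages being reclaimable (ESX does this with techniques such as ballooning and transparent page sharing). A rough back-of-envelope sketch, with illustrative numbers that are assumptions rather than figures from the interview:

    # Illustrative memory-overcommit arithmetic (assumed numbers, not VMware figures).
    host_physical_ram_gb = 64                 # RAM installed in the physical host
    vm_configured_ram_gb = [8] * 10           # RAM promised to each of ten guests

    total_promised_gb = sum(vm_configured_ram_gb)                # 80 GB promised
    overcommit_ratio = total_promised_gb / host_physical_ram_gb

    print(f"Guests are promised {total_promised_gb} GB on a {host_physical_ram_gb} GB host")
    print(f"Overcommit ratio: {overcommit_ratio:.2f}x")
    # A ratio above 1.0 only works because idle guest pages can be reclaimed
    # (ballooning, sharing of identical pages) before physical RAM is exhausted.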

IBM virtualized the mainframe several decades ago. How is VMware's technology modeled after mainframe virtualization?

VMware was founded with the notion that if you revisited this concept of virtualization that IBM had done, modernized it, and brought it to industry-standard systems, then where hardware had come -- in terms of fast CPUs, cheap memory, cheap disk and networking support -- was going to make it phenomenally valuable. We have taken it to a much broader applicability than was originally done on the mainframe, but the concept of virtualization and a lot of the value proposition that IBM saw in the late '60s hasn't changed at all.

There were some rumors about EMC selling VMware, which seem to have fizzled out. How much attention do you pay to that kind of thing?

As CEO of VMware my job is to keep the company executing and fulfilling our potential, and that's really what I focus on and lead the company to focus on.

Any idea why these rumors crop up from time to time, though?

The situation VMware is in -- where we're 86% [owned by EMC] and a partial spinout [on the stock market] -- is unusual and has some instability associated with it. Naturally people are watching that closely. If you looked at data on companies that get partially spun out, generally something happens afterwards; you know, it evolves one way or another.

You've discussed in the past how it was important to keep the operations of VMware separate from EMC, even though EMC owns VMware. Who do you report to? Is there any interference from on high?

VMware is now a separate public company from EMC. As CEO I report to the VMware board.

Which is composed mostly of EMC executives?

The VMware board is mostly EMC people, either directors or officers of EMC. What we're focused on at VMware is our partnering -- that's very key to how we go to market and how we integrate -- and executing on our strategy.

Do you ever feel friction or have strategic disagreements with EMC?

I find it's important to be very articulate about what it takes for VMware to realize its full potential -- and we're in an amazing position right now -- namely the importance of our partners and our ability to execute in an unfettered way.

You co-founded VMware with your husband. What is his current involvement with the company and what's your business relationship like?

VMware had five co-founders, and Mendel Rosenblum, who is my husband, is our chief scientist. He's also a professor of computer science, in the systems area, at Stanford University, where he remains full time. But he is very involved at VMware, one day a week and on an ongoing basis.

Is there a next wave of virtualization we don't know about?

What we're doing with computers is getting more complex -- the sophistication with which we deliver applications with top quality of service and security keeps growing. The next wave is using virtualization to provide a complete simplification of how you do that: being able to build, develop, deploy, maintain and update applications, where an application can be a composite application of multiple virtual machines, and delivering that from any place over any set of hardware resources, be it on-premise or off-premise in a cloud, if you will.

How does cloud computing play into virtualization?

Virtualization is really the key building block for cloud computing, because the notion of a cloud is that all the resources are kind of aggregated, sort of magically, and you just run services from the, quote, cloud. It's very important that you be able to separate the software from the hardware and move it around without any service interruption, and be able to have the application take with it the quality of service it wants. So what customers want is complete freedom of choice. They want to take their application and run it anywhere, in any cloud. The only way to do that is with virtualization.

Amazon uses virtualization in its EC2 cloud. Are they using VMware?

I can't comment on that, but I think the model that Amazon is doing is helping people to understand what's possible.

What's the biggest challenge for VMware this year?

If I had to identify one thing, we've definitely stepped up our communication this year. Part of that is being a public company. Part of that is explaining, as the noise has increased with the expected arrival of competitors in the market, the different category of value proposition we have.

What are the most innovative VMware customers doing with virtualization?

There are great innovations going on in the desktop and in the data center and even in the cloud. The desktop is used heavily in the health industry and in hospitals. Huntsville hospital [in Alabama], for example, uses virtualized desktops hosted on a centralized server, so they can have wireless thin clients running around on wheeled hospital carts.

The U.S. Marine Corps consolidated 300 data centers down to 30, plus 100 mobile platforms -- a data center in a box that can go on a tank or what have you. With [virtual desktop software] VMware ACE, they can carry [desktops] on their thumb drives and deploy on any PC.

The National Security Agency, of course, has used it to provide different security levels on the same physical machine. They've been a customer of ours since 1999. They started out really early, we gave them our source code right away, they did a full audit of it. One of the first things they used it for was to isolate what people are doing because a single individual that has top security clearance needs to do different things at different security levels. [Previously], they had to have a different PC depending on how secure the data was. They were able to consolidate that onto a single machine made up of multiple virtual machines. Each virtual machine was encrypted and so forth and had a different security clearance level.

The VMsafe program you announced in February essentially opened your hypervisor to security vendors. Has anything innovative come out of that yet?

You'll see products over the next 18 to 24 months out of that. Once that comes out you'll see a new level of security, because instead of being either inside the operating system or out on the network, you're on a special very secure virtual machine that can aggregate what's going on in all the memory and CPU and operating system and network. Also, people won't need to install antivirus in the software anymore because you'll be able to put it in the container [the virtual server running the software]. In other words, you can control what goes in and out of a virtual machine. If there's a new virus you can update right there, you don't have to update the operating system or the application.
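The sketch below is not the VMsafe API itself; it is a hypothetical illustration of the general idea Greene describes: a privileged security appliance inspecting another virtual machine's memory from outside the guest, so no agent is installed in the guest operating system. All names and signatures here are invented.

    # Hypothetical sketch of hypervisor-level introspection (invented names, not VMsafe).
    KNOWN_SIGNATURES = {
        # Harmless EICAR test string, used here only as an example pattern.
        "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    }

    def read_guest_page(vm_id: str, page_number: int) -> bytes:
        """Stand-in for a hypervisor call returning one page of a guest's memory."""
        raise NotImplementedError("supplied by the hypervisor in a real deployment")

    def scan_guest_memory(vm_id: str, page_count: int) -> list[str]:
        """Report any known signatures found in the guest's memory, agentlessly."""
        hits = []
        for page in range(page_count):
            data = read_guest_page(vm_id, page)
            for name, pattern in KNOWN_SIGNATURES.items():
                if pattern in data:
                    hits.append(f"{name} found in page {page} of {vm_id}")
        return hits

Because the scanner lives in its own hardened virtual machine, a new virus signature is added in one place rather than inside every guest, which is the point about not having to update the operating system or the application.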

Do you think a virtual server today is more secure or less secure than a physical server?

I would say it's certainly as secure and in some ways more secure simply because the hypervisor is so small, so you can really secure the hypervisor. And a virtual machine container is as secure as the hardware.

Are there any problems people run into that are unique to virtual servers?

With products like ACE, you can actually put security policies around the virtual machine, which takes it to a stronger level. You can have a desktop virtual machine that is not allowed to send anything to a printer, or that has to check in with a central server to make sure it's still valid and properly updated before you can operate it. You can add security policies around that in a way you wouldn't be able to do with a physical machine.
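As a hypothetical illustration of that kind of policy (not ACE's actual policy format; the fields and names below are invented), a check-in rule might look like this:

    # Hypothetical desktop-VM policy check (invented structure, not ACE's format).
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class DesktopVmPolicy:
        allow_printing: bool = False                # block output to printers
        max_offline: timedelta = timedelta(days=7)  # must check in with the central server

    def may_power_on(policy: DesktopVmPolicy, last_checkin: datetime) -> bool:
        """The desktop VM runs only if its last central-server check-in is recent enough."""
        return datetime.now() - last_checkin <= policy.max_offline

    policy = DesktopVmPolicy()
    print(may_power_on(policy, datetime.now() - timedelta(days=3)))   # True: recently validated
    print(may_power_on(policy, datetime.now() - timedelta(days=30)))  # False: must check in first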

Another problem is virtual server sprawl. How does that compare to physical server sprawl?

A virtual machine, when it's inactive, uses no power. A virtual machine doesn't require physical space other than the disk the virtual machine file sits on. So if you have excellent monitoring and management tools around a virtual machine, you're going to be in a much better position than if you had to bring out a new physical machine every time you want to run another workload. It is interesting: oftentimes when people bring in our capacity planner tool, they'll discover machines that nobody knows what they're used for.
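A minimal sketch of that kind of monitoring logic, over an invented inventory rather than real capacity planner output:

    # Hypothetical sprawl check over an invented VM inventory (not capacity planner output).
    inventory = [
        {"name": "web-01",   "owner": "ecommerce",   "avg_cpu_pct": 35.0, "days_idle": 0},
        {"name": "test-old", "owner": None,          "avg_cpu_pct": 0.2,  "days_idle": 90},
        {"name": "build-7",  "owner": "engineering", "avg_cpu_pct": 1.0,  "days_idle": 45},
    ]

    def sprawl_candidates(vms, idle_days=30, cpu_threshold=2.0):
        """Flag VMs with no recorded owner, or long-idle and near-zero CPU, for review."""
        return [vm["name"] for vm in vms
                if vm["owner"] is None
                or (vm["days_idle"] >= idle_days and vm["avg_cpu_pct"] < cpu_threshold)]

    print(sprawl_candidates(inventory))   # ['test-old', 'build-7']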

VMware is doing application virtualization now, with the acquisition of Thinstall. Are there any other types of virtualization you haven't done yet that you might get into?

Virtualization can be a very broad, all-encompassing term. People apply virtualization to social networks, even. But we comprehensively virtualize all the hardware resources: servers, storage, network, memory, CPU, disk, I/O. We do that within ESX.

That lets you separate the software from the hardware. Then we let you decouple the application from the operating system, so you can seamlessly install an application on any version of the operating system.

In a way we're virtualizing how you do management and automation because we're simplifying it.

... ... ...

[Jun 24, 2008] 7 side effects of sloppy virtualization By Denise Dubie

06/24/2008 | Network World

Virtualization can cause as many problems as it solves if left unmanaged, according to Gartner.

IT professionals may initially be awestruck by the promises of virtualization, but Gartner analysts warn that awe could turn into upset when organizations start to suffer from seven nasty side effects.

David Coyle, research vice president at Gartner, detailed the seven side effects at the research firm's Infrastructure, Operations and Management Summit, which drew nearly 900 attendees. While virtualization promises to solve issues such as underutilization, high hardware costs and poor system availability, the benefits come only when the technology is applied with proper care and consistently monitored for change, Coyle explained.

Here are the reasons Gartner says virtualization is no IT cure-all:

1. Magnified failures. In the physical world, a server hardware failure typically would mean one server failed and backup servers would step in to prevent downtime. In the virtual world, depending on the number of virtual machines residing on a physical box, a hardware failure could impact multiple virtual servers and the applications they host.

"Failures will have a much larger impact, effecting multiple operating systems, multiple applications and those little tiny fires will turn into big fires fast," Coyle said.

2. Degraded performance. Companies looking to ensure top performance of critical applications often dedicate server, network and storage resources to those applications, segmenting them from other traffic to ensure they get the resources they need. With virtualization, the goal is a dynamic environment in which shared resources are allocated automatically on demand. At any given time, the performance of an application could degrade -- perhaps not to the point of failure, but to something slower than desired.

3. Obsolete skills. IT might not realize the skill sets it has in-house won't apply to a large virtualized production environment until that environment goes live. The skills needed to manage virtual environments should span all levels of support, including service desk operators who may be fielding calls about virtual PCs. Companies will feel a bit of a talent shortage when moving toward more virtualized systems, and Coyle recommends starting the training now.

"Virtualized environments require enhanced skill sets, and virtual training across many disciplines," he said.

4. Complex root cause analysis. Virtual machines move -- that is part of their appeal. But as Coyle pointed out, it is also a potential issue when managing problems. Server problems in the past could be limited to one box, but now a problem can move with the virtual machine and lull IT staff into a false sense of security.

"Is the problem fixed or did you just lose it? You can't tell in a virtual environment," Coyle said. "Are you just transferring the problem around from virtual server to virtual server?"

5. No standardization. Tools and processes used to address the physical environment can't be directly applied to the virtual world, so many IT shops will have to think about standardizing how they address issues in the virtual environment.

"Mature tools and processes must be revamped," Coyle said.

6. Virtual machine sprawl. The most documented side effect to date, virtual server sprawl results from the combination of ease of deployment and lack of life-cycle management of virtual machines. The issue could cause consolidation efforts to go awry when more virtual machines crop up than there are server administrators to manage them.

"The virtualized environment is in constant flux," he said.

7. May be habit forming. Once IT organizations start to use virtualization, they can't stop themselves, Coyle said. He offered tips to help curb the damage done by giving in to a virtual addiction.

"Start small. Map dependencies. Create strong change processes. Update runbooks. Invest in capacity management tools. And test, test, test," he said.

[Mar 19, 2008] Technology - Open source’s green claims by Sam Hiser

March 19 2008 | FT.com

The combination of free software from the likes of Linux and GNU with virtualisation – which maximises computing efficiency – is a compelling proposition. It offers lower costs, flexibility and greater efficiency, plus environmental benefits.

Bogomil Balkansky, chief of product marketing at VMware, the leading virtualisation vendor, agrees that virtualisation is inherently green. “It helps shrink the physical footprint and the energy footprint,” he says. “For every application running virtually, the data centre saves about 7,000 kilowatt-hours a year.”

With virtual machine technology, application workloads can be consolidated onto fewer servers, and they can be moved around, closed down, opened up and re-provisioned remotely with remarkable ease.

Until now, PCs and servers have been vastly over-provisioned because of software inflexibility and the need to maintain some leeway in hardware resources.

Mike Grandinetti, chief marketing officer of Virtual Iron and senior lecturer at MIT’s Sloan School of Management, says: “Average server capacity utilisation is between 4 and 7 per cent.” Experts say we might expect long-term capacity utilisation above 50 per cent, and well beyond that for some applications, but the virtualisation trend is only just getting started.
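As a back-of-envelope illustration of what those figures imply (the 5 per cent and 50 per cent points are taken from the ranges quoted above; the server count is an assumption for illustration):

    # Rough consolidation arithmetic from the utilisation figures quoted above.
    avg_utilisation_now = 0.05    # midpoint of the 4-7 per cent range
    target_utilisation = 0.50     # long-term level experts suggest is achievable

    consolidation_ratio = target_utilisation / avg_utilisation_now   # ~10 workloads per host
    servers_before = 1000         # illustrative estate size, not a figure from the article
    hosts_after = servers_before / consolidation_ratio

    print(f"About {consolidation_ratio:.0f} lightly loaded servers per virtualised host")
    print(f"{servers_before} physical servers shrink to roughly {hosts_after:.0f} hosts")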

IBM is curing its own server sprawl in a big green effort to shrink 3,900 servers down to 33 z10 mainframes. “One z10 is equivalent to 1,500 standard Intel x86 servers; it takes up 85 per cent less space and uses 85 per cent less power,” says David Gelardi, IBM’s vice-president of mainframes and high performance computing.

The project is focused on migrating IBM’s internal Unix applications to applications running in virtual machines on zLinux, which can be either Red Hat Enterprise Linux (RHEL 4 or 5) or Novell’s Suse Linux Enterprise Server (SLES 9 or 10).

Alongside server consolidation, GNU/Linux and virtualisation are driving the Web 2.0 trend, which sees businesses using the internet to interact with customers and the wider public.

Companies such as Google, eBay, Amazon and MySpace use GNU/Linux to drive their services, and many use virtual machine technology to provision applications efficiently and allocate hardware resources for customers.

There is hardly any debate that by 2018 IT will consume more aggregate kilowatt-hours than it does in 2008, but by then data centres will be doing so much more and desktops so much less.

Mr Grandinetti of Virtual Iron says: “There will be an explosion in services without an explosion in energy consumed.”