Of Clouds, Clients, IBM and Virtual Machines
I had a long chat about “all things virtual” with Baker Hull, Technical Alliance Manager for VMware, a few weeks ago. The conversation started in the cloud, moved quickly to customer maturity and then meandered all over the map. This is how it went:
Bloor: What is VMware doing in the area of cloud that’s got so many people’s attention? Is this just attaching the word “cloud” to virtualization, or is there something more significant we should be taking notice of?
Hull: We are at the maximum-hype point with cloud computing right now, with every vendor “cloud washing” their product whether it’s justified or not. Everyone has their cloud positioning. But cloud means something quite specific to VMware. It means taking unmodified applications, virtualizing them and then wrapping a self-service front-end around them. So the back-end virtualization infrastructure stays just the same; it’s all vSphere and vCenter, but we add a portal to the front-end so that you move more towards self-service IT.
That means platform-as-a-service for development groups, so that they can deploy applications themselves, even multi-tier applications, without the need to involve administrators. For the user it can mean being able to deploy some applications without involvement from IT. To enable this, we’ve built a whole suite of tools around managing and monitoring the infrastructure, from application performance monitoring down at the end-user level to charge-back and capacity modeling.
This is also about deploying new applications. VMware has been acquiring third-party companies, including collaboration software vendors, to improve the options available. So if you want to use these applications rather than traditional applications like Exchange, that’s an option.
But what we’re really about here is the “private cloud.” This is about taking a “virtualization first” approach. It’s not about moving all your applications out of the data center into the cloud as fast as possible. It’s about moving from, say, 30 percent virtualized to 100 percent virtualized - virtualizing all of your line of business applications. We believe that virtualization goes first.
Once you’re fully virtualized, it’s easy to bring in this self-service portal. At that point you’ve got the flexibility to do as much hosting as you want with the service providers that support our APIs - Salesforce.com and Google are examples. And you can move applications in and out of the public cloud according to seasonal changes in workload, as happens for example in the retail industry.
Bloor: I’ve been tracking virtualization for quite a while - since about 2002. And I’ve noticed that nowadays just about every data center is doing virtualization. Some have just started, some are reasonably advanced. There’s a lot of low-hanging fruit with virtualization: servers that are obvious candidates. Virtualizing those saves real money and real floorspace very quickly, and my impression is that this kind of virtualization has become very mature. But this idea of a private cloud, and the migration of applications from the private cloud to the public cloud - what level of maturity do you think that’s at right now?
Hull: The vast majority of our customers are less than 30 percent virtualized; in fact, the average is 33 percent. The percentage could be much higher. This is where I like to discuss the Reliability, Availability, Serviceability (RAS) and Service Level Agreement (SLA) features we’ve added to the product portfolio over the last two or three years. If you combine those with IBM’s servers and their RAS features, then you have mainframe-like capability - HA, DRS, vMotion. Right now many customers are leaving that on the table - leaving those VMware capabilities unused on applications where they need high availability and where they could and should be used.
We need to get customers to move past the 35 percent virtualization level. There are two keys to this. First, to encourage them to support the large mission-critical applications using 4-way and 8-way servers. Second, to encourage customers to go beyond the 15-to-20-VMs-per-host average. That’s a comfort level, and it was all the VMs you could support per server three or four years ago. The hardware was less powerful and, to be honest, our hypervisor wasn’t that scalable four years ago. But now we have much greater scalability. We can do more.
Bloor: When you look at the mission-critical systems that some organizations run, thinking in terms of the average number of VMs per server probably isn’t the right way to look at it, because some of these workloads are very large. It’s better to think simply in terms of now being able to provide the capabilities that such workloads need - particularly redundancy and clustering - providing the computing power when it’s needed, where it’s needed. It’s now a matter of getting it into people’s minds that this is a path they should be moving along now, because the technology is ready.
Hull: Right - that’s if you want the big-bang benefits. Say you’ve got a thousand servers and we can take all those applications and run them on ten servers. That’s an obvious gain, with huge freeing up of floor space and reduction in cooling requirements. It’s the gift that keeps on giving in terms of operating expense. But it’s also about business agility. I can deploy a new application in ten minutes, when before it might have taken two weeks to acquire the hardware and get it racked and stacked. And finally, if you take the virtualization-first approach, you end up with the agility of being able to deploy in the data center or in the public cloud.
Bloor: Nobody really knows, but everyone suspects, that the cloud providers are going to become more and more economical with the passage of time. This suggests a gradual movement of the small and medium-sized data centers into the cloud, maybe eliminating them or at least reducing them to a much smaller size. But the reality is that most data centers have thousands of applications running, and managing and scheduling those applications - and managing other costs like the cost of energy, and doing it all in a near-optimal manner - is no picnic. Those are things that every data center manager must want to bring under control, and they can only be brought under control when you have moved away from the world of siloed servers that used to be so common.
Hull: That’s right. You talk about thousands of applications and maybe the extremely large companies have the money to rewrite applications but the mid-tier and small companies simply don’t. So they either keep their traditional applications or they get what they can from Google or Amazon or whoever and develop on top of that.
For the development requirement, VMware acquired a company called SpringSource and another company called Zimbra. People were scratching their heads: a collaboration company and an open source Java development company, and the software is free. That was about the proliferation of technology. Most of the Java community uses Spring, so we ensured that it was out there for them to use indefinitely and that it will support the cloud, whether it’s public cloud or private cloud.
Bloor: I’ve watched VMware for years now, and I’ve been wondering when it will saturate. What happens with most successful companies is that growth is very dynamic for quite a while, but at some point the company starts to saturate the primary market. So what level of saturation do you think VMware is at? Do you think there is still a great deal of space to expand into, with the private cloud and the public cloud?
Hull: Yes, I do. Right now we’ve got something like 90,000 customers and we’ve got a goal to move from being a $2 billion company to a $6 billion company by 2014 - to triple in size in four years. That’s a pretty big goal.
Bloor: Yeah. Tripling in four years works out to roughly a 32 percent compound annual growth rate.
Hull: But there’s a huge market out there. There are hundreds of thousands of customers that aren’t virtualizing at all. There are customers hanging onto the old midrange systems: the Sun SPARC systems and HP-UX and even the old IBM AS/400 systems. As systems get more and more powerful, it’s going to make less and less sense to be on those systems, and it will make no sense not to virtualize at all.
Then there are considerations like disaster recovery (DR), which is so easy to accommodate on virtualized systems. Even if it’s not already catered for, if you “see the hurricane coming” DR can be set up in the cloud in almost no time and, if the worst happens, you can be back up and running very quickly. With older systems... well, good luck jumping on a plane with a bag full of tapes and trying to restore applications at a backup site.
Bloor: Virtualization has done two things. First, it’s created an alternative option for DR, and second, the option it provides is more effective and appears to be a lot less expensive than all the options it can replace.
Hull: It’s very cost effective because you don’t have to have like-for-like hardware. It’s almost impossible to keep a traditional disaster recovery plan up to date. For example, if it’s for Intel servers, then it’s already out of date after about a year. And even if the hardware doesn’t create problems, you have to have an identical environment right down to the drivers.
Bloor: So what kind of take-up do you have for that approach to DR?
Hull: It’s usually the second thing that customers do after they’ve kicked the tires and done a few deployments and tested them. Site Recovery Manager itself is our second-best-selling product behind vSphere and vCenter.
Bloor: I know you work closely with IBM. We’ve already covered the memory capacity of IBM’s System x servers. Are there other areas where you think IBM provides technology that makes a difference?
Hull: IBM has an impressive storage portfolio. The DS5000 and the DS3500 are capable of remote mirroring, and that’s a minimum requirement for Site Recovery Manager. You have to have array-based mirroring capability, and those arrays provide it in a cost-effective way compared to the monolithic storage cabinets that the other vendors sell. It’s a rack-based, build-it-out-to-the-size-you-need capability.
IBM also provides a DS storage plug-in for vCenter Server, which is really cool. It adds another configuration and monitoring tab in vCenter Server, so you have a single GUI to manage your virtualized server environment - your hosts, VMs and templates - and with the plug-in you can manage your storage arrays right there. So you can create new LUNs and then assign that storage to specific hosts without having to bring up some other management tool. Most other plug-ins to vCenter just provide static monitoring, and you need to fire up the associated tool if you want to make configuration changes, but this plug-in is fully integrated into vCenter.
There’s an array integration API tool set we’ve provided with vSphere 4.1 which IBM has integrated with. There are three primitives we’ve defined in this package, but basically they offload virtualization administration tasks to the storage arrays themselves. Deploying a template normally takes up CPU cycles; copying a VM takes up CPU cycles and networking and storage I/O. These VAAI integration points, which require the cooperation of the storage vendors, allow the processing to be offloaded completely to the storage array. It takes advantage of the flash-copy capability within the storage array and uses it to do that work faster and more efficiently, without any impact on CPU cycles and network resources. It offloads the task from ESX completely.
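[Editor’s note: a rough way to picture what these offload primitives buy the host is to compare the copy path with and without them. The sketch below is purely conceptual - the class and function names are invented, and the real VAAI primitives (Full Copy, Block Zeroing, Hardware Assisted Locking) are SCSI-level commands issued by the ESX host, not Python - but it shows why an offloaded clone costs the host essentially nothing.]

```python
# Conceptual illustration only - NOT the real VAAI interface.
# It models what "offloading a copy to the array" saves the host.

BLOCK_SIZE = 1024 * 1024  # model the disk as 1 MB blocks


class StorageArray:
    """A toy array that may support a full-copy offload primitive."""

    def __init__(self, supports_full_copy):
        self.supports_full_copy = supports_full_copy
        self.luns = {}  # LUN name -> list of blocks

    def offload_copy(self, src, dst):
        # The array copies blocks internally (e.g. with its flash-copy
        # engine); no data crosses the network back to the host.
        self.luns[dst] = list(self.luns[src])


def clone_vm_disk(array, src, dst):
    """Clone a virtual disk, offloading to the array when possible.
    Returns the number of bytes the host itself had to move."""
    if array.supports_full_copy:
        array.offload_copy(src, dst)  # one command, zero host data movement
        return 0
    # Fallback path: the host reads every block and writes it back out,
    # burning CPU cycles and storage/network I/O along the way.
    host_buffer = [blk for blk in array.luns[src]]
    array.luns[dst] = host_buffer
    return len(host_buffer) * BLOCK_SIZE


# A 512-block (512 MB) template cloned both ways:
vaai = StorageArray(supports_full_copy=True)
legacy = StorageArray(supports_full_copy=False)
vaai.luns["template"] = [0] * 512
legacy.luns["template"] = [0] * 512
print(clone_vm_disk(vaai, "template", "vm1"))    # 0 bytes through the host
print(clone_vm_disk(legacy, "template", "vm1"))  # 536870912 bytes through the host
```

Either way the destination LUN ends up identical; the difference is purely where the work happens, which is the point Hull is making about template deployment and VM copies.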
Links to more information
- Our IBM System x blog: https://www.ibm.com/blogs/ibmx86/?lang=en_us
- IBM's VMworld virtual booth: http://www.vmworld.com/community/exhibitors/ibm
- Follow us on Twitter: http://www.twitter.com/ibmsysxblade
- Become a fan on Facebook: http://facebook.com/pages/IBM-System-X-Blades/151336661546?ref=ts
- Connect with us on one of our other communities: http://www-03.ibm.com/systems/x/community/index.html
About the speakers:
Baker Hull is a Technical Alliance Manager for VMware.
Robin Bloor Ph.D. Chief Analyst & President, The Bloor Group and Founder, Bloor Research
Author: Words You Don’t Know, The Electronic Bazaar.
Co-Author: Service Oriented Architecture for Dummies, Service Management for Dummies, Cloud Computing for Dummies