All Virtualization Was Not Created Equal
Server virtualization, the ability to run more than one operating environment on a single piece of hardware while letting each operating environment (virtual machine) believe it is the only one running, has clearly reached the data center mainstream. However, not all server virtualization was created equal.
Virtualization first became commercially available in the 1960s with CP/CMS on IBM's System/360 mainframes and has matured over four decades into today's z/VM on IBM's System z. System z can run 1964 COBOL in one virtual machine and state-of-the-art Linux in another. IBM's System z is the most virtualizable server ever designed. For example, any one 4 GHz processor core can, at any time, become:
* A Central Processor -- running the z/OS operating system (and others)
* A System Assist Processor -- offloading I/O processing from the z/OS central processors
* An Internal Coupling Facility -- running special z/OS clustering microcode
* An Integrated Facility for Linux -- running Linux at a lower cost than on z/OS central processors
* An Application Assist Processor -- lowering costs by offloading Java from z/OS central processors
* An Integrated Information Processor -- lowering costs by offloading selected DB2 work from central processors
* A spare -- able to dynamically replace any failing core
Moreover, this architecture allows multiple concurrent hypervisors, whereas typically a single hypervisor controls an entire server environment. A hypervisor lies between the physical hardware and the virtual machines, making one physical resource look like multiple virtual resources: it abstracts the physical hardware resources into logical representations used by virtual machines and regulates virtual machine access to those abstracted resources. Advanced hypervisor functionality can combine multiple physical resources into shared pools from which users receive virtual resources on demand.
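The pooling behavior described above can be sketched in a few lines of Python. This is a deliberately simplified model, not any vendor's implementation: the class name, method names, and the idea of tracking allocations in a dictionary are all illustrative assumptions. It shows only the core idea that virtual machines draw fractional shares from a pooled physical capacity, and a request is denied once the pool is exhausted.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Toy model of a hypervisor resource pool: physical CPU capacity is
    aggregated, and virtual machines draw fractional shares on demand.
    (Illustrative only; not a real hypervisor interface.)"""
    physical_cores: float            # total physical capacity in the pool
    allocations: dict = field(default_factory=dict)

    def allocated(self) -> float:
        """Capacity already handed out to virtual machines."""
        return sum(self.allocations.values())

    def request(self, vm: str, cores: float) -> bool:
        """Grant a VM a share of the pool if capacity remains."""
        if self.allocated() + cores <= self.physical_cores:
            self.allocations[vm] = self.allocations.get(vm, 0.0) + cores
            return True
        return False                 # pool exhausted; request denied

pool = ResourcePool(physical_cores=8.0)
pool.request("web-vm", 2.5)     # granted: 2.5 of 8.0 cores in use
pool.request("db-vm", 4.0)      # granted: 6.5 of 8.0 cores in use
pool.request("batch-vm", 3.0)   # denied: only 1.5 cores remain
```

A real hypervisor adds scheduling, overcommitment, and reclamation on top of this basic accounting, but the admission decision is the same shape.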
The System z PR/SM hypervisor has been available since 1988, and:
* Holds a Common Criteria EAL5 security rating
* Creates separate pools of z/OS and z/VM LPARs
* Allows CPU resources to be shared within these pools
* Dedicates memory to each LPAR
* Manages low-latency virtual networks
* Allows I/O channels to be shared among LPARs or dedicated to a single LPAR
A classic, single operating system running on a physical server time-shares the execution of threads and tasks. Add virtualization, and multiple virtual machines must not only time-share the underlying hardware, but each must also switch among its own threads and tasks. Until recently, it would have been difficult to implement a viable virtualization infrastructure atop commercially available, off-the-shelf hardware: the switching and housekeeping overhead was prohibitive on older, slower, memory-limited, general-purpose server technology. Historically, servers and operating systems were not virtualization-ready, except for System z (the IBM mainframe). Most high-performance enterprise processors did not even have a hypervisor execution mode, which is necessary for efficient virtualization.
Modern high-performance processors have user and kernel (used only by the operating system) execution modes. This was the classic execution architecture: user space and a [single] operating system kernel space. It worked fine when server performance was a primary marketing feature, regardless of low system utilization. Attempts to run multiple OSs on the same server, given, among other constraints, processor designs without a hypervisor privilege level, non-uniform memory architectures, and the lack of I/O memory management units, resulted in servers with hardware-partitioned domains, such as Sun's SPARC and Solaris Domains or HP's SuperDomes and nPars. On x86 processors, which also lacked these facilities, the limitations were eventually addressed by novel technologies from VMware and others that would trap and translate (and cache) privileged x86 kernel opcodes into user-mode opcodes, allowing guest kernels to run outside kernel mode while the hypervisor retained control of the hardware.
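The trap-translate-cache technique can be illustrated schematically. The sketch below uses made-up string "opcodes" rather than real x86 machine code, and the names (`PRIVILEGED`, `hv_emulate_*`) are hypothetical: the point is only the mechanism, in which privileged instructions in a guest-kernel code block are rewritten to call hypervisor emulation routines, and translated blocks are cached so the cost is paid once per block rather than once per execution.

```python
# Conceptual sketch of trap-and-translate binary translation, the general
# technique used to virtualize pre-VT x86 processors. All opcode and
# routine names here are hypothetical placeholders, not real x86 code.

PRIVILEGED = {"cli", "sti", "mov_cr3"}     # pretend privileged opcodes

translation_cache: dict[int, list[str]] = {}

def translate_block(block_id: int, code: list[str]) -> list[str]:
    """Rewrite privileged opcodes to hypervisor calls, caching the result."""
    if block_id in translation_cache:      # hot path: reuse the cached copy
        return translation_cache[block_id]
    translated = [
        f"call hv_emulate_{op}" if op in PRIVILEGED else op
        for op in code
    ]
    translation_cache[block_id] = translated
    return translated

guest_block = ["mov", "cli", "add", "mov_cr3", "ret"]
print(translate_block(0, guest_block))
# 'cli' and 'mov_cr3' now route through hypervisor emulation routines,
# so the guest kernel can run deprivileged without faulting.
```

Production translators (VMware's among them) operated on real instruction streams and handled far harder problems (self-modifying code, precise traps), but the cache-and-rewrite loop is the essence of the approach.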
Hardware partitioning is crude compared with today's virtualization: the underlying hardware is not shared. Each partition is electrically isolated from the others and expressly lacks the ability to share processing capability or other system resources. In such an environment, a single OS in one hard partition could be running at 100% while the other partitions idle, wasting overall utilization.
Until very recently, Intel and AMD x86 processors had almost no capability for aiding hypervisors in abstracting hardware. The latest Nehalem-class Intel processors, with Virtual Machine Extensions and chipset extensions, greatly help hypervisors abstract the underlying hardware and keep track of virtual machine state. Commercial x86 virtualization packages will eventually make more use of these new capabilities. It would have been preferable if all x86 processors had been designed with virtualization in mind, but that was not the case.
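As a practical aside, on Linux you can see whether a given x86 processor advertises these hardware-assist extensions: the kernel exposes the CPUID feature flags in /proc/cpuinfo, where `vmx` marks Intel's Virtual Machine Extensions and `svm` marks AMD's equivalent (AMD-V). A minimal checker, written to take the file contents as a string so it can be exercised anywhere:

```python
def hw_virt_support(cpuinfo_text):
    """Return "vmx" (Intel VT-x), "svm" (AMD-V), or None, given the
    contents of a Linux /proc/cpuinfo file."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"     # Intel hardware-assisted virtualization
            if "svm" in flags:
                return "svm"     # AMD hardware-assisted virtualization
    return None

sample = "flags\t\t: fpu vme de pse vmx sse2"
hw_virt_support(sample)   # -> "vmx"
```

On a real system you would pass `open("/proc/cpuinfo").read()`; note the flag's presence shows processor capability only, since firmware can still disable the feature.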
The IBM POWER processor was specifically designed to operate in a virtualized environment. The processor has three execution modes -- hypervisor, kernel, and user -- and includes peripheral management units, among other facilities. Because it was designed for a virtualized environment, the POWER hypervisor, PowerVM, is a firmware product. Since PowerVM functions synergistically with the POWER architecture, it can abstract a single POWER core down to 1/10 of a processor (alternatively, create 10 virtual or logical processors out of one physical core) within multiple shared processor pools containing up to 254 logical processors. These capabilities would be rather difficult to match on current x86-class processors and virtualization suites.
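The arithmetic of such micro-partitioning is easy to model. The sketch below is an illustrative capacity check only, with hypothetical names and none of PowerVM's actual interfaces; it assumes, per the figures above, that a core can be carved into entitlements as fine as one tenth of a processor, and verifies that a set of LPAR entitlements fits a shared pool.

```python
# Illustrative POWER-style micro-partitioning arithmetic: entitlements are
# expressed in fractions of a core, with a minimum granule of 0.1 core.
# Function and variable names are hypothetical, not the PowerVM API.

def plan_micropartitions(physical_cores, entitlements):
    """Return True if the LPAR entitlements fit in a shared pool of the
    given physical cores and each meets the minimum granularity."""
    granule = 0.1                                # 1/10 of a processor
    if any(e < granule for e in entitlements.values()):
        return False                             # below minimum entitlement
    return sum(entitlements.values()) <= physical_cores

entitlement_plan = {"lpar1": 0.5, "lpar2": 1.3, "lpar3": 0.1, "lpar4": 2.0}
plan_micropartitions(4, entitlement_plan)   # True: 3.9 cores of entitlement fit
```

In practice PowerVM also distinguishes capped from uncapped partitions and schedules unused entitlement dynamically, which this static check ignores.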
System virtualization enables the consolidation of systems, workloads, and operating environments; optimizes resource use; and improves data center flexibility and responsiveness. Virtualization provides the following benefits:
Consolidation and reduction of hardware costs -- virtualization enables efficient access to, and management of, resources, reducing operating and system management costs while maintaining needed capacity. Typical server-wide utilization of ~20% can be increased to over 80% with well-implemented virtualization.
Optimization of workloads -- virtualization enables dynamic response to users' application needs and increases the use of existing resources by enabling dynamic sharing of resource pools.
IT flexibility and responsiveness -- virtualization provides a single, consolidated view of, and easy access to, all available resources in the network, regardless of location.
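The consolidation benefit above (~20% utilization raised past 80%) reduces to simple arithmetic. The back-of-the-envelope estimate below is a sketch under the stated assumptions only: equally sized machines and a purely CPU-bound aggregate load, ignoring memory, I/O, and headroom for failover.

```python
import math

def hosts_needed(servers, avg_util_pct, target_util_pct):
    """Number of equally sized virtualized hosts needed to absorb the
    aggregate load of the existing servers (utilizations in percent).
    Simplified estimate: assumes CPU-bound load and identical machines."""
    aggregate_load = servers * avg_util_pct      # in server-percent units
    return math.ceil(aggregate_load / target_util_pct)

hosts_needed(20, 20, 80)   # 20 servers at 20% -> 5 hosts running at 80%
```

Even this crude model shows why the economics are compelling: a 4:1 reduction in machines before counting power, cooling, and floor-space savings.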
When contemplating virtualization, consider consolidating logical rather than physical resources into an environment designed to support server, storage, and network virtualization. Adding virtualization technologies creates an on-demand, secure, and flexible infrastructure prepared to handle dynamic workload changes in your data center automatically.
Refer to http://www-03.ibm.com/systems/virtualization/ for further information on IBM virtualization.