Virtualization



Introduction


Virtualization is a concept that has evolved from what many first recognized as a niche technology into one that is driving many mainstream networks. Evidence of virtualization exists in nearly every aspect of information technology today: you can see it in sales, education, testing, and demonstration labs, and you can even see it driving network servers.
Virtualization is the latest in a long line of technical innovations designed to increase the level of system abstraction and enable IT users to harness ever-increasing levels of computer performance. At its simplest, virtualization allows you to run two or more computers, with two or more completely different environments, cost-effectively on one piece of hardware. For example, with virtualization you can have both a Linux machine and a Windows machine on one system, or host a Windows 95 desktop and a Windows XP desktop on one workstation.

In slightly more technical terms, virtualization decouples users and applications from the specific hardware characteristics of the systems they use to perform computational tasks. This technology promises to usher in an entirely new wave of hardware and software innovation. For example, among other benefits, virtualization is designed to simplify system upgrades (and in some cases may eliminate the need for them) by allowing users to capture the state of a virtual machine (VM) and then transport that state in its entirety from an old host system to a new one. Virtualization is also designed to enable a generation of more energy-efficient computing: processor, memory, and storage resources that today must be delivered in fixed amounts determined by real hardware system configurations will instead be delivered with finer granularity via dynamically tuned VMs.

The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a pseudo machine), a term which itself dates from the experimental IBM M44/44X system.
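To make the state-capture idea concrete, here is a minimal sketch using the libvirt Python bindings. It assumes a Linux host running libvirtd and an existing guest named "demo-guest"; the guest name and the state-file path are illustrative assumptions, not details from any particular product.

    # Minimal sketch: capture a VM's state to a file, then resume it.
    # Assumes libvirt-python is installed and a guest "demo-guest" exists.
    import libvirt

    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    dom = conn.lookupByName("demo-guest")    # find the guest by name

    # Capture the guest's entire runtime state to a file; this stops the guest.
    dom.save("/var/tmp/demo-guest.state")

    # Later -- possibly after copying the state file to a new host (the
    # guest's disk image must also be reachable there) -- resume the guest
    # exactly where it left off.
    conn.restore("/var/tmp/demo-guest.state")

    conn.close()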
A virtual machine can be controlled and inspected from outside more easily than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses. A new virtual machine can be provisioned as needed without an up-front hardware purchase, and an existing virtual machine can easily be relocated from one physical machine to another. For example, a salesperson visiting a customer can copy a virtual machine containing the demonstration software to a laptop, without the need to transport a physical computer. At the same time, an error inside a virtual machine does not harm the host system, so there is no risk of breaking the OS on the laptop. Because they are so easy to relocate, virtual machines can also be used in disaster recovery scenarios.
The creation and management of virtual machines has more recently been called platform virtualization, or server virtualization. In fact, the earliest virtualization was implemented by the University of Manchester on the Atlas computer, the first machine to use virtual memory, handling the 576 KB of its four drum stores, and paging techniques. Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine, for its guest software. The guest software, which is often itself a complete operating system, runs just as if it were installed on a stand-alone hardware platform. Typically, many such virtual machines are simulated on a single physical machine, their number limited by the host's hardware resources, and there is no requirement for a guest OS to be the same as the host OS. The guest system often requires access to specific peripheral devices in order to function, so the simulation must support the guest's interfaces to those devices; typical examples of such devices are the hard disk drive and the network interface card.
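To make the idea of a control program concrete, here is a toy sketch in Python. It is not a real hypervisor: the instruction set and the guest program are invented for illustration. But the structure, a host loop that mediates every "hardware" access on behalf of guest code, is the essence of platform virtualization.

    # Toy "control program": simulates a CPU and memory for its guest.
    # The guest program only ever touches this simulated hardware.
    def run_guest(program):
        memory = [0] * 16          # simulated RAM
        acc = 0                    # simulated accumulator register
        pc = 0                     # simulated program counter
        while pc < len(program):
            op, arg = program[pc]
            if op == "LOAD":       # load an immediate value into the accumulator
                acc = arg
            elif op == "STORE":    # write the accumulator to simulated memory
                memory[arg] = acc
            elif op == "ADD":      # add a value from simulated memory
                acc += memory[arg]
            elif op == "HALT":
                break
            pc += 1
        return acc

    # "Guest software" for the simulated machine above.
    result = run_guest([("LOAD", 2), ("STORE", 0),
                        ("LOAD", 40), ("ADD", 0), ("HALT", 0)])
    print(result)                  # prints 42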
Well, to keep it simple, consider virtualization to be the act of abstracting the physical boundaries of a technology. Physical abstraction now occurs in several ways, with many of these methods illustrated in Figure 1. For example, workstations and servers no longer need dedicated physical hardware, such as a CPU or motherboard, in order to run as independent entities. Instead, they can run inside a virtual machine (VM). When a computer runs as a virtual machine, its hardware is emulated and presented to an operating system as if that hardware truly existed. This technology removes the traditional dependence of every operating system on specific hardware: because the hardware is emulated, a virtual machine can run on essentially any x86-class host system, regardless of its hardware makeup. Furthermore, you can run multiple VMs with different operating systems on the same system at the same time!
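If you want to check whether your own x86-class host exposes the hardware virtualization extensions that modern VM software relies on, the following sketch reads the CPU flags on a Linux system. The /proc/cpuinfo path and the vmx (Intel VT-x) and svm (AMD-V) flag names are standard on Linux; the helper function itself is invented for this example.

    # Sketch: check for hardware virtualization extensions on a Linux host.
    def has_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    # "vmx" = Intel VT-x, "svm" = AMD-V
                    return "vmx" in flags or "svm" in flags
        return False

    print("Hardware virtualization available:", has_virtualization_support())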



Virtualization is More than Virtual Machine Software


It is clear to those who have explored the topic of virtualization technology that it encompasses far more than just virtual machine software, such as VMware ESX Server, XenSource XenEnterprise, or Microsoft Virtual Server. Over the last 30 years, virtualization technology has been developed to enhance how individuals access computing solutions, how applications are developed and deployed, how they are processed, where and how they are stored, how systems communicate with one another, and, of course, how an extended system environment can be made both secure and manageable. This broad view is very important if an organization hopes to make optimal use of the technology.
Somewhere along the way, many in the industry have come to believe that virtualization is merely the use of virtual machine software. This rather narrow view holds that the whole purpose of virtualization is to encapsulate an operating system and the whole stack of software that enables an application or Web service to run; virtual machine software then makes it possible for one or more of these "capsules" to run simultaneously on a single machine.
While this viewpoint is useful when the goals are only to consolidate an existing application portfolio onto a smaller number of systems, to reduce or avoid costs, or to make it easier to deploy systems for new tasks, it is far less useful when the organization seeks higher levels of performance, greater scalability, greater agility, high levels of reliability and availability, or the ability to manage its physical and virtual resources in a uniform way. Virtual machine software, after all, is only one of five virtual processing functions, and virtual processing is only one of seven layers of virtualization technology. Decision makers must work with a broader view of virtualization technology.
The Kusnetzky Group believes this state of affairs can be attributed to the marketing prowess of a small number of suppliers of virtual machine software rather than to virtualization really being such a limited concept. This paper will examine a useful model of virtualization technology and present what each type of virtualization can do for an organization. Future papers in this series will focus on other aspects of virtualization.

What is virtualization?

Virtualization is a way to abstract applications and their underlying components away from the hardware supporting them and present a logical view of these resources. This logical view may be strikingly different from the physical view. The goal is usually one of the following: higher levels of performance, scalability, reliability/availability, or agility, or the creation of a unified security and management domain.
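As a deliberately simplified illustration of that logical-versus-physical split, the Python sketch below presents one logical volume over several physical disks; consumers of the logical view never see the disks behind it. All class and attribute names here are invented for illustration.

    # Sketch: one logical resource presented over several physical ones.
    class PhysicalDisk:
        def __init__(self, name, size_gb):
            self.name = name
            self.size_gb = size_gb

    class LogicalVolume:
        """Presents a single logical view; consumers never see the disks."""
        def __init__(self, disks):
            self._disks = disks

        @property
        def size_gb(self):
            # Logical capacity is the sum of the physical capacities.
            return sum(d.size_gb for d in self._disks)

    pool = LogicalVolume([PhysicalDisk("sda", 500), PhysicalDisk("sdb", 250)])
    print(pool.size_gb)            # 750 -- one logical view, two physical disks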
