If you know little about virtualization, don't worry. Virtualization is the process of creating a virtual environment rather than a physical one. It occurs when operating systems, servers, and storage devices share one environment without being aware that this is the case.
The concept is quite simple: a logical division of drives makes it appear as though there are separate instances, but in reality they all share the same resources.
The evolution of virtualization
Virtualization can apply to applications, servers, storage, and networks, and it is one of the most effective ways to reduce IT expenses while boosting efficiency and agility for businesses of all sizes.
Benefits of Virtualization
Virtualization can increase IT agility, flexibility, and scalability while creating significant cost savings. Workloads get deployed faster, performance and availability increase, and operations become automated, resulting in IT that's simpler to manage and less costly to own and operate. Additional benefits include:
- Reduce capital and operating costs.
- Minimize or eliminate downtime.
- Increase IT productivity, efficiency, agility and responsiveness.
- Provision applications and resources faster.
- Enable business continuity and disaster recovery.
- Simplify data center management.
- Build a true Software-Defined Data Center.
How virtualization technology works
Virtualization uses software to simulate the existence of hardware and create a virtual computer system. Doing this allows businesses to run more than one virtual system, with multiple operating systems and applications, on a single server. This can provide economies of scale and greater efficiency.
A key use of virtualization technology is server virtualization, which uses a software layer called a hypervisor to emulate the underlying hardware. This often includes the CPU, memory, I/O, and network traffic. The guest operating system, normally interacting with true hardware, is now doing so with a software emulation of that hardware, and often the guest operating system has no idea it's on virtualized hardware. While the performance of this virtual system is not equal to the performance of the operating system running on true hardware, the concept of virtualization works because most guest operating systems and applications don't need the full use of the underlying hardware. This allows for greater flexibility, control and isolation by removing the dependency on a given hardware platform. While initially meant for server virtualization, the concept of virtualization has spread to applications, networks, data and desktops.
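In practice a guest is not always completely in the dark: on x86 Linux, hypervisors set a `hypervisor` CPUID flag that a cooperative guest can check. As a minimal sketch (Linux-specific, and `running_in_vm` is a hypothetical helper name, not a standard API):

```python
def running_in_vm(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises the 'hypervisor' CPUID flag,
    which hypervisors expose so that cooperative guests can detect them.
    Linux x86 only; illustrative sketch, not a robust detection method."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "hypervisor" in line.split()
    except OSError:
        pass  # no /proc/cpuinfo (non-Linux): assume bare metal
    return False
```

Tools such as `systemd-detect-virt` combine several signals like this one to identify the specific hypervisor in use.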
Types of virtualization
There are six areas of IT where virtualization is making headway:
1. Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others and can be assigned — or reassigned — to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
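The channel model above can be sketched in a few lines. This is a toy illustration under assumed names (`ChannelPool` is hypothetical); real network virtualization lives in switches, NICs, and overlay protocols, not application code:

```python
class ChannelPool:
    """Toy model of network virtualization: total bandwidth split into
    independent channels that can be assigned, or reassigned, to a
    server in real time."""

    def __init__(self, total_mbps, channels):
        self.per_channel = total_mbps // channels      # equal, independent slices
        self.assignment = {c: None for c in range(channels)}

    def assign(self, channel, server):
        self.assignment[channel] = server              # reassignment is just another assign

    def bandwidth_of(self, server):
        # A server's share is simply the sum of its channels.
        return sum(self.per_channel
                   for s in self.assignment.values() if s == server)
```

For example, splitting a 1000 Mbps link into 4 channels and assigning two of them to one server gives that server 500 Mbps; moving a channel to another server changes the split instantly.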
2. Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks.
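The core trick of storage virtualization is address translation: one linear logical address space mapped onto several physical devices. A minimal sketch, assuming made-up device names and block-level granularity (nothing like a production SAN volume manager):

```python
class StoragePool:
    """Toy storage virtualization: several physical devices, given as
    (name, capacity_in_blocks), presented as one linear address space."""

    def __init__(self, devices):
        self.devices = devices
        self.total = sum(cap for _, cap in devices)   # size the pool advertises

    def locate(self, logical_block):
        """Translate a logical block number into (device_name, physical_block)."""
        if not 0 <= logical_block < self.total:
            raise IndexError("block out of range")
        for name, cap in self.devices:
            if logical_block < cap:
                return name, logical_block
            logical_block -= cap                      # fall through to next device
```

A pool built from a 100-block disk and a 200-block disk advertises 300 blocks; logical block 150 silently resolves to block 50 of the second disk, and the consumer never knows.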
3. Server virtualization is the masking of server resources — including the number and identity of individual physical servers, processors and operating systems — from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.
The layer of software that enables this abstraction is often referred to as the hypervisor. The most common hypervisor — Type 1 — is designed to sit directly on bare metal and provide the ability to virtualize the hardware platform for use by the virtual machines (VMs). KVM (Kernel-based Virtual Machine) is a Linux kernel-based hypervisor that provides Type 1 virtualization benefits similar to other hypervisors, and it is open source. A Type 2 hypervisor requires a host operating system and is more often used for testing and labs.
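KVM depends on hardware virtualization extensions (Intel VT-x or AMD-V), and on Linux two quick checks tell you whether a host is ready. A hedged sketch (the helper names are hypothetical; on real systems, tools like `kvm-ok` perform these checks more thoroughly):

```python
import os

def cpu_supports_virt(cpuinfo_path="/proc/cpuinfo"):
    """True if /proc/cpuinfo lists Intel VT-x ('vmx') or AMD-V ('svm')
    among the CPU flags, the hardware feature KVM requires."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split()
                    return "vmx" in flags or "svm" in flags
    except OSError:
        pass  # not Linux, or cpuinfo unreadable
    return False

def kvm_ready():
    """/dev/kvm only appears once the kvm kernel module is loaded,
    so its presence is a quick readiness check before launching VMs."""
    return os.path.exists("/dev/kvm")
```

A management stack would typically refuse to start hardware-accelerated guests when `kvm_ready()` is false and fall back to slower pure emulation.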
4. Data virtualization is abstracting the traditional technical details of data and data management, such as location, performance or format, in favor of broader access and more resiliency tied to business needs.
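The idea of hiding location and format behind one access layer can be shown with a tiny facade. This is an illustrative sketch only (`read_records` and the `(kind, payload)` source shape are assumptions, not any product's API):

```python
import csv
import io
import json

def read_records(source):
    """Tiny data-virtualization facade: callers receive a list of records
    and never learn whether the backing store held CSV text or JSON."""
    kind, payload = source
    if kind == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    if kind == "json":
        return json.loads(payload)
    raise ValueError(f"unknown source kind: {kind}")
```

Because every consumer goes through the same call, a source can be migrated from one format (or location) to another without touching the code that uses the data, which is the resiliency the paragraph above describes.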
5. Desktop virtualization is virtualizing a workstation load rather than a server. This allows the user to access the desktop remotely, typically using a thin client at the desk. Since the workstation is essentially running in a data center server, access to it can be both more secure and portable. The operating system license does still need to be accounted for as well as the infrastructure.
6. Application virtualization is abstracting the application layer away from the operating system. This way the application can run in an encapsulated form without depending on the operating system underneath. This can allow a Windows application to run on Linux and vice versa, in addition to adding a level of isolation.
Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workload management.