
What Is Virtualization Technology? A Brief Introduction



Today, virtualization is a standard practice in enterprise IT architecture. It is also the technology that drives cloud computing economics. Virtualization enables cloud providers to serve users with their existing physical computer hardware; it enables cloud users to purchase only the computing resources they need, when they need them, and to scale those resources cost-effectively as their workloads grow.







CPU (central processing unit) virtualization is the fundamental technology that makes hypervisors and virtual machines possible. It allows a single physical CPU to be divided into multiple virtual CPUs for use by multiple VMs.


The concept of virtualization is generally believed to have its origins in the mainframe days in the late 1960s and early 1970s, when IBM invested a lot of time and effort in developing robust time-sharing solutions. Time-sharing refers to the shared usage of computer resources among a large group of users, aiming to increase the efficiency of both the users and the expensive computer resources they share. This model represented a major breakthrough in computer technology: the cost of providing computing capability dropped considerably and it became possible for organizations, and even individuals, to use a computer without actually owning one. Similar reasons are driving virtualization for industry standard computing today: the capacity in a single server is so large that it is almost impossible for most workloads to effectively use it. The best way to improve resource utilization, and at the same time simplify data center management, is through virtualization.


Data centers today use virtualization techniques to abstract the physical hardware, create large aggregated pools of logical resources consisting of CPUs, memory, disks, file storage, applications, and networking, and offer those resources to users or customers in the form of agile, scalable, consolidated virtual machines. Even though the technology and use cases have evolved, the core meaning of virtualization remains the same: to enable a computing environment to run multiple independent systems at the same time.


You might create your own container images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
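
To make the layering concrete, here is a small, hypothetical sketch using shell commands on a Linux host with Docker installed; the image name myapp and the file names are made up for illustration, not taken from any particular project.

# Write a minimal Dockerfile; each instruction below becomes one image layer.
cat > Dockerfile <<'EOF'
# Base image layer
FROM python:3.12-slim
# Dependency layer: reused from cache unless requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application layer: rebuilt whenever app.py changes
COPY app.py .
CMD ["python", "app.py"]
EOF

docker build -t myapp:latest .   # rebuilds only layers whose inputs changed
docker history myapp:latest      # lists one entry per layer

Rebuilding after editing only app.py reuses the cached base and dependency layers, which is why image rebuilds are typically fast.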


One approach is an operating system-level virtualization technology for Linux that uses a patched Linux kernel for virtualization, isolation, resource management, and checkpointing; that code was not released as part of the official Linux kernel.


The way nested virtualization can be implemented on a particular computer architecture depends on supported hardware-assisted virtualization capabilities. If a particular architecture does not provide the hardware support required for nested virtualization, various software techniques are employed to enable it.[5] Over time, more architectures have gained the required hardware support; for example, starting with the Haswell microarchitecture (announced in 2013), Intel has included VMCS shadowing as a technology that accelerates nested virtualization.[7]
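
As a hedged example of what this looks like in practice, on a Linux host that uses KVM with an Intel CPU (AMD hosts use the kvm_amd module instead), nested virtualization can be checked and enabled like this:

cat /sys/module/kvm_intel/parameters/nested   # Y or 1 means nested virtualization is enabled
sudo modprobe -r kvm_intel                    # unload the module (no VMs may be running)
sudo modprobe kvm_intel nested=1              # reload it with nesting turned on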


There are opportunities to work with virtual machines and virtualization technology outside of professional positions. Platforms and tools are available to develop your own virtual machine project, like the Google Cloud Console. Building a virtual machine requires knowledge of cloud computing and operating systems. If you want to learn how to create a virtual machine, consider the Google Cloud Training Project: Creating a Virtual Machine.


Full virtualization mode lets virtual machines run unmodified operating systems, such as Windows Server 2003. It can use either binary translation or hardware-assisted virtualization technology, such as AMD Virtualization (AMD-V) or Intel Virtualization Technology (Intel VT-x). Using hardware assistance allows for better performance on processors that support it.
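
A quick way to check whether a Linux host's processor exposes these hardware-assist extensions (vmx for Intel VT-x, svm for AMD-V) is to look at the CPU flags:

grep -m1 -woE 'vmx|svm' /proc/cpuinfo   # prints vmx or svm if hardware assist is available
lscpu | grep -i virtualization          # reports VT-x or AMD-V on most distributions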


In enterprise networks, virtualization and cloud computing are often used together to build a public or private cloud infrastructure. In small businesses, each technology is more likely to be deployed separately, and each can deliver measurable benefits on its own. In different ways, virtualization and cloud computing can help you keep your equipment spending to a minimum and get the best possible use from the equipment you already have.


Multicore processing and virtualization are rapidly becoming ubiquitous in software development. They are widely used in the commercial world, especially in large data centers supporting cloud-based computing, to (1) isolate application software from hardware and operating systems, (2) decrease hardware costs by enabling different applications to share underutilized computers or processors, (3) improve reliability and robustness by limiting fault and failure propagation and by supporting failover and recovery, and (4) enhance scalability and responsiveness through the use of actual and virtual concurrency in architectures, designs, and implementation languages. Combinations of multicore processing and virtualization are also increasingly being used to build mission-critical, cyber-physical systems to achieve these benefits and leverage new technologies, both during initial development and technology refresh.


In this introductory blog post, I lay the foundation for the rest of the series by defining the basic concepts underlying multicore processing and the two main types of virtualization: (1) virtualization by virtual machines and hypervisors and (2) virtualization by containers. I will then briefly compare the three technologies and end by listing some key technical challenges these technologies bring to system and software development.


The following figure shows the architectural differences between multicore processing, virtualization via VMs, and virtualization via containers, with the relevant technologies highlighted in green. In this figure, SW1 through SW4 represent four different software applications. The standard acronyms OS and VM stand for operating system and virtual machine, respectively. The boxes labeled C1 through C4 on the left represent cores in a multicore processor, whereas on the right they represent containers. The figure also implies that multicore processing is primarily a hardware technology, whereas virtualization, whether by virtual machines or containers, is a software technology.


The next post in this series will define multicore processing, list its current trends, document its pros and cons, and briefly address its safety and security ramifications. The following two blog entries will do the same for virtualization via virtual machines and virtualization via containers. These postings will be followed by a final blog entry providing general recommendations regarding the use of these technologies on mission-, safety-, and security-critical, cyber-physical systems.


Containers are an operating system virtualization technology used to package applications and their dependencies and run them in isolated environments. They provide a lightweight method of packaging and deploying applications in a standardized way across many different types of infrastructure.
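
For instance, a single command is enough to start an isolated environment from a packaged image (a hedged sketch assuming Docker is installed; the alpine image tag is just an example):

docker run --rm alpine:3.19 cat /etc/os-release   # the container sees its own Alpine userspace
docker run --rm alpine:3.19 uname -r              # but it shares the host's kernel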


Virtual machines, or VMs, are a hardware virtualization technology that allows you to fully virtualize the hardware and resources of a computer. Each virtual machine runs its own guest operating system, completely separate from the OS running on the host system. On the host system, a piece of software called a hypervisor is responsible for starting, stopping, and managing the virtual machines.
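
As a rough sketch of that division of labor on a Linux host running the KVM/libvirt stack (the VM name demo-vm, the memory and CPU sizes, and the installer ISO path are all hypothetical):

virt-install --name demo-vm --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom /path/to/installer.iso \
  --os-variant generic                  # the hypervisor carves out vCPUs, RAM, and a virtual disk
virsh list --all                        # the hypervisor's view of defined and running VMs
virsh shutdown demo-vm                  # guests are started, stopped, and managed from the host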


Linux has rich virtual networking capabilities that are used as the basis for hosting VMs and containers, as well as cloud environments. In this post, I will give a brief introduction to all commonly used virtual network interface types. There is no code analysis, only a brief introduction to the interfaces and their usage on Linux. Anyone with a networking background might be interested in this blog post. A list of supported interface types can be obtained using the command ip link help.
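
As a small example of creating one of these interface types (a veth pair; the interface names and the address are made up for illustration), the same ip command is used:

sudo ip link add veth0 type veth peer name veth1   # create a connected pair of virtual interfaces
sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip link set veth0 up
sudo ip link set veth1 up
ip link show type veth                             # list the veth interfaces just created
sudo ip link del veth0                             # deleting one end removes the pair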


After reading this article, you will know what these interfaces are, how they differ, when to use them, and how to create them. For other interfaces, such as tunnels, please see An introduction to Linux virtual interfaces: Tunnels.


You may think that the hypervisor is a fairly recent phenomenon. In fact, the first hypervisors were introduced in the 1960s to allow different operating systems to run on a single mainframe computer. However, their current popularity is largely due to Linux and Unix: around 2005, Linux and Unix systems started using virtualization technology to expand hardware capabilities, control costs, and gain the improved reliability and security that hypervisors provided.

