Installation and configuration of KVM in RHEL7


Introduction

Kernel-based Virtual Machine (KVM) is Red Hat's competitor to Xen, the leading open source implementation of virtualization. It is not clear why Red Hat decided to fracture the open source virtualization space, but we have what we have. In comparison with Xen it looks half-baked, with several undocumented problems waiting for users who want to try it. The goal of the project was to create a modern hypervisor that builds on the experience of previous generations of technologies and leverages the hardware virtualization extensions available today (Intel VT-x, AMD-V).

KVM is based on the Quick Emulator (QEMU), which was written by Fabrice Bellard (the creator of FFmpeg) and is mainly licensed under the GNU General Public License (GPL).

To execute guest code on the physical CPU, QEMU makes use of POSIX threads: the guest virtual CPUs run in the host as POSIX threads. This brings many advantages, because from a high-level view these are just ordinary processes to the host kernel. Put another way, the user-space part of the KVM hypervisor is provided by QEMU, which runs the guest code via the KVM kernel module. When working with KVM, QEMU also handles I/O emulation, I/O device setup, live migration, and so on.

QEMU opens the device file (/dev/kvm) exposed by the KVM kernel module and issues ioctl() calls on it. To conclude, KVM makes use of QEMU to become a complete hypervisor; KVM itself is an accelerator or enabler of the hardware virtualization extensions (VMX or SVM) provided by the processor, and is therefore tightly coupled with the CPU architecture. This also implies that guest systems must use the same architecture in order to benefit from the hardware virtualization extensions. Once enabled, this gives much better performance than techniques such as binary translation.

The fundamental principle KVM developers followed was the same as for the Linux kernel: "Don't reinvent the wheel." That is, they did not try to change the kernel code to turn it into a hypervisor; rather, the code supporting the new hardware virtualization assistance (VMX and SVM) from hardware vendors was developed as a loadable kernel module. There is a common kernel module called kvm.ko, and there are hardware-specific modules: kvm-intel.ko (Intel-based systems) and kvm-amd.ko (AMD-based systems). KVM loads kvm-intel.ko if the vmx flag is present, or kvm-amd.ko if the svm flag is present. This turns the Linux kernel into a hypervisor, thus achieving virtualization. KVM was developed by Qumranet and has been part of the Linux kernel since version 2.6.20; Qumranet was later acquired by Red Hat.

QEMU is a generic and open source machine emulator and virtualizer. When used as a machine emulator, QEMU can run OSes and programs made for one machine (for example, an ARM board) on a different machine (for example, your own PC). By using dynamic translation, it achieves very good performance (see www.qemu.org).

Formally, KVM belongs to the class of hardware-assisted ("full virtualization") engines, which function by utilizing the CPU virtualization extensions on modern Intel and AMD processors, known as Intel VT-x and AMD-V. It does not run on CPUs that lack those extensions.

Whether the extensions are present can be checked with the command:
egrep -c '(vmx|svm)' /proc/cpuinfo 
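As a small illustration, the same check can be wrapped in a helper that reports which vendor extension (if any) is present. The function name check_virt_flags is an arbitrary choice for this sketch; on a real host you would feed it /proc/cpuinfo:

```shell
# Report which hardware virtualization flag appears in the cpuinfo
# text supplied on stdin: "intel" (vmx), "amd" (svm), or "none".
check_virt_flags() {
  local cpuinfo
  cpuinfo=$(cat)
  if   echo "$cpuinfo" | grep -qw vmx; then echo intel
  elif echo "$cpuinfo" | grep -qw svm; then echo amd
  else echo none
  fi
}

# On a real host:
#   check_virt_flags < /proc/cpuinfo
```

A result of "none" means KVM cannot be used on that machine (or virtualization support is disabled in the BIOS/UEFI firmware).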

KVM belongs to the class of "host-based" hypervisors: it simply turns the Linux kernel into a hypervisor when you load the KVM kernel module. Because it uses the kernel as the building block of the hypervisor, it benefits from improvements to the standard kernel (memory management, the scheduler, etc.). Optimizations to these Linux components (such as the new scheduler in the 3.1 kernel) benefit both the hypervisor (the host operating system) and the Linux guest operating systems. For I/O emulation, KVM uses a userland program, QEMU, which performs the hardware emulation.

QEMU emulates the processor and a long list of peripheral devices: disk, network, VGA, PCI, USB, serial/parallel ports, and so on, to build complete virtual hardware on which the guest operating system can be installed; this emulation is accelerated by KVM.

KVM is managed via the libvirt API and tools. The most popular libvirt tools are virt-manager and virsh.

Other tools include virt-install, virt-clone, virt-image, and virt-viewer, which are used to provision, clone, and view virtual machines.

How to avoid troubles

There are several problems you can encounter with KVM tools. That's why it is highly recommended to start using them from the real console, or via a DRAC/ILO session.

How to check if the necessary kernel modules are loaded

You need to verify that the following kernel modules are loaded, and if not, load them manually:

# lsmod | grep kvm

To load the KVM modules, use the following commands:

# modprobe  kvm
# modprobe kvm_intel           (Intel-based systems)
# modprobe kvm_amd             (AMD-based systems)
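The choice between the two vendor-specific modules can be scripted. The helper below is a sketch; kvm_module_for_flags is a hypothetical name, and on a real host the modprobe calls require root:

```shell
# Map a CPU flags line (from /proc/cpuinfo) to the matching
# vendor-specific KVM module name; print nothing if no flag matches.
kvm_module_for_flags() {
  case "$1" in
    *vmx*) echo kvm_intel ;;
    *svm*) echo kvm_amd ;;
    *)     echo "" ;;
  esac
}

# On a real host (as root):
#   mod=$(kvm_module_for_flags "$(grep -m1 '^flags' /proc/cpuinfo)")
#   if [ -n "$mod" ]; then modprobe kvm && modprobe "$mod"; fi
```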

Installation

Recommended virtualization packages:

python-virtinst
Provides the virt-install command for creating virtual machines.
libvirt
libvirt is an API library for interacting with hypervisors. It provides the virsh command line tool to manage and control virtual machines.
libvirt-python
The libvirt-python package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API.
virt-manager
virt-manager, also known as Virtual Machine Manager, provides a graphical tool for administering virtual machines. It uses libvirt library as the management API.

The best way to install them is to use the installation groups "Virtualization Tools" and "Virtualization Platform" (you can list the available groups with yum grouplist):

yum groupinstall "Virtualization Tools" "Virtualization Platform"
yum install python-virtinst

Or you can install the necessary packages directly:

minimal:

yum install virt-manager libvirt libvirt-python python-virtinst

or with some dependencies explicitly listed:

# yum install kvm python-virtinst libvirt libvirt-python virt-manager virt-viewer libguestfs-tools bridge-utils
# yum install kvm qemu-kvm python-virtinst libvirt libvirt-python virt-manager libguestfs-tools

Post installation steps: turn on libvirtd service

The libvirtd program is the server side daemon component of the libvirt virtualization management system. Type the following chkconfig command to turn it on:

# chkconfig libvirtd on

Start the libvirtd service by typing the following service command:

# service libvirtd start

You can verify the libvirtd service by typing the following commands:

# service libvirtd status
libvirtd (pid  31128) is running...

# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

 

Virt-manager

virt-manager (http://virt-manager.org/) uses libvirt to provide a fairly flexible and powerful GUI that allows you to create and manage VMs.

It is the most popular GUI front-end for libvirt, allowing users to create and manage guest virtual machines on libvirt-supported hypervisors such as QEMU/KVM or Xen.

virt-manager can control a host-local hypervisor as well as remote host's hypervisor (over SSH), giving users a location-transparent management interface for virtual machines. For remote desktop access on guest operating systems, virt-manager offers integrated remote desktop sessions via VNC and Spice.

See also: "Creating Guests with virt-manager" (Red Hat Customer Portal).

When you start your VM, a separate qemu-kvm process is launched by libvirtd at the request of system management utilities such as virsh and virt-manager. The properties of the virtual machine (number of CPUs, memory size, I/O device configuration) are stored in XML files located in the directory /etc/libvirt/qemu.
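For illustration, a minimal skeleton of such a domain definition might look like the following. The names, paths, and devices here are hypothetical, and real files generated by libvirt contain many more elements, so do not copy this verbatim:

```xml
<domain type='kvm'>
  <name>guest1</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
  <devices>
    <disk type='block' device='disk'>
      <source dev='/dev/vms/guest1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
```

These files should not be edited by hand while a guest is defined; the supported way to change a definition is virsh edit <name>, which validates the XML before saving it.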

virsh -- command line interface to libvirt

The command line client interface of libvirt is a binary called virsh. libvirt is also used by other, higher-level management tools.

Most people think that libvirt is restricted to the local node where it is running; that is not true. libvirt has remote support built into the library, so any libvirt tool (for example, virt-manager) can connect to a libvirt daemon over the network just by passing an extra --connect argument. One of libvirt's clients (the virsh binary, provided by the libvirt-client package) is shipped with Red Hat.

The goal of the libvirt library is to provide a common and stable layer for managing VMs running on a hypervisor. In short, as a management layer it is responsible for providing the API that performs management tasks such as virtual machine provisioning, creation, modification, monitoring, control, migration, and so on. In Linux, you will have noticed that some processes are daemonized. The libvirt process is also daemonized, and it is called libvirtd. As with any other daemon process, libvirtd provides services to its clients upon request. Let us try to understand what exactly happens when a libvirt client such as virsh or virt-manager requests a service from libvirtd. Based on the connection URI (discussed below) passed by the client, libvirtd opens a connection to the hypervisor. This is how the clients virsh or virt-manager ask libvirtd to start talking to the hypervisor. Here we are concerned with KVM virtualization, so it is best to think in terms of a QEMU/KVM hypervisor rather than some other hypervisor behind libvirtd. The name QEMU/KVM may look confusing instead of simply QEMU or KVM, but as explained above, there is a single hypervisor that uses both the QEMU and KVM technologies.

The connection URI passed by the client contains the string "qemu" and has the following general skeleton when passed to libvirt to open a connection:

driver[+transport]://[username@][hostname][:port]/[path][?extraparameters]

A simple command line example of a virsh binary for a remote connection would be as follows:

$ virsh --connect qemu+ssh://root@remoteserver.yourdomain.com/system list --all
 

libvirtd uses the details from these XML files to derive the argument list that is passed to the qemu-kvm process.

Here is an example:

qemu     14644  9.8  6.8 6138068 1078400 ?     Sl   03:14  97:29
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest1 -S -machine pc
-m 4196 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1
-uuid 7a615914-ea0d-7dab-e709-0533c00b921f -no-user-config -nodefaults
-chardev socket,id=charmonitor
-drive file=/dev/vms/hypervisor2,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
-device id=net0,mac=52:54:00:5d:be:06

Here, the argument -m 4196 gives the virtual machine roughly 4 GB of memory, and -smp 4,sockets=4,cores=1,threads=1 defines 4 vCPUs with a topology of four virtual sockets and one core per socket.

 

Recommended Links


Introduction to KVM

https://www.youtube.com/watch?v=u_Z0Uwz9HLs

Red Hat Developer: Red Hat Enterprise Linux hello-world RHEL7-based tutorial

Setting up KVM on Red Hat Enterprise Linux - Red Hat Developer Blog

A brief introduction to KVM technology. For more info, visit www.raritantraining.com.

Install & Configure KVM Virtualization On CentOS 6.X - RHEL 6.X