KVM Tutorial On Ubuntu Server (Beginner)
Copyright (C) 2021 Exforge exforge@x386.xyz
# This document is free text: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
# This document is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# 0.0. Definition
# KVM virtualization tutorial 1 on Ubuntu 20.04 LTS Server.
# Our aim is to install and configure a host computer for virtual machines.
# This tutorial aims to bring you (and me) to a moderate level of Virtualization
# Administration.
#
# 0.1. How It Works
# KVM (Kernel-based Virtual Machine) is a loadable kernel module that provides
# virtualization through a set of APIs.
# QEMU is an emulator and virtualizer that can use the KVM API. QEMU supports
# other virtualization solutions too.
# Libvirt is a library for managing virtualization hosts. The virsh command
# comes with Libvirt.
# Libguestfs is a collection of tools for accessing and managing VM images.
# virt-manager is a GUI for managing VMs. I use it on my workstation for simple
# tasks.
#
# 0.2. Infrastructure
# Server: Ubuntu 20.04 LTS Server (no GUI) on a 4-core CPU with 8 GB RAM, IP: 192.168.0.240
# Workstation: Ubuntu 20.04 LTS
# Network: 192.168.0.0/24 which is supplied by my internet modem
#
# 0.3. (Very) Basic Terminology
# Domain: Virtual Machine (VM)
# Image: A file in which a VM (or a disk of a VM) is stored.
# Host: A server which runs virtualization software
# Guest: A VM running on a host
# Snapshot: A saved state of an image. You can revert to that state later.
#
# 0.4. Resources
https://ostechnix.com/install-and-configure-kvm-in-ubuntu-20-04-headless-server/
https://www.qemu.org/docs/master/interop/qemu-img.html
https://www.libvirt.org/manpages/virsh.html
https://docs.fedoraproject.org/en-US/Fedora/18/html/Virtualization_Administration_Guide/index.html
https://libguestfs.org/
https://fabianlee.org/2020/02/23/kvm-testing-cloud-init-locally-using-kvm-for-an-ubuntu-cloud-image/
https://cloudinit.readthedocs.io/en/latest/topics/examples.html
ISBN: 979-10-91414-20-3 The Debian Administrator's Handbook by Raphaël Hertzog and Roland Mas
ISBN: 978-1-78829-467-6 KVM Virtualization Cookbook by Konstantin Ivanov
1. Installation and Configuration
# 1.1. Installation
# Install necessary packages
sudo apt update
sudo apt install libvirt-clients libvirt-daemon-system qemu-kvm \
virtinst virt-manager virt-viewer bridge-utils
#
# 1.2. Bridge Configuration
# By default KVM creates a virtual bridge named virbr0. This bridge allows the VMs
# to communicate with each other and with the host. But we prefer that the VMs join our
# network by getting IP addresses from our DHCP server. That is, we will create a public
# bridge.
# First we need to disable netfilter, which is enabled on bridges by default.
sudo nano /etc/sysctl.d/bridge.conf
# The file is empty; add the following lines
#_________________________________________
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
#_________________________________________
sudo nano /etc/udev/rules.d/99-bridge.rules
# The file is empty; add the following line
#_________________________________________
ACTION=="add", SUBSYSTEM=="module", KERNEL=="br_netfilter", RUN+="/sbin/sysctl -p /etc/sysctl.d/bridge.conf"
#_________________________________________
# A reboot is necessary
sudo reboot
#
# Now we need to remove the bridge created by KVM
# With "ip link" command we see all the networks. KVM networks are named as
# virbr0 and virbr0-nic.
# Delete and undefine KVM networks.
sudo virsh net-destroy default
sudo virsh net-undefine default
# If any errors occur, you can try the following two commands.
sudo ip link delete virbr0 type bridge
sudo ip link delete virbr0-nic
# Now if you run "ip link" again, you will see that virbr0 and virbr0-nic are removed.
# When you run "ip link", take note of your interface name(s); it will be something
# like enp3s0f0. If you have more than one interface, there will be more than one name.
# Backup your network configuration file
# If that file does not exist, there will be another file there with a .yaml extension;
# proceed with that file.
sudo cp /etc/netplan/00-installer-config.yaml{,.backup}
# Edit your network config file
sudo nano /etc/netplan/00-installer-config.yaml
# Remove its content and fill it in as below, taking care to change enp3s0f0 to your
# interface name. If you have more than one interface, add them too.
# Also, use an IP address and default gateway from your local network.
# Mine are 192.168.0.240 and 192.168.0.1.
#___________________________________________________
network:
  ethernets:
    enp3s0f0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [ enp3s0f0 ]
      addresses: [192.168.0.240/24]
      gateway4: 192.168.0.1
      mtu: 1500
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: no
      dhcp6: no
  version: 2
#___________________________________________________
# Apply the changes. If you are connected through ssh, your connection may break.
# In this case, close the terminal and reconnect.
sudo netplan apply
# If you run "ip link" now, you can see our bridge br0
#
# Add our Bridge to KVM
nano host-bridge.xml
#___________________________________
<network>
<name>host-bridge</name>
<forward mode="bridge"/>
<bridge name="br0"/>
</network>
#___________________________________
sudo virsh net-define host-bridge.xml
sudo virsh net-start host-bridge
sudo virsh net-autostart host-bridge
#
# 1.3. Configure Directories
# Set places for disk images and installation isos
# /srv/kvm for VM disk images
# /srv/isos for installation iso images
sudo mkdir /srv/kvm /srv/isos
sudo virsh pool-create-as srv-kvm dir --target /srv/kvm
# Note: pool-create-as creates a transient pool that disappears after a host reboot.
# To make it persistent, use pool-define-as, then pool-build, pool-start and
# pool-autostart.
# At this point, you may want to copy some installation isos to server's
# /srv/isos dir
2. Creating VMs
# 2.1. Create the 1st VM
# Now it is time to create our first VM
# It will be Ubuntu Server 20.04 LTS with 1 GB RAM and 10 GB HDD
# I already copied Ubuntu server iso ubuntu-20.04.1-live-server-amd64.iso to /srv/isos
# Install a VM named testkvm, through QEMU with KVM virtualization, with 1024 MiB memory
# and 1 vcpu;
# prepare a qcow2 format disk of 10 GiB, connect a CDROM drive to it with the specified
# image, use the server's network bridge br0, allow VNC connections to the VM through
# the server, optimize it for a Linux / Ubuntu 20.04 server, and don't try to attach a
# console from the server.
sudo virt-install --name testkvm \
--connect qemu:///system --virt-type kvm \
--memory 1024 --vcpus 1 \
--disk /srv/kvm/testkvm.qcow2,format=qcow2,size=10 \
--cdrom /srv/isos/ubuntu-20.04.1-live-server-amd64.iso \
--network bridge=br0 \
--graphics vnc,port=5901,listen=0.0.0.0 \
--os-type linux --os-variant ubuntu20.04 \
--noautoconsole
# 2.2. os-variant List
# There are lots of os-variant selections. You can find yours with the following
# command. This option helps the hypervisor optimize the system for the guest OS; it
# can be skipped.
osinfo-query os
#
# 2.3. Connecting to the VM
# A graphical desktop is needed to connect to the VM. You can install virt-viewer
# package on your Ubuntu workstation and connect to the VM.
###----------- Run on your workstation BEGIN ----------------###
sudo apt update
sudo apt install virt-viewer
virt-viewer --connect qemu+ssh://exforge@srv/system testkvm
###----------- Run on your workstation END ------------------###
# Remember to replace exforge with your user name on the server and srv with your
# server's hostname
3. Remote Graphical Management
# Our server has no graphical interface (like most servers). If you really want
# a graphical management, you can install virt-manager on your workstation and
# manage your VMs from there.
###----------- Run on your workstation BEGIN ----------------###
sudo apt update
sudo apt install virt-manager
virt-manager
###----------- Run on your workstation END ------------------###
# The application is added to Applications Menu with the name "Virtual Machine Manager"
4. Installing VMs from Ready Images
# Starting a new VM and installing an OS into it is a good but time-consuming way. Another
# way is to prepare a pre-installed image and start it as a new VM. Most server distros
# supply cloud images. By adding some necessary configuration to them (user and network
# definitions), you can use them as ready images.
#
# 4.0. Installing cloud-image-utils
sudo apt update
sudo apt install cloud-image-utils
#
# 4.1. Acquiring Cloud Images
# A search for "ubuntu cloud image" in DuckDuckGo gives the following address:
https://cloud-images.ubuntu.com/
# Follow the focal and current links and download the KVM image focal-server-cloudimg-amd64.img.
# Put it in the server's /srv/isos folder.
#
# 4.2. Creating a New Image From the Original Image
# We will create a new image backed by the image we downloaded. The original image has a
# maximum size of 2.25 GiB; it will be increased to 20 GiB, and the new image format will
# be qcow2, the preferred format for KVM.
sudo qemu-img create -b /srv/isos/focal-server-cloudimg-amd64.img -F qcow2 \
-f qcow2 /srv/kvm/ubuntusrv-cloudimg.qcow2 20G
#
# 4.3. Cloud-init Configuration
# The next step is to create a cloud-init config file. This file contains instructions for
# the cloud image. There is a wide range of instructions, such as: creating a user,
# creating and filling files, adding apt repositories, running initial commands,
# installing packages, and rebooting or powering off after finishing. See the below url
# for details:
https://cloudinit.readthedocs.io/en/latest/topics/examples.html
# Our cloud-init file will configure the following:
# Create a user named exforge with sudo privileges, set its password, make exforge its
# primary group, also add it to the users group, create its home directory as
# /home/exforge, and set its shell to bash.
# To set our user's password, we need its hash.
sudo apt install whois
mkpasswd --method=SHA-512 --rounds=4096
# Enter the user's password there; it will display the hash. Copy the hash, we will
# use it later.
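# If you'd rather not install the whois package, OpenSSL can produce a SHA-512
# crypt hash too (this assumes OpenSSL 1.1.1 or later on your system, and it uses
# the default round count rather than mkpasswd's --rounds option):

```shell
# Generate a SHA-512 crypt hash with openssl instead of mkpasswd.
# Replace 'YourPasswordHere' with the user's password.
openssl passwd -6 -salt "$(openssl rand -hex 8)" 'YourPasswordHere'
```

# The output starts with $6$ and can be pasted into the passwd field below.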
#
# Create a place for our cloud-init files. /srv/init would be fine.
sudo mkdir /srv/init
# Create our cloud-init file
sudo nano /srv/init/ubuntu-cloud-init.cfg
#_________________________________________________________________
#cloud-config
hostname: ubuntu20
fqdn: ubuntu20.x386.xyz
manage_etc_hosts: true
groups: exforge
users:
  - name: exforge
    sudo: ALL=(ALL) ALL
    primary_group: exforge
    groups: users
    home: /home/exforge
    shell: /bin/bash
    lock_passwd: false
    passwd: $6$rounds=4096$0BSLhp4jtwR1$vm3QE1m70VuTO2GJ8j5GvVKPqmq3>
packages:
  - qemu-guest-agent
#_________________________________________________________________
# Do not forget to change passwd value with your copied hash.
#
# 4.4. Cloud-init Network Configuration
# If a network configuration other than DHCP is needed, a network configuration file is
# necessary.
# Remember to change IP addresses as needed by your VM
sudo nano /srv/init/ubuntu-network-init.cfg
#______________________________________________________________
#cloud-config
version: 2
ethernets:
  enp1s0:
    dhcp4: false
    addresses: [ 192.168.0.221/24 ]
    gateway4: 192.168.0.1
    nameservers:
      addresses: [ 192.168.0.1,8.8.8.8 ]
#______________________________________________________________
#
# 4.5. Creating Cloud Seed Image
# Now we will create an image file with our ubuntu-cloud-init.cfg and
# ubuntu-network-init.cfg inside.
sudo cloud-localds --network-config /srv/init/ubuntu-network-init.cfg /srv/kvm/ubuntu20-seed.qcow2 \
/srv/init/ubuntu-cloud-init.cfg
#
# 4.6. Start Our Image as a New VM
virt-install --name ubuntu20 \
--connect qemu:///system \
--virt-type kvm --memory 2048 --vcpus 2 \
--boot hd,menu=on \
--disk path=/srv/kvm/ubuntu20-seed.qcow2,device=cdrom \
--disk path=/srv/kvm/ubuntusrv-cloudimg.qcow2,device=disk \
--graphics vnc,port=5901,listen=0.0.0.0 \
--os-type linux --os-variant ubuntu20.04 \
--network bridge=br0 \
--noautoconsole
# It might take a few minutes for cloud-init to finish. You can connect to your VM from
# your workstation.
virt-viewer --connect qemu+ssh://exforge@srv/system ubuntu20
#
# 4.7. Clean-up Tasks for Cloud-init
# On your VM run:
sudo touch /etc/cloud/cloud-init.disabled
# If the file /etc/cloud/cloud-init.disabled exists, cloud-init does not run.
#
# 4.8. The whole process except 4.7. can be automated by a python script.
# Download latest focal current cloud image (or use an already downloaded one) 4.1.
# A system call to run a command to create a new image 4.2.
# Image size (and name) can be a parameter
# Create password hash and init files 4.3. and 4.4.
# User name can be a parameter
# Password can be obtained at run time
# Network properties (IP, GW etc) can be parameters
# A system call to run a command to create seed image 4.5.
# A system call to run a command to start the new image 4.6.
# Memory size, vcpu count can be parameters.
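# As a minimal sketch of that automation (written in shell rather than Python, for
# consistency with the rest of this tutorial), the steps might look like the function
# below. All paths and names are assumptions taken from the earlier sections, and the
# cloud-init files (4.3 and 4.4) are assumed to exist already.

```shell
# Sketch only: provision a VM from the focal cloud image (sections 4.2-4.6).
provision() {
    local name=${1:?usage: provision NAME [SIZE_GB]}
    local size=${2:-20}
    local base=/srv/isos/focal-server-cloudimg-amd64.img
    local disk=/srv/kvm/${name}-cloudimg.qcow2
    local seed=/srv/kvm/${name}-seed.qcow2

    # 4.2: new image backed by the downloaded cloud image
    sudo qemu-img create -b "$base" -F qcow2 -f qcow2 "$disk" "${size}G"
    # 4.5: seed image from the cloud-init files
    sudo cloud-localds --network-config "/srv/init/${name}-network-init.cfg" \
        "$seed" "/srv/init/${name}-cloud-init.cfg"
    # 4.6: start the image as a new VM
    virt-install --name "$name" --connect qemu:///system \
        --virt-type kvm --memory 2048 --vcpus 2 --boot hd,menu=on \
        --disk path="$seed",device=cdrom \
        --disk path="$disk",device=disk \
        --os-type linux --os-variant ubuntu20.04 \
        --network bridge=br0 --noautoconsole
}
# Example: provision ubuntu21 20
```

# Memory size and vcpu count could be turned into parameters the same way.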
5. virsh: Shell Based VM Management
# virt-manager can only help with the basic management tasks. If you want to dive deep,
# you need the good old shell.
# There are countless things you can do with the virsh command. I can only list a
# handful of the most useful ones (IMHO) here.
# For a complete list of virsh command usage, see the following web page:
https://www.libvirt.org/manpages/virsh.html
# In all examples, NAME is the name of your VM.
#
# 5.1. Info about host
virsh nodeinfo
#
# 5.2. List VMs and their states
# Running VMs
virsh list
# All VMs
virsh list --all
#
# 5.3. Start, shutdown, reboot, force shutdown, remove a VM
virsh start NAME
virsh shutdown NAME
virsh reboot NAME
virsh destroy NAME
virsh undefine NAME
virsh undefine NAME --remove-all-storage
virsh reboot ubuntu20
#
# 5.4. Pause and resume a VM
virsh suspend NAME
virsh resume NAME
#
# 5.5. Autostart a VM (starts when the host starts)
virsh autostart NAME
virsh autostart --disable NAME # Disable autostart
#
# 5.6. Information about a VM
virsh dominfo NAME
virsh domid NAME
virsh domuuid NAME
virsh domstate NAME
# Display VNC connection settings of VM
virsh domdisplay NAME
#
# 5.7. VM Memory Management
# A VM has 2 memory parameters: Max Memory and Used Memory.
# Used memory is the amount of mem allocated to the VM
# Max memory is the max amount of mem to be allocated to the VM
# To see the current memory allocation:
virsh dominfo NAME
# Change Max memory (Activated after shutdown and start)
virsh setmaxmem NAME 2G --config
# size could be something like 2G 1536M etc
# Used memory can be changed when the VM is running (decreasing is not advised)
# Change memory for this session only (reverts after shutdown and start):
virsh setmem NAME 1536M
virsh setmem NAME 1536M --live
virsh setmem NAME 1536M --current
# Change memory after the next shutdown and start
virsh setmem NAME 1536M --config
# Activate immediately and keep the changes after the next shutdown and start
virsh setmem NAME 1536M --live --config
virsh setmem ubuntu20 1536M --live --config
# !!!!! Beware of Shutdown and Start. Reboots do not count !!!!!
#
# 5.8. VM vCPU Management
# Just like memory, VMs have 2 virtual CPU parameters. Maximum and Current.
# Current is the number of vcpus that VM uses actively (on-line).
# Maximum is the max number of vcpus to be allocated to the VM.
# Also, there are 2 states. Config and Live.
# Config is the permanent state, it will be active after shutdown and start.
# Live is the running VM's state, it may not be active after shutdown and start.
# A cartesian product gives us 4 values:
# maximum config : Max number of vcpus, valid after shutdown and start.
# maximum live : Max number of vcpus, valid now (while running).
# current config : Active number of vcpus, valid after shutdown and start.
# current live : Active number of vcpus, valid now (while running).
# To see these values for your VM:
virsh vcpucount NAME
# I keep saying shutdown and start instead of restart or reboot, because kvm, qemu or
# whatever it is, acts differently when you reboot or shutdown and then start the VM.
# So when I say shutdown and start, I mean shutdown first, wait a while (from 0.001
# milliseconds to as long as you want) and then start the VM.
#
# There is no way (AFAIK) to change maximum live value, you can change maximum config as:
virsh setvcpus NAME NUMBER --maximum --config
virsh setvcpus ubuntu20 3 --maximum --config
# To change current vcpu count for the current state (all options are valid)
virsh setvcpus NAME NUMBER
virsh setvcpus NAME NUMBER --current
virsh setvcpus NAME NUMBER --live
virsh setvcpus ubuntu20 3
virsh setvcpus ubuntu20 3 --current
virsh setvcpus ubuntu20 3 --live
# To change current vcpu count for the config state
virsh setvcpus NAME NUMBER --config
virsh setvcpus ubuntu20 3 --config
# To do it both together
virsh setvcpus NAME NUMBER --config --live
virsh setvcpus ubuntu20 3 --config --live
#
# You can both increase and decrease the vcpu count. But beware that decreasing the
# vcpu count of a running VM could be dangerous.
#
# When you increase the current live vcpu count, the added vcpus come up offline. That
# means you cannot use them right away. At least that is what happened to me. You can see
# online and offline vcpu information of your VM with the following command (run it on
# your VM):
lscpu | head
# To activate an offline cpu, first you have to know its number. CPU numbering starts from
# 0, so if you had 2 vcpus and increased them by 1, the number of the 3rd vcpu will be
# 2. You need to edit the following file and change the 0 inside to 1:
sudo nano /sys/devices/system/cpu/cpu2/online
# The number 2 after cpu means the cpu with number 2, i.e. the 3rd cpu. When you change
# the file, that vcpu magically becomes online. For more vcpus, you have to change that
# file for each vcpu you added.
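# Instead of opening an editor for each file, the same change can be made with echo
# and tee. A small helper (the vcpu numbers in the example are assumptions; use the
# numbers you actually added):

```shell
# Bring an added vCPU online inside the guest (requires root on the VM).
# Writing 1 to the per-CPU "online" file activates that vCPU.
online_cpu() {
    echo 1 | sudo tee "/sys/devices/system/cpu/cpu${1}/online" > /dev/null
}
# Example, if vCPUs 2 and 3 were just added:
#   for n in 2 3; do online_cpu "$n"; done
```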
#
# 5.9. Snapshots
# When you take a snapshot, current disk and memory state is saved.
# Take a live snapshot
virsh snapshot-create-as VMNAME --name SNAPSHOTNAME --description DESCRIPTION
virsh snapshot-create-as ubuntu20 --name ss1-ubuntu20 --description "First Snapshot of Ubuntu20"
# The snapshot becomes the current one and everything after is built onto this
# snapshot. If you want to revert to that snapshot:
virsh snapshot-revert VMNAME --current
# If you want to revert to a specific snapshot:
virsh snapshot-revert VMNAME --snapshotname SNAPSHOTNAME
# To see which snapshot is current:
virsh snapshot-current VMNAME --name
# To delete the current snapshot
virsh snapshot-delete VMNAME --current
# To delete a specific snapshot
virsh snapshot-delete VMNAME --snapshotname SNAPSHOTNAME
# To list all snapshots of a VM
virsh snapshot-list VMNAME
#
# 5.10. Attach Another Disk to a VM
# Suppose that, for our ubuntu20 VM, we need another disk of 20 GB, because we want to
# keep some data on a separate disk.
# We need to create a new image and attach it to the VM.
# Create a 20GB image in qcow2 format:
sudo qemu-img create -f qcow2 /srv/kvm/ubuntu20-disk2.qcow2 20G
# Now our image is ready to be attached to our VM. Before attaching it, we have to
# decide its name on the VM.
# VM disks are named vda, vdb, vdc and so on. We have to give it the name that just
# follows the last disk. Because my ubuntu20 VM has only one disk, the name for the
# second one will be vdb. To see the disks on your VM, type the following command (on
# your VM):
lsblk -o name -d | grep vd
# Most probably you will only have vda, in which case you can use the name vdb.
# Otherwise use the name just after the last disk name.
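# Finding "the name just after the last disk" is simple letter arithmetic; here is a
# tiny helper for illustration (the function name is mine, not part of any tool):

```shell
# Given the last virtio disk name (vda, vdb, ...), print the next one.
next_disk() {
    suffix=${1#vd}                            # strip the "vd" prefix
    next=$(echo "$suffix" | tr 'a-y' 'b-z')   # advance the letter by one
    echo "vd${next}"
}
next_disk vda   # prints vdb
```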
# Add the new image as a second disk to my ubuntu20 VM. The --driver and --subdriver
# options tell libvirt the image is in qcow2 format (without them it is treated as raw):
virsh attach-disk ubuntu20 /srv/kvm/ubuntu20-disk2.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent
# The disk is added persistently, that is, it is added live and it will still be there
# after a shutdown and start. If you want to add the disk for this session only, change
# --persistent to --live. Also, if you want the disk added only after a shutdown and
# start, change --persistent to --config.
# Needless to say that, you are going to have to mount the new disk before using it.
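# Mounting the new disk on the VM might look like the sketch below; the device name,
# filesystem and mount point are assumptions, so adjust them to your setup:

```shell
# On the VM: put a filesystem on the whole new disk (no partition table)
# and mount it. This destroys any data already on the device.
format_and_mount() {
    sudo mkfs.ext4 "/dev/$1"
    sudo mkdir -p "$2"
    sudo mount "/dev/$1" "$2"
}
# Example: format_and_mount vdb /data
# Add a matching line to /etc/fstab to make the mount permanent.
```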
#
# In any case, if you want to detach the added disk, solution is easy:
virsh detach-disk ubuntu20 vdb --persistent
# As in virsh attach-disk, you can change --persistent option to --live or --config.
# The following output was produced by the following command:
virsh --help
# You can get detailed help for each subcommand with: virsh SUBCOMMAND --help, like:
virsh dominfo --help
#
# Domain Management (help keyword 'domain')
# attach-device attach device from an XML file
# attach-disk attach disk device
# attach-interface attach network interface
# autostart autostart a domain
# blkdeviotune Set or query a block device I/O tuning parameters.
# blkiotune Get or set blkio parameters
# blockcommit Start a block commit operation.
# blockcopy Start a block copy operation.
# blockjob Manage active block operations
# blockpull Populate a disk from its backing image.
# blockresize Resize block device of domain.
# change-media Change media of CD or floppy drive
# console connect to the guest console
# cpu-stats show domain cpu statistics
# create create a domain from an XML file
# define define (but don't start) a domain from an XML file
# desc show or set domain's description or title
# destroy destroy (stop) a domain
# detach-device detach device from an XML file
# detach-device-alias detach device from an alias
# detach-disk detach disk device
# detach-interface detach network interface
# domdisplay domain display connection URI
# domfsfreeze Freeze domain's mounted filesystems.
# domfsthaw Thaw domain's mounted filesystems.
# domfsinfo Get information of domain's mounted filesystems.
# domfstrim Invoke fstrim on domain's mounted filesystems.
# domhostname print the domain's hostname
# domid convert a domain name or UUID to domain id
# domif-setlink set link state of a virtual interface
# domiftune get/set parameters of a virtual interface
# domjobabort abort active domain job
# domjobinfo domain job information
# domname convert a domain id or UUID to domain name
# domrename rename a domain
# dompmsuspend suspend a domain gracefully using power management functions
# dompmwakeup wakeup a domain from pmsuspended state
# domuuid convert a domain name or id to domain UUID
# domxml-from-native Convert native config to domain XML
# domxml-to-native Convert domain XML to native config
# dump dump the core of a domain to a file for analysis
# dumpxml domain information in XML
# edit edit XML configuration for a domain
# event Domain Events
# inject-nmi Inject NMI to the guest
# iothreadinfo view domain IOThreads
# iothreadpin control domain IOThread affinity
# iothreadadd add an IOThread to the guest domain
# iothreadset modifies an existing IOThread of the guest domain
# iothreaddel delete an IOThread from the guest domain
# send-key Send keycodes to the guest
# send-process-signal Send signals to processes
# lxc-enter-namespace LXC Guest Enter Namespace
# managedsave managed save of a domain state
# managedsave-remove Remove managed save of a domain
# managedsave-edit edit XML for a domain's managed save state file
# managedsave-dumpxml Domain information of managed save state file in XML
# managedsave-define redefine the XML for a domain's managed save state file
# memtune Get or set memory parameters
# perf Get or set perf event
# metadata show or set domain's custom XML metadata
# migrate migrate domain to another host
# migrate-setmaxdowntime set maximum tolerable downtime
# migrate-getmaxdowntime get maximum tolerable downtime
# migrate-compcache get/set compression cache size
# migrate-setspeed Set the maximum migration bandwidth
# migrate-getspeed Get the maximum migration bandwidth
# migrate-postcopy Switch running migration from pre-copy to post-copy
# numatune Get or set numa parameters
# qemu-attach QEMU Attach
# qemu-monitor-command QEMU Monitor Command
# qemu-monitor-event QEMU Monitor Events
# qemu-agent-command QEMU Guest Agent Command
# guest-agent-timeout Set the guest agent timeout
# reboot reboot a domain
# reset reset a domain
# restore restore a domain from a saved state in a file
# resume resume a domain
# save save a domain state to a file
# save-image-define redefine the XML for a domain's saved state file
# save-image-dumpxml saved state domain information in XML
# save-image-edit edit XML for a domain's saved state file
# schedinfo show/set scheduler parameters
# screenshot take a screenshot of a current domain console and store it into a file
# set-lifecycle-action change lifecycle actions
# set-user-password set the user password inside the domain
# setmaxmem change maximum memory limit
# setmem change memory allocation
# setvcpus change number of virtual CPUs
# shutdown gracefully shutdown a domain
# start start a (previously defined) inactive domain
# suspend suspend a domain
# ttyconsole tty console
# undefine undefine a domain
# update-device update device from an XML file
# vcpucount domain vcpu counts
# vcpuinfo detailed domain vcpu information
# vcpupin control or query domain vcpu affinity
# emulatorpin control or query domain emulator affinity
# vncdisplay vnc display
# guestvcpus query or modify state of vcpu in the guest (via agent)
# setvcpu attach/detach vcpu or groups of threads
# domblkthreshold set the threshold for block-threshold event for a given block device or it's backing chain element
# guestinfo query information about the guest (via agent)
#
# Domain Monitoring (help keyword 'monitor')
# domblkerror Show errors on block devices
# domblkinfo domain block device size information
# domblklist list all domain blocks
# domblkstat get device block stats for a domain
# domcontrol domain control interface state
# domif-getlink get link state of a virtual interface
# domifaddr Get network interfaces' addresses for a running domain
# domiflist list all domain virtual interfaces
# domifstat get network interface stats for a domain
# dominfo domain information
# dommemstat get memory statistics for a domain
# domstate domain state
# domstats get statistics about one or multiple domains
# domtime domain time
# list list domains
#
# Host and Hypervisor (help keyword 'host')
# allocpages Manipulate pages pool size
# capabilities capabilities
# cpu-baseline compute baseline CPU
# cpu-compare compare host CPU with a CPU described by an XML file
# cpu-models CPU models
# domcapabilities domain capabilities
# freecell NUMA free memory
# freepages NUMA free pages
# hostname print the hypervisor hostname
# hypervisor-cpu-baseline compute baseline CPU usable by a specific hypervisor
# hypervisor-cpu-compare compare a CPU with the CPU created by a hypervisor on the host
# maxvcpus connection vcpu maximum
# node-memory-tune Get or set node memory parameters
# nodecpumap node cpu map
# nodecpustats Prints cpu stats of the node.
# nodeinfo node information
# nodememstats Prints memory stats of the node.
# nodesuspend suspend the host node for a given time duration
# sysinfo print the hypervisor sysinfo
# uri print the hypervisor canonical URI
# version show version
#
# Checkpoint (help keyword 'checkpoint')
# checkpoint-create Create a checkpoint from XML
# checkpoint-create-as Create a checkpoint from a set of args
# checkpoint-delete Delete a domain checkpoint
# checkpoint-dumpxml Dump XML for a domain checkpoint
# checkpoint-edit edit XML for a checkpoint
# checkpoint-info checkpoint information
# checkpoint-list List checkpoints for a domain
# checkpoint-parent Get the name of the parent of a checkpoint
#
# Interface (help keyword 'interface')
# iface-begin create a snapshot of current interfaces settings, which can be later committed (iface-commit) or restored (iface-rollback)
# iface-bridge create a bridge device and attach an existing network device to it
# iface-commit commit changes made since iface-begin and free restore point
# iface-define define an inactive persistent physical host interface or modify an existing persistent one from an XML file
# iface-destroy destroy a physical host interface (disable it / "if-down")
# iface-dumpxml interface information in XML
# iface-edit edit XML configuration for a physical host interface
# iface-list list physical host interfaces
# iface-mac convert an interface name to interface MAC address
# iface-name convert an interface MAC address to interface name
# iface-rollback rollback to previous saved configuration created via iface-begin
# iface-start start a physical host interface (enable it / "if-up")
# iface-unbridge undefine a bridge device after detaching its slave device
# iface-undefine undefine a physical host interface (remove it from configuration)
#
# Network Filter (help keyword 'filter')
# nwfilter-define define or update a network filter from an XML file
# nwfilter-dumpxml network filter information in XML
# nwfilter-edit edit XML configuration for a network filter
# nwfilter-list list network filters
# nwfilter-undefine undefine a network filter
# nwfilter-binding-create create a network filter binding from an XML file
# nwfilter-binding-delete delete a network filter binding
# nwfilter-binding-dumpxml network filter information in XML
# nwfilter-binding-list list network filter bindings
#
# Networking (help keyword 'network')
# net-autostart autostart a network
# net-create create a network from an XML file
# net-define define an inactive persistent virtual network or modify an existing persistent one from an XML file
# net-destroy destroy (stop) a network
# net-dhcp-leases print lease info for a given network
# net-dumpxml network information in XML
# net-edit edit XML configuration for a network
# net-event Network Events
# net-info network information
# net-list list networks
# net-name convert a network UUID to network name
# net-start start a (previously defined) inactive network
# net-undefine undefine a persistent network
# net-update update parts of an existing network's configuration
# net-uuid convert a network name to network UUID
# net-port-list list network ports
# net-port-create create a network port from an XML file
# net-port-dumpxml network port information in XML
# net-port-delete delete the specified network port
#
# Node Device (help keyword 'nodedev')
# nodedev-create create a device defined by an XML file on the node
# nodedev-destroy destroy (stop) a device on the node
# nodedev-detach detach node device from its device driver
# nodedev-dumpxml node device details in XML
# nodedev-list enumerate devices on this host
# nodedev-reattach reattach node device to its device driver
# nodedev-reset reset node device
# nodedev-event Node Device Events
#
# Secret (help keyword 'secret')
# secret-define define or modify a secret from an XML file
# secret-dumpxml secret attributes in XML
# secret-event Secret Events
# secret-get-value Output a secret value
# secret-list list secrets
# secret-set-value set a secret value
# secret-undefine undefine a secret
#
# Snapshot (help keyword 'snapshot')
# snapshot-create Create a snapshot from XML
# snapshot-create-as Create a snapshot from a set of args
# snapshot-current Get or set the current snapshot
# snapshot-delete Delete a domain snapshot
# snapshot-dumpxml Dump XML for a domain snapshot
# snapshot-edit edit XML for a snapshot
# snapshot-info snapshot information
# snapshot-list List snapshots for a domain
# snapshot-parent Get the name of the parent of a snapshot
# snapshot-revert Revert a domain to a snapshot
#
# Backup (help keyword 'backup')
# backup-begin Start a disk backup of a live domain
# backup-dumpxml Dump XML for an ongoing domain block backup job
#
# Storage Pool (help keyword 'pool')
# find-storage-pool-sources-as find potential storage pool sources
# find-storage-pool-sources discover potential storage pool sources
# pool-autostart autostart a pool
# pool-build build a pool
# pool-create-as create a pool from a set of args
# pool-create create a pool from an XML file
# pool-define-as define a pool from a set of args
# pool-define define an inactive persistent storage pool or modify an existing persistent one from an XML file
# pool-delete delete a pool
# pool-destroy destroy (stop) a pool
# pool-dumpxml pool information in XML
# pool-edit edit XML configuration for a storage pool
# pool-info storage pool information
# pool-list list pools
# pool-name convert a pool UUID to pool name
# pool-refresh refresh a pool
# pool-start start a (previously defined) inactive pool
# pool-undefine undefine an inactive pool
# pool-uuid convert a pool name to pool UUID
# pool-event Storage Pool Events
# pool-capabilities storage pool capabilities
#
# Storage Volume (help keyword 'volume')
# vol-clone clone a volume.
# vol-create-as create a volume from a set of args
# vol-create create a vol from an XML file
# vol-create-from create a vol, using another volume as input
# vol-delete delete a vol
# vol-download download volume contents to a file
# vol-dumpxml vol information in XML
# vol-info storage vol information
# vol-key returns the volume key for a given volume name or path
# vol-list list vols
# vol-name returns the volume name for a given volume key or path
# vol-path returns the volume path for a given volume name or key
# vol-pool returns the storage pool for a given volume key or path
# vol-resize resize a vol
# vol-upload upload file contents to a volume
# vol-wipe wipe a vol
7. qemu-img: Shell Based Image Management
# qemu-img allows us to manipulate images. The command is meant to work offline. That
# means, before you run qemu-img on an image, you have to shut down the VM that uses
# it.
# !!! Do not use qemu-img on the image of a running VM !!!
# A full documentation can be found at the below site:
https://www.qemu.org/docs/master/interop/qemu-img.html
#
# 7.1. Get Basic Info About an Image
qemu-img info FILENAME
# FILENAME is the name of the file which is the image for the VM.
# For my ubuntu20 VM's image info:
qemu-img info ubuntusrv-cloudimg.qcow2
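For scripting, qemu-img can also emit its info as JSON with the --output=json option. The helper below is only a sketch (img_virtual_size is a hypothetical name, not a qemu-img subcommand); it uses python3 to parse the JSON so no extra tools like jq are needed:

```shell
# Sketch: print an image's virtual size in bytes.
# img_virtual_size is a hypothetical helper, not a qemu-img subcommand.
img_virtual_size() {
    qemu-img info --output=json "$1" |
        python3 -c 'import json, sys; print(json.load(sys.stdin)["virtual-size"])'
}

# Usage: img_virtual_size /srv/kvm/ubuntusrv-cloudimg.qcow2
```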
#
# 7.2. Creating an Image
qemu-img create -f FORMAT FILENAME SIZE
# Remember, at 5.10. we created an empty disk image to add as another disk to a VM:
sudo qemu-img create -f qcow2 /srv/kvm/ubuntu20-disk2.qcow2 20G
#
# An image can also be created with another image as its backing file. The new image
# stores only the differences from the backing image, and can have its own format and
# size:
sudo qemu-img create -b BACKINGFILENAME -F BACKINGFILEFORMAT \
-f OUTPUTFILEFORMAT OUTPUTFILENAME SIZE
# Remember, at 4.2. we created a new cloud image from the cloud image we downloaded:
sudo qemu-img create -b /srv/isos/focal-server-cloudimg-amd64.img -F qcow2 \
-f qcow2 /srv/kvm/ubuntusrv-cloudimg.qcow2 20G
#
# 7.3. Changing the Format of an Image
# There are a lot of image formats. For us, the 2 most important ones are raw and qcow2.
# raw : Plain disk data, as the name implies.
# qcow2 : Feature rich; allows snapshots, compression and encryption.
# qcow : Older version of qcow2.
# dmg : Mac format.
# nbd : Network block device, used to access remote storage.
# vdi : VirtualBox format.
# vmdk : VMware format.
# vhdx : Microsoft Hyper-V format.
qemu-img convert -f SOURCEFORMAT -O DESTINATIONFORMAT SOURCEFILE DESTFILE
# I have VirtualBox installed on my workstation (Ubuntu 20.04 LTS). There is a Windows 10
# installed on it for testing purposes. I'll copy its image (obviously in vdi format)
# to my server's /tmp directory, convert it to qcow2 under /srv/kvm and run it on my
# server using KVM.
# !!! On my workstation BEGIN !!!
# Copy Windows 10 image to the server
scp windows10.vdi exforge@srv:/tmp
# !!! On my workstation END !!!
# On my server
sudo qemu-img convert -f vdi -O qcow2 /tmp/windows10.vdi /srv/kvm/windows10.qcow2
# If we want to display the progress percentage while converting the image, we add the -p option.
sudo qemu-img convert -p -f vdi -O qcow2 /tmp/windows10.vdi /srv/kvm/windows10.qcow2
# Now we can add it as a KVM image
virt-install --name windows10 \
--connect qemu:///system \
--virt-type kvm --memory 2048 --vcpus 2 \
--boot hd,menu=on \
--disk path=/srv/kvm/windows10.qcow2,device=disk \
--graphics vnc,port=5901,listen=0.0.0.0 \
--os-type windows --os-variant win10 \
--network bridge=br0 \
--noautoconsole
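If there are several VirtualBox images to convert, the per-file command can be wrapped in a loop. A sketch, assuming every .vdi file in a source directory should become a qcow2 file (convert_all_vdi is a hypothetical helper name):

```shell
# Sketch: convert every .vdi image in SRC to a qcow2 image in DEST.
# convert_all_vdi is a hypothetical helper name.
convert_all_vdi() {
    local src="$1" dest="$2"
    mkdir -p "$dest"
    for vdi in "$src"/*.vdi; do
        [ -e "$vdi" ] || continue          # no .vdi files at all
        local name
        name="$(basename "$vdi" .vdi)"
        qemu-img convert -p -f vdi -O qcow2 "$vdi" "$dest/$name.qcow2"
    done
}

# Usage: convert_all_vdi /tmp /srv/kvm
```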
#
# 7.4. Resize a Disk Image
# If you need extra disk space for your VM, you can increase the size of the image file.
sudo qemu-img resize FILENAME +SIZE
# FILENAME is the name of the file which is the image for the VM. SIZE could be
# something like +10G; the image size will be increased by (not to) this amount.
# Shrinking is also possible with a - prefix, but then you must add the --shrink
# parameter, and you must first use partitioning tools inside the VM to shrink the
# filesystems and partitions down to the new size before shrinking the image.
# To increase the size of my ubuntu20 VM's image by 5GB:
sudo qemu-img resize /srv/kvm/ubuntusrv-cloudimg.qcow2 +5G
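Since qemu-img must never touch a running VM's image, a resize helper can check the domain state first. The following is a sketch (grow_image is a hypothetical name, and it assumes virsh and qemu-img are on the PATH):

```shell
# Sketch: grow an image only when its domain is shut off.
# grow_image is a hypothetical helper, not a virsh/qemu-img command.
grow_image() {
    local domain="$1" image="$2" increment="$3" state
    state="$(virsh domstate "$domain")"
    if [ "$state" != "shut off" ]; then
        echo "refusing to resize: $domain is '$state', not 'shut off'" >&2
        return 1
    fi
    qemu-img resize "$image" "+$increment"
}

# Usage: grow_image ubuntu20 /srv/kvm/ubuntusrv-cloudimg.qcow2 5G
```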
#
# 7.5. Check an Image For Errors
qemu-img check FILENAME
# Run it whenever you suspect the integrity of the image file.
#
8. Export and Import VMs
# If you want to move your VM to another host, or you want to backup and restore your
# VM in some way; there might be a lot of ways to do it. I'm going to demonstrate a very
# simple method which requires shutting down the VM (you can try while it is running,
# but with no success guaranteed).
#
# 8.1. Export
# First of all, let's prepare a place for our backup files, /tmp/kvmbackup would be fine.
mkdir /tmp/kvmbackup
# We need the definition file of our VM and the image file it is using. "virsh dumpxml"
# command creates the definition file in xml format, we can save it with the VM's name.
virsh dumpxml ubuntu20 > /tmp/kvmbackup/ubuntu20.xml
# This file contains all the necessary information about our VM.
#
# If our VM was installed from scratch as in 2.1, there will be only 1 image file. But
# if it was installed from a cloud image as we did in 4, or if another disk was added as
# in 5.10, there will be more than one image. We need to copy all the images.
#
# Images used by the VM are listed in the xml file. Let's find them:
grep "source file" /tmp/kvmbackup/ubuntu20.xml
# For my ubuntu20 VM, output is listed below:
#_____________________________________________________________________
<source file='/srv/kvm/ubuntu20-seed.qcow2' index='2'/>
<source file='/srv/kvm/ubuntusrv-cloudimg.qcow2' index='1'/>
<source file='/srv/isos/focal-server-cloudimg-amd64.img'/>
#_____________________________________________________________________
# That means I need to prepare 3 files: /srv/kvm/ubuntu20-seed.qcow2,
# /srv/kvm/ubuntusrv-cloudimg.qcow2 and /srv/isos/focal-server-cloudimg-amd64.img.
# Let's copy them to our backup location.
cp /srv/kvm/ubuntu20-seed.qcow2 /srv/kvm/ubuntusrv-cloudimg.qcow2 \
/srv/isos/focal-server-cloudimg-amd64.img /tmp/kvmbackup
# Beware: You can copy the files while the VM is running, but it is advised to shut
# down (or at least suspend) your VM before copying. Continue at your own risk.
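The grep-and-copy steps above can be wrapped in a small helper that pulls every path out of the dumpxml output and copies it. A sketch (backup_vm_images is a hypothetical name, assuming the <source file='...'/> layout shown above):

```shell
# Sketch: copy every disk image referenced in a VM's dumpxml file to DEST.
# backup_vm_images is a hypothetical helper name.
backup_vm_images() {
    local xml="$1" dest="$2"
    mkdir -p "$dest"
    # Extract the path from each <source file='...'/> line and copy it
    grep "source file" "$xml" |
        sed "s/.*source file='\([^']*\)'.*/\1/" |
        while read -r img; do
            cp "$img" "$dest"
        done
}

# Usage: backup_vm_images /tmp/kvmbackup/ubuntu20.xml /tmp/kvmbackup
```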
#
# Let's package them
tar -cf /tmp/ubuntu20.tar -C /tmp/kvmbackup .
# Now we have /tmp/ubuntu20.tar, it has all the necessary data to import our VM anywhere.
# You have to copy this file to the other server before importing it there.
#
# 8.2. Import
# Assuming we have another virtualization server and we have copied ubuntu20.tar there, we
# are going to import it and make it operational.
# Beware: Before importing your VM to another server, you have to remove it on the
# original server; otherwise you would have 2 guests with the same IP and that may
# cause unexpected results.
# ubuntu20.tar is copied to the server's /tmp directory as /tmp/ubuntu20.tar
# Create a place for our import files
mkdir /tmp/import
# Extract tar file there
tar -xf /tmp/ubuntu20.tar -C /tmp/import
# Now we need to move our image files to the same directories as on the original server.
# If you have a different directory structure on your new server and you want to copy
# the files to different directories, you have to edit the xml file and change the
# paths there.
sudo cp /tmp/import/ubuntu20-seed.qcow2 /tmp/import/ubuntusrv-cloudimg.qcow2 /srv/kvm
sudo cp /tmp/import/focal-server-cloudimg-amd64.img /srv/isos
#
# It is time to define our server. Remember the xml file? We will use it to define our
# ubuntu20 server.
virsh define /tmp/import/ubuntu20.xml
# Now we can start it
virsh start ubuntu20
9. libguestfs: VM Disk Management
# A set of commands for managing VM disks. Full documentation:
https://libguestfs.org/
# Normally, as a system admin, you won't need to access your VMs' disks directly, but
# the need may arise once in a while.
# I think you already understand that when you have a VPS on a cloud server, the
# administrators of that cloud environment can access your VPS's data.
# There are many tools; I'm going to explain only the mounting commands.
#
# 9.1. Installation
sudo apt update
sudo apt install libguestfs-tools
#
# 9.2. Mounting VM's Disks
# guestmount works online (while the VM is running). Mount my VM's disk on my host's
# /mnt directory:
sudo guestmount -d ubuntu20 -i --ro /mnt
# The /mnt directory now holds all the files of my VM. If you remove --ro, you can
# mount it with write permissions, but be very careful: writing to the disk of a
# running VM can corrupt it.
# Unmount it:
sudo guestunmount /mnt
# I prefer mounting with readonly permissions just to be safe.
# Details for guestmount and guestunmount commands:
guestmount --help
guestunmount --help
#
# 9.3. All Commands
# guestfish(1) — interactive shell
# guestmount(1) — mount guest filesystem in host
# guestunmount(1) — unmount guest filesystem
# virt-alignment-scan(1) — check alignment of virtual machine partitions
# virt-builder(1) — quick image builder
# virt-builder-repository(1) — create virt-builder repositories
# virt-cat(1) — display a file
# virt-copy-in(1) — copy files and directories into a VM
# virt-copy-out(1) — copy files and directories out of a VM
# virt-customize(1) — customize virtual machines
# virt-df(1) — free space
# virt-dib(1) — safe diskimage-builder
# virt-diff(1) — differences
# virt-edit(1) — edit a file
# virt-filesystems(1) — display information about filesystems, devices, LVM
# virt-format(1) — erase and make blank disks
# virt-get-kernel(1) — get kernel from disk
# virt-inspector(1) — inspect VM images
# virt-list-filesystems(1) — list filesystems
# virt-list-partitions(1) — list partitions
# virt-log(1) — display log files
# virt-ls(1) — list files
# virt-make-fs(1) — make a filesystem
# virt-p2v(1) — convert physical machine to run on KVM
# virt-p2v-make-disk(1) — make P2V ISO
# virt-p2v-make-kickstart(1) — make P2V kickstart
# virt-rescue(1) — rescue shell
# virt-resize(1) — resize virtual machines
# virt-sparsify(1) — make virtual machines sparse (thin-provisioned)
# virt-sysprep(1) — unconfigure a virtual machine before cloning
# virt-tail(1) — follow log file
# virt-tar(1) — archive and upload files
# virt-tar-in(1) — archive and upload files
# virt-tar-out(1) — archive and download files