KVM Virtual Machines’ Backup and Restoration

Introduction

According to Red Hat, KVM (Kernel-based Virtual Machine) is an open-source virtualization technology built into Linux. KVM turns Linux into a hypervisor, allowing a single physical computer to run several isolated virtual environments, known as guests or virtual machines (VMs), at the same time. This post focuses on backing up and restoring KVM virtual machines (VMs).

Backup your KVM VM

First, log in as a user with sudo privileges and list all of the KVM virtual machines on the host.

virsh list --all;
# virsh help list
#  NAME
#    list - list domains
#
#  SYNOPSIS
#    list [--inactive] [--all] [--transient] [--persistent] [--with-snapshot] [--without-snapshot] [--state-running] [--state-paused] [--state-shutoff] [--state-other] [--autostart] [--no-autostart] [--with-managed-save] [--without-managed-save] [--uuid] [--name] [--table] [--managed-save] [--title]
#
#  DESCRIPTION
#    Returns list of domains.
#
#  OPTIONS
#    --all            list inactive & active domains

Next, you will need to shut down the virtual machine (VM) you intend to back up.

virsh shutdown $VM_NAME;
# virsh help shutdown
#  NAME
#    shutdown - gracefully shutdown a domain
#
#  SYNOPSIS
#    shutdown <domain> [--mode <string>]
#
#  DESCRIPTION
#    Run shutdown in the target domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --mode <string>  shutdown mode: acpi|agent|initctl|signal|paravirt

Then execute the following to verify that the VM has shut down:

virsh list --all;
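
If the shutdown completed, the domain should now be reported as shut off. The output will look similar to the following (illustrative only, assuming a guest named nba, which matches the disk image used later in this post):

 Id   Name   State
-----------------------
 -    nba    shut off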

We can break down the process of backing up a KVM virtual machine into two essential parts:

The definition of the domain:

This describes the hardware that makes up the VM, such as its network interfaces, CPUs, RAM, and disks. You can view this information by executing the following command:

virsh dumpxml $VM_NAME;
# virsh help dumpxml
#  NAME
#    dumpxml - domain information in XML
#
#  SYNOPSIS
#    dumpxml <domain> [--inactive] [--security-info] [--update-cpu] [--migratable]
#
#  DESCRIPTION
#    Output the domain information as an XML dump to stdout.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       show inactive defined XML
#    --security-info  include security sensitive information in XML dump
#    --update-cpu     update guest CPU according to host CPU
#    --migratable     provide XML suitable for migrations

The data file:

The source path points to the data file that we need to back up; it is where the virtual machine's hard drive is stored and contains the guest's internal configuration, including services, databases, etc. We can find its location either from the domain definition or with the following command, which lists the block devices of the domain:

virsh domblklist $VM_NAME;
# virsh help domblklist
#  NAME
#    domblklist - list all domain blocks
#
#  SYNOPSIS
#    domblklist <domain> [--inactive] [--details]
#
#  DESCRIPTION
#    Get the summary of block devices for a domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       get inactive rather than running configuration
#    --details        additionally display the type and device value

Let’s assume that the images live in the following location (which is the usual default for virtual machines created with libvirt):

/var/lib/libvirt/images/

The above command will produce output similar to the following:

Target     Source
------------------------------------------------
hda        /var/lib/libvirt/images/nba.qcow2
hdb        -

The qcow2 images are the disk files that we need to back up.
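
Optionally, before copying, qemu-img can report the format and size of each image (this assumes the qemu-utils package is installed; the path is the one reported by domblklist):

# Inspect the disk image that will be backed up
qemu-img info /var/lib/libvirt/images/nba.qcow2;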

First, we need to create a place to put the backups:

mkdir -p /opt/backup/kvm/;

The following command creates a backup of the domain definition:

virsh dumpxml $VM_NAME > "/opt/backup/kvm/$VM_NAME.xml";

The following creates a backup of the hard drive, using the source path reported by the domblklist command:

cp /var/lib/libvirt/images/nba.qcow2 /opt/backup/kvm/;
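
If the VM has more than one disk, a small loop can copy every file-backed disk that domblklist reports. This is only a sketch; it assumes all sources are local files without spaces in their paths (network or raw block-device disks would need different handling):

# Copy every file-backed disk reported by domblklist (--details adds Type/Device/Target/Source columns)
for DISK in $(virsh domblklist "$VM_NAME" --details | awk '$1 == "file" && $2 == "disk" {print $4}'); do
  cp "$DISK" /opt/backup/kvm/;
done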

Restore your KVM virtual machine

To demonstrate a restore, we will first erase the virtual machine’s hard disk and undefine the VM so that it no longer exists. Using the domblklist command, identify the qcow2 file(s) to be deleted; then, after making sure the VM is stopped with the shutdown command, delete the file(s) from the hard drive:

rm /var/lib/libvirt/images/nba.qcow2;

Remove the VM definition:

Before restoring the backup, remove the existing domain definition:

virsh undefine $VM_NAME;
# virsh help undefine
#  NAME
#    undefine - undefine a domain
#
#  SYNOPSIS
#    undefine <domain> [--managed-save] [--storage <string>] [--remove-all-storage] [--delete-snapshots] [--wipe-storage] [--snapshots-metadata] [--nvram] [--keep-nvram]
#
#  DESCRIPTION
#    Undefine an inactive domain, or convert persistent to transient.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --managed-save   remove domain managed state file
#    --storage <string>  remove associated storage volumes (comma separated list of targets or source paths) (see domblklist)
#    --remove-all-storage  remove all associated storage volumes (use with caution)
#    --delete-snapshots  delete snapshots associated with volume(s), requires --remove-all-storage (must be supported by storage driver)
#    --wipe-storage   wipe data on the removed volumes
#    --snapshots-metadata  remove all domain snapshot metadata, if inactive
#    --nvram          remove nvram file, if inactive
#    --keep-nvram     keep nvram file, if inactive

Restore the virtual machine (VM).

Now, to bring back the virtual machine that we removed, we will first restore the hard drive:

cp /opt/backup/kvm/nba.qcow2 /var/lib/libvirt/images/;

Then, restore the original domain definition:

virsh define --file "/opt/backup/kvm/$VM_NAME.xml";
# virsh help define
#  NAME
#    define - define (but don't start) a domain from an XML file
#
#  SYNOPSIS
#    define <file> [--validate]
#
#  DESCRIPTION
#    Define a domain.
#
#  OPTIONS
#    [--file] <string>  file containing an XML domain description
#    --validate       validate the XML against the schema

If you are moving the VM to a different physical host, check whether the information in the XML file needs to be updated; for example, verify that the network interfaces and bridges referenced in the XML exist on the new host, as in the sketch below.
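
A quick way to review the interfaces referenced by the backup (the grep pattern and the bridge check below are only an illustration):

# Show the <interface> sections of the saved definition
grep -A 4 "<interface" "/opt/backup/kvm/$VM_NAME.xml";
# Compare against the bridges/interfaces that exist on the new host
ip link show;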

Execute the following to check that the parameters of your virtual machine (VM) have been successfully defined:

virsh list --all;

After that, start the VM:

virsh start $VM_NAME;
# virsh help start
#  NAME
#    start - start a (previously defined) inactive domain
#
#  SYNOPSIS
#    start <domain> [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] [--pass-fds <string>]
#
#  DESCRIPTION
#    Start a domain, either from the last managedsave
#    state, or via a fresh boot if no managedsave state
#    is present.
#
#  OPTIONS
#    [--domain] <string>  name of the inactive domain
#    --console        attach to console after creation
#    --paused         leave the guest paused after creation
#    --autodestroy    automatically destroy the guest when virsh disconnects
#    --bypass-cache   avoid file system cache when loading
#    --force-boot     force fresh boot by discarding any managed save
#    --pass-fds <string>  pass file descriptors N,M,... to the guest

Once your virtual machine (VM) is up and running, you may use SSH or other methods to log into it and check that everything was correctly restored.
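
For example, assuming the guest runs the QEMU guest agent or obtained its address from the default NAT network, something like the following can be used (the user name and address are placeholders):

virsh domifaddr "$VM_NAME";   # report the guest's IP address
ssh user@GUEST_IP;            # replace user and GUEST_IP with your own values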


Ubuntu Distribution Upgrade: Not enough free disk space

Recently, we tried to upgrade an Ubuntu 20.04 desktop to 22.04. At some point during the upgrade, we got the following error:

The upgrade has aborted. The upgrade needs a total of 10,6 G free space on disk '/'. Please free at least an additional 8201 M of disk space on '/'. Empty your trash and remove temporary packages of former installations using 'sudo apt-get clean'.

The upgrade has aborted. The upgrade needs a total of 430 M free space on disk '/boot'. Please free at least an additional 38,4 M of disk space on '/boot'. You can remove old kernels using 'sudo apt autoremove' and you could also set COMPRESS=xz in /etc/initramfs-tools/initramfs.conf to reduce the size of your initramfs.

First, we used the command apt autoremove to clear up some space, but unfortunately that was not enough.

sudo apt autoremove;
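
To see how much space is still missing, the free space on the affected mount points can be checked with df (a simple illustrative check):

# Show free space on the root and boot filesystems
df -h / /boot;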

Then, to free up more space, we needed to find leftover packages from older versions of the kernel. To do so, we used the following command, which determines the version of the running kernel and lists the installed linux-headers and linux-image packages that do not belong to it:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d;/^linux-\(headers\|image\)/!d';

Then we removed the headers and images that we did not need using apt-get purge:

$ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d;/^linux-\(headers\|image\)/!d'
linux-headers-5.13.0-52-generic
linux-headers-5.15.0-46-generic
linux-headers-generic-hwe-20.04
linux-image-5.15.0-46-generic
linux-image-generic-hwe-20.04

$ sudo apt-get -y purge linux-headers-5.13.0-52-generic linux-headers-5.15.0-46-generic linux-image-5.15.0-46-generic;

Doing so was enough to clear up the space that was needed for the upgrade to continue.


Error mounting filesystem

After installing the ewf-tools package on a GNU/Linux Ubuntu machine, we executed the following commands to mount our .E01 image, which exposes it as a raw ewf1 file under the mount point:

mkdir /mnt/ewf;
ewfmount ./DISK.E01 /mnt/ewf/;

After the operating system created the mounting point, we opened the ewf1 file that appeared in /mnt/ewf/ using the Gnome Disk Image Mounter. This action made a new entry in the Gnome Disks Utility, showing our new disk.

After clicking on the play button (labeled Mount selected partition), we got an error about mounting the filesystem.

We then tried to use the terminal to gain more control over the mounting parameters. To proceed with the following commands, we copied the Device value, which was /dev/loop54p3 in this case.

$ mkdir /mnt/loc;
$ sudo mount /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop54p3 read-only.
$ sudo mount -o ro /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop54p3 read-only.
$ sudo mount -o ro,loop /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop58 read-only.
$ sudo mount -o ro,loop -t ext4 /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop58 read-only.
$ sudo mount -o ro,norecovery,loop -t ext4 /dev/loop54p3 /mnt/loc;

The command that worked for us was the following:

1
sudo mount -o ro,norecovery,loop -t ext4 /dev/loop54p3 /mnt/loc;

The parameter that did the trick was norecovery. norecovery/noload instructs the system not to load the journal when mounting. Note that if the filesystem was not unmounted cleanly, skipping the journal replay will leave inconsistencies that can cause any number of problems. In our case, the machine had not shut down properly before its disk was imaged, so after mounting we might not see the latest state of the disk.
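
When finished inspecting the files, the two mounts can be released in reverse order (assuming the mount points used above):

# Unmount the filesystem first, then the ewf mount point
sudo umount /mnt/loc;
sudo umount /mnt/ewf;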


How to access VMFS Datastore from Ubuntu GNU/Linux

Suppose the ESXi host fails, but the server’s local disk or disks are still operational. In that case, it is still possible to copy the virtual machine files (both data drives and configuration files) from the VMFS datastore and run the VM on a different server, even on VMware Workstation or Hyper-V. The main obstacle is that widely used operating systems, such as Windows and Linux, do not ship with a VMFS driver, so they cannot recognize a partition formatted with the VMFS file system.

To mount a VMFS file system on Ubuntu, we need to install the vmfs-tools package.

sudo apt-get install vmfs-tools;

Then, we need to create a folder where we will perform the mount later on:

# The folder does not have to be under /mnt; it can be anywhere on your file system where you have access.
sudo mkdir /mnt/vmfs;

Following that, we need to identify the disk we want to mount. There are two popular ways to do so. The first is to execute fdisk -l in the terminal, which lists all the disks attached to the system. You will get results similar to the ones below:

sudo fdisk -l;
 
...
 
Disk /dev/loop51: 884,85 GiB, 950075898880 bytes, 1855616990 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

From these results, the essential piece of information is the drive’s device path; in this case, it was /dev/loop51.

The second method is to use the Gnome Disks Utility:

When you start the application, you will see the list of available drives on the left.

If you have a physical VMFS datastore hard drive, it will appear in that list, and clicking on it shows more details in the right panel. The critical piece of information is the value after the label Device; in this case, it was /dev/loop51.

If you do not have a physical drive but an image of a drive, you can attach it by clicking on the menu button with the three lines on the top left and then selecting the Attach Disk Image... (.iso, .img) option. A new window will open, allowing you to navigate and find your image file.

After we acquire the path to the physical drive or the image file, we can mount it using the following command:

sudo vmfs-fuse /dev/loop51 /mnt/vmfs;

In case you get the following error:

VMFS: Unsupported version 6
Unable to open device/file "/dev/loop51".
Unable to open filesystem

Then, you need to install the package that can handle VMFS version 6. To install, use the following command:

sudo apt-get install vmfs6-tools;

Trying again to mount, this time with the tools that are appropriate for version 6, should do the trick:

sudo vmfs6-fuse /dev/loop51 /mnt/vmfs;
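
Once the datastore is mounted (read-only), the virtual machine folders can be browsed and copied out with standard tools; the destination directory and VM folder name below are hypothetical:

# List the datastore contents and copy a VM folder to local storage
sudo ls /mnt/vmfs/;
sudo mkdir -p /opt/recovered-vms/;
sudo cp -r "/mnt/vmfs/MY_VM_FOLDER" /opt/recovered-vms/;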

To unmount, we need to execute the following:

sudo umount /mnt/vmfs;