GNU/Linux


An easy way to SSH into a Gnome Boxes OS

Recently, we set up an Ubuntu Server in a Gnome Boxes virtual machine. We wanted to connect to it over ssh to make administration easier. In the VM properties visible from the GUI, there was no option to edit the network cards and set up a virtual network between the host and the virtual machine.

To get ssh access, we decided to go with reverse ssh tunneling. To do so, we first needed to install and start the ssh server in the guest.
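On an Ubuntu Server guest, something like the following should be enough (a sketch; the package and service names are the Ubuntu ones and may differ on other distributions):

sudo apt-get install -y openssh-server;
sudo systemctl enable --now ssh;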

After that, we got the IP of the host machine.
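One way to do that, assuming the host is also a GNU/Linux machine:

# Execute on the host to list its IPv4 addresses.
ip -4 addr show;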

Then, we used the terminal of the virtual machine to execute the following ssh command:

ssh -N -T -R 22222:localhost:22 host_machine_user@host_machine_ip;

That created a connection to the host machine and blocked the terminal, as expected, since the tunnel was running in the foreground.
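If you would rather get the terminal back, ssh can keep the tunnel running in the background instead; the -f flag asks ssh to go to the background just before it starts forwarding (a variation we did not use, but which should behave the same way):

ssh -f -N -T -R 22222:localhost:22 host_machine_user@host_machine_ip;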

Finally, from the host, we executed the following to ssh into the virtual machine:

ssh -p 22222 virtual_machine_user@localhost;

The biggest disadvantage of this method is that you need to enable ssh on your host machine.

The biggest advantage is the ease with which anyone can set it up.

Notes on the ssh parameters:

-N Do not execute a remote command. This is useful for just forwarding ports.

-T Disable pseudo-terminal allocation.

-R remote_socket:host:hostport Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.


KVM Virtual Machines’ Backup and Restoration

Introduction

According to Red Hat, KVM (Kernel-based Virtual Machine) is an open-source virtualization technology built into Linux. More specifically, KVM turns Linux into a hypervisor, allowing a single physical computer to run several distinct virtual environments, also known as guests or virtual machines (VMs), simultaneously. This post will focus mainly on backing up KVM virtual machines.

Backup your KVM VM

First, log in as a user with sudo privileges and list all of the available KVM virtual machines.

virsh list --all;
# virsh help list
#  NAME
#    list - list domains
#
#  SYNOPSIS
#    list [--inactive] [--all] [--transient] [--persistent] [--with-snapshot] [--without-snapshot] [--state-running] [--state-paused] [--state-shutoff] [--state-other] [--autostart] [--no-autostart] [--with-managed-save] [--without-managed-save] [--uuid] [--name] [--table] [--managed-save] [--title]
#
#  DESCRIPTION
#    Returns list of domains.
#
#  OPTIONS
#    --all            list inactive & active domains

Next, shut down the virtual machine (VM) you intend to back up.

virsh shutdown $VM_NAME;
# virsh help shutdown
#  NAME
#    shutdown - gracefully shutdown a domain
#
#  SYNOPSIS
#    shutdown <domain> [--mode <string>]
#
#  DESCRIPTION
#    Run shutdown in the target domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --mode <string>  shutdown mode: acpi|agent|initctl|signal|paravirt

Then, execute the following to verify that the VM has shut down:

virsh list --all;

We can break down the process of backing up a KVM virtual machine into two essential parts:

The definition of the domain:

This specifies the virtual hardware that makes up the VM, such as its network interfaces, virtual CPUs, RAM, and disks. You can see this information by executing the following command:

virsh dumpxml $VM_NAME;
# virsh help dumpxml
#  NAME
#    dumpxml - domain information in XML
#
#  SYNOPSIS
#    dumpxml <domain> [--inactive] [--security-info] [--update-cpu] [--migratable]
#
#  DESCRIPTION
#    Output the domain information as an XML dump to stdout.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       show inactive defined XML
#    --security-info  include security sensitive information in XML dump
#    --update-cpu     update guest CPU according to host CPU
#    --migratable     provide XML suitable for migrations

The data file:

This is the virtual machine's disk image: the file where the VM's hard drive is stored, holding its internal state (services, databases, etc.). We can find its location in the domain definition, or we can use the following command, which lists the block devices attached to the domain:

virsh domblklist $VM_NAME;
# virsh help domblklist
#  NAME
#    domblklist - list all domain blocks
#
#  SYNOPSIS
#    domblklist <domain> [--inactive] [--details]
#
#  DESCRIPTION
#    Get the summary of block devices for a domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       get inactive rather than running configuration
#    --details        additionally display the type and device value

Let’s assume that the following is the given location (which is usually the default location for virtual machines created with libvirt):

/var/lib/libvirt/images/

The above command will produce output similar to the following:

Target     Source
------------------------------------------------
hda        /var/lib/libvirt/images/nba.qcow2
hdb        -

The qcow2 images are the disk files that we need to back up.
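Optionally, before copying, you can confirm the format and size of each image using qemu-img (assuming the qemu-utils package is installed):

qemu-img info /var/lib/libvirt/images/nba.qcow2;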

First, we need to create a place to put the backups:

mkdir -p /opt/backup/kvm/;

The following command creates a backup of the domain definition:

virsh dumpxml $VM_NAME > "/opt/backup/kvm/$VM_NAME.xml";

Based on the output of the domblklist command, we copy the disk image(s) to the backup location:

cp /var/lib/libvirt/images/nba.qcow2 /opt/backup/kvm/;
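For convenience, the whole backup procedure can be wrapped in a small script. The following is a minimal sketch, assuming a single domain whose disks are plain files and a guest that honors the ACPI shutdown request; the domain name and paths are placeholders, so adjust them to your setup:

#!/bin/bash
# Hypothetical helper script: back up one KVM domain (definition + disk files).
VM_NAME="nba";                 # Assumed domain name; change to yours.
BACKUP_DIR="/opt/backup/kvm";

mkdir -p "$BACKUP_DIR";

# Ask the guest to shut down, then wait until virsh reports it as shut off.
virsh shutdown "$VM_NAME";
until virsh domstate "$VM_NAME" | grep -q 'shut off'; do
  sleep 5;
done

# Back up the domain definition.
virsh dumpxml "$VM_NAME" > "$BACKUP_DIR/$VM_NAME.xml";

# Copy every disk that domblklist reports with an absolute path as its source.
virsh domblklist "$VM_NAME" | awk 'NR > 2 && $2 ~ /^\// { print $2 }' | while read -r disk; do
  cp "$disk" "$BACKUP_DIR/";
done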

Restore your KVM virtual machine

We will begin by erasing the virtual machine's hard disk and undefining the VM so that it no longer exists. Using the domblklist command, identify the qcow2 files to be deleted. After collecting that information and making sure that the VM is stopped (using the shutdown command), delete the file(s) from the hard drive:

rm /var/lib/libvirt/images/nba.qcow2;

Remove the VM definition:

Then you need to remove the existing definition before restoring a backup.

virsh undefine $VM_NAME;
# virsh help undefine
#  NAME
#    undefine - undefine a domain
#
#  SYNOPSIS
#    undefine <domain> [--managed-save] [--storage <string>] [--remove-all-storage] [--delete-snapshots] [--wipe-storage] [--snapshots-metadata] [--nvram] [--keep-nvram]
#
#  DESCRIPTION
#    Undefine an inactive domain, or convert persistent to transient.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --managed-save   remove domain managed state file
#    --storage <string>  remove associated storage volumes (comma separated list of targets or source paths) (see domblklist)
#    --remove-all-storage  remove all associated storage volumes (use with caution)
#    --delete-snapshots  delete snapshots associated with volume(s), requires --remove-all-storage (must be supported by storage driver)
#    --wipe-storage   wipe data on the removed volumes
#    --snapshots-metadata  remove all domain snapshot metadata, if inactive
#    --nvram          remove nvram file, if inactive
#    --keep-nvram     keep nvram file, if inactive

Restore the virtual machine (VM):

Now, to bring back the virtual machine that was removed, we will first restore the hard drive:

cp /opt/backup/kvm/nba.qcow2 /var/lib/libvirt/images/;

Then, restore the original definition of the domain:

virsh define --file "/opt/backup/kvm/$VM_NAME.xml";
# virsh help define
#  NAME
#    define - define (but don't start) a domain from an XML file
#
#  SYNOPSIS
#    define <file> [--validate]
#
#  DESCRIPTION
#    Define a domain.
#
#  OPTIONS
#    [--file] <string>  file containing an XML domain description
#    --validate       validate the XML against the schema

If you are moving the VM to a different physical host, check whether the information in the XML file needs to be updated; for instance, verify that the network interfaces or bridges it references exist on the new host.
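For example, a quick sanity check (a sketch; it assumes the definition references interfaces or bridges by name) is to compare what the XML mentions against what the new host actually has:

# Network-related lines in the saved definition.
grep -E 'interface|bridge|source' "/opt/backup/kvm/$VM_NAME.xml";
# Interfaces that actually exist on the new host.
ip link show;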

Execute the following to check that the parameters of your virtual machine (VM) have been successfully defined:

virsh list --all;

After that, start the VM:

virsh start $VM_NAME;
# virsh help start
#  NAME
#    start - start a (previously defined) inactive domain
#
#  SYNOPSIS
#    start <domain> [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] [--pass-fds <string>]
#
#  DESCRIPTION
#    Start a domain, either from the last managedsave
#    state, or via a fresh boot if no managedsave state
#    is present.
#
#  OPTIONS
#    [--domain] <string>  name of the inactive domain
#    --console        attach to console after creation
#    --paused         leave the guest paused after creation
#    --autodestroy    automatically destroy the guest when virsh disconnects
#    --bypass-cache   avoid file system cache when loading
#    --force-boot     force fresh boot by discarding any managed save
#    --pass-fds <string>  pass file descriptors N,M,... to the guest

Once your virtual machine (VM) is up and running, you may use SSH or other methods to log into it and check that everything was correctly restored.
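If you do not know the guest's IP address, virsh can often report it for you (this assumes the VM uses the default NAT network, where the query is answered from the DHCP leases):

virsh domifaddr $VM_NAME;
# Then connect, replacing guest_ip with the address reported above.
ssh virtual_machine_user@guest_ip;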


Ubuntu Distribution Upgrade: Not enough free disk space

Recently, we tried to upgrade an Ubuntu 20.04 desktop to 22.04. At some point in the upgrade, we got the following error:

The upgrade has aborted. The upgrade needs a total of 10,6 G free space on disk '/'. Please free at least an additional 8201 M of disk space on '/'. Empty your trash and remove temporary packages of former installations using 'sudo apt-get clean'. The upgrade has aborted. The upgrade needs a total of 430 M free space on disk '/boot'. Please free at least an additional 38,4 M of disk space on '/boot'. You can remove old kernels using 'sudo apt autoremove' and you could also set COMPRESS=xz in /etc/initramfs-tools/initramfs.conf to reduce the size of your initramfs.
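Before deleting anything, it helps to see how much space is actually free on the affected filesystems:

df -h / /boot;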

First, we tried the command apt autoremove to clear up some space, which unfortunately was not enough.

sudo apt autoremove;

Then, to free up more space, we needed to find remnants of older kernel versions. The following command determines the version of the running kernel and lists the installed kernel header and image packages that do not belong to it:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d;/^linux-\(headers\|image\)/!d';

Then we removed all the headers and images that we did not need using apt-get purge:

$ dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d;/^linux-\(headers\|image\)/!d'
linux-headers-5.13.0-52-generic
linux-headers-5.15.0-46-generic
linux-headers-generic-hwe-20.04
linux-image-5.15.0-46-generic
linux-image-generic-hwe-20.04

$ sudo apt-get -y purge linux-headers-5.13.0-52-generic linux-headers-5.15.0-46-generic linux-image-5.15.0-46-generic;

Doing so was enough to clear up the space that was needed for the upgrade to continue.
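Had that not been enough, the error message itself suggests one more option: setting COMPRESS=xz in /etc/initramfs-tools/initramfs.conf to shrink the initramfs images in /boot. A sketch of that approach (it assumes the file already contains a COMPRESS= line, and it trades slower boot-time decompression for space):

sudo sed -i 's/^COMPRESS=.*/COMPRESS=xz/' /etc/initramfs-tools/initramfs.conf;
sudo update-initramfs -u -k all;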


Error mounting filesystem

After installing the ewf-tools package on a GNU/Linux Ubuntu machine, we executed the following commands to mount our .E01 image and expose its contents as the virtual file ewf1:

sudo mkdir -p /mnt/ewf;
sudo ewfmount ./DISK.E01 /mnt/ewf/;

Once the image was mounted, we opened the ewf1 file that appeared in /mnt/ewf/ using the Gnome Disk Image Mounter. This created a new entry in the Gnome Disks utility, showing our new disk.

After clicking on the play button (labeled Mount selected partition), we got an error.

We then tried to use the terminal to gain more control over the mounting parameters. To proceed with the following commands, we copied the Device value, which was /dev/loop54p3 in this case.

$ sudo mkdir /mnt/loc;
$ sudo mount /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop54p3 read-only.
$ sudo mount -o ro /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop54p3 read-only.
$ sudo mount -o ro,loop /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop58 read-only.
$ sudo mount -o ro,loop -t ext4 /dev/loop54p3 /mnt/loc;
mount: /mnt/loc: cannot mount /dev/loop58 read-only.
$ sudo mount -o ro,norecovery,loop -t ext4 /dev/loop54p3 /mnt/loc;

The command that worked for us was the following:

sudo mount -o ro,norecovery,loop -t ext4 /dev/loop54p3 /mnt/loc;

The parameter that did the trick was norecovery. norecovery/noload instructs the kernel not to replay the journal when mounting. Note that if the filesystem was not unmounted cleanly, skipping the journal replay leaves the filesystem with inconsistencies that can cause any number of problems. In our case, the machine had not shut down properly before its image was cloned, so the mounted filesystem might not reflect the latest state of the disk.
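When you are done examining the filesystem, unmount everything in reverse order so that the loop device and the ewf image are released:

sudo umount /mnt/loc;
sudo umount /mnt/ewf;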