How to mount a qcow2 disk image that does not contain an Ubuntu LVM installation

Mounting a qcow2 disk image on your host server can be accomplished with this quick method. Thanks to it, you can reset passwords, alter files, or recover data even while the virtual machine is not running. This specific method does not allow mounting disks with LVM, as they are not properly recognized by the volume group tools (e.g. vgdisplay).

Enable Network block device (NBD) module on the host

sudo modprobe nbd max_part=8;

Network block device, or NBD, is a Linux protocol that the OS can use to forward a block device (usually a hard disk or partition) from one system to another over the network.
For instance, a hard disk drive attached to another computer may be accessed by a local machine that is part of the same network.
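For illustration, the following is a minimal sketch of that scenario using qemu-nbd as the server; the hostname example-server, the port 10809, and the device /dev/nbd1 are our assumptions, and negotiation details may vary between versions:

#On the machine that owns the disk: export the image over plain TCP (unencrypted, trusted networks only)
sudo qemu-nbd --bind=0.0.0.0 --port=10809 /var/lib/libvirt/images/miner.qcow2;
#On the client machine: attach the remote export to a local NBD device
sudo modprobe nbd;
sudo nbd-client example-server 10809 /dev/nbd1;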

Connect the QCOW2 image as a network block device

sudo qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/miner.qcow2;
#Use QEMU Disk Network Block Device Utility

We used the above command to export the QEMU disk image (miner.qcow2) using the NBD protocol and connect it to the NBD device (/dev/nbd0).
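If the partition device nodes (e.g. /dev/nbd0p1) do not appear right away, you can ask the kernel to re-read the partition table; partprobe comes with the parted package:

#List the NBD device and any partitions that were detected
ls /dev/nbd0*;
#If the partitions are missing, re-read the partition table
sudo partprobe /dev/nbd0;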

Identify the available partitions

Check if the device has a UUID of an LVM partition in the QCOW2 image

sudo lsblk -f /dev/nbd0;

The lsblk command will provide information about all available block devices or the ones you choose. To obtain its information, lsblk reads the sysfs filesystem and the udev db. If the udev db is unavailable, or if lsblk was compiled without udev support, it attempts to read LABELs, UUIDs, and filesystem types directly from the block device; in that case, root rights are required. Sample output can be seen below:

NAME         FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
nbd0                                                                                          
├─nbd0p1                                                                                      
├─nbd0p2                                                                                      
└─nbd0p3     LVM2_member LVM2 001       xniXr3-gWWj-xS0J-8TaT-EtDt-vZtR-92Z5ms                
  └─ubuntu--vg-ubuntu--lv
             ext4        1.0            be0a2dba-ac27-4dfd-9f90-60ae9196d5e6

Identify the virtual machine partitions

sudo fdisk -l /dev/nbd0;

fdisk is a dialog-driven program for the creation and manipulation of partition tables. It understands GPT, MBR, Sun, SGI, and BSD partition tables. If no devices are specified, fdisk uses the devices listed in /proc/partitions (provided that this file exists). Devices are always displayed in the order in which they are specified on the command line, or in the order in which they are listed by the kernel in /proc/partitions.

Mount the partition of the virtual machine

After you identify the partition that you need to mount, use the mount command to attach it to a mount point of your choosing.

#In this example, we assume that we want to mount nbd0p1 to the /mnt/miner/ directory that we create first.
sudo mkdir /mnt/miner/;
sudo mount /dev/nbd0p1 /mnt/miner/;

Upon successful execution, all the files of that partition will be available through our mount point. If you try to mount an LVM partition, you will get the following error:

sudo mount /dev/nbd0p3 /mnt/miner/
mount: /mnt/miner: unknown filesystem type 'LVM2_member'.

This method cannot handle that problem; see the next section for how we handled it using the guestfish tool.

Clean Up

After you are done, unmount, disconnect, and remove the NBD module if you do not plan on using it further.

#Unmount the partition
sudo umount /mnt/miner/;
#Disconnect the image from the NBD device
sudo qemu-nbd --disconnect /dev/nbd0;
#Unload the NBD module
sudo rmmod nbd;

How to mount a qcow2 disk image that contains an Ubuntu LVM installation

In one case, we had an issue where we needed to mount a disk image of a VM that contained an LVM installation. The above solution did not work, as we could not access the LVM partitions properly: the volume group tools did not recognize the partitions because they were network block devices. To handle this scenario, we used guestfish.

Guestfish is a shell and command-line tool for examining and modifying the filesystems of virtual machines. It uses libguestfs and exposes all of the functionality of the guestfs API. We installed guestfish straight from the repositories as follows:

sudo apt-get install guestfish;

Then, we connected to the image that contained the LVM installation as follows:

sudo guestfish --rw -a /var/lib/libvirt/images/miner.qcow2;

After connecting to the image, we executed the following:

  • run
    With run, we initialized the library and attached the disk image
  • list-filesystems
    We listed the file systems found by libguestfs
  • mount
    After identifying the partition we needed, we used this command to mount it at the root path /
  • ls
    This command works as expected; we used it to list the files in various directories
  • edit
    We used edit to modify the file we needed to process
  • exit
    We used exit to terminate the session

Below is a sample of our execution.

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
      ‘man’ to read the manual
      ‘quit’ to quit the shell

><fs> run
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
><fs>  list-filesystems
/dev/sda1: unknown
/dev/sda2: ext4
/dev/ubuntu-vg/ubuntu-lv: ext4
><fs> mount /dev/ubuntu-vg/ubuntu-lv /
><fs> ls /home
tux
bob
><fs> edit /etc/default/grub
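
Guestfish can also read its commands from standard input, which makes the above session scriptable. Below is a minimal sketch of the non-interactive equivalent, assuming the same image and logical volume paths:

sudo guestfish --rw -a /var/lib/libvirt/images/miner.qcow2 <<'EOF'
run
mount /dev/ubuntu-vg/ubuntu-lv /
ls /home
EOF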

Using a CSV input file, find all documents that contain any of the items in a cell of a column

The following script takes one column of a CSV file as input; for each element in that column, it performs a full-text search in a folder and lists all files that contain the element.

#!/bin/bash

#Execution parameters
# 1 - the folder to look in for the element
# 2 - the input file that contains the search terms
# 3 - the column of interest
# 4 - the delimiter to use to find the column
# e.g. ./searchEachElement.sh ./2\ Print/ book.csv 5 ','

folder="$1";
input="$2";
column="$3";
delimiter="$4";

while IFS= read -r line; do
  #Extract the search term from the requested column
  needle="$(echo "$line" | cut -d "$delimiter" -f "$column")";
  echo ">>> $needle";
  #List (-l) every file under the folder that contains the term, suppressing error messages (-s)
  find "$folder" -type f -exec grep -s -l -- "$needle" '{}' \;
done < "$input";



An easy way to SSH into a Gnome Boxes OS

Recently, we set up an Ubuntu Server in a Gnome Boxes virtual machine. We wanted to connect to it over SSH to make administration easier. In the VM properties that are visible from the GUI, there was no option to edit the network cards and set up a virtual network between the host and the virtual machine.

To make the SSH connection possible, we decided to go with reverse SSH tunneling. To do so, we needed to install and start an SSH server on the virtual machine (the host must be running one as well).
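
On an Ubuntu guest, installing and starting the server would look roughly as follows (a sketch assuming the stock openssh-server package):

sudo apt-get install openssh-server;
sudo systemctl enable --now ssh;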

After that, we got the IP of the host machine.
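Any of the usual commands on the host will reveal its address, for example:

#Show the IPv4 addresses of the host's interfaces
ip -4 addr show;
#Or print just the assigned addresses
hostname -I;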

Then, we used the terminal of the virtual machine to execute the following ssh command:

ssh -N -T -R 22222:localhost:22 host_machine_user@host_machine_ip;

That created a connection to the host machine and blocked the terminal as expected since it was an active application.
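As a side note, a variation we did not use: ssh can keep the tunnel in the background with the -f flag, freeing the terminal:

ssh -f -N -T -R 22222:localhost:22 host_machine_user@host_machine_ip;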

Finally, from the host, we executed the following to ssh into the virtual machine:

ssh -p 22222 virtual_machine_user@localhost;

The biggest disadvantage of this method is that you need to enable ssh on your host machine.

The biggest advantage is the ease with which anyone can set it up.

Notes on the ssh parameters:

-N Do not execute a remote command. This is useful for just forwarding ports.

-T Disable pseudo-terminal allocation.

-R [bind_address:]port:host:hostport Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.


KVM Virtual Machines’ Backup and Restoration

Introduction

According to Red Hat, KVM is an open-source virtualization technology that is incorporated into Linux. KVM stands for Kernel-based Virtual Machine. To be more specific, KVM turns Linux into a hypervisor, which makes it possible for a single physical computer to run several distinct virtual environments, also known as guests or virtual machines (VMs), simultaneously. This post will focus mainly on backing up KVM virtual machines (VMs).

Backup your KVM VM

First, log in as a user with sudo rights and list all of the available KVM virtual machines.

virsh list --all;
# virsh help list
#  NAME
#    list - list domains
#
#  SYNOPSIS
#    list [--inactive] [--all] [--transient] [--persistent] [--with-snapshot] [--without-snapshot] [--state-running] [--state-paused] [--state-shutoff] [--state-other] [--autostart] [--no-autostart] [--with-managed-save] [--without-managed-save] [--uuid] [--name] [--table] [--managed-save] [--title]
#
#  DESCRIPTION
#    Returns list of domains.
#
#  OPTIONS
#    --all            list inactive & active domains

Next, you will need to shut down the virtual machine (VM) you intend to back up.

virsh shutdown $VM_NAME;
# virsh help shutdown
#  NAME
#    shutdown - gracefully shutdown a domain
#
#  SYNOPSIS
#    shutdown <domain> [--mode <string>]
#
#  DESCRIPTION
#    Run shutdown in the target domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --mode <string>  shutdown mode: acpi|agent|initctl|signal|paravirt

Then, execute the following again to verify that the shutdown completed successfully.

virsh list --all;

We can break down the process of backing up a KVM virtual machine into two essential parts:

The definition of the domain:

The domain definition specifies the components that make up the VM, such as its network interfaces, virtual CPUs, RAM, and disk space. You can see this information by executing the following command:

virsh dumpxml $VM_NAME;
# virsh help dumpxml
#  NAME
#    dumpxml - domain information in XML
#
#  SYNOPSIS
#    dumpxml <domain> [--inactive] [--security-info] [--update-cpu] [--migratable]
#
#  DESCRIPTION
#    Output the domain information as an XML dump to stdout.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       show inactive defined XML
#    --security-info  include security sensitive information in XML dump
#    --update-cpu     update guest CPU according to host CPU
#    --migratable     provide XML suitable for migrations

The data file:

The data file is where the virtual machine’s hard drive is stored; it holds the guest’s internal state, including services, databases, etc. We can determine its location either from the domain definition or with the following command, which lists the VM’s block devices:

virsh domblklist $VM_NAME;
# virsh help domblklist
#  NAME
#    domblklist - list all domain blocks
#
#  SYNOPSIS
#    domblklist <domain> [--inactive] [--details]
#
#  DESCRIPTION
#    Get the summary of block devices for a domain.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --inactive       get inactive rather than running configuration
#    --details        additionally display the type and device value

Let’s assume that the disk lives in the following location (which is usually the default for virtual machines created with libvirt):

/var/lib/libvirt/images/

The above command will produce output similar to the following:

Target     Source
------------------------------------------------
hda        /var/lib/libvirt/images/nba.qcow2
hdb        -

The qcow2 images are the disk files that we need to back up.

First, we need to create a place to put the backups:

mkdir -p /opt/backup/kvm/;

The following command creates a backup of the domain definition:

virsh dumpxml $VM_NAME > "/opt/backup/kvm/$VM_NAME.xml";

The following is what we employ to create a backup of the hard drives based on the feedback of the domblklist command:

cp /var/lib/libvirt/images/nba.qcow2 /opt/backup/kvm/;
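
If a VM has more than one disk, this step can be automated by parsing the output of domblklist. The following is a minimal sketch, assuming the VM is already shut down and every Source entry is a plain file (the variable values are from our example):

VM_NAME="nba";
BACKUP_DIR="/opt/backup/kvm";
mkdir -p "$BACKUP_DIR";
#Save the domain definition next to the disks
virsh dumpxml "$VM_NAME" > "$BACKUP_DIR/$VM_NAME.xml";
#Skip the two header lines of domblklist, keep the Source column, and copy each existing file
virsh domblklist "$VM_NAME" | tail -n +3 | awk '{print $2}' | while read -r disk; do
  [ -f "$disk" ] && cp "$disk" "$BACKUP_DIR/";
done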

Restore your KVM virtual machine

We will begin by erasing the virtual machine’s hard disk and undefining the VM so that it no longer exists. Using the domblklist command, we identify the qcow2 file(s) to be deleted. After we collect that information and make sure that the VM is stopped using the shutdown command, we delete the file(s) from the hard drive:

rm /var/lib/libvirt/images/nba.qcow2;

Remove the VM definition:

Then you need to remove the existing definition before restoring a backup.

virsh undefine $VM_NAME;
# virsh help undefine
#  NAME
#    undefine - undefine a domain
#
#  SYNOPSIS
#    undefine <domain> [--managed-save] [--storage <string>] [--remove-all-storage] [--delete-snapshots] [--wipe-storage] [--snapshots-metadata] [--nvram] [--keep-nvram]
#
#  DESCRIPTION
#    Undefine an inactive domain, or convert persistent to transient.
#
#  OPTIONS
#    [--domain] <string>  domain name, id or uuid
#    --managed-save   remove domain managed state file
#    --storage <string>  remove associated storage volumes (comma separated list of targets or source paths) (see domblklist)
#    --remove-all-storage  remove all associated storage volumes (use with caution)
#    --delete-snapshots  delete snapshots associated with volume(s), requires --remove-all-storage (must be supported by storage driver)
#    --wipe-storage   wipe data on the removed volumes
#    --snapshots-metadata  remove all domain snapshot metadata, if inactive
#    --nvram          remove nvram file, if inactive
#    --keep-nvram     keep nvram file, if inactive

Restore the virtual machine (VM)

Now, to bring back the virtual machine that was removed, we will first get back the hard drive:

cp /opt/backup/kvm/nba.qcow2 /var/lib/libvirt/images/;

Then, bring back the original definition of the domain.

virsh define --file "/opt/backup/kvm/$VM_NAME.xml";
# virsh help define
#  NAME
#    define - define (but don't start) a domain from an XML file
#
#  SYNOPSIS
#    define <file> [--validate]
#
#  DESCRIPTION
#    Define a domain.
#
#  OPTIONS
#    [--file] <string>  file containing an XML domain description
#    --validate       validate the XML against the schema

If you are moving the VM to a different physical host, check whether the information in the XML file needs to be updated accordingly; for instance, verify that the network interfaces it references exist on the new host.
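If you prefer to adjust the definition after it has been imported, virsh also provides a built-in way to edit it in place (it opens the XML in your $EDITOR and re-validates it on save):

virsh edit $VM_NAME;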

Execute the following to check that the parameters of your virtual machine (VM) have been successfully defined:

virsh list --all;

After that, begin using the VM by:

virsh start $VM_NAME;
# virsh help start
#  NAME
#    start - start a (previously defined) inactive domain
#
#  SYNOPSIS
#    start <domain> [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] [--pass-fds <string>]
#
#  DESCRIPTION
#    Start a domain, either from the last managedsave
#    state, or via a fresh boot if no managedsave state
#    is present.
#
#  OPTIONS
#    [--domain] <string>  name of the inactive domain
#    --console        attach to console after creation
#    --paused         leave the guest paused after creation
#    --autodestroy    automatically destroy the guest when virsh disconnects
#    --bypass-cache   avoid file system cache when loading
#    --force-boot     force fresh boot by discarding any managed save
#    --pass-fds <string>  pass file descriptors N,M,... to the guest

Once your virtual machine (VM) is up and running, you may use SSH or other methods to log into it and check that everything was correctly restored.
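
If you do not remember the guest’s address, the following may help; note that domifaddr relies on the DHCP leases of the default libvirt network and may print nothing for bridged or statically configured guests:

virsh domifaddr $VM_NAME;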