Rough notes on setting up an Ubuntu 22.04 LTS server with docker and snap

IP allocations

First, we set up a static IP on the network interface that handles all external traffic, and DHCP on the interface that connects to the management network, which we use for maintenance.

To do so, we created the following file:

/etc/netplan/01-netcfg.yaml

using the following command:

sudo nano /etc/netplan/01-netcfg.yaml;

and added the following content to it:

# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.45.13/24]
      gateway4: 192.168.45.1
      nameservers:
          addresses: [1.1.1.1,8.8.8.8]
    eth1:
      dhcp4: yes
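
Note that on Ubuntu 22.04, netplan reports gateway4 as deprecated. The configuration above still works, but the warning can be silenced by replacing the gateway4 line with the newer routes syntax (same gateway as above):

      routes:
        - to: default
          via: 192.168.45.1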

To apply the changes, we executed the following:

sudo netplan apply;
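
If you are configuring the machine over SSH, netplan also provides a safer variant that reverts automatically after a timeout unless the new configuration is confirmed:

sudo netplan try;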

Update everything (the operating system and all packages)

Usually, it is a good idea to update your system before making significant changes to it:

sudo apt update -y; sudo apt upgrade -y; sudo apt autoremove -y;

Install docker via snap

In this setup, we did not use the docker version available in the Ubuntu repositories; we went for the one from snap. To install it, we used the following commands:

sudo apt install snapd;
sudo snap install docker;
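
To verify that the daemon came up, you can list the snap services and run a throwaway test container (hello-world is Docker's standard test image):

snap services docker;
sudo docker run --rm hello-world;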

Increase network pool for docker daemon

To handle the following problem:

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

We modified the following file:

/var/snap/docker/current/config/daemon.json

using the command:

sudo nano /var/snap/docker/current/config/daemon.json;

and set the content to be as follows:

{
    "log-level":        "error",
    "storage-driver":   "overlay2",
    "default-address-pools": [
        {
            "base": "172.80.0.0/16",
            "size": 24
        },
        {
            "base": "172.90.0.0/16",
            "size": 24
        }
    ]
}

We executed the following command to restart the docker daemon and get the network changes applied:

sudo snap disable docker;
sudo snap enable docker;
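
Each /16 base split into /24 subnets yields 256 networks per pool, 512 in total. Note that 172.80.0.0/16 and 172.90.0.0/16 sit outside the RFC 1918 private ranges (172.16.0.0/12 ends at 172.31.255.255), so consider bases from 10.0.0.0/8 if that is a concern. To confirm that the new pools are in use, you can create a scratch network and inspect the subnet it was assigned (test-pool is just a throwaway name):

sudo docker network create test-pool;
sudo docker network inspect test-pool --format '{{(index .IPAM.Config 0).Subnet}}';
#Expected output: a subnet from the pools above, e.g. 172.80.0.0/24
sudo docker network rm test-pool;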

Give our user access to manage docker

We added our user to the docker group so that we could manage the docker daemon without sudo rights.

sudo addgroup --system docker;
sudo adduser $USER docker;
newgrp docker;
sudo snap disable docker;
sudo snap enable docker;
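
To verify that the group membership took effect, check the user's groups and run a client command without sudo:

id $USER;
docker ps;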

After that, we made sure that the access rights to the volumes were correct:

sudo chown -R www-data:www-data /volumes/*
sudo chown -R tux:tux /volumes/letsencrypt/ /volumes/reverse/private/

Deploying

After we copied everything in place, we executed the following command to create our containers and start them with the appropriate networks and volumes:

export COMPOSE_HTTP_TIMEOUT=600;
docker-compose up -d --remove-orphans;

We had to increase the timeout as we were getting the following error:

ERROR: for container_a  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Updating the databases and performing any repairs

First, we connected to a terminal of the database container using the following command:

docker exec -it mariadb_c1 /bin/bash;

From there, we executed the following commands:

mysql_upgrade --user=root --password;
mysqlcheck -p -o --all-databases;
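
The same maintenance can also be run in one shot from the host, without keeping an interactive shell open (mariadb_c1 is the container name from above):

docker exec -it mariadb_c1 mysql_upgrade --user=root --password;
docker exec -it mariadb_c1 mysqlcheck -p -o --all-databases;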

Bulk / Batch stopping docker containers

The following commands will help you stop many docker containers simultaneously. Of course, you can replace the stop command with another one, for example rm, to suit your needs.

You need to keep in mind that if you have dependencies between containers, you might need to execute the commands below more than once.

Stop all docker containers.

docker container stop $(docker container ls -q);
#docker container ls lists all running containers.
#Using the -q parameter, we only get back the container IDs and not all the information about them.
#Then, docker container stop will stop each container one by one.

Stop specific docker containers using a filter on their name.

docker container stop $(docker container ls -q --filter name=_web);
#This command finds all running containers whose name contains _web.
#Using the -q parameter, we only get back the container IDs and not all the information about them.
#Then, docker container stop will stop each matching container one by one.
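
The same pattern works with other subcommands and filters; for example, to remove every container that has already exited (-a includes stopped containers in the listing):

docker container rm $(docker container ls -aq --filter status=exited);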

A personal note

Check the system for things you might need to configure, like a crontab or other services.

A script that handles privileges on the docker volumes

To avoid access problems with the various external volumes, we created the mysql user and group on the host machine as follows:

sudo groupadd -g 999 mysql;
sudo useradd -u 999 -g mysql mysql;
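
For reference, 999 matches the UID/GID that the official MariaDB and MySQL images commonly use for their internal mysql user; verify on the host with id (and inside your container if unsure):

id mysql;
#Expected output: uid=999(mysql) gid=999(mysql) groups=999(mysql)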

Then, we executed the following to repair ownership issues with our containers. Please note that this script is custom to a particular installation and might not meet your needs.

#!/bin/bash

#Give the web server user ownership of all volume contents.
sudo chown -R www-data:www-data ~/volumes/*;
#Give our user ownership of the certificates and the reverse proxy private keys.
sudo chown -R bob:bob ~/volumes/letsencrypt/ ~/volumes/reverse/private/;
#Give the mysql user ownership of every mysql data directory (at most two levels deep).
find ~/volumes/ -maxdepth 2 -type d -name mysql -exec sudo chown -R mysql:mysql '{}' \;;

How to mount a qcow2 disk image that does not contain an Ubuntu LVM installation

Mounting a qcow2 disk image on your host server can be accomplished quickly with the following method. It makes it possible to reset passwords, alter files, or recover data even while the virtual machine is not running. This specific method does not allow mounting disks that use LVM, as the volume group tools (e.g., vgdisplay) do not properly recognize them.

Enable the Network Block Device (NBD) module on the host

sudo modprobe nbd max_part=8;

Network block device, or NBD, is a Linux protocol that the OS can use to forward a block device (usually a hard disk or partition) from one system to another over the network.
For instance, a hard disk drive attached to another computer may be accessed by a local machine that is part of the same network.

Connect the QCOW2 image as a network block device

sudo qemu-nbd --connect=/dev/nbd0 /var/lib/libvirt/images/miner.qcow2;
#Use QEMU Disk Network Block Device Utility

We used the above command to export the QEMU disk image (miner.qcow2) using the NBD protocol and connect it to the NBD device (/dev/nbd0).

Identify the available partitions

Check if the device has a UUID of an LVM partition in the QCOW2 image

sudo lsblk -f /dev/nbd0;

The lsblk command will provide information about all available block devices or the ones you choose. To obtain information, the lsblk command reads the sysfs filesystem and the udev db. It then attempts to read LABELs, UUIDs, and filesystem types from the block device if the udev db is unavailable, or if lsblk was compiled without udev support. In this particular scenario, root rights are required. Sample output can be seen below:

NAME         FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
nbd0                                                                                          
├─nbd0p1                                                                                      
├─nbd0p2                                                                                      
└─nbd0p3     LVM2_member LVM2 001       xniXr3-gWWj-xS0J-8TaT-EtDt-vZtR-92Z5ms                
  └─ubuntu--vg-ubuntu--lv
             ext4        1.0            be0a2dba-ac27-4dfd-9f90-60ae9196d5e6

Identify the virtual machine partitions

sudo fdisk -l /dev/nbd0;

fdisk is a dialog-driven program for creating and manipulating partition tables. It can read GPT, MBR, Sun, SGI, and BSD partition tables. If no devices are specified, it will use the devices listed in /proc/partitions (provided that the file exists). Devices are always displayed in the order in which they are specified on the command line, or in the order in which the kernel lists them in /proc/partitions.

Mount the partition of the virtual machine

After you identify the partition that you need to mount, use the mount command to perform the action to a mounting point of your choosing.

#In this example, we assume that we want to mount nbd0p1 to /mnt/miner that we created.
sudo mkdir /mnt/miner/;
sudo mount /dev/nbd0p1 /mnt/miner/;
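
If you only need to read data out of the image, mounting read-only is the safer option:

sudo mount -o ro /dev/nbd0p1 /mnt/miner/;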

Upon successful execution, all the files of that partition will be available through our mount point. If you try to mount an LVM partition, you will get the following error:

sudo mount /dev/nbd0p3 /mnt/miner/
mount: /mnt/miner: unknown filesystem type 'LVM2_member'.

In this tutorial, we do not handle this problem with the NBD method; see below for how we handled it using the guestfish tool.

Clean Up

After you are done, unmount, disconnect, and remove the NBD module if you do not plan on using it further.

#Unmount the partition
sudo umount /mnt/miner/;
#Disconnect the image from the NBD device
sudo qemu-nbd --disconnect /dev/nbd0;
#Unload the NBD module
sudo rmmod nbd;

How to mount a qcow2 disk image that contains an Ubuntu LVM installation

In one case, we had an issue where we needed to mount a disk image of a VM that contained an LVM installation. The above solution did not work, as we could not access the LVM partitions properly: the volume group tools did not recognize the partitions because they were network block devices. To handle this scenario, we used guestfish.

Guestfish is a shell and command-line tool for examining and altering the filesystems of virtual machines. It uses libguestfs and exposes all of the features of the guestfs API. We installed guestfish straight from the repositories as follows:

sudo apt-get install guestfish;

Then, we connected to the image that contained the LVM installation as follows:

sudo guestfish --rw -a /var/lib/libvirt/images/miner.qcow2;

After connecting to the image, we executed the following:

  • run
    With run, we initiated the library and attached the disk image
  • list-filesystems
    We listed the filesystems found by libguestfs
  • mount
    After identifying the partition we needed, we used this command to mount it at the root path /
  • ls
    This command works as expected; we used it to list the files in various directories
  • edit
    We used edit to modify the file we needed to process
  • exit
    We used exit to terminate the session

Below is a sample of our execution.

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: ‘help’ for help on commands
      ‘man’ to read the manual
      ‘quit’ to quit the shell

><fs> run
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
><fs>  list-filesystems
/dev/sda1: unknown
/dev/sda2: ext4
/dev/ubuntu-vg/ubuntu-lv: ext4
><fs> mount /dev/ubuntu-vg/ubuntu-lv /
><fs> ls /home
tux
bob
><fs> edit /etc/default/grub
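
guestfish can also run non-interactively: commands can be passed on the command line, and the -i option inspects the image and mounts its filesystems automatically. A minimal sketch that lists the home directories of the same image read-only:

sudo guestfish --ro -a /var/lib/libvirt/images/miner.qcow2 -i ls /home;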

Using a CSV input file, find all documents that contain any of the items in a cell of a column

The following script takes one column of a CSV file as input; for each element in that column, it performs a full-text search in a folder to find all files that contain the element.

#!/bin/bash

#Execution parameters
# 1 - the folder to look in for the element
# 2 - the input file that contains the search terms
# 3 - the column of interest
# 4 - the delimiter to use to find the column
# e.g. ./searchEachElement.sh ./2\ Print/ book.csv 5 ','

folder="$1";
input="$2";
column="$3";
delimiter="$4"

while read -r line; do
  needle=`echo $line | cut -d "$delimiter" -f "$column"`; 
  echo ">>> $needle"
  find "$folder" -type f -exec grep "$needle" -s -l '{}' \;
done < "$input";
