Rough notes on setting up an Ubuntu 22.04 LTS server with docker and snap

IP allocations

First, we set up a static IP on the network interface that handles all external traffic, and DHCP on the interface that connects to the management network, which is used for maintenance.

To do so, we created the following file:

/etc/netplan/01-netcfg.yaml

using the following command:

sudo nano /etc/netplan/01-netcfg.yaml;

and added the following content to it:

# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.45.13/24]
      # gateway4 is deprecated on Ubuntu 22.04; a default route is used instead
      routes:
        - to: default
          via: 192.168.45.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
    eth1:
      dhcp4: yes

To apply the changes, we executed the following:

sudo netplan apply;
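
To verify that the interfaces picked up the expected addresses, a quick check can be done as follows (assuming the same interface names as above):

ip -4 address show eth0;
ip -4 address show eth1;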

Update everything (the operating system and all packages)

Usually, it is a good idea to update your system before making significant changes to it:

sudo apt update -y; sudo apt upgrade -y; sudo apt autoremove -y;

Install docker via snap

In this setup, we did not use the docker version available in the Ubuntu repositories; instead, we went for the one from snap. To install it, we used the following commands:

sudo apt install snapd;
sudo snap install docker;
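
To verify that the snap installed correctly and that the docker daemon is running, something like the following can help (at this point, before the group changes described later, sudo is still needed):

sudo snap services docker;
sudo docker version;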

Increase network pool for docker daemon

To handle the following problem:

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

We modified the following file:

/var/snap/docker/current/config/daemon.json

using the command:

sudo nano /var/snap/docker/current/config/daemon.json;

and set the content to be as follows:

{
    "log-level":        "error",
    "storage-driver":   "overlay2",
    "default-address-pools": [
        {
            "base": "172.80.0.0/16",
            "size": 24
        },
        {
            "base": "172.90.0.0/16",
            "size": 24
        }
    ]
}

Please note that 172.80.0.0/16 and 172.90.0.0/16 fall outside the 172.16.0.0/12 private range, so they can overlap with public addresses; adjust the pools to your needs. We executed the following commands to restart the docker daemon and get the network changes applied:

sudo snap disable docker;
sudo snap enable docker;
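
To confirm that new networks are now allocated from the custom pools, a throw-away network can be created and inspected (pool-check is just a placeholder name):

sudo docker network create pool-check;
sudo docker network inspect pool-check --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}';
sudo docker network rm pool-check;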

Gave our user access to manage docker

We added our user to the docker group so that we could manage the docker daemon without sudo rights.

sudo addgroup --system docker;
sudo adduser $USER docker;
newgrp docker;
sudo snap disable docker;
sudo snap enable docker;
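
To verify that the group membership took effect, we could list our groups and run a docker command without sudo:

id -nG;
docker ps;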

After that, we made sure that the access rights to the volumes were correct:

sudo chown -R www-data:www-data /volumes/*;
sudo chown -R tux:tux /volumes/letsencrypt/ /volumes/reverse/private/;

Deploying

After we copied everything in place, we executed the following command to create our containers and start them with the appropriate networks and volumes:

export COMPOSE_HTTP_TIMEOUT=600;
docker-compose up -d --remove-orphans;

We had to increase the timeout as we were getting the following error:

ERROR: for container_a  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
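
Once the deployment went through, a quick sanity check that all containers came up could look like this:

docker-compose ps;
docker-compose logs --tail=50;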

Updating the databases and performing any repairs

First, we connected to a terminal of the database container using the following command:

docker exec -it mariadb_c1 /bin/bash;

From there, we executed the following commands:

mysql_upgrade --user=root --password;
mysqlcheck -p -o --all-databases;
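
Alternatively, the same maintenance commands can be run without opening an interactive shell first, assuming the same container name:

docker exec -it mariadb_c1 mysql_upgrade --user=root --password;
docker exec -it mariadb_c1 mysqlcheck -p -o --all-databases;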

Bulk / Batch stopping docker containers

The following commands will help you stop many docker containers simultaneously. Of course, you can swap the stop command for another one, for example rm, or whatever suits your needs.

You need to keep in mind that if you have dependencies between containers, you might need to execute the commands below more than once; see the sketch at the end of this section for a way to automate the retries.

Stop all docker containers.

docker container stop $(docker container ls -q);
#The inner command creates a list of all running containers.
#Using the -q parameter, we only get back the container IDs and not all the information about them.
#Then, stop will stop each container one by one.

Stop specific docker containers using a filter on their name.

docker container stop $(docker container ls -q --filter name=_web);
#The inner command finds all running containers whose names contain _web.
#Using the -q parameter, we only get back the container IDs and not all the information about them.
#Then, stop will stop each container one by one.
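
As mentioned above, dependencies between containers may require several passes. A minimal sketch that repeats the stop until no containers are left running:

while [ -n "$(docker container ls -q)" ]; do
  docker container stop $(docker container ls -q);
done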

A personal note

Check the system for things you might need to configure, like a crontab or other services.

A script that handles privileges on the docker volumes

To avoid access problems with the various external volumes, we created the mysql user and group on the host machine as follows:

sudo groupadd -g 999 mysql;
sudo useradd -u 999 -g mysql mysql;
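
A quick check that the IDs were created as expected (999 matches the mysql user in the official MariaDB images we were using, but verify against your own image):

id mysql;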

Then we executed the following to repair ownership issues with our containers. Please note that this script is custom to a particular installation and might not meet your needs.

#!/bin/bash

sudo chown -R www-data:www-data ~/volumes/*;
sudo chown -R bob:bob ~/volumes/letsencrypt/ ~/volumes/reverse/private/;
find ~/volumes/ -maxdepth 2 -type d -name mysql -exec sudo chown -R mysql:mysql '{}' \;;

Using a CSV input file, find all documents that contain any of the items in a cell of a column

The following code uses one column of a CSV file as input; for each element in the column, it performs a full-text search in a folder to find all files that contain that element.

#!/bin/bash

#Execution parameters
# 1 - the folder to look in for the element
# 2 - the input file that contains the search terms
# 3 - the column of interest
# 4 - the delimiter to use to find the column
# e.g. ./searchEachElement.sh ./2\ Print/ book.csv 5 ','

folder="$1";
input="$2";
column="$3";
delimiter="$4"

while read -r line; do
  needle=`echo $line | cut -d "$delimiter" -f "$column"`; 
  echo ">>> $needle"
  find "$folder" -type f -exec grep "$needle" -s -l '{}' \;
done < "$input";



ImageMagick: apply blur to photo using a black and white mask

Recently, we were trying to apply a blur to the frames of a video using a custom mask. Our needs were simple enough to be described using geometric shapes, so we created the following image (blur.png) as a template for the blurring effect:

The above mask applies a blur effect to all black pixels and leaves all white pixels in the original image intact.
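
Since the mask is a plain black and white image, it can also be produced directly with ImageMagick. Below is a minimal sketch that, assuming 1920x1080 frames, keeps a centered rectangle intact and marks everything else for blurring (the geometry of our actual mask was different):

convert -size 1920x1080 xc:black -fill white -draw "rectangle 480,270 1440,810" blur.png;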

The command that we used was the following:

convert "${FILE}" -mask blur.png -blur 0x8 +mask "blur/${FILE}";

This command creates a new copy of the input file and places it into the folder named blur, so be sure to make the folder before using the above command (e.g., using the command mkdir blur).

Parameters and other information

  • -mask this flag associates the given filename with the mask of the command.
  • -blur defines the geometry that is used to reduce image noise and detail levels.
    To increase the blurriness, you can increase the sigma value (the 8 in 0x8).
  • +mask the ‘plus’ form of the operator removes the mask from the input image.

The version of convert that we used for this example was the following:

Version: ImageMagick 6.9.10-23 Q16 x86_64 20190101 https://imagemagick.org
Copyright: © 1999-2019 ImageMagick Studio LLC

Below is a result frame from a video that we processed:

Additional material

To apply it to all video frames in the folder, we used the following command to make our life easier:

find . -maxdepth 1 -type f -name "*.ppm" -exec bash -c 'FILE="$1"; convert "${FILE}" -mask blur.png -blur 0x8 +mask "blur/${FILE}";' _ '{}' \;

The above command finds all frames in the current folder and executes the convert command described above on each of them. Since FFmpeg exported the frames as PPM files, we used that extension to filter our search. The blur folder is in the same folder as the original images. To avoid processing the pictures in that folder again, we set the -maxdepth parameter of find, which prevents it from descending into child folders of the one we are working in.


Bash: Problem with reading files with spaces in the name using a for loop

Recently we were working on a bash script that was supposed to find and process some files that matched certain criteria. The script would process the files one by one, and the criteria would be matched using the find command. To implement our solution, we fed the results of find into a for loop, in an attempt to keep the code simple and human readable.

Our original code was the following:
(do not use it, see explanation below)

for file in `find $search_path -type f -name '*.kml'`; do
  # Formatting KML file to be human friendly.
  xmllint --format "$file" > "$output_path/$file";
done

Soon we realized that we had a very nasty bug: the way we had formatted the command, filenames that contained spaces were broken into multiple for-loop entries, and thus we would get incorrect filenames back to process.

To solve this issue, we needed a way to force our loop to read the results of find one line at a time instead of one word at a time. The solution we used in the end was fairly different from the original code, as it had the following significant changes:

  • the results of the find command were piped into the loop
  • the loop was no longer a for loop; a while loop was used instead
  • it used the read command that reads one line at a time to fill in the filename variable
    (the -r parameter does not allow backslashes to escape any characters)

Solution

find "$search_path" -type f -name '*.kml' |
while read -r file; do
  # Formatting KML file to be human friendly.
  xmllint --format "$file" > "$output_path/$file";
done
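
For completeness, a stricter variant uses null-delimited output, which also survives newlines in filenames and avoids running the loop in a pipeline subshell (a sketch, assuming bash):

while IFS= read -r -d '' file; do
  # Formatting KML file to be human friendly.
  xmllint --format "$file" > "$output_path/$file";
done < <(find "$search_path" -type f -name '*.kml' -print0)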