GNU/Linux


Create an SSH tunnel for HTTP web server proxy

Once upon a time, in a kingdom of computers and networks, there lived a brave knight named “ssh”. He was known throughout the land for his bravery and cunning abilities to securely transport data between two distant lands.

One day, a young prince came to the knight with a request. The prince had a precious website that was housed in a remote castle, accessible only by a specific host known as “remotehost”. He wanted his people to be able to visit the website, but the path was treacherous and insecure.

The prince asked the knight if he could help him. The knight thought for a moment and then said, “Fear not, young prince! I can help you. I shall use my magical command ‘ssh -L 80:remotehost:80 user@myserver’ to create a secure pathway for your people to visit the website.”

The prince was overjoyed and asked the knight to explain how it worked.

“The ‘-L’ flag stands for Local Forwarding. It creates a tunnel between the local computer and the remote server, which we shall call ‘myserver’. This tunnel shall forward all requests from the local port 80 to the remote host ‘remotehost’ on port 80,” explained the knight.

“And ‘user@myserver’?”, asked the prince.

“Ah, yes. Those are the credentials of the user that we shall use to log in to the remote server ‘myserver’. They shall ensure that the communication between your local computer and the remote host is secure and protected,” the knight replied with a nod.

The prince was grateful and thanked the knight for his help. The knight then used his magical command and created a secure pathway for the prince’s people to visit the website, which they did happily ever after.

And that, dear reader, is the story of the command “ssh -L 80:remotehost:80 user@myserver”.

ssh -L 80:remotehost:80 user@myserver;

The command ssh -L 80:remotehost:80 user@myserver is an example of using the ssh utility to create a secure shell connection to a remote server. The command also establishes a local port forward, which forwards all incoming traffic on the local port 80 to the remote host remotehost on port 80.

ssh (Secure Shell) is a protocol for securely accessing a remote computer. The basic usage of ssh is to log in to a remote server using a username and password or an SSH key. The ssh command allows you to securely log in to a remote server, execute commands on the remote server, and transfer files between your local computer and the remote server.

In this particular command, the -L flag is used to specify a local port forward. A local port forward is a way of forwarding traffic from a local port to a remote host and port. In this case, the traffic is being forwarded from the local port 80 to the remote host remotehost on port 80.

The user@myserver part of the command specifies the credentials used to log in to the remote server myserver. The user is the username and myserver is the hostname or IP address of the remote server. The combination of the username and remote server information allows ssh to securely log in to the remote server.

Once the secure shell connection has been established and the local port forward has been created, any traffic sent to the local port 80 will be forwarded to the remote host remotehost on port 80. This allows the local computer to access services on the remote host as if they were running on the local computer.

In summary, the ssh -L 80:remotehost:80 user@myserver command is an example of using the ssh utility to create a secure shell connection to a remote server and establish a local port forward. The local port forward allows the local computer to access services on the remote host as if they were running on the local computer.
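
For reference, here is the general shape of the flag as documented in ssh(1), together with two options that are often combined with it. This is a minimal sketch; remotehost and myserver are the same placeholders used throughout this post:

# General form of local forwarding, per ssh(1):
#   ssh -L [bind_address:]local_port:remote_host:remote_port user@server
# -N skips executing a remote command and -f sends ssh to the background after authentication:
ssh -N -f -L 8080:remotehost:80 user@myserver;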

ssh -L 80:remotehost:80 user@myserver;

Once the connection has been established using the command ssh -L 80:remotehost:80 user@myserver, you can access the website hosted on the remote host remotehost by browsing to http://localhost in your web browser.

Since the local port 80 has been forwarded to the remote host remotehost on port 80, all traffic sent to http://localhost will be forwarded to the remote host. This allows you to access the website hosted on the remote host as if it were running on your local computer.

Keep in mind that the secure shell connection created using the ssh command must be active and running in order to access the website hosted on the remote host. If the connection is closed or terminated, the website will no longer be accessible through the local port forward.
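
One practical note: ports below 1024 are privileged on GNU/Linux, so binding the local port 80 usually requires root. A hedged sketch of both approaches, using the same placeholders as above:

# Binding the local port 80 typically needs root privileges:
sudo ssh -L 80:remotehost:80 user@myserver;
# Alternatively, forward an unprivileged local port instead:
ssh -L 8080:remotehost:80 user@myserver;
# Then, from another terminal, verify that the forward works:
curl http://localhost:8080/;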


Using minicom to connect to Cisco Console

sudo minicom --device /dev/ttyUSB0 --baudrate 9600 --8bit;

sudo is a command that allows the user to run another command with superuser privileges.

minicom is a terminal emulation program that allows the user to communicate with a serial device.
The --device flag followed by /dev/ttyUSB0 specifies the serial device that minicom should use for communication.
The --baudrate flag followed by 9600 specifies the baud rate (i.e. the speed at which data is transmitted) of the serial connection.
The --8bit flag sets the number of data bits to 8.

So this command runs minicom as a superuser, connecting to the device at “/dev/ttyUSB0” with a baud rate of 9600 and 8-bit data.

In addition to the command line arguments above, we had to ensure that flow control (both hardware and software) was disabled and that no parity was used.
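
minicom exposes these settings in its interactive configuration menu (Ctrl-A followed by O, under “Serial port setup”). As an alternative sketch, the same 9600 8N1 configuration with flow control disabled can be applied to the device beforehand with stty; /dev/ttyUSB0 is the device from the command above:

# 9600 baud, 8 data bits, no parity, 1 stop bit, no hardware or software flow control:
sudo stty -F /dev/ttyUSB0 9600 cs8 -parenb -cstopb -crtscts -ixon -ixoff;
# Print the current settings of the device to verify:
sudo stty -F /dev/ttyUSB0 -a;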


Rough notes on setting up an Ubuntu 22.04LTS server with docker and snap

IP allocations

First, we set up a static IP on the network device that would handle all external traffic and DHCP on the network device that connects to the management network, which is used for maintenance.

To do so, we created the following file:

/etc/netplan/01-netcfg.yaml

using the following command:

sudo nano /etc/netplan/01-netcfg.yaml;

and added the following content to it:

# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.45.13/24]
      gateway4: 192.168.45.1
      nameservers:
          addresses: [1.1.1.1,8.8.8.8]
    eth1:
      dhcp4: yes

To apply the changes, we executed the following:

sudo netplan apply;
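
When working on a remote machine, a safer variant is netplan try, which applies the configuration and automatically rolls it back unless confirmed:

sudo netplan try;
# Press ENTER to keep the new configuration; otherwise it reverts after the timeout (120 seconds by default).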

Update everything (the operating system and all packages)

Usually, it is a good idea to update your system before making significant changes to it:

sudo apt update -y; sudo apt upgrade -y; sudo apt autoremove -y;

Install docker via snap

In this setup, we did not use the docker version available in the Ubuntu repositories; we went for the one from snap. To install it, we used the following commands:

sudo apt install snapd;
sudo snap install docker;
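
To confirm that the snap-packaged daemon came up, a quick check (the version output will vary per installation):

sudo snap services docker;
sudo docker version;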

Increase network pool for docker daemon

To handle the following problem:

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

We modified the following file

/var/snap/docker/current/config/daemon.json

using the command:

sudo nano /var/snap/docker/current/config/daemon.json;

and set the content to be as follows:

{
    "log-level":        "error",
    "storage-driver":   "overlay2",
    "default-address-pools": [
        {
            "base": "172.80.0.0/16",
            "size": 24
        },
        {
            "base": "172.90.0.0/16",
            "size": 24
        }
    ]
}

We executed the following command to restart the docker daemon and get the network changes applied:

sudo snap disable docker;
sudo snap enable docker;
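
To verify that the daemon picked up the new pools, one hedged check is to create a throwaway network and inspect the subnet it receives; test-pool is a hypothetical name:

sudo docker network create test-pool;
# Should print a subnet taken from 172.80.0.0/16 or 172.90.0.0/16, e.g. 172.80.0.0/24:
sudo docker network inspect test-pool --format '{{(index .IPAM.Config 0).Subnet}}';
sudo docker network rm test-pool;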

Give our user access to manage docker

We added our user to the docker group so that we could manage the docker daemon without sudo rights.

sudo addgroup --system docker;
sudo adduser $USER docker;
newgrp docker;
sudo snap disable docker;
sudo snap enable docker;
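
A quick check that the group change took effect (output will vary):

# Should list running containers without permission errors and without sudo:
docker ps;
# The docker group should appear in the current user's groups:
id $USER;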

After that, we made sure that the access rights to the volumes were correct:

sudo chown -R www-data:www-data /volumes/*;
sudo chown -R tux:tux /volumes/letsencrypt/ /volumes/reverse/private/;

Deploying

After we copied everything into place, we executed the following command to create our containers and start them with the appropriate networks and volumes:

export COMPOSE_HTTP_TIMEOUT=600;
docker-compose up -d --remove-orphans;

We had to increase the timeout as we were getting the following error:

ERROR: for container_a  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
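
Instead of exporting the variable in every shell, docker-compose also reads it from a .env file placed next to docker-compose.yml; a minimal sketch of that file:

# .env in the same directory as docker-compose.yml:
COMPOSE_HTTP_TIMEOUT=600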

Updating the databases and performing any repairs

First, we connected to a terminal of the database container using the following command:

docker exec -it mariadb_c1 /bin/bash;

From there, we executed the following commands:

mysql_upgrade --user=root --password;
mysqlcheck -p -o --all-databases;
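
The same two steps can also be run directly, without opening an interactive shell first (mariadb_c1 is the container name from above):

docker exec -it mariadb_c1 mysql_upgrade --user=root --password;
docker exec -it mariadb_c1 mysqlcheck -p -o --all-databases;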

Bulk / Batch stopping docker containers

The following commands will help you stop many docker containers simultaneously. Of course, you can replace the stop command with another one, for example rm, or whatever suits your needs.

You need to keep in mind that if you have dependencies between containers, you might need to execute the commands below more than once.

Stop all docker containers.

docker container stop $(docker container ls -q);
# This command creates a list of all running containers.
# Using the -q parameter, we only get back the container IDs and not all the information about them.
# Then it will stop each container one by one.

Stop specific docker containers using a filter on their name.

docker container stop $(docker container ls -q --filter name=_web);
# This command finds all running containers whose name contains _web.
# Using the -q parameter, we only get back the container IDs and not all the information about them.
# Then it will stop each container one by one.
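
Following the same pattern, a hedged example for removal: use -a to include stopped containers in the listing and filter them by status before deleting:

# Remove every container that has exited; running containers are not matched by the filter:
docker container rm $(docker container ls -aq --filter status=exited);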

A personal note

Check the system for things you might need to configure, like a crontab or other services.

A script that handles privileges on the docker volumes

To avoid access problems with the various external volumes, we created the mysql user and group on the host machine as follows:

sudo groupadd -g 999 mysql;
sudo useradd -u 999 -g mysql mysql;
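
A quick check that the IDs match what the containerized database expects (999 is the uid/gid used by the images in this particular installation; adjust if yours differ):

# Should print: uid=999(mysql) gid=999(mysql) groups=999(mysql)
id mysql;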

Then we executed the following to repair ownership issues with our containers. Please note that this script is custom to a particular installation and might not meet your needs.

#!/bin/bash

sudo chown -R www-data:www-data ~/volumes/*;
sudo chown -R bob:bob ~/volumes/letsencrypt/ ~/volumes/reverse/private/;
find ~/volumes/ -maxdepth 2 -type d -name mysql -exec sudo chown -R mysql:mysql '{}' \;;

Extend LVM space to the rest of the free space on the disk

Recently, we formatted a server with Ubuntu 22.04 LTS. While selecting the disk settings, we selected the encrypted LVM partition scheme, and even though we selected the whole disk, we did not notice that the LVM would only allocate, by default, 100GB out of the 600GB available on the raid volume.

So, we proceeded with the installation, and at some point, we noticed that we had run out of space, which should not have happened.

Using the command df -h we quickly spotted the problem:

$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
tmpfs                      3,2G  3,9M  3,2G   1% /run
/dev/mapper/vgubuntu-root  100G   83G   17G  83% /
tmpfs                       16G   40M   16G   1% /dev/shm
tmpfs                      5,0M  4,0K  5,0M   1% /run/lock
/dev/sda5                  703M  257M  395M  40% /boot
/dev/sda1                  511M   24K  511M   1% /boot/efi
tmpfs                       16G     0   16G   0% /run/qemu
tmpfs                      3,2G  156K  3,2G   1% /run/user/1000

/dev/mapper/vgubuntu-root was only 100GB instead of the 600GB that we would expect it to be.

Using the command vgdisplay, we verified that most of the space in the volume group was still unallocated and that the space assigned to the logical volume was not what we wanted.

To fix the problem, we issued the following commands:

sudo lvextend -l +100%FREE /dev/vgubuntu/root;
sudo resize2fs /dev/mapper/vgubuntu-root;

lvextend instructed our logical volume to consume all the remaining free space in its volume group.

Then resize2fs grew the filesystem to use all the space available on the logical volume.
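
To confirm the result, a final check using the names from the df output above:

# The volume group should report no free extents left:
sudo vgdisplay vgubuntu;
# The root filesystem should now show the full capacity:
df -h /;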