Bind for 0.0.0.0:443 failed: port is already allocated

On one of our Docker installations, we updated the images used by our containers with the following command:

docker images --format "{{.Repository}}:{{.Tag}}" | grep ':latest' | xargs -L1 docker pull;

Then we tried to update our containers, as usual, using docker-compose.

export COMPOSE_HTTP_TIMEOUT=180; # We extend the timeout to ensure there is enough time for all containers to start
docker-compose up -d --remove-orphans;

Unfortunately, we got the following error:

export COMPOSE_HTTP_TIMEOUT=180;
docker-compose up -d --remove-orphans;

Starting entry ... 
Starting entry ... error

ERROR: for entry  Cannot start service entry: driver failed programming external connectivity on endpoint entry (d3a5d95f55c4e872801e92b1f32d9693553bd553c414a371b8ba903cb48c2bd5): Bind for 0.0.0.0:443 failed: port is already allocated

ERROR: for entry  Cannot start service entry: driver failed programming external connectivity on endpoint entry (d3a5d95f55c4e872801e92b1f32d9693553bd553c414a371b8ba903cb48c2bd5): Bind for 0.0.0.0:443 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

We used the docker container ls command to check which container was hoarding port 443, but none was. Because of this, we assumed that Docker had run into a bug. The first (and, as it turned out, only) step we took, which solved the problem, was to restart the Docker service as follows:

sudo service docker restart;

This command was enough to fix our problem without messing with docker further.
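
For reference, the check itself can be done with commands similar to the ones below (the exact format flags are only illustrative):

#List running containers along with their published ports and look for port 443.
docker container ls --format "table {{.Names}}\t{{.Ports}}" | grep '443';
#Check whether any other process on the host has port 443 bound (ss is part of iproute2).
sudo ss -tlnp | grep ':443';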


ERROR: for container_a UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

There is a Docker server that we have access to on which, probably due to poor planning, we have put way too many containers. The server does not have SSD disks, so whenever there are too many I/O operations it becomes unresponsive. When we mass-update all containers by pulling the new images and then issuing a fresh docker-compose up, we get a lot of time-out errors.

The commands we use to update the images and recreate our containers using the new images are the following:
(Please note that these commands need to be executed from the folder where the docker-compose.yml file resides)

#Update all docker images that have the 'latest' tag
docker images --format "{{.Repository}}:{{.Tag}}" | grep ':latest' | xargs -L1 docker pull;
#Rebuild all containers using the new images.
docker-compose up -d;

After executing the second command, we often get many copies of the following error:

ERROR: for container_a  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

This error indicates that the recreate command waited too long for the Docker daemon to respond, without success. From the message we can see that it gave up after 60 seconds.

At the end of the output, we get the following information and advice:

ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Following the advice, we used the following command to override the value of the COMPOSE_HTTP_TIMEOUT variable with a larger number.

#Increase timeout period to 120 seconds.
export COMPOSE_HTTP_TIMEOUT=120;
#Rebuild all containers using the new images.
docker-compose up -d;

Doing so, we were able to recreate all containers without having to reissue the up command multiple times.

Sidenote

This server really does have a lot of containers; we had to create the file /etc/docker/daemon.json so that we would have enough network address space to handle all the bridges and sub-networks.

The contents of /etc/docker/daemon.json are:

{
  "default-address-pools": [
    {
      "base": "172.80.0.0/16",
      "size": 24
    },
    {
      "base": "172.90.0.0/16",
      "size": 24
    }
  ]
}

The above configuration solved the following problem for us:

ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
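
Please note that, for changes in /etc/docker/daemon.json to take effect, the Docker daemon needs to be restarted, for example as follows:

#Restart the Docker service so that it picks up the new daemon.json configuration.
sudo service docker restart;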

How to update all pulled Docker images that are tagged as latest

Recently, we moved a client to Docker and we needed to give them a way to automagically update all “latest” Docker images.
Since Docker does not have a single command to update all pulled images, we used the following one-liner to update them all at once:


docker images --format "{{.Repository}}:{{.Tag}}" | grep ':latest' | xargs -L1 docker pull;

The above command will:

  1. Print all images in the format RepositoryName:Tag
  2. Then filter the lines, keeping only those that end with the suffix :latest (which is the tag we are interested in)
  3. Finally, feed each result (one per line) via xargs -L1 to the docker pull command, as illustrated below
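
For example, the output of the first two stages of the pipeline would look something like the following (the image names shown are hypothetical):

docker images --format "{{.Repository}}:{{.Tag}}" | grep ':latest';
#Hypothetical output:
#nginx:latest
#jwilder/nginx-proxy:latest
#jrcs/letsencrypt-nginx-proxy-companion:latest
#xargs -L1 will then execute 'docker pull' once for each of the lines above.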

Please note that you cannot really update an existing container in place using Docker commands; what you actually need to do is:

  1. Stop the container whose image was updated
  2. Delete it
  3. Recreate it using the parameters of the previous container

As you can understand, it is good practice to keep all of your data in volumes outside the container to make the update process easy.

For example, below you will find the commands using which we updated the jwilder/nginx-proxy and the jrcs/letsencrypt-nginx-proxy-companion images along with the two containers that were using them:


docker container stop nginx-proxy nginx-letsencrypt;
docker container rm nginx-proxy nginx-letsencrypt;
docker run -d -p 443:443 \
     --name nginx-proxy \
     --net reverse-proxy \
     -v $HOME/certs:/etc/nginx/certs:ro \
     -v /etc/nginx/vhost.d \
     -v /usr/share/nginx/html \
     -v /var/run/docker.sock:/tmp/docker.sock:ro \
     -v $HOME/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro \
     --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true \
     jwilder/nginx-proxy:latest;

docker run -d \
     --name nginx-letsencrypt \
     --net reverse-proxy \
     --volumes-from nginx-proxy \
     -v $HOME/certs:/etc/nginx/certs:rw \
     -v /var/run/docker.sock:/var/run/docker.sock:ro \
     jrcs/letsencrypt-nginx-proxy-companion:latest;


How do ‘git pull’ and ‘git fetch’ differ?

In simple terms, the difference between the two Git commands is that git pull is composed of a git fetch followed by a git merge.

When you use git pull, Git will automatically merge any pulled commits into the branch you are currently working on, without letting you review them first. You will be prompted only if a conflict is found during the automatic merge.

When you use git fetch, Git retrieves any commits from the target branch that do not exist in your current branch and stores them in your local repository. However, it will not merge them into your current branch. To integrate the new commits into your current branch, you need to run git merge manually.
This command is particularly useful if you need to keep your repository up to date but are working on something that might break if you merge the new changes in.
For example, if you are about to go offline and need those commits available to you but cannot merge them at the time, git fetch will download the new commits to your machine without affecting your current code. Later, you will be able to merge the new commits into your branch as and when you please.
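
Below is a minimal sketch of the fetch-review-merge workflow described above (assuming that the remote is called origin and that the branch of interest is main):

#Download any new commits from the remote without touching your working branch.
git fetch origin;
#Review the commits that exist on the remote branch but not in your current branch.
git log HEAD..origin/main;
#When you are ready, integrate the new commits into your current branch.
git merge origin/main;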