TensorFlow


Using Neural Style Transfer on videos

We decided to revisit some old work on Neural Style Transfer and TensorFlow. Using the sample code for Fast Neural Style Transfer from this page (https://www.tensorflow.org/tutorials/generative/style_transfer#fast_style_transfer_using_tf-hub) and the image stylization model from here (https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2), we created a tool that applies styles to whole videos.

The goal was to simplify the procedure of restyling media. The input can be a single image, a series of images, a video, or a group of videos.

This tool (for which the code is below) comprises a bash script and a Python script.
At a high level, it reads all videos from one folder and all styles from another. It then recreates every video in every style, producing one new video per video/style combination.
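The bash script expects the source videos under ./input/videos and the style images under ./input/styles; every other working folder it creates itself with mkdir -p. A small bootstrap sketch for the layout (our own convenience, standard library only):

import os

# Hypothetical one-time setup: create the two input folders that the bash
# script expects to find already populated; it creates the rest itself.
for folder in ("input/videos", "input/styles"):
    os.makedirs(folder, exist_ok=True)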

Hardware

Please note that we enabled CUDA and GPU processing on our computer before using the tool. Without them, execution takes dramatically longer, because a general-purpose CPU cannot perform the enormous number of mathematical operations involved as fast as a GPU.
To enable CUDA, we followed the steps found in these notes: https://bytefreaks.net/gnulinux/rough-notes-on-how-to-install-cuda-on-an-ubuntu-20-04lts

Software

Conda / Anaconda

We installed and activated anaconda on an Ubuntu 20.04LTS desktop. To do so, we installed the following dependencies from the repositories:

sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6;

Then, we downloaded the 64-Bit (x86) Installer from (https://www.anaconda.com/products/individual#linux).

Using a terminal, we followed the instructions here (https://docs.anaconda.com/anaconda/install/linux/) and performed the installation.

Python environment and OpenCV for Python

Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python 3.9 (as highlighted here https://www.anaconda.com/products/individual#linux) and the tensorflow, matplotlib, and tensorflow_hub packages.

source ~/anaconda3/bin/activate;
conda create --yes --name FastStyleTransfer python=3.9;
conda activate FastStyleTransfer;
pip install --upgrade pip;
pip install tensorflow matplotlib tensorflow_hub;
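One practical note: hub.load downloads the stylization model on first use and caches it under a temporary directory, which is lost on reboot. Since the bash script further below invokes the Python script once per video/style combination, pointing TFHUB_CACHE_DIR (a documented tensorflow_hub environment variable) at a persistent path avoids repeated downloads; the cache location here is our own choice:

import os

# Point the tensorflow_hub cache at a persistent location (our choice of path);
# the default lives under /tmp and is wiped on reboot.
os.environ['TFHUB_CACHE_DIR'] = os.path.expanduser('~/.cache/tfhub_modules')

import tensorflow_hub as hub

# The first call downloads the model; subsequent calls reuse the cache.
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')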

faster.py

import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

from os import listdir
from os.path import isfile, join

import argparse

print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))

# Parse the command-line arguments; all of them are required.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--input",
    type=str,
    required=True,
    help="The directory that contains the input video frames.")
parser.add_argument(
    "--output",
    type=str,
    required=True,
    help="The directory that will contain the output video frames.")
parser.add_argument(
    "--style",
    type=str,
    required=True,
    help="The location of the style frame.")


if __name__ == '__main__':
    
    args = parser.parse_args()
    input_path = args.input + '/'
    output_path = args.output + '/'
    # List all files from the input directory. This directory should contain at least one image/video frame.
    onlyfiles = [f for f in listdir(input_path) if isfile(join(input_path, f))]

    # Loading the input style image.
    style_image_path = args.style
    style_image = plt.imread(style_image_path)

    # Convert to a float32 numpy array, add a batch dimension, and normalize to the range [0, 1].
    style_image = style_image.astype(np.float32)[np.newaxis, ...] / 255.

    # Optionally resize the images. It is recommended that the style image is about
    # 256 pixels (this size was used when training the style transfer network).
    # The content image can be any size.
    style_image = tf.image.resize(style_image, (256, 256))
    
    # Load image stylization module.
    # Enable the following line and disable the next two to load the stylization module from a local folder.
    # hub_module = hub.load('magenta_arbitrary-image-stylization-v1-256_2')
    # Disable the above line and enable these two to load the stylization module from the internet.
    hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
    hub_module = hub.load(hub_handle)
 

    for inputfile in onlyfiles:
        content_image_path = input_path + inputfile
        content_image = plt.imread(content_image_path)
        # Convert to a float32 numpy array, add a batch dimension, and normalize to the range [0, 1].
        content_image = content_image.astype(np.float32)[np.newaxis, ...] / 255.

        # Stylize image.
        outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
        stylized_image = outputs[0]

        # Saving stylized image to disk.
        content_outimage_path = output_path + inputfile
        tf.keras.utils.save_img(content_outimage_path, stylized_image[0])
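
A caveat we want to flag: plt.imread returns RGBA arrays for PNGs that carry an alpha channel and 2-D arrays for grayscale images, while the stylization module expects batched 3-channel float tensors in [0, 1]. If your styles or frames come in mixed formats, a normalization helper along these lines (our own addition, not part of the original script) can replace the astype/np.newaxis lines above:

import numpy as np

def to_model_input(img):
    """Hypothetical helper: coerce any plt.imread result to a 1 x H x W x 3 float32 array in [0, 1]."""
    img = img.astype(np.float32)
    if img.max() > 1.0:              # uint8-sourced images arrive as 0..255
        img /= 255.0
    if img.ndim == 2:                # grayscale: replicate to three channels
        img = np.stack([img] * 3, axis=-1)
    if img.shape[-1] == 4:           # RGBA: drop the alpha channel
        img = img[..., :3]
    return img[np.newaxis, ...]      # add the batch dimension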


faster.py can be invoked as follows:

python3 faster.py --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";

It requires the user to define:

  1. The folder that contains the input images.
  2. The folder where the stylized images will be saved. Please note that faster.py does not create this folder; it needs to exist before execution (see the snippet after this list).
  3. The path to the style image that will drive the neural style transfer.
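
If you call faster.py directly rather than through execute.sh (which runs mkdir -p for you), a guard like the following, placed right after output_path is assigned, avoids the missing-folder failure; this is our suggestion, using only the standard library:

import os

# Our addition: create the output folder if it does not already exist.
# (execute.sh achieves the same with mkdir -p before calling faster.py.)
os.makedirs(output_path, exist_ok=True)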

execute.sh

#!/bin/bash
#source ~/anaconda3/bin/activate;
#conda create --yes --name FastStyleTransfer python=3.9;
#conda activate FastStyleTransfer;
#pip install --upgrade pip;
#pip install tensorflow matplotlib tensorflow_hub;

source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;

input_videos="./input/videos/*";
input_styles="./input/styles/*";
input_frames="./input/frames";
input_audio="./input/audio";
output_frames="./output/frames";
output_videos="./output/videos";

# Loop on each video in the input folder.
for video in $input_videos;
do
  echo "$video";
  videoname=$(basename "$video");

  # Extract all frames from the video file and save them in a new folder using 8-digit numbers with zero padding in an incremental order.
  input_frames_folder="$input_frames/$videoname";
  mkdir -p "$input_frames_folder";
  ffmpeg -v quiet -i "$video" "$input_frames_folder/%08d.ppm";

  # Extract the audio stream from the video (mp3, or ogg for WebM input). We will need this audio later to add it to the final product.
  input_audio_folder="$input_audio/$videoname";
  mkdir -p "$input_audio_folder";

  audio="";
  # Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
  if [[ $videoname == *.webm ]]; then
    audio="$input_audio_folder/$videoname.ogg";
    ffmpeg -v quiet -i "$video" -vn -c:a libvorbis -y "$audio";  
  else
    audio="$input_audio_folder/$videoname.mp3";
    ffmpeg -v quiet -i "$video" -vn -c:a libmp3lame -y "$audio";
  fi

  # Retrieve the frame rate from the input video. We will need it to configure the final video later.
  frame_rate=`ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate "$video"`;
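  # Note: ffprobe reports r_frame_rate as a fraction, e.g. 30000/1001 for 29.97 fps;
  # the -framerate option of ffmpeg accepts this fractional form as-is.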

  # Loop on each image style from the input styles folder.
  for style in $input_styles;
  do
    echo "$style";
    stylename=$(basename "$style");
    output_frames_folder="$output_frames/$videoname/$stylename";
    mkdir -p "$output_frames_folder";

    # Stylize all frames using the input image and write all processed frames to the output folder.
    python3 faster.py --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";

    # Combine all stylized video frames and the exported audio into a new video file.
    output_videos_folder="$output_videos/$videoname/$stylename";
    mkdir -p "$output_videos_folder";
    ffmpeg -v quiet -framerate "$frame_rate" -i "$output_frames_folder/%08d.ppm" -i "$audio" -pix_fmt yuv420p -acodec copy -y "$output_videos_folder/$videoname";
    
    rm -rf "$output_frames_folder";
  done
  rm -rf "$output_frames/$videoname";
  rm -rf "$input_frames_folder";
  rm -rf "$input_audio_folder";
done

The above script does not accept parameters, but you should load the appropriate environment before calling it. For example:

source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;
./execute.sh;

Please note that this procedure consumes significant space on your hard drive; once you are done with a video, you should probably delete all data from the output folders.


Rough notes on how to install CUDA on Ubuntu 20.04 LTS

To anyone coming across this post, please note that Canonical does not officially support CUDA. Because it is not formally supported, you could face problems that we did not encounter while setting up our machine.

Recently, we wanted to use our GPU to execute various TensorFlow projects. One of our attempts required TensorFlow version 1 (specifically 1.15). That setup caused many problems, even though version 2 worked perfectly on the same machine; version 1 no longer seems to be supported correctly, and we recommend avoiding it unless you truly need it.

Finding out what graphics card we have

Before getting started, we executed the following command that gave us the model of our graphics card:

lspci | grep -i nvidia;
01:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] (rev a1)
01:00.1 Audio device: NVIDIA Corporation TU104 HD Audio Controller (rev a1)
01:00.2 USB controller: NVIDIA Corporation TU104 USB 3.1 Host Controller (rev a1)
01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU104 USB Type-C UCSI Controller (rev a1)

Installing dependencies and NVidia repositories

Then we installed the headers of our Linux kernel, added the NVIDIA repositories, and finally installed CUDA using apt-get.

sudo apt-get install linux-headers-$(uname -r);
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin;
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600;
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub;
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /";
sudo apt-get update;
sudo apt-get -y install cuda;
sudo apt-get install nvidia-gds;

After this step, we rebooted the computer to load the NVIDIA graphics driver.
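
After the reboot, nvidia-smi should list the card; from within Python, a quick check that TensorFlow can see the GPU (tf.config.list_physical_devices is the standard TF 2 API):

import tensorflow as tf

# A non-empty list confirms that the driver and the CUDA libraries are
# visible to TensorFlow; an empty list means execution falls back to the CPU.
print(tf.config.list_physical_devices('GPU'))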


Playing with Mask R-CNN on videos... again

Source code for the implementation that created this video will be uploaded soon.

A first attempt at using a pre-trained implementation of Mask R-CNN on Python 3, Keras, and TensorFlow. The model generates bounding boxes and segmentation masks for each instance of an object in each frame. It’s based on Feature Pyramid Network (FPN) and a ResNet101 backbone.
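
Until that code is published, here is a minimal sketch of the per-frame loop such a tool needs; detect_fn is a stub standing in for the actual Mask R-CNN inference (which is not shown in this post), while the cv2 calls are the real OpenCV API:

import cv2

def detect_fn(frame):
    # Stub: a real implementation would run Mask R-CNN inference here and
    # draw the returned boxes and masks; this stub passes the frame through.
    return frame

cap = cv2.VideoCapture('input.mp4')            # hypothetical path; 0 would open a camera
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:                                # end of stream
        break
    cv2.imshow('Mask R-CNN', detect_fn(frame))
    if cv2.waitKey(1) & 0xFF == ord('q'):      # press q to stop early
        break
cap.release()
cv2.destroyAllWindows()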

Setup

Conda / Anaconda

First of all, we installed and activated anaconda on an Ubuntu 20.04LTS desktop. To do so, we installed the following dependencies from the repositories:

sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6;

Then, we downloaded the 64-Bit (x86) Installer from (https://www.anaconda.com/products/individual#linux).

Using a terminal, we followed the instructions here (https://docs.anaconda.com/anaconda/install/linux/) and performed the installation.

Python environment and OpenCV for Python

Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python 3.9 (as highlighted here https://www.anaconda.com/products/individual#linux) and OpenCV for Python.

source ~/anaconda3/bin/activate;
conda create --name MaskRNN python=3.9;
conda activate MaskRNN;
pip install numpy opencv-python;

Problems that we did not anticipate

When we tried to execute our code in the virtual environment:

python3 main.py --video="/home/bob/Videos/Live @ Santa Claus Village 2021-11-13 12_12.mp4";

We got the following error:

Traceback (most recent call last):
  File "/home/bob/MaskRCNN/main.py", line 6, in <module>
    from cv2 import cv2
  File "/home/bob/anaconda3/envs/MaskRNN/lib/python3.9/site-packages/cv2/__init__.py", line 180, in <module>
    bootstrap()
  File "/home/bob/anaconda3/envs/MaskRNN/lib/python3.9/site-packages/cv2/__init__.py", line 152, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/home/bob/anaconda3/envs/MaskRNN/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

We realized that we were missing some additional dependencies for OpenCV, as our Ubuntu installation was minimal. To fix the issue, we installed the following package from the repositories:

sudo apt-get update;
sudo apt-get install -y python3-opencv;
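
After that, a quick check confirms that OpenCV imports cleanly inside the virtual environment:

import cv2

# If this prints a version instead of raising ImportError, libGL is found.
print(cv2.__version__)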

Installing TensorFlow 2 Object Detection on Ubuntu 18.04 LTS

Following are some rough notes on installing TensorFlow 2 Object Detection on Ubuntu 18.04 LTS.
We were following this guide (https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html), so we skip some steps.

We already had conda installed from an older attempt, so the following steps worked just fine.

conda create -n tensorflow pip python=3.8;
conda activate tensorflow;

We got an error with the following command, so we used pip3 instead of pip.

pip install --ignore-installed --upgrade tensorflow==2.2.0;
Command 'pip' not found, but there are 18 similar ones.
pip3 install --ignore-installed --upgrade tensorflow==2.2.0;

Executing the above gave us another error:

Collecting tensorflow==2.2.0
Could not find a version that satisfies the requirement tensorflow==2.2.0 (from versions: 0.12.1, 1.0.0, 1.0.1, 1.1.0rc0, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.4.1, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.8.0rc0, 1.8.0rc1, 1.8.0, 1.9.0rc0, 1.9.0rc1, 1.9.0rc2, 1.9.0, 1.10.0rc0, 1.10.0rc1, 1.10.0, 1.10.1, 1.11.0rc0, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.12.0rc0, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.2, 1.12.3, 1.13.0rc0, 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1)
No matching distribution found for tensorflow==2.2.0

To fix it, we upgraded pip using the following command.

python3 -m pip install --upgrade pip;

We then tried again; this installed most packages but gave a new error:

pip3 install --ignore-installed --upgrade tensorflow==2.2.0;
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
launchpadlib 1.10.6 requires testresources, which is not installed.
Successfully installed absl-py-0.11.0 astunparse-1.6.3 cachetools-4.2.1 certifi-2020.12.5 chardet-4.0.0 gast-0.3.3 google-auth-1.27.0 google-auth-oauthlib-0.4.2 google-pasta-0.2.0 grpcio-1.35.0 h5py-2.10.0 idna-2.10 importlib-metadata-3.4.0 keras-preprocessing-1.1.2 markdown-3.3.3 numpy-1.19.5 oauthlib-3.1.0 opt-einsum-3.3.0 protobuf-3.14.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.25.1 requests-oauthlib-1.3.0 rsa-4.7.1 scipy-1.4.1 setuptools-53.0.0 six-1.15.0 tensorboard-2.2.2 tensorboard-plugin-wit-1.8.0 tensorflow-2.2.0 tensorflow-estimator-2.2.0 termcolor-1.1.0 typing-extensions-3.7.4.3 urllib3-1.26.3 werkzeug-1.0.1 wheel-0.36.2 wrapt-1.12.1 zipp-3.4.0

To fix this error we used:

sudo apt install python3-testresources;

Then we tried the pip installation again, this time with success.

pip3 install --ignore-installed --upgrade tensorflow==2.2.0;
Successfully installed absl-py-0.11.0 astunparse-1.6.3 cachetools-4.2.1 certifi-2020.12.5 chardet-4.0.0 gast-0.3.3 google-auth-1.27.0 google-auth-oauthlib-0.4.2 google-pasta-0.2.0 grpcio-1.35.0 h5py-2.10.0 idna-2.10 importlib-metadata-3.4.0 keras-preprocessing-1.1.2 markdown-3.3.3 numpy-1.19.5 oauthlib-3.1.0 opt-einsum-3.3.0 protobuf-3.14.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.25.1 requests-oauthlib-1.3.0 rsa-4.7.1 scipy-1.4.1 setuptools-53.0.0 six-1.15.0 tensorboard-2.2.2 tensorboard-plugin-wit-1.8.0 tensorflow-2.2.0 tensorflow-estimator-2.2.0 termcolor-1.1.0 typing-extensions-3.7.4.3 urllib3-1.26.3 werkzeug-1.0.1 wheel-0.36.2 wrapt-1.12.1 zipp-3.4.0

We then executed the following to test the installation:

python3 -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))";

Then we proceeded to get the TensorFlow models:

mkdir ~/TensorFlow;
cd ~/TensorFlow;
git clone https://github.com/tensorflow/models;

We then downloaded the protobuf compiler (protoc) and extracted it to our home directory.
We added it to our PATH and used it to generate the Python code for the Object Detection protos:

export PATH="/home/bob/protoc-3.14.0-linux-x86_64:$PATH";
cd /home/bob/TensorFlow/models/research;
protoc object_detection/protos/*.proto --python_out=.;
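
protoc writes a *_pb2.py module next to each .proto file; importing one of them (pipeline_pb2 corresponds to pipeline.proto in object_detection/protos) is a quick way to confirm the generation step worked:

# Run from ~/TensorFlow/models/research after the protoc step.
from object_detection.protos import pipeline_pb2

print(pipeline_pb2.__name__)  # importing without an error confirms the generated code is in place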

Then we proceeded to the COCO installation:

pip3 install cython;

Installing cython first avoids the following error when building the COCO API:

gcc: error: pycocotools/_mask.c: No such file or directory

cd ~;
git clone https://github.com/cocodataset/cocoapi.git;
cd cocoapi/PythonAPI;
make;
cp -r pycocotools ~/TensorFlow/models/research/;

Finally, we proceeded to install the Object Detection API.

cd ~/TensorFlow/models/research/;
cp object_detection/packages/tf2/setup.py .;
python3 -m pip install .;

To test the installation we executed the following:

python3 object_detection/builders/model_builder_tf2_test.py;

We then downloaded the samples and executed the camera sample with success.

To check against a video instead of a camera, we changed the following line from:

cap = cv2.VideoCapture(0)

to

cap = cv2.VideoCapture('/home/bob/Desktop/a2/A01_20210210164306.mp4')
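
When switching to a file source, it is worth guarding against a bad path, since VideoCapture does not raise on failure; isOpened is the standard check:

import cv2

cap = cv2.VideoCapture('/home/bob/Desktop/a2/A01_20210210164306.mp4')
if not cap.isOpened():
    # VideoCapture fails silently on a bad path; check before reading frames.
    raise RuntimeError('Could not open the video file; check the path.')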