We decided to revisit some older work on Neural Style Transfer and TensorFlow. Using the sample code for Fast Style Transfer from this page https://www.tensorflow.org/tutorials/generative/style_transfer#fast_style_transfer_using_tf-hub and the image stylization model from here https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2, we created a tool.
The goal of our venture was to simplify the process of changing the style of media. The input can be a single image, a series of images, a video, or a group of videos.
The tool (whose code is listed below) comprises a bash script and a Python script.
At a high level, it reads all videos from one folder and all style images from another. It then recreates every video in every style, producing new videos from all combinations of the two.
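The tool expects a fixed folder layout for its inputs and outputs. The paths below are the ones used in the bash script further down; the script creates the intermediate folders itself, but creating the whole tree up front does no harm:

```shell
# Create the folder layout that the bash script expects.
# input/videos holds the source videos, input/styles the style images;
# input/frames and input/audio hold intermediate data,
# output/frames and output/videos receive the results.
mkdir -p input/videos input/styles input/frames input/audio;
mkdir -p output/frames output/videos;
```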
Hardware
Please note that we enabled CUDA and GPU processing on our computer before using the tool. Without them, execution would take dramatically longer, because a general-purpose CPU cannot perform the massive number of mathematical operations involved as fast as a GPU.
To enable CUDA, we followed the steps found in these notes: https://bytefreaks.net/gnulinux/rough-notes-on-how-to-install-cuda-on-an-ubuntu-20-04lts
Software
Conda / Anaconda
We installed and activated Anaconda on an Ubuntu 20.04 LTS desktop. To do so, we first installed the following dependencies from the repositories:
sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6;
Then, we downloaded the 64-bit (x86) installer from https://www.anaconda.com/products/individual#linux. Using a terminal, we followed the instructions here (https://docs.anaconda.com/anaconda/install/linux/) and performed the installation.
Python environment and OpenCV for Python
Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python version 3.9 (as highlighted here https://www.anaconda.com/products/individual#linux) and the Python packages tensorflow, matplotlib, and tensorflow_hub.
source ~/anaconda3/bin/activate;
conda create --yes --name FastStyleTransfer python=3.9;
conda activate FastStyleTransfer;
pip install --upgrade pip;
pip install tensorflow matplotlib tensorflow_hub;
faster.py
import argparse
from os import listdir
from os.path import isfile, join

import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))

# Parse the command line arguments, making sure all of them are mandatory/required.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--input",
    type=str,
    required=True,
    help="The directory that contains the input video frames.")
parser.add_argument(
    "--output",
    type=str,
    required=True,
    help="The directory that will contain the output video frames.")
parser.add_argument(
    "--style",
    type=str,
    required=True,
    help="The location of the style image.")

if __name__ == '__main__':
    args = parser.parse_args()
    input_path = args.input + '/'
    output_path = args.output + '/'

    # List all files in the input directory. This directory should contain at least one image/video frame.
    onlyfiles = [f for f in listdir(input_path) if isfile(join(input_path, f))]

    # Load the input style image.
    style_image_path = args.style
    style_image = plt.imread(style_image_path)
    # Convert to a float32 numpy array, add a batch dimension, and normalize to the range [0, 1].
    style_image = style_image.astype(np.float32)[np.newaxis, ...] / 255.
    # Optionally resize the images. It is recommended that the style image is about
    # 256 pixels (this size was used when training the style transfer network).
    # The content image can be any size.
    style_image = tf.image.resize(style_image, (256, 256))

    # Load the image stylization module.
    # Enable the following line and disable the next two to load the stylization module from a local folder.
    # hub_module = hub.load('magenta_arbitrary-image-stylization-v1-256_2')
    # Disable the above line and enable these two to load the stylization module from the internet.
    hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
    hub_module = hub.load(hub_handle)

    for inputfile in onlyfiles:
        content_image_path = input_path + inputfile
        content_image = plt.imread(content_image_path)
        # Convert to a float32 numpy array, add a batch dimension, and normalize to the range [0, 1].
        content_image = content_image.astype(np.float32)[np.newaxis, ...] / 255.
        # Stylize the image.
        outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
        stylized_image = outputs[0]
        # Save the stylized image to disk.
        content_outimage_path = output_path + inputfile
        tf.keras.utils.save_img(content_outimage_path, stylized_image[0])
The above code can be invoked as follows:
python3 faster.py --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";
It requires the user to define:
- The folder that contains all input images.
- The folder where the stylized images will be saved. Please note that this folder must be created by the user before the execution.
- The path to the image that will be used as the style input for the neural style transfer.
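For reference, the preprocessing that faster.py applies to every image (conversion to float32, an added batch dimension, and normalization to [0, 1]) can be demonstrated on a synthetic image. The sketch below uses only NumPy, with a random array standing in for the output of plt.imread:

```python
import numpy as np

# A synthetic 300x400 RGB "image" of uint8 values, standing in for plt.imread's output.
image = np.random.randint(0, 256, size=(300, 400, 3), dtype=np.uint8)

# The same preprocessing as in faster.py: float32, batch dimension, [0, 1] range.
batch = image.astype(np.float32)[np.newaxis, ...] / 255.

print(batch.shape)  # (1, 300, 400, 3)
print(batch.dtype)  # float32
print(0.0 <= batch.min() and batch.max() <= 1.0)  # True
```

The batch dimension is what the TensorFlow Hub module expects: it operates on batches of images, even when the batch holds a single frame.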
execute.sh
#!/bin/bash

# One-time setup (already performed in the steps above):
#source ~/anaconda3/bin/activate;
#conda create --yes --name FastStyleTransfer python=3.9;
#conda activate FastStyleTransfer;
#pip install --upgrade pip;
#pip install tensorflow matplotlib tensorflow_hub;

source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;

input_videos="./input/videos/*";
input_styles="./input/styles/*";
input_frames="./input/frames";
input_audio="./input/audio";
output_frames="./output/frames";
output_videos="./output/videos";

# Loop over each video in the input folder.
for video in $input_videos;
do
  echo "$video";
  videoname=$(basename "$video");

  # Extract all frames from the video file and save them in a new folder, numbered incrementally with zero-padded 8-digit names.
  input_frames_folder="$input_frames/$videoname";
  mkdir -p "$input_frames_folder";
  ffmpeg -v quiet -i "$video" "$input_frames_folder/%08d.ppm";

  # Extract the audio track from the video. We will need it later to add it to the final product.
  input_audio_folder="$input_audio/$videoname";
  mkdir -p "$input_audio_folder";
  audio="";
  # Only VP8, VP9, or AV1 video, Vorbis or Opus audio, and WebVTT subtitles are supported for WebM.
  if [[ $videoname == *.webm ]]; then
    audio="$input_audio_folder/$videoname.ogg";
    ffmpeg -v quiet -i "$video" -vn -c:a libvorbis -y "$audio";
  else
    audio="$input_audio_folder/$videoname.mp3";
    ffmpeg -v quiet -i "$video" -vn -c:a libmp3lame -y "$audio";
  fi

  # Retrieve the frame rate of the input video. We will need it to configure the final video later.
  frame_rate=$(ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate "$video");

  # Loop over each style image in the input styles folder.
  for style in $input_styles;
  do
    echo "$style";
    stylename=$(basename "$style");
    output_frames_folder="$output_frames/$videoname/$stylename";
    mkdir -p "$output_frames_folder";

    # Stylize all frames using the style image and write the processed frames to the output folder.
    python3 faster.py --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";

    # Combine the stylized video frames and the extracted audio into a new video file.
    output_videos_folder="$output_videos/$videoname/$stylename";
    mkdir -p "$output_videos_folder";
    ffmpeg -v quiet -framerate "$frame_rate" -i "$output_frames_folder/%08d.ppm" -i "$audio" -pix_fmt yuv420p -acodec copy -y "$output_videos_folder/$videoname";

    # The stylized frames are no longer needed once the video is created.
    rm -rf "$output_frames_folder";
  done

  # Clean up the intermediate files of this video.
  rm -rf "$output_frames/$videoname";
  rm -rf "$input_frames_folder";
  rm -rf "$input_audio_folder";
done
The above script does not accept parameters, but you should load the appropriate environment before calling it. For example:
source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;
./execute.sh;
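One detail worth noting: ffprobe reports r_frame_rate as a fraction such as 30000/1001 rather than a decimal, and ffmpeg's -framerate option accepts that fraction directly, which is why the script passes it through unchanged. If you ever need the decimal value, Python's fractions module converts it cleanly (the function name below is our own):

```python
from fractions import Fraction


def frame_rate_to_float(rate: str) -> float:
    """Convert an ffprobe r_frame_rate string such as '30000/1001' to a float."""
    return float(Fraction(rate))


print(frame_rate_to_float("30000/1001"))  # 29.97002997002997 (NTSC)
print(frame_rate_to_float("25/1"))        # 25.0 (PAL)
```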
Please note that this procedure consumes significant space on your hard drive; once you are done with a video, you should probably delete all data from the output folders.