We tried to upload a large file to a WordPress site and got the following error:
413 Request Entity Too Large
The WordPress installation was behind an Nginx reverse proxy.
To fix this, we added the following line in the /etc/nginx/nginx.conf configuration file inside the http section/context:
client_max_body_size 64M;
http {
...
client_max_body_size 64M;
...
}
Syntax: client_max_body_size size;
Default: 1m (one megabyte) when the directive is not set.
Context: http, server, location
The client_max_body_size directive sets the maximum allowed size of the client request body. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of the client request body size.
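The change only takes effect once Nginx reloads its configuration. On a systemd-based server, the following commands (a sketch, assuming sudo privileges) validate the file and perform the reload:
sudo nginx -t;
sudo systemctl reload nginx;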
Recently, we were trying to apply blurriness to the frames of a video using a custom mask. Our needs could easily be described using simple geometric shapes, so we created the following image (blur.png) as a template for the blurring effect:
The above mask applies a blur effect to all black pixels and leaves all white pixels in the original image intact.
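The exact invocation is not reproduced here, but based on the flags described below it was along the following lines (the frame name, mask filename, and output path are illustrative):
convert 00000001.ppm -mask blur.png -blur 0x8 +mask blur/00000001.ppm;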
This command creates a new copy of the input file and places it into the folder named blur, so be sure to make the folder before using the above command (e.g., using the command mkdir blur).
Parameters and other information
-mask associates the given filename with the mask that the command applies to the input image.
-blur defines the geometry that is used to reduce image noise and detail levels. To increase the blurriness, you can increase the number in this value (e.g., 0x8).
+mask The ‘plus’ form of the operator +mask removes the mask from the input image.
The version of convert that we used for this example was the following:
To process every frame, we wrapped the convert invocation in find, which locates all frames in the current folder and runs the convert command described above on each one. Since FFmpeg saves the frames as PPM files, we used that extension to filter our search. The blur folder is in the same folder as the original images; to avoid processing the pictures in that folder again, we passed find's -maxdepth parameter, which prevents it from descending into child folders of the one we are working in.
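The original find command is not reproduced here either; a rough reconstruction (with the mask filename and blur geometry carried over from the sketch above) would be:
find . -maxdepth 1 -type f -name '*.ppm' -exec convert {} -mask blur.png -blur 0x8 +mask blur/{} \;
Note that find prints each match as ./frame.ppm, so blur/{} expands to a path such as blur/./frame.ppm, which still resolves inside the blur folder.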
We used this batch of notes to encrypt email communication between us and the https://www.offensive-security.com/ website contact. Specifically, we needed to encrypt some email attachments containing sensitive data.
After receiving the plaintext version of the registrar.asc file, we were able to proceed with the encryption steps. The first thing we did was to import their key:
gpg --import registrar.asc;
$ gpg --import registrar.asc
gpg: key 6C12FFD0BFCBFAE2: 9 signatures not checked due to missing keys
gpg: key 6C12FFD0BFCBFAE2: public key "Offensive Security (Offensive Security Registrar) <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 2 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2023-12-13
Using the following command, we were able to encrypt the sensitive data and send it via email:
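The command itself is not shown above; a minimal sketch, assuming the file to protect was named sensitive.mp4 and using the recipient address from the imported key, would be:
gpg --encrypt --recipient [email protected] sensitive.mp4;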
The gpg command used the public key that we imported in the previous step to perform the encryption and named the encrypted file sensitive.mp4.gpg. We only needed to send that file; the corresponding party had all the other information required to decrypt it.
Bonus: Creating our own public key so that people can contact us using encryption
gpg --gen-key;
Executing the above command asked us to provide a name, an email address, and a passphrase to protect the key. Below is the sample output generated for us:
$ gpg --gen-key
gpg (GnuPG) 2.2.19; Copyright (C) 2019 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Note: Use "gpg --full-generate-key" for a full featured key generation dialog.
GnuPG needs to construct a user ID to identify your key.
Real name: John Doe
Email address: [email protected]
You selected this USER-ID:
"John Doe <[email protected]>"
Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key A53FEA7768D67D2A marked as ultimately trusted
gpg: revocation certificate stored as '/home/john/.gnupg/openpgp-revocs.d/D1660B83341AEF2852A2A4C6A53FEA7768D67D2A.rev'
public and secret key created and signed.
pub rsa3072 2021-12-13 [SC] [expires: 2023-12-13]
D1660B83341AEF2852A2A4C6A53FEA7768D67D2A
uid John Doe <[email protected]>
sub rsa3072 2021-12-13 [E] [expires: 2023-12-13]
Then, we exported our public key using the command below.
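The export command is not reproduced above; a sketch, using the user ID from the example key and an illustrative output filename, would be:
gpg --armor --export [email protected] > johndoe.public.asc;
The resulting johndoe.public.asc file is what we share with anyone who wants to send us encrypted data.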
The goal of our venture was to simplify the procedure of changing the style of media. The input could either be an image, a series of images, a video, or a group of videos.
This tool (for which the code is below) comprises a bash script and a Python script. At a high level, it reads all videos from one folder and all styles from another. Then it recreates each video in each style, producing a new video for every combination of the two.
Hardware
Please note that we enabled CUDA and GPU processing on our computer before using the tool. Without them, execution would be prolonged dramatically because a general-purpose CPU cannot perform the many mathematical operations involved as fast as a GPU. To enable CUDA, we followed the steps found in these notes: https://bytefreaks.net/gnulinux/rough-notes-on-how-to-install-cuda-on-an-ubuntu-20-04lts
Software
Conda / Anaconda
We installed and activated Anaconda on an Ubuntu 20.04 LTS desktop. To do so, we installed the following dependencies from the repositories:
Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python version 3.9 (as highlighted here: https://www.anaconda.com/products/individual#linux) and the following Python packages: tensorflow, matplotlib, and tensorflow_hub.
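These commands also appear, commented out, at the top of execute.sh further below. The sketch here activates the new environment before running pip so that the packages are installed inside it (the environment name FastStyleTransfer is the one the script later activates):
source ~/anaconda3/bin/activate;
conda create --yes --name FastStyleTransfer python=3.9;
conda activate FastStyleTransfer;
pip install --upgrade pip;
pip install tensorflow matplotlib tensorflow_hub;
With the environment in place, the Python script that performs the style transfer (faster.py, the file that execute.sh invokes) is the following: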
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from os import listdir
from os.path import isfile, join
import argparse
print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))
# Parsing command line arguments while making sure they are mandatory/required
parser = argparse.ArgumentParser()
parser.add_argument(
    "--input",
    type=str,
    required=True,
    help="The directory that contains the input video frames.")
parser.add_argument(
    "--output",
    type=str,
    required=True,
    help="The directory that will contain the output video frames.")
parser.add_argument(
    "--style",
    type=str,
    required=True,
    help="The location of the style frame.")
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    args = parser.parse_args()
    input_path = args.input + '/'
    output_path = args.output + '/'
    # List all files from the input directory. This directory should contain at least one image/video frame.
    onlyfiles = [f for f in listdir(input_path) if isfile(join(input_path, f))]
    # Loading the input style image.
    style_image_path = args.style  # @param {type:"string"}
    style_image = plt.imread(style_image_path)
    # Convert to float32 numpy array, add batch dimension, and normalize to range [0, 1]. Example using numpy:
    style_image = style_image.astype(np.float32)[np.newaxis, ...] / 255.
    # Optionally resize the images. It is recommended that the style image is about
    # 256 pixels (this size was used when training the style transfer network).
    # The content image can be any size.
    style_image = tf.image.resize(style_image, (256, 256))
    # Load image stylization module.
    # Enable the following line and disable the next two to load the stylization module from a local folder.
    # hub_module = hub.load('magenta_arbitrary-image-stylization-v1-256_2')
    # Disable the above line and enable these two to load the stylization module from the internet.
    hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
    hub_module = hub.load(hub_handle)
    for inputfile in onlyfiles:
        content_image_path = input_path + inputfile  # @param {type:"string"}
        content_image = plt.imread(content_image_path)
        # Convert to float32 numpy array, add batch dimension, and normalize to range [0, 1]. Example using numpy:
        content_image = content_image.astype(np.float32)[np.newaxis, ...] / 255.
        # Stylize image.
        outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
        stylized_image = outputs[0]
        # Saving stylized image to disk.
        content_outimage_path = output_path + inputfile  # @param {type:"string"}
        tf.keras.utils.save_img(content_outimage_path, stylized_image[0])
--input: The directory that contains the input video frames.
--output: The folder where the user wants the stylized images to be saved. Please note that this folder needs to be created by the user before the execution.
--style: The path to the image that will be used as the style input to the neural style transfer.
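A standalone invocation of faster.py would look roughly like this (the paths are illustrative and mirror the folder layout that execute.sh builds):
python3 faster.py --input ./input/frames/myvideo.mp4 --output ./output/frames/myvideo.mp4/style.jpg --style ./input/styles/style.jpg;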
execute.sh
#!/bin/bash
#source ~/anaconda3/bin/activate;
#conda create --yes --name FastStyleTransfer python=3.9;
#pip install --upgrade pip;
#pip install tensorflow matplotlib tensorflow_hub;
#conda activate FastStyleTransfer;
source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;
input_videos="./input/videos/*";
input_styles="./input/styles/*";
input_frames="./input/frames";
input_audio="./input/audio";
output_frames="./output/frames";
output_videos="./output/videos";
# Loop on each video in the input folder.
for video in $input_videos;
do
echo "$video";
videoname=$(basename "$video");
# Extract all frames from the video file and save them in a new folder using 8-digit numbers with zero padding in an incremental order.
input_frames_folder="$input_frames/$videoname";
mkdir -p "$input_frames_folder";
ffmpeg -v quiet -i "$video" "$input_frames_folder/%08d.ppm";
# Extract the audio file from the video to the format of an mp3. We will need this audio later to add it to the final product.
input_audio_folder="$input_audio/$videoname";
mkdir -p "$input_audio_folder";
audio="";
# Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
if [[ $videoname == *.webm ]]; then
audio="$input_audio_folder/$videoname.ogg";
ffmpeg -v quiet -i "$video" -vn -c:a libvorbis -y "$audio";
else
audio="$input_audio_folder/$videoname.mp3";
ffmpeg -v quiet -i "$video" -vn -c:a libmp3lame -y "$audio";
fi
# Retrieve the frame rate from the input video. We will need it to configure the final video later.
frame_rate=`ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate "$video"`;
# Loop on each image style from the input styles folder.
for style in $input_styles;
do
echo "$style";
stylename=$(basename "$style");
output_frames_folder="$output_frames/$videoname/$stylename";
mkdir -p "$output_frames_folder";
# Stylize all frames using the input image and write all processed frames to the output folder.
python3 faster.py --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";
# Combine all stylized video frames and the exported audio into a new video file.
output_videos_folder="$output_videos/$videoname/$stylename";
mkdir -p "$output_videos_folder";
ffmpeg -v quiet -framerate "$frame_rate" -i "$output_frames_folder/%08d.ppm" -i "$audio" -pix_fmt yuv420p -acodec copy -y "$output_videos_folder/$videoname";
rm -rf "$output_frames_folder";
done
rm -rf "$output_frames/$videoname";
rm -rf "$input_frames_folder";
rm -rf "$input_audio_folder";
done
The above script does not accept parameters, but you should load the appropriate environment before calling it. A sketch of such an invocation (mirroring the activation lines inside the script, with the Anaconda path adjusted to your installation) would be:
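source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;
bash execute.sh;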
Please note that this procedure consumes significant space on your hard drive; once you are done with a video, you should probably delete all data from the output folders.