Playing with ChatGPT to create python scripts

In the following post, we asked ChatGPT to write some code for us and this is the “conversation” we had:

write a python script that accepts http request in rest format, executes nmap with the parameters given from the http request and returns the results in json format.

Here is an example of a Python script that can handle HTTP requests in REST format, execute the nmap command with the parameters specified in the request, and return the results in JSON format:

from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    target = data['target']
    options = data['options']
    result = subprocess.run(['nmap', target, options], capture_output=True, text=True)
    return jsonify({'output': result.stdout, 'error': result.stderr})

if __name__ == '__main__':
    app.run()
You can run this script by executing it with python in your terminal and then sending a POST request to http://localhost:5000/scan with a JSON payload containing the target and options for the nmap command, for example:

    "target": "",
    "options": "-p 80,443"

It will execute the nmap command nmap -p 80,443 and return the output and error in JSON format.
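Before deploying anything like this, it helps to exercise the route locally. The sketch below (our addition, not part of ChatGPT's answer) uses Flask's built-in test client and stubs out subprocess.run with a mock so that nmap does not actually have to be installed; the 'Nmap scan report' string is fake output for the stub:

```python
from unittest import mock

from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    result = subprocess.run(['nmap', data['target'], data['options']],
                            capture_output=True, text=True)
    return jsonify({'output': result.stdout, 'error': result.stderr})

# Stub subprocess.run so the check runs without nmap; the stdout value is made up.
fake = mock.Mock(stdout='Nmap scan report', stderr='')
with mock.patch('subprocess.run', return_value=fake):
    client = app.test_client()
    response = client.post('/scan', json={'target': 'localhost', 'options': '-p 80'})
    print(response.get_json())
```

This lets you verify the request parsing and JSON shaping in isolation before pointing the service at real targets.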


  • Make sure you have nmap installed on your machine.
  • This script does not handle error cases properly; it returns errors as-is, so you may want to handle them explicitly.
  • This script does not handle multiple targets, multiple option sets, or other complex use cases.
  • This script runs on a single thread; you may want to use multi-threading or asyncio to handle multiple requests.
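One caveat the answer glosses over: the script passes the whole options string as a single argv element, so nmap receives "-p 80,443" as one argument. A minimal sketch (using the standard library's shlex, an addition not in the original code) shows how the option string could be split into separate arguments before building the command:

```python
import shlex

options = "-p 80,443 -T4"
target = "localhost"

# Split the option string shell-style, then build the argument vector
# that subprocess.run() would receive.
command = ['nmap'] + shlex.split(options) + [target]
print(command)  # ['nmap', '-p', '80,443', '-T4', 'localhost']
```

Splitting via shlex (rather than str.split) keeps quoted option values intact, and passing a list to subprocess.run avoids invoking a shell entirely.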

update code to handle the error cases, multiple targets and options and use multi threading

Here is an updated version of the script that handles error cases, multiple targets and options, and uses multi-threading to handle multiple requests simultaneously:

from concurrent.futures import ThreadPoolExecutor
from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)
executor = ThreadPoolExecutor()

def nmap_scan(target, options):
    try:
        result = subprocess.run(['nmap', target, options], capture_output=True, text=True)
        return {'target': target, 'output': result.stdout, 'error': result.stderr}
    except Exception as e:
        return {'target': target, 'error': str(e)}

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    targets = data['targets']
    options = data['options']
    futures = [executor.submit(nmap_scan, target, options) for target in targets]
    results = [future.result() for future in futures]
    return jsonify(results)

if __name__ == '__main__':
    app.run()

In this updated version, the nmap_scan() function uses a try-except block to catch any errors that occur during the execution of the nmap command. If an error occurs, it returns a JSON-serializable object containing the error message; if the command completes successfully, it returns the output and error streams instead.

The scan() function now takes a list of targets and a set of options as input and uses a ThreadPoolExecutor to execute nmap_scan() for each target in the list. The results of all the scans are returned together as a JSON array.
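The executor pattern described above can be illustrated in isolation. The sketch below uses a dummy scan function (hypothetical, standing in for nmap_scan()) so it runs without nmap or Flask:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_scan(target, options):
    # Stand-in for the real nmap_scan(); it just echoes its inputs.
    return {'target': target, 'output': f'scanned {target} with {options}'}

targets = ['host-a', 'host-b', 'host-c']
with ThreadPoolExecutor(max_workers=4) as executor:
    # submit() schedules each scan on the pool; result() blocks until that scan is done.
    futures = [executor.submit(fake_scan, t, '-p 80') for t in targets]
    results = [f.result() for f in futures]

print(results)
```

Because the futures list preserves submission order, the results come back in the same order as the targets, even though the scans themselves run concurrently.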

This version of the script can handle multiple targets and options, and it can handle multiple requests simultaneously thanks to the use of multi-threading.


  • Make sure you have nmap installed on your machine.
  • In this script, all the requests are handled by a single process; you may want to use multiple processes or containers to handle high traffic.
  • This script does not handle other complex use cases; you may want to extend it as per your requirements.

update to handle high traffic

Handling high traffic can be achieved using multiple processes or by running the script in a containerized environment such as Docker.

One way to handle high traffic with multiple processes is to use a process manager such as Gunicorn. Gunicorn is a pre-fork worker model which means it forks multiple worker processes to handle incoming requests. Here is an example of how you can run the script using Gunicorn with 4 worker processes:

gunicorn script:app -w 4
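Instead of passing flags on the command line, Gunicorn can also read its settings from a Python configuration file (the file name and values below are illustrative, not from the original conversation):

```python
# gunicorn.conf.py -- hypothetical configuration file;
# pass it to Gunicorn with: gunicorn -c gunicorn.conf.py script:app
workers = 4               # number of pre-forked worker processes
bind = "0.0.0.0:5000"     # interface and port to listen on
timeout = 120             # seconds before an unresponsive worker is killed
```

Keeping the settings in a file makes it easier to tune worker counts and timeouts per environment without changing the service invocation.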

Another way to handle high traffic is to run the script in a containerized environment. You can use a container orchestration platform like Kubernetes to manage multiple containers running the script. This allows you to easily scale up or down the number of containers based on the traffic, and also provides other benefits such as automatic load balancing.

Here is an example of a Dockerfile that builds an image of your script:

FROM python:3.8

# Work inside /app so the relative paths below resolve.
WORKDIR /app
COPY . /app

RUN pip install -r requirements.txt

# Bind explicitly to port 5000 so the `docker run -p 5000:5000` mapping below works.
CMD ["gunicorn", "script:app", "-w", "4", "-b", "0.0.0.0:5000"]

You can build the image using:

docker build -t script .

And then run the container using:

docker run -p 5000:5000 script

You can then use Kubernetes to manage multiple replicas of this container, and also use other features such as automatic load balancing, automatic failover, and monitoring of the containers.

This version of the script should be able to handle high traffic by using multiple processes or by running in a containerized environment. However, it is not a silver bullet; you may want to test it with different traffic patterns and tweak the configuration as per your requirements.

The best part about this engine is that it reuses previous context and produces new results, as if having a conversation.

Python script to parse the GAM / Google Workspace CSV that contains all members of all groups

This script will parse the data generated by GAM and create an XLSX out of them.

Each sheet will be named by the group, containing all the emails of that group, including the sub-groups.

# This script will parse the data generated by GAM and create an XLSX out of them.
# Each sheet will be named by the group and it will contain all the emails of that group, including the sub-groups.

# Using sys to get command line arguments for the input file
import sys
# Using CSV to parse the input CSV file that GAM (GAMADV-XTD3) created after using the following command:
# gam print group-members > emails.2022.csv;
# Source:
import csv
# Using pandas to create an XLSX file with multiple sheets
import pandas as pd

# Creating an empty dictionary.
dictionary = {}

# Opening the CSV file that is the first command line argument
with open(sys.argv[1], newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    # For each row, we keep only the first column, which is the group, and the sixth column (index 5), which holds the member email
    for row in reader:
        dictionary.setdefault(row[0], []).append(row[5])

# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter(sys.argv[1]+'.xlsx', engine='xlsxwriter')

# Iterating in Sorted Order
for key in sorted(dictionary):
    # Create a Pandas dataframes to add the data.
    df = pd.DataFrame({'Members': dictionary[key]})
    # Write each dataframe to a different worksheet.
    # To avoid the following exception:
    # xlsxwriter.exceptions.InvalidWorksheetName: Excel worksheet name '[email protected]' must be <= 31 chars.
    # We are truncating the domain from the group.
    group = key.split("@")[0]
    # In case the name is still too big, we truncate it further and append an ellipsis to indicate the extra truncation
    sheet = (group[:29] + '..') if len(group) > 31 else group
    # We are also removing the header and index of the data frame
    df.to_excel(writer, sheet_name=sheet, header=False, index=False)

# Close the Pandas Excel writer and output the Excel file.
writer.close()
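The grouping and sheet-naming logic above can be checked in isolation with a couple of made-up rows (the group and member addresses below are hypothetical):

```python
# Hypothetical rows mimicking the GAM CSV layout: group in column 0, member email in column 5.
rows = [
    ['all-staff@example.com', '', '', '', '', 'alice@example.com'],
    ['all-staff@example.com', '', '', '', '', 'bob@example.com'],
    ['a-department-with-a-very-long-name@example.com', '', '', '', '', 'carol@example.com'],
]

# Group member emails by group address, exactly as in the script.
dictionary = {}
for row in rows:
    dictionary.setdefault(row[0], []).append(row[5])

# Same sheet-name rule as the script: strip the domain, then truncate to fit Excel's 31-char limit.
sheets = {}
for key in sorted(dictionary):
    group = key.split("@")[0]
    sheet = (group[:29] + '..') if len(group) > 31 else group
    sheets[sheet] = dictionary[key]

print(sheets)
```

Every resulting sheet name stays within Excel's 31-character limit, which is exactly the constraint the truncation exists to satisfy.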

Using Neural Style Transfer on videos

We decided to revisit some old work on Neural Style Transfer and TensorFlow. Using the sample code for Fast Neural Style Transfer from this page and the image stylization model from here, we created a tool.

The goal of our venture was to simplify the procedure of changing the style of media. The input could either be an image, a series of images, a video, or a group of videos.

This tool (for which the code is below) comprises a bash script and a Python script.
On a high level, it reads all videos from one folder and all styles from another. Then it recreates all those videos with all those styles making new videos out of combinations of the two.


Please note that we enabled CUDA and GPU processing on our computer before using the tool. Without them, execution would be dramatically slower, since a general-purpose CPU cannot perform the many mathematical operations involved as fast as a GPU.
To enable CUDA, we followed the steps found in these notes:


Conda / Anaconda

We installed and activated anaconda on an Ubuntu 20.04LTS desktop. To do so, we installed the following dependencies from the repositories:

sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6;

Then, we downloaded the 64-Bit (x86) Installer from the Anaconda downloads page.

Using a terminal, we followed the official installation instructions and performed the installation.

Python environment and OpenCV for Python

Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python version 3.9 (as highlighted here) and the following Python packages: tensorflow, matplotlib, and tensorflow_hub.

source ~/anaconda3/bin/activate;
conda create --yes --name FastStyleTransfer python=3.9;
conda activate FastStyleTransfer;
pip install --upgrade pip;
pip install tensorflow matplotlib tensorflow_hub;

import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

from os import listdir
from os.path import isfile, join

import argparse

print("TF Version: ", tf.__version__)
print("TF Hub version: ", hub.__version__)
print("Eager mode enabled: ", tf.executing_eagerly())
print("GPU available: ", tf.config.list_physical_devices('GPU'))

# Parsing command line arguments while making sure they are mandatory/required
parser = argparse.ArgumentParser()
parser.add_argument("--input", required=True,
                    help="The directory that contains the input video frames.")
parser.add_argument("--output", required=True,
                    help="The directory that will contain the output video frames.")
parser.add_argument("--style", required=True,
                    help="The location of the style frame.")

# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    args = parser.parse_args()
    input_path = args.input + '/'
    output_path = args.output + '/'
    # List all files from the input directory. This directory should contain at least one image/video frame.
    onlyfiles = [f for f in listdir(input_path) if isfile(join(input_path, f))]

    # Loading the input style image.
    style_image_path = args.style  # @param {type:"string"}
    style_image = plt.imread(style_image_path)

    # Convert to float32 numpy array, add batch dimension, and normalize to range [0, 1]. Example using numpy:
    style_image = style_image.astype(np.float32)[np.newaxis, ...] / 255.

    # Optionally resize the images. It is recommended that the style image is about
    # 256 pixels (this size was used when training the style transfer network).
    # The content image can be any size.
    style_image = tf.image.resize(style_image, (256, 256))
    # Load image stylization module.
    # Enable the following line and disable the next two to load the stylization module from a local folder.
    # hub_module = hub.load('magenta_arbitrary-image-stylization-v1-256_2')
    # Disable the above line and enable these two to load the stylization module from the internet.
    hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
    hub_module = hub.load(hub_handle)

    for inputfile in onlyfiles:
        content_image_path = input_path + inputfile  # @param {type:"string"}
        content_image = plt.imread(content_image_path)
        # Convert to float32 numpy array, add batch dimension, and normalize to range [0, 1]. Example using numpy:
        content_image = content_image.astype(np.float32)[np.newaxis, ...] / 255.

        # Stylize image.
        outputs = hub_module(tf.constant(content_image), tf.constant(style_image))
        stylized_image = outputs[0]

        # Saving stylized image to disk.
        content_outimage_path = output_path + inputfile  # @param {type:"string"}
        tf.keras.utils.save_img(content_outimage_path, stylized_image[0])

The above code can be invoked as follows:

python3 --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";

It requires the user to define:

  1. The folder in which all input images should be.
  2. The folder where the user wants the stylized images to be saved in. Please note that the folder needs to be created by the user before the execution.
  3. The path to the image that will be used as input to the neural style transfer.
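The preprocessing that the script applies to every frame (float conversion, batch dimension, scaling to [0, 1]) can be sketched with plain NumPy; the random image below is just a stand-in for a frame loaded from disk:

```python
import numpy as np

# Stand-in for an image loaded from disk: height x width x RGB, 8-bit values.
frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Convert to float32, add a leading batch dimension, and scale into [0, 1].
batch = frame.astype(np.float32)[np.newaxis, ...] / 255.

print(batch.shape)   # (1, 64, 64, 3)
print(batch.dtype)   # float32
```

The leading batch dimension is what the stylization module expects; without it, the hub module would reject the tensor.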

#source ~/anaconda3/bin/activate;
#conda create --yes --name FastStyleTransfer python=3.9;
#conda activate FastStyleTransfer;
#pip install --upgrade pip;
#pip install tensorflow matplotlib tensorflow_hub;

source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;


# Loop on each video in the input folder.
for video in $input_videos; do
  echo "$video";
  videoname=$(basename "$video");

  # Extract all frames from the video file and save them in a new folder using 8-digit numbers with zero padding in an incremental order.
  mkdir -p "$input_frames_folder";
  ffmpeg -v quiet -i "$video" "$input_frames_folder/%08d.ppm";

  # Extract the audio stream from the video. We will need this audio later to add it to the final product.
  mkdir -p "$input_audio_folder";

  # Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
  if [[ $videoname == *.webm ]]; then
    ffmpeg -v quiet -i "$video" -vn -c:a libvorbis -y "$audio";
  else
    ffmpeg -v quiet -i "$video" -vn -c:a libmp3lame -y "$audio";
  fi

  # Retrieve the frame rate from the input video. We will need it to configure the final video later.
  frame_rate=$(ffprobe -v 0 -of csv=p=0 -select_streams v:0 -show_entries stream=r_frame_rate "$video");

  # Loop on each image style from the input styles folder.
  for style in $input_styles; do
    echo "$style";
    stylename=$(basename "$style");
    mkdir -p "$output_frames_folder";

    # Stylize all frames using the input image and write all processed frames to the output folder.
    python3 --input "$input_frames_folder" --output "$output_frames_folder" --style "$style";

    # Combine all stylized video frames and the exported audio into a new video file.
    mkdir -p "$output_videos_folder";
    ffmpeg -v quiet -framerate "$frame_rate" -i "$output_frames_folder/%08d.ppm" -i "$audio" -pix_fmt yuv420p -acodec copy -y "$output_videos_folder/$videoname";
    rm -rf "$output_frames_folder";
  done
  rm -rf "$input_frames_folder";
  rm -rf "$input_audio_folder";
done

The above script does not accept parameters, but you should load the appropriate environment before calling it. For example:

source ~/anaconda3/bin/activate;
conda activate FastStyleTransfer;

Please note that this procedure consumes significant space on your hard drive; once you are done with a video, you should probably delete all data from the output folders.
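As a side note on the script above: ffprobe reports r_frame_rate as a fraction such as 30000/1001, and ffmpeg's -framerate flag accepts that form as-is. If you ever need the numeric value in another tool, the standard library can parse it:

```python
from fractions import Fraction

# r_frame_rate as reported by ffprobe for NTSC material.
rate = Fraction("30000/1001")
fps = float(rate)
print(round(fps, 2))  # 29.97
```

Keeping the fraction (rather than a rounded float) avoids audio/video drift when the rate is fed back into ffmpeg.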