

Automatic generation of phone background images using ImageMagick

#!/bin/bash

#Once upon a time, there was a script whose mission was to create some wallpapers. First, it created a variable called "EXPORT_FOLDER" and assigned it the value "Wallpapers". Then, it made a new directory called "Wallpapers" where it would save the wallpapers it created.

EXPORT_FOLDER="Wallpapers";

mkdir "$EXPORT_FOLDER";

#The script defined two variables, "FLOOR" and "CEILING", and assigned them the values of -180 and 180 respectively. It also created a variable "RANGE", which was CEILING minus FLOOR, plus 1 (i.e. 361 possible values). Additionally, it created two variables, "WALLPAPER_WIDTH" and "WALLPAPER_HEIGHT", and assigned them 1440 and 3040 respectively, which would be the size of the wallpapers it created.

FLOOR=-180;
CEILING=180;
RANGE=$(($CEILING-$FLOOR+1));

WALLPAPER_WIDTH=1440;
WALLPAPER_HEIGHT=3040;

#The script then began its main task: a loop that would run 10 times. Within the loop, it would create a variable "RESULT" and assign it a random number using the $RANDOM variable. Then, it would use the modulo operator to calculate the remainder of dividing "RESULT" by "RANGE" and assign the result back to "RESULT". Next, it would add "FLOOR" to "RESULT" and assign it back to "RESULT", producing a value between -180 and 180.

for i in {1..10}
do
 RESULT=$RANDOM;
 let "RESULT %= $RANGE";
 RESULT=$(($RESULT+$FLOOR));

 #After all these calculations, the script used the convert command from the ImageMagick suite to generate a plasma:fractal image, apply a blur of 0x2 and a swirl of "RESULT" degrees, and shave 20x20 pixels from the edges. The generated image was saved to "$EXPORT_FOLDER/plasma_swirl_$i.jpg".
 convert -size "$WALLPAPER_WIDTH"x"$WALLPAPER_HEIGHT"  plasma:fractal -blur 0x2  -swirl $RESULT  -shave 20x20  "$EXPORT_FOLDER/plasma_swirl_$i.jpg";

 #Finally, the script used the convert command again to composite two other images, "ByteFreaks.net_.png" and "cropped-ByteFreaks.net_.png", onto plasma_swirl_$i.jpg, saving the results as "lock_$i.jpg" and "home_$i.jpg". After 10 loops of all these steps, the script had successfully created 10 unique and interesting wallpapers, saving them all in the "Wallpapers" folder. The script was proud of its accomplishment, and the wallpapers were enjoyed by many.

 convert "$EXPORT_FOLDER/plasma_swirl_$i.jpg"  "ByteFreaks.net_.png" -gravity southeast -geometry +333+1600 -composite "$EXPORT_FOLDER/lock_$i.jpg";

 convert "$EXPORT_FOLDER/plasma_swirl_$i.jpg"  "cropped-ByteFreaks.net_.png" -gravity southeast -geometry +0+0 -composite "$EXPORT_FOLDER/home_$i.jpg";

done

This script is written in Bash and it does the following:

  1. It creates a variable called “EXPORT_FOLDER” and assigns it the value “Wallpapers”.
  2. It creates a directory with the name of the variable “EXPORT_FOLDER” (i.e. “Wallpapers”).
  3. It creates two variables, “FLOOR” and “CEILING”, and assigns them the values of -180 and 180 respectively. It also creates a variable “RANGE”, which is CEILING minus FLOOR, plus 1 (i.e. 361).
  4. It creates two variables “WALLPAPER_WIDTH” and “WALLPAPER_HEIGHT” and assigns them 1440 and 3040 respectively.
  5. It starts a loop that runs 10 times. Within the loop, it does the following:
    • It creates a variable “RESULT” and assigns it a random number using the $RANDOM variable.
    • It uses the modulo operator to calculate the remainder of dividing “RESULT” by “RANGE”, and assigns the result back to “RESULT”.
    • It adds “FLOOR” to “RESULT” and assigns it back to “RESULT”, bringing the value into the range -180 to 180.
    • It uses the convert command from the ImageMagick suite to generate a plasma:fractal image, apply a blur of 0x2 and a swirl of “RESULT” degrees, and shave 20x20 pixels from the edges. The generated image is saved to “$EXPORT_FOLDER/plasma_swirl_$i.jpg”.
    • It then uses the convert command again to composite two other images, “ByteFreaks.net_.png” and “cropped-ByteFreaks.net_.png”, onto plasma_swirl_$i.jpg, and saves the results as “lock_$i.jpg” and “home_$i.jpg”.

In short, this script creates 10 jpg images by applying a swirl effect to a fractal plasma image and compositing two other images onto it. These images are saved in the “Wallpapers” folder.
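
If you want to preview the effect before generating full-size wallpapers, a quick way (assuming ImageMagick is installed and provides the convert command; the output file name below is just an example) is to render a single smaller image with a fixed swirl angle:

# Preview: same flags as the script above, but a smaller canvas and a hard-coded swirl angle
convert -size 360x760 plasma:fractal -blur 0x2 -swirl 90 -shave 5x5 test_swirl.jpg

Because plasma:fractal is random, every run of this command produces a different pattern.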


Playing with ChatGPT to create Python scripts

In this post, we asked ChatGPT to write some code for us, and this is the “conversation” we had:

write a python script that accepts http request in rest format, executes nmap with the parameters given from the http request and returns the results in json format.

Here is an example of a Python script that can handle HTTP requests in REST format, execute the nmap command with the parameters specified in the request, and return the results in JSON format:

from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    target = data['target']
    options = data['options']
    # Split the options string so each flag reaches nmap as a separate argument
    result = subprocess.run(['nmap', target] + options.split(), capture_output=True, text=True)
    return jsonify({'output': result.stdout, 'error': result.stderr})

if __name__ == '__main__':
    app.run(debug=True)

You can run this script by executing python script.py in your terminal and then sending a POST request to http://localhost:5000/scan with a JSON payload containing the target and options for the nmap command, for example:

{
    "target": "example.com",
    "options": "-p 80,443"
}

It will execute the nmap command nmap example.com -p 80,443 and return the output and error in JSON format.
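
From the terminal, such a request could be sent with curl, for example (assuming the script is saved as script.py and running locally on Flask's default port 5000):

curl -X POST http://localhost:5000/scan \
     -H "Content-Type: application/json" \
     -d '{"target": "example.com", "options": "-p 80,443"}'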

Note

  • Make sure you have nmap installed on your machine.
  • This script does not handle error cases properly; it returns errors as-is, so you may want to handle them explicitly.
  • This script does not handle multiple targets, complex option sets, or other advanced use cases.
  • This script runs on a single thread; you may want to use multi-threading or asyncio to handle multiple requests.

update code to handle the error cases, multiple targets and options and use multi threading

Here is an updated version of the script that handles error cases, multiple targets and options, and uses multi-threading to handle multiple requests simultaneously:

from concurrent.futures import ThreadPoolExecutor
from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)
executor = ThreadPoolExecutor()

def nmap_scan(target, options):
    # Runs in a worker thread, so it returns a plain dict instead of a Flask response object
    try:
        result = subprocess.run(['nmap', target] + options.split(), capture_output=True, text=True)
        return {'target': target, 'output': result.stdout, 'error': result.stderr}
    except Exception as e:
        return {'target': target, 'error': str(e)}

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    targets = data['targets']
    options = data['options']
    futures = [executor.submit(nmap_scan, target, options) for target in targets]
    results = [future.result() for future in futures]
    return jsonify(results)

if __name__ == '__main__':
    app.run(debug=True)

In this updated version, the nmap_scan() function uses a try-except block to catch any errors that occur during the execution of the nmap command. Because it runs in a worker thread, it returns a plain dictionary rather than a Flask response object: if an error occurs, the dictionary contains the error message for that target, and if the command completes successfully, it contains the command's output and error streams.

The scan() function now takes a list of targets and a string of options as input, and uses a ThreadPoolExecutor to execute the nmap_scan() function for each target in the list. The results of all the scans are collected and returned as a single JSON array.

This version of the script can handle multiple targets and options, and it can handle multiple requests simultaneously thanks to the use of multi-threading.
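
A request against this updated endpoint now carries a list of targets, for example (again assuming the app runs locally on port 5000):

curl -X POST http://localhost:5000/scan \
     -H "Content-Type: application/json" \
     -d '{"targets": ["example.com", "example.org"], "options": "-p 80,443"}'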

Note

  • Make sure you have nmap installed on your machine.
  • In this script, all requests are handled by a single process; you may want to use multiple processes or containers to handle high traffic.
  • This script does not handle other complex use cases; you may want to extend it as per your requirements.

update to handle high traffic

Handling high traffic can be achieved using multiple processes or by running the script in a containerized environment such as Docker.

One way to handle high traffic with multiple processes is to use a process manager such as Gunicorn. Gunicorn uses a pre-fork worker model, which means it forks multiple worker processes to handle incoming requests. Here is an example of how you can run the script using Gunicorn with 4 worker processes:

gunicorn script:app -w 4

Another way to handle high traffic is to run the script in a containerized environment. You can use a container orchestration platform like Kubernetes to manage multiple containers running the script. This allows you to easily scale up or down the number of containers based on the traffic, and also provides other benefits such as automatic load balancing.

Here is an example of a Dockerfile that builds an image of your script:

FROM python:3.8

COPY . /app

WORKDIR /app

RUN pip install -r requirements.txt

# Bind to 0.0.0.0:5000 so the port published with "docker run -p 5000:5000" reaches Gunicorn
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "script:app"]
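
The Dockerfile installs dependencies from a requirements.txt file that is not shown in the conversation; a minimal version for this example (package names inferred from the imports and commands used above) could look like this:

flask
gunicorn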

You can build the image using:

docker build -t script .

And then run the container using:

docker run -p 5000:5000 script

You can then use Kubernetes to manage multiple replicas of this container, and also use other features such as automatic load balancing, automatic failover, and monitoring of the containers.
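
As a rough sketch of what that could look like (the name, image tag, and replica count below are assumptions, not part of the original setup), a Deployment manifest running several replicas of the container built above might be:

# deployment.yaml (sketch): three replicas of the nmap API container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nmap-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nmap-api
  template:
    metadata:
      labels:
        app: nmap-api
    spec:
      containers:
        - name: nmap-api
          image: script:latest
          ports:
            - containerPort: 5000

A Service (or Ingress) in front of this Deployment would then provide the load balancing mentioned above.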

This version of the script should be able to handle high traffic by using multiple processes or running in a containerized environment; however, it is not a silver bullet, so you may want to test it with different traffic patterns and tweak the configuration to suit your requirements.

The best part about this engine is that it reuses previous content and produces new results, like having a conversation.


Join IEEEXtreme Coding Competition

Are you a university or college student looking for a challenge? Look no further than the IEEEXtreme international programming competition!

IEEEXtreme is a 24-hour, online, global coding competition that brings together students from all over the world to test their skills and knowledge against their peers. This competition is organized by the IEEE, the world’s largest technical professional organization for the advancement of technology.

Participating in IEEEXtreme provides a unique opportunity for students to showcase their skills and gain recognition in the global tech community. Not only that, but it also allows students to connect with their peers from around the world and learn from each other.

But the benefits of IEEEXtreme don’t stop there. Participating in the competition can also help students develop essential skills employers highly value. These skills include problem-solving, critical thinking, and the ability to work under pressure.

Another great thing about IEEEXtreme is that it’s open to students of all skill levels. Whether you’re a beginner or an experienced coder, you can participate and challenge yourself to improve. And with a wide variety of categories to choose from, there’s something for everyone.

So if you’re a university or college student looking for a challenge and an opportunity to showcase your skills and connect with others in the tech community, then IEEEXtreme is the perfect competition for you. Sign up today and see how far you can go!

In summary, IEEEXtreme is a 24-hour, online, global coding competition organized by IEEE. It gives university and college students the opportunity to showcase their skills, connect with peers from around the world, and develop important skills that are highly valued by employers, and it is open to students of all skill levels. It is the perfect competition for anyone looking for a challenge.


Using minicom to connect to Cisco Console

sudo minicom --device /dev/ttyUSB0 --baudrate 9600 --8bit;

sudo is a command that allows the user to run another command with superuser privileges.

minicom is a terminal emulation program that allows the user to communicate with a serial device.
The --device flag followed by /dev/ttyUSB0 specifies the serial device that minicom should use for communication.
The --baudrate flag followed by 9600 specifies the baud rate (i.e. the speed at which data is transmitted) of the serial connection.
The --8bit flag sets the number of data bits to 8.

So this command runs minicom as a superuser, connecting to the device at “/dev/ttyUSB0” with a baud rate of 9600 and 8-bit data.

In addition to the command-line arguments above, we had to ensure that flow control (both hardware and software) was turned off and that no parity was used.
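
For reference, these settings can also be stored in minicom's saved configuration (typically ~/.minirc.dfl, written by running minicom -s and choosing "Save setup as dfl"); a sketch of the relevant lines, assuming the same device and speed as above, would be:

pu port             /dev/ttyUSB0
pu baudrate         9600
pu bits             8
pu parity           N
pu stopbits         1
pu rtscts           No
pu xonxoff          No

With hardware (rtscts) and software (xonxoff) flow control disabled and parity set to none, minicom can then be started with just sudo minicom, without repeating the flags.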