Create an SSH tunnel for HTTP web server proxy

Once upon a time, in a kingdom of computers and networks, there lived a brave knight named “ssh”. He was known throughout the land for his bravery and cunning abilities to securely transport data between two distant lands.

One day, a young prince came to the knight with a request. The prince had a precious website that was housed in a remote castle, accessible only by a specific host known as “remotehost”. He wanted his people to be able to visit the website, but the path was treacherous and insecure.

The prince asked the knight if he could help him. The knight thought for a moment and then said, “Fear not, young prince! I can help you. I shall use my magical command ‘ssh -L 80:remotehost:80 user@myserver’ to create a secure pathway for your people to visit the website.”

The prince was overjoyed and asked the knight to explain how it worked.

“The ‘-L’ flag stands for Local Forwarding. It creates a tunnel between the local computer and the remote server, which we shall call ‘myserver’. This tunnel shall forward all requests from the local port 80 to the remote host ‘remotehost’ on port 80,” explained the knight.

“And ‘user@myserver’?” asked the prince.

“Ah, yes. Those are the credentials we shall use to log in to the remote server ‘myserver’. It is through this secure connection that the communication between your local computer and the remote host shall stay protected,” the knight replied with a nod.

The prince was grateful and thanked the knight for his help. The knight then used his magical command and created a secure pathway for the prince’s people to visit the website, which they did happily ever after.

And that, dear reader, is the story of the command “ssh -L 80:remotehost:80 user@myserver”.

ssh -L 80:remotehost:80 user@myserver;

The command ssh -L 80:remotehost:80 user@myserver is an example of using the ssh utility to create a secure shell connection to a remote server. The command also establishes a local port forward, which forwards all incoming traffic on the local port 80 to the remote host remotehost on port 80.

ssh (Secure Shell) is a protocol for securely accessing a remote computer. The basic usage of ssh is to log in to a remote server using a username and password or an SSH key. The ssh command allows you to securely log in to a remote server, execute commands on the remote server, and transfer files between your local computer and the remote server.

In this particular command, the -L flag is used to specify a local port forward. A local port forward is a way of forwarding traffic from a local port, through the SSH connection, to a remote host and port. In this case, the traffic is being forwarded from the local port 80 to the host remotehost on port 80. Note that remotehost is resolved and reached from the point of view of myserver, not from your local machine.
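
In general, the option follows the pattern -L [bind_address:]local_port:destination_host:destination_port, where bind_address is optional. Mapped onto this command, as a quick sketch:

ssh -L <local_port>:<destination_host>:<destination_port> <user>@<ssh_server>
ssh -L 80:remotehost:80 user@myserver

Here 80 is the local port, remotehost:80 is the destination as seen from myserver, and user@myserver identifies the SSH server that carries the tunnel.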

The user@myserver part of the command specifies the credentials that are used to log in to the remote server myserver: user is the username and myserver is the hostname or IP address of the remote server. The combination of the username and remote server information allows ssh to securely log in to the remote server.

Once the secure shell connection has been established and the local port forward has been created, any traffic sent to the local port 80 will be forwarded to the remote host remotehost on port 80. This allows the local computer to access services on the remote host as if they were running on the local computer.

In summary, the ssh -L 80:remotehost:80 user@myserver command is an example of using the ssh utility to create a secure shell connection to a remote server and establish a local port forward. The local port forward allows the local computer to access services on the remote host as if they were running on the local computer.
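
One practical note: on most systems, binding the local port 80 requires root privileges, so the command above may need to be run with elevated rights. A common workaround (a sketch, not part of the original command) is to forward an unprivileged local port such as 8080 instead:

ssh -L 8080:remotehost:80 user@myserver

With this variant, the website would be reachable at http://localhost:8080 rather than http://localhost.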

ssh -L 80:remotehost:80 user@myserver;

Once the connection has been established using the command ssh -L 80:remotehost:80 user@myserver, you can access the website hosted on the remote host remotehost by browsing to http://localhost in your web browser.

Since the local port 80 has been forwarded to the remote host remotehost on port 80, all traffic sent to http://localhost will be forwarded to the remote host. This allows you to access the website hosted on the remote host as if it were running on your local computer.

Keep in mind that the secure shell connection created using the ssh command must be active and running in order to access the website hosted on the remote host. If the connection is closed or terminated, the website will no longer be accessible through the local port forward.
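
If you only need the tunnel and do not want an interactive shell on myserver, a convenient variation (again, a sketch rather than part of the original command) is to tell ssh to skip the remote command with -N and move to the background with -f, and then verify the forward with curl:

ssh -f -N -L 80:remotehost:80 user@myserver
curl -I http://localhost/

The -N flag tells ssh not to execute a remote command, and -f sends the session to the background once authentication completes; the curl -I request should return the HTTP headers of the remote website if the tunnel is working.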


Glaciator, an AI-Generated Pokemon

Name

Glaciator

Type

Ice/Dragon

Appearance

Glaciator is a quadrupedal dragon-like Pokemon with a body made entirely of ice. It has a long, thick tail and sharp, icy claws. Its head has two large, curved horns, and its eyes are a bright blue. Its breath is made of freezing mist.

Abilities

Glaciator has the ability “Frostbite,” which allows it to freeze its opponents and weaken them. Its ice attacks are also significantly stronger than those of other ice-type Pokemon, and it has high Defense and Special Defense stats.

Weaknesses

Glaciator is weak to fire and rock-type moves, as well as dragon-type moves. Its ice body makes it vulnerable to melting in hot temperatures.

Evolution

Glaciator evolves from a Pokemon called “Frostbite,” which is an ice-type Pokemon.


Automatic generation of phone background images using ImageMagick

#!/bin/bash

#Once upon a time, there was a script that had a mission to create some wallpapers. First, it created a variable called "EXPORT_FOLDER" and assigned it the value "Wallpapers". Then, it made a new directory called "Wallpapers" where it would save the wallpapers it created.

EXPORT_FOLDER="Wallpapers";

mkdir "$EXPORT_FOLDER";

#The script defined two variables, "FLOOR" and "CEILING", and assigned them the values of -180 and 180 respectively. It also created a variable "RANGE", which was the difference between CEILING and FLOOR, plus 1. Additionally, it created two variables, "WALLPAPER_WIDTH" and "WALLPAPER_HEIGHT", and assigned them 1440 and 3040 respectively, which would be the size of the wallpapers it created.

FLOOR=-180;
CEILING=180;
RANGE=$(($CEILING-$FLOOR+1));

WALLPAPER_WIDTH=1440;
WALLPAPER_HEIGHT=3040;

#The script then began its main task: a loop that would run 10 times. Within the loop, it would create a variable "RESULT" and assign it a random number using the $RANDOM variable. Then, it would use the modulo operator to calculate the remainder of dividing "RESULT" by "RANGE" and assign the result back to "RESULT". Next, it would add "FLOOR" to "RESULT" and assign it back to "RESULT", leaving a value between -180 and 180.

for i in {1..10}
do
 RESULT=$RANDOM;
 let "RESULT %= $RANGE";
 RESULT=$(($RESULT+$FLOOR));

 #After all these calculations, the script would use the convert command from the ImageMagick suite to generate an image from the plasma:fractal pseudo-image, apply a blur of 0x2 and a swirl of "RESULT" degrees, and shave 20x20 pixels from the edges. The generated image would be saved to "$EXPORT_FOLDER/plasma_swirl_$i.jpg".
 convert -size "$WALLPAPER_WIDTH"x"$WALLPAPER_HEIGHT"  plasma:fractal -blur 0x2  -swirl $RESULT  -shave 20x20  "$EXPORT_FOLDER/plasma_swirl_$i.jpg";

 #Finally, the script would use the convert command twice more to composite two other images, "ByteFreaks.net_.png" and "cropped-ByteFreaks.net_.png", onto plasma_swirl_$i.jpg, saving the results as "lock_$i.jpg" and "home_$i.jpg" respectively. After 10 loops of all these steps, the script had successfully created 10 unique and interesting wallpapers, saving them all in the "Wallpapers" folder. The script was proud of its accomplishment and the wallpapers were enjoyed by many.

 convert "$EXPORT_FOLDER/plasma_swirl_$i.jpg"  "ByteFreaks.net_.png" -gravity southeast -geometry +333+1600 -composite "$EXPORT_FOLDER/lock_$i.jpg";

 convert "$EXPORT_FOLDER/plasma_swirl_$i.jpg"  "cropped-ByteFreaks.net_.png" -gravity southeast -geometry +0+0 -composite "$EXPORT_FOLDER/home_$i.jpg";

done
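
If you want to try the script yourself, note that it needs ImageMagick installed and the two overlay images ByteFreaks.net_.png and cropped-ByteFreaks.net_.png present in the working directory. A usage sketch (the filename wallpapers.sh is a placeholder, not part of the original post):

#wallpapers.sh is a placeholder name for the script above
chmod +x wallpapers.sh;
./wallpapers.sh;

The generated wallpapers will end up in the Wallpapers folder.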

This script is written in Bash and it does the following:

  1. It creates a variable called “EXPORT_FOLDER” and assigns it the value “Wallpapers”.
  2. It creates a directory with the name of the variable “EXPORT_FOLDER” (i.e. “Wallpapers”).
  3. It creates two variables, “FLOOR” and “CEILING”, and assigns them the values of -180 and 180 respectively. It also creates a variable “RANGE”, which is the difference between CEILING and FLOOR, plus 1 (i.e. 361).
  4. It creates two variables “WALLPAPER_WIDTH” and “WALLPAPER_HEIGHT” and assigns them 1440 and 3040 respectively.
  5. It starts a loop that runs 10 times. Within the loop, it does the following:
    • It creates a variable “RESULT” and assigns it a random number using the $RANDOM variable.
    • It uses the modulo operator to calculate the remainder of dividing “RESULT” by “RANGE”, and assigns the result back to “RESULT”.
    • It adds “FLOOR” to “RESULT” and assigns it back to “RESULT”
    • It uses the convert command from the ImageMagick suite to generate a plasma:fractal image, apply a blur of 0x2 and a swirl of “RESULT” degrees, and shave 20×20 pixels from the edges. The generated image is saved to “$EXPORT_FOLDER/plasma_swirl_$i.jpg”.
    • It then uses the convert command again to composite two other images, “ByteFreaks.net_.png” and “cropped-ByteFreaks.net_.png”, onto plasma_swirl_$i.jpg and saves the results as “lock_$i.jpg” and “home_$i.jpg” respectively.

In short, this script creates 10 jpg images by applying a swirl effect on a fractal plasma image and compositing two other images onto it. These images are saved in the “Wallpapers” folder.
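
If you want to convince yourself that the swirl angle really ends up between -180 and 180, here is a quick sketch that reuses the same arithmetic as the script:

FLOOR=-180; CEILING=180; RANGE=$(($CEILING-$FLOOR+1));
for i in {1..5}; do RESULT=$RANDOM; let "RESULT %= $RANGE"; echo $(($RESULT+$FLOOR)); done

Since $RANDOM returns a value between 0 and 32767, the modulo by RANGE (361) leaves a remainder between 0 and 360, and adding FLOOR (-180) shifts it into the -180 to 180 range.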


Playing with ChatGPT to create python scripts

In the following post, we asked ChatGPT to write some code for us and this is the “conversation” we had:

write a python script that accepts http request in rest format, executes nmap with the parameters given from the http request and returns the results in json format.

Here is an example of a Python script that can handle HTTP requests in REST format, execute the nmap command with the parameters specified in the request, and return the results in JSON format:

from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    target = data['target']
    options = data['options']
    result = subprocess.run(['nmap', target, options], capture_output=True, text=True)
    return jsonify({'output': result.stdout, 'error': result.stderr})

if __name__ == '__main__':
    app.run(debug=True)

You can run this script by executing python script.py in your terminal and then sending a POST request to http://localhost:5000/scan with a JSON payload containing the target and options for the nmap command, for example:

{
    "target": "example.com",
    "options": "-p 80,443"
}

It will execute the nmap command nmap example.com -p 80,443 and return the output and error in JSON format.
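
For example, assuming the script is saved as script.py and Flask is listening on its default port 5000, the request could be sent with curl (a sketch, not part of the generated answer):

curl -X POST http://localhost:5000/scan -H "Content-Type: application/json" -d '{"target": "example.com", "options": "-p 80,443"}'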

Note

  • Make sure you have nmap installed on your machine.
  • This script does not handle error cases properly; it returns errors as they are, so you may want to handle them appropriately.
  • This script does not handle multiple targets, multiple options, or other complex use cases.
  • This script runs on a single thread; you may want to use multi-threading or asyncio to handle multiple requests.

update code to handle the error cases, multiple targets and options and use multi threading

Here is an updated version of the script that handles error cases, multiple targets and options, and uses multi-threading to handle multiple requests simultaneously:

import threading
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, request, jsonify
import subprocess

app = Flask(__name__)
executor = ThreadPoolExecutor()

def nmap_scan(target, options):
    try:
        result = subprocess.run(['nmap', target, options], capture_output=True, text=True)
        return jsonify({'output': result.stdout, 'error': result.stderr}), 200
    except Exception as e:
        return jsonify({'error': str(e)}), 500

@app.route('/scan', methods=['POST'])
def scan():
    data = request.get_json()
    targets = data['targets']
    options = data['options']
    futures = [executor.submit(nmap_scan, target, options) for target in targets]
    results = [future.result() for future in futures]
    return results

if __name__ == '__main__':
    app.run(debug=True)

In this updated version, the nmap_scan() function uses a try-except block to catch any errors that occur during the execution of the nmap command. If an error occurs, it returns a JSON object containing an error message and a status code of 500. If the command completes successfully, it returns the output and error in a JSON object and a status code of 200.

The scan() function now takes a list of targets and options as input, and uses a ThreadPoolExecutor to execute the nmap_scan() function for each target in the list. The results of all the scans are returned as a list of JSON objects.

This version of the script can handle multiple targets and options, and it can handle multiple requests simultaneously thanks to the use of multi-threading.
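
Under the same assumptions as before (the script saved as script.py and Flask listening on its default port 5000), a request to the updated endpoint could look like this sketch:

curl -X POST http://localhost:5000/scan -H "Content-Type: application/json" -d '{"targets": ["example.com", "example.org"], "options": "-p 80,443"}'

One caveat: as generated, nmap_scan() calls jsonify() inside worker threads, and jsonify() requires an active Flask application context, so the code may need some adaptation before such a request succeeds.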

Note

  • Make sure you have nmap installed on your machine.
  • In this script, all the requests are handled by a single process; you may want to use multiple processes or containers to handle high traffic.
  • This script does not handle other complex use cases; you may want to handle them as per your requirements.

update to handle high traffic

Handling high traffic can be achieved using multiple processes or by running the script in a containerized environment such as Docker.

One way to handle high traffic with multiple processes is to use a process manager such as Gunicorn. Gunicorn uses a pre-fork worker model, which means it forks multiple worker processes to handle incoming requests. Here is an example of how you can run the script using Gunicorn with 4 worker processes:

gunicorn script:app -w 4

Another way to handle high traffic is to run the script in a containerized environment. You can use a container orchestration platform like Kubernetes to manage multiple containers running the script. This allows you to easily scale up or down the number of containers based on the traffic, and also provides other benefits such as automatic load balancing.

Here is an example of a Dockerfile that builds an image of your script:

FROM python:3.8

COPY . /app

WORKDIR /app

RUN pip install -r requirements.txt

CMD ["gunicorn", "script:app", "-w", "4"]

You can build the image using:

docker build -t script .

And then run the container using:

docker run -p 5000:5000 script
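
Note that Gunicorn binds to 127.0.0.1:8000 by default, so for the -p 5000:5000 mapping above to reach the application you would most likely need to tell Gunicorn to listen on 0.0.0.0:5000, for example by running it as follows (an editorial adjustment, not part of the generated Dockerfile):

gunicorn script:app -w 4 -b 0.0.0.0:5000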

You can then use Kubernetes to manage multiple replicas of this container, and also use other features such as automatic load balancing, automatic failover, and monitoring of the containers.

This version of the script should be able to handle high traffic by using multiple processes or running in a containerized environment; however, it is not a silver bullet, so you may want to test it with different traffic patterns and tweak the configuration as per your requirements.

The best part about this engine is that it reuses previous content and produces new results, like having a conversation.