Bash


Automatically download a whole public website using wget recursively

wget -r -k -np --user-agent="Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X; en-us) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53" --wait=2 --limit-rate=200K --recursive --no-clobber --page-requisites --convert-links --domains bytefreaks.net https://bytefreaks.net/;

Introduction:

The “wget” command is a powerful tool used to download files and web pages from the internet. It is commonly used in Linux/Unix environments but can also be used on other operating systems. The command comes with various options and parameters that can be customized to suit your specific download requirements. In this post, we will discuss the wget command with a breakdown of its various options, and how to use it to download files and web pages.

Command Explanation:

Here is a detailed explanation of the options used in the command:

  1. “-r” : This option makes the download recursive, which means that wget will follow links and download the entire website.
  2. “-k” : This option converts the links in the downloaded files so that they point to the local copies. This is necessary to ensure that the downloaded files can be viewed offline.
  3. “-np” : This option prevents wget from ascending to the parent directory when downloading. This is helpful when you want to limit the download to a specific directory (see the short example after this list).
  4. “--user-agent” : This option allows you to specify the user agent string that wget will use to identify itself to the server. In this case, the user agent string is set to that of a mobile device (an iPhone).
  5. “--wait” : This option adds a delay (in seconds) between requests. This is useful to prevent the server from being overloaded with too many requests at once.
  6. “--limit-rate” : This option limits the download speed to a specific rate (in this case, 200K).
  7. “--recursive” : This is the long form of the “-r” option above, so it is redundant in this particular command.
  8. “--no-clobber” : This option prevents wget from overwriting existing files.
  9. “--page-requisites” : This option instructs wget to download all the files needed to display the webpage, including images, CSS, and JavaScript files.
  10. “--convert-links” : This is the long form of the “-k” option above, so it is also redundant in this particular command.
  11. “--domains” : This option allows you to specify the domain name(s) that wget is allowed to follow.
  12. “https://bytefreaks.net/” : This is the URL of the website that you want to download.
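
As a short illustration of the “-np” option (item 3 above), the following is only a sketch that uses the placeholder domain example.com and a hypothetical /blog/ path; it would mirror only that subtree and never ascend to the parent directory:

wget --recursive --no-parent --page-requisites --convert-links --wait=2 --limit-rate=200K --domains example.com https://example.com/blog/;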

Conclusion:

The wget command is a powerful tool for downloading files and web pages from the internet. By combining the options and parameters described above, you can tailor a download to your specific requirements. We hope this post has given you a better understanding of the wget command.

Same command without setting the user agent:

The following command will try to download a full website with all pages it can find through public links.

wget --wait=2 --limit-rate=200K --recursive --no-clobber --page-requisites --convert-links --domains example.com http://example.com/;

Parameters:

  • --wait Wait the specified number of seconds between the retrievals.  We use this option to lighten the server load by making the requests less frequent.
  • --limit-rate Limit the download speed to the given number of bytes per second (suffixes such as K for kilobytes are accepted). We use this option to lighten the server load and to reduce the bandwidth we consume on our own network.
  • --recursive Turn on recursive retrieving.
  • --no-clobber If a file would be downloaded more than once into the same directory, prevent wget from saving multiple versions of it.
  • --page-requisites This option causes Wget to download all the files that are necessary to properly display a given HTML page.
  • --convert-links After the download is complete, convert the links in the document to make them suitable for local viewing.
  • --domains Set domains to be followed.  It accepts a domain-list as a comma-separated list of domains.
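
For repeated use, the same command can be wrapped in a small script. The following is only a minimal sketch, assuming the placeholder domain example.com; adjust the two variables to your own site:

#!/bin/bash
# Minimal sketch: mirror a public website politely (example.com is a placeholder).
DOMAIN="example.com";
URL="http://example.com/";

wget --wait=2 --limit-rate=200K --recursive --no-clobber \
     --page-requisites --convert-links \
     --domains "$DOMAIN" "$URL";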

Bash: Show GIT Remote Origin for each immediate subfolder

To print on screen all the immediate subfolders and their GIT Remote Origin URL configuration, we used the following command:

find . -maxdepth 1 -type d \( ! -name . \) -exec bash -c "cd '{}' && echo '{}' && git config --get remote.origin.url" \;

We used the general find command to get all folders at depth 1, in other words all folders that are directly inside the folder where we started the search.
In our case we used the dot as the starting folder, which means that find will run in the folder we are currently navigating in.
We passed the parameter -type d to instruct find to list only folders and ignore files.
The \( ! -name . \) part prevents the command from executing in the current directory by removing it from the result set.
With the results we then executed a couple of commands on each match.

Specifically, we created a new bash session for each result that navigated into the folder, printed the name of the matched folder, and then printed the Remote Origin URL using the command git config --get remote.origin.url
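
If you prefer a plain bash loop over find, the following sketch (an alternative of our own, not part of the original command) should print equivalent information for the immediate subfolders of the current directory. Note that, unlike find, the */ glob skips hidden folders:

for dir in */; do
    # The glob only matches directories; skip the literal '*/' when no subfolder exists.
    [ -d "$dir" ] || continue;
    # Use a subshell so that the 'cd' does not affect the outer loop.
    (
        cd "$dir" && echo "$dir" && git config --get remote.origin.url;
    );
done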


Grep: Print only the words of the line that matched the regular expression, one per line

grep -oh "\w*$YOUR_PATTERN\w*" *

We used the following parameters on our command:

-h, --no-filename : Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search.
-o, --only-matching : Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.

Also, we wrapped our pattern with \w*, which matches word-constituent characters on either side of it. The * character states that zero or more of those characters may be present for the pattern to match.
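
As an illustrative example (the pattern and file names below are made up), searching all .log files in the current folder for words that contain the string error could look like this:

YOUR_PATTERN='error';
grep -oh "\w*$YOUR_PATTERN\w*" *.log;
# Hypothetical output, one matched word per line:
# error
# errors
# connection_error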


Bash: Determine state of file

Below you will find some tests that one can perform on a file to identify its state.

Check if file $FILE does not exist

if [ ! -f "$FILE" ]; then
    echo "File $FILE does not exist";
fi

Check if file $FILE exists and is a directory

if [ -d "$FILE" ]; then
    echo "File $FILE exists and is a directory";
fi

Check if file $FILE exists and is a regular file (not a directory)

if [ -f "$FILE" ]; then
    echo "File $FILE exists and is a regular file (not a directory)";
fi

Check if file $FILE exists, regardless of its type (it could be a directory, a socket, a device node, etc.)

if [ -e "$FILE" ]; then
    echo "File $FILE exists, we do not know what type it is (if it is a directory, socket, node, etc.)";
fi

Check if file $FILE exists and is a symbolic link

if [ -L "$FILE" ]; then
    echo "File $FILE exists and is a symbolic link";
fi

Check if file $FILE exists and is a socket

if [ -S "$FILE" ]; then
    echo "File $FILE exists and is a socket";
fi

Check if file $FILE exists and is not empty

if [ -s "$FILE" ]; then
    echo "File $FILE exists and is not empty";
fi

Check if file $FILE exists and is readable

if [ -r "$FILE" ]; then
    echo "File $FILE exists and is readable";
fi

Check if file $FILE exists and is writable

if [ -w "$FILE" ]; then
    echo "File $FILE exists and is writable";
fi

Check if file $FILE exists and is executable

if [ -x "$FILE" ]; then
    echo "File $FILE exists and is executable";
fi
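
To combine several of the tests above, one could wrap them in a small function. This is only a sketch of one possible layout; the order of the checks matters because a directory, a symbolic link, or a socket would also pass the generic -e test:

# Report the state of the file given as the first argument.
describe_file() {
    local FILE="$1";

    # -e follows symbolic links, so a broken link fails it; check -L as well.
    if [ ! -e "$FILE" ] && [ ! -L "$FILE" ]; then
        echo "File $FILE does not exist";
    elif [ -L "$FILE" ]; then
        echo "File $FILE is a symbolic link";
    elif [ -d "$FILE" ]; then
        echo "File $FILE is a directory";
    elif [ -S "$FILE" ]; then
        echo "File $FILE is a socket";
    elif [ -f "$FILE" ]; then
        echo "File $FILE is a regular file";
    else
        echo "File $FILE exists but is of another type (device, pipe, etc.)";
    fi
}

describe_file "/etc/hosts";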