GNU/Linux


Bash: Extract data from files, filtering by both filename and path and doing internal processing

The following code will find all files that match the pattern 2016_*_*.log (all the log files for the year 2016).

To avoid picking up log files from services other than the Web API service, we keep only the files whose path contains a folder named webapi. Specifically, we used the pattern "/ServerLogs/*/webapi/*" with the -ipath option to match all files that are under the folder /ServerLogs/ and have a folder named webapi somewhere deeper in their path, so that we only match files like /ServerLogs/Production/01/webapi/*. Note that this is a glob pattern rather than a regular expression; the way we wrote it, it will not match a webapi folder that sits directly under /ServerLogs/ (e.g. /ServerLogs/webapi/*), since at least one folder must exist between the two.
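
If you want to check the pattern's behavior yourself, shell case globs match the same way that find's -path matching does, since in both a * also crosses / boundaries (ignoring the case-insensitivity of -ipath; the sample paths below are made up for illustration):

for path in '/ServerLogs/Production/01/webapi/2016_01_15.log' \
            '/ServerLogs/webapi/2016_01_15.log'; do
    case "$path" in
        /ServerLogs/*/webapi/*) echo "match:    $path" ;;
        *)                      echo "no match: $path" ;;
    esac
done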

For each result, we execute an awk script that splits each line using the comma character (FS=",";) and then checks whether the line contains exactly 4 tokens (if (NF == 4) {). If so, we take the 4th token and check whether it contains the substring "MASTER=" (if (match($4,"MASTER=")) {). If it does, we split the 4th token using the space character and assign the result to the array named tokens. From tokens we take the first element and use substr to remove its first character. Finally, we use the resulting value as a key of the instances array, which acts as a hashmap keeping record of all unique strings. In the END clause, we print all the keys of our hashmap.

Finally, we sort all the results from all the awk executions and remove duplicates using sort --unique.


find /ServerLogs/ \
    -iname "2016_*_*.log" \
    -ipath "/ServerLogs/*/webapi/*" \
    -exec awk '
        # Use the comma as the field separator.
        BEGIN {
            FS=",";
        }
        {
            # Process only lines that have exactly 4 fields.
            if (NF == 4) {
                # Keep only lines whose 4th field contains "MASTER=".
                if (match($4,"MASTER=")) {
                    # Split the 4th field on spaces, drop the first character
                    # of the first token and record the result as a key of the
                    # instances array, which deduplicates the values.
                    split($4, tokens, " ");
                    instances[substr(tokens[1], 2)];
                }
            }
        }
        END {
            # Print each unique key once.
            for (element in instances) {
                print element;
            }
        }
    ' \
    '{}' \; | sort --unique;

Following is the same code in one line.

 find /ServerLogs/ -iname "2016_*_*.log" -ipath "/ServerLogs/*/webapi/*" -exec awk 'BEGIN {FS=",";} {if (NF == 4) {if (match($4,"MASTER=")){split($4, tokens, " "); instances[substr(tokens[1], 2)];}}} END {for (element in instances) {print element;}}' '{}' \; | sort --unique 
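
To see what the awk body extracts, here is a sample run against a single made-up line; the actual log format is not shown in this post, so the line below (where the 4th comma-separated field starts with a space and an opening parenthesis) is only an assumption.

echo 'field1,field2,field3, (SERVER-01 MASTER=true)' | awk 'BEGIN {FS=",";} {if (NF == 4) {if (match($4,"MASTER=")){split($4, tokens, " "); instances[substr(tokens[1], 2)];}}} END {for (element in instances) {print element;}}'
#Prints: SERVER-01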

Another way

Another way to achieve similar functionality would be the following.


find /ServerLogs/ \
    -iname "2016_*_*.log" \
    -ipath "/ServerLogs/*/webapi/*" \
    -exec sh -c '
        grep "MASTER=" -s "$0" | awk "BEGIN {FS=\",\";} NF==4" | cut -d "," -f4 | cut -c 3- | cut -d " " -f1 | sort --unique
    ' \
    '{}' \; | sort --unique;

What we changed is the -exec part. Instead of calling an awk script, we spawn a new sub-shell using sh -c, we define the code to be executed inside the single quotes, and we pass the filename that matched as the first parameter of the shell, which becomes $0 inside it.
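
As a quick, self-contained illustration of that parameter passing (the path below is made up), the argument that follows the inline script becomes $0 inside the sub-shell:

sh -c 'echo "Processing file: $0"' '/ServerLogs/Production/01/webapi/2016_01_15.log'
#Prints: Processing file: /ServerLogs/Production/01/webapi/2016_01_15.log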

Inside the shell, we find all lines that contain the string MASTER= using the grep command (the -s flag suppresses error messages about nonexistent or unreadable files). Then we filter out all lines that do not have exactly four columns when tokenized on the comma character, using awk. Next, we get the 4th column using cut with the comma as the delimiter. We remove the first two characters of that column using cut -c 3- and then keep only its first column by reusing cut with the delimiter changed to the space character. With those results we perform a sort that eliminates duplicates and pass them to the parent process for further processing.
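
Running the same made-up sample line as before through the chain shows that it agrees with the awk version; assuming the 4th field starts with a space and an opening parenthesis, cut -c 3- drops exactly those two characters:

echo 'field1,field2,field3, (SERVER-01 MASTER=true)' | grep "MASTER=" | awk 'BEGIN {FS=",";} NF==4' | cut -d "," -f4 | cut -c 3- | cut -d " " -f1
#Prints: SERVER-01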

Following is the same code in one line.


find /ServerLogs/ -iname "2016_*_*.log" -ipath "/ServerLogs/*/webapi/*" -exec sh -c 'grep "MASTER=" -s "$0" | awk "BEGIN {FS=\",\";} NF==4" | cut -d "," -f4 | cut -c 3- | cut -d " " -f1 | sort --unique' '{}' \; | sort --unique;


Ignore SSL certificates for GIT

The background

So, recently a new firewall was installed; this firewall performs SSL/TLS decryption on all encrypted traffic…

In order for machines to continue operating normally, a custom certificate was issued and installed on each one. On certain machines though, the certificate was not installed and this caused verification problems.

The story

While trying to clone a git project from GitHub, we got the following output:


$ git clone https://github.com/ioi/translation.git
Cloning into 'translation'...
fatal: unable to access 'https://github.com/ioi/translation.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

The horrible solution

To mitigate the problem (not solve it), we directed git to skip verification of SSL certificates by exporting the following environment variable right before the clone command.


export GIT_SSL_NO_VERIFY=true

As expected, the execution went smoothly after this change:


$ git clone https://github.com/ioi/translation.git
Cloning into 'translation'...
remote: Counting objects: 297, done.
remote: Total 297 (delta 0), reused 0 (delta 0), pack-reused 297
Receiving objects: 100% (297/297), 4.40 MiB | 1.50 MiB/s, done.
Resolving deltas: 100% (39/39), done.
Checking connectivity... done.
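
For the record, the same (equally insecure) effect can be limited to a single invocation, instead of the whole shell session, through git's http.sslVerify configuration variable:

git -c http.sslVerify=false clone https://github.com/ioi/translation.git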


bash: Simple way to get n-th column

Using cut you can select any column and define a custom delimiter to support multiple input formats, so you can select a column (or more) with barely any code.

cut -d',' -f2 myFile.csv

The above command will read the file myFile.csv (which is a CSV file), break it down into columns using the ',' character and then get the second column.

The option -f specifies which field (column) you want to extract, and the option -d specifies the field (column) delimiter that is used in the input file.

The -f parameter allows you to select multiple columns at the same time. You can achieve that by separating the column numbers with ',' and by defining ranges using the - character, as in the examples and the sample command below.

Examples

  • -f1 selects the first column
  • -f1,3,4 selects columns 1, 3 and 4
  • -f1-4 selects all columns in the range 1-4
  • -f1,3,5-7,9 selects columns 1, 3, 9 and all the columns in the range 5-7
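
For example, the following command (using the same hypothetical myFile.csv as above) combines individual columns with a range:

cut -d',' -f1,3,5-7,9 myFile.csv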

Fedora 23: Support exfat

The Background:

Recently we got our hands on a GoPro Hero4 Black camera and we used a 64GB memory card as storage. The file system on the card is exFAT (Extended File Allocation Table), which is a Microsoft file system designed for flash drives. It is proprietary and Microsoft owns patents on several elements of its design. exFAT has been adopted by the SD Card Association as the default file system for SDXC cards larger than 32GiB.

The issue:

Fedora does not support exFAT out of the box due to the licensing restrictions that Microsoft applies to it, and thus we could not mount the card on our machine.

The solution:

We installed the fuse-exfat driver from rpmfusion.org using the following commands.

#Enable access to both the free and the nonfree repository
#free repository: for Open Source Software (as defined by the Fedora Licensing Guidelines) which the Fedora project cannot ship due to other reasons
#nonfree repository: for redistributable software that is not Open Source Software (as defined by the Fedora Licensing Guidelines); this includes software with publicly available source-code that has "no commercial use"-like restrictions 
su -c 'dnf install http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm';

#Perform the installation
sudo dnf install fuse-exfat;
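
In case you are wondering, the $(rpm -E %fedora) part of the first command expands the %fedora RPM macro to the release number of the running system, so the matching rpmfusion release packages are fetched:

rpm -E %fedora
#Prints the release number of the running Fedora installation, e.g. 23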

More background:

RPM Fusion provides software that the Fedora Project or Red Hat doesn’t want to ship. That software is provided as precompiled RPMs for all current Fedora versions and Red Hat Enterprise Linux 5 and 6.