Annual archives: 2023


Mastering Desktop Zoom: A Guide to Keyboard Shortcuts on GNOME

In today’s fast-paced digital world, accessibility features are vital in empowering users with different abilities. One such feature is desktop zoom, which allows users to magnify their screen content for better visibility. GNOME, a popular desktop environment for Linux, offers a convenient way to activate and utilize desktop zoom through keyboard shortcuts. This blog post will explore how to make the most of these shortcuts and enhance your GNOME experience.

Activating Desktop Zoom via the Settings:

To activate desktop zoom on GNOME, follow these steps:

Step 1: Open the GNOME Settings: Click on the “Activities” button in the screen’s top-left corner or press the “Super” key on your keyboard. Then type “Settings” and select the “Settings” application.

Step 2: Navigate to Accessibility Settings: In the GNOME Settings window, select the “Accessibility” category on the left sidebar.

Step 3: Enable Desktop Zoom: Within the Accessibility settings, locate the “Zoom” section. Toggle the switch to the “ON” position to activate desktop zoom.
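
If you prefer the terminal, the magnifier can also be toggled with gsettings. This is a minimal sketch, assuming a recent GNOME release where the screen-magnifier-enabled key lives in the org.gnome.desktop.a11y.applications schema:

# Enable the GNOME screen magnifier (desktop zoom)
gsettings set org.gnome.desktop.a11y.applications screen-magnifier-enabled true

# Disable it again
gsettings set org.gnome.desktop.a11y.applications screen-magnifier-enabled false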

Using Keyboard Shortcuts for Desktop Zoom:

You can use the following keyboard shortcuts to control and customize your zoom experience:

  1. Toggle Zoom On/Off: Super + Alt + 8. Pressing the Super (Windows) key, the Alt key, and the number 8 simultaneously toggles the zoom functionality on or off.
  2. Zoom In: Super + Alt + plus (+). Pressing the Super key, the Alt key, and the plus (+) key simultaneously zooms in, magnifying the content on your screen.
  3. Zoom Out: Super + Alt + minus (-). Pressing the Super key, the Alt key, and the minus (-) key simultaneously zooms out, reducing the magnification of the screen content.
  4. Zoom Reset: Super + Alt + 0. Pressing the Super key, the Alt key, and the number 0 simultaneously resets the zoom level to its default state.
  5. Pan Around the Screen: Super + Alt + left-click and drag. While zoomed in, holding down the Super key and the Alt key and dragging with the left mouse button lets you pan around the zoomed-in screen area.
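
If any of these combinations clash with other shortcuts, the bindings can be inspected and remapped through gsettings as well. The key names below exist in recent GNOME releases, though the value format (a list of strings) may differ on older versions:

# Show the current magnifier shortcuts
gsettings get org.gnome.settings-daemon.plugins.media-keys magnifier
gsettings get org.gnome.settings-daemon.plugins.media-keys magnifier-zoom-in
gsettings get org.gnome.settings-daemon.plugins.media-keys magnifier-zoom-out

# Example: rebind the toggle to Super+Alt+Z (value format assumed for recent GNOME)
gsettings set org.gnome.settings-daemon.plugins.media-keys magnifier "['<Super><Alt>z']"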

Customizing Desktop Zoom Options:

If you wish to customize your desktop zoom experience further, you can access additional settings through the GNOME Settings application. Here, you can modify options such as zoom factor, mouse wheel behavior, and more to suit your preferences.
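
The same options are exposed as keys under the org.gnome.desktop.a11y.magnifier schema, so they can also be scripted. A sketch, assuming a recent GNOME release; run the list-keys command first to see which keys your version supports:

# List every available magnifier option
gsettings list-keys org.gnome.desktop.a11y.magnifier

# Example: set the zoom factor to 2x and scroll the view at the screen edges
gsettings set org.gnome.desktop.a11y.magnifier mag-factor 2.0
gsettings set org.gnome.desktop.a11y.magnifier scroll-at-edges true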

Conclusion:

The keyboard shortcuts provided by GNOME for desktop zoom offer a convenient and efficient way to magnify your screen content. By activating and using these shortcuts, you can enhance your productivity and accessibility within the GNOME desktop environment. Explore additional customization options to tailor the desktop zoom feature to your needs. Embrace the power of keyboard shortcuts and take complete control of your GNOME experience.


Troubleshooting Access Rights and Memory Allocation for Elastic Enterprise Search using Docker Compose

Introduction

Setting up Elastic Enterprise Search with Docker Compose provides a convenient and scalable way to deploy Enterprise Search alongside Elasticsearch and Kibana. However, during our implementation of this setup, we ran into issues with access rights and memory allocation. In this blog post, we walk you through the steps we took to resolve these challenges and successfully configure the system.

Problem: Access Rights

The initial hurdle we encountered was related to access rights when running Elastic Enterprise Search using Docker Compose. While following the official guide provided by Elastic, we realized that certain permissions were causing problems: the containers could not access the files and directories they needed to function properly.

Solution: Modifying the YML and Adjusting Access Rights

We modified the YAML file used in the Docker Compose setup to overcome the access rights issue. Specifically, we relocated all volumes to a folder owned by our user, ensuring we had complete control and ownership over these files. Additionally, we set the access rights to 777, granting all users full read, write, and execute permissions. Please note that this is only acceptable because it is a development environment that will be destroyed later; do not do this in production.
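
The preparation boils down to a few shell commands. A minimal sketch, assuming the bind-mount paths used in the compose file below (adjust /home/tux to your own user):

# Create user-owned directories for the bind mounts referenced in the compose file
mkdir -p /home/tux/docker/volumes/es/{certs,esdata01,kibanadata,enterprisesearchdata}

# Ensure our user owns the tree, then open up the permissions
# (777 is acceptable only because this is a throwaway development environment)
sudo chown -R "$USER":"$USER" /home/tux/docker/volumes/es
chmod -R 777 /home/tux/docker/volumes/es

The resulting docker-compose.yml: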

version: '2.2'

services:

  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/elasticsearch/config/certs
      - /home/tux/docker/volumes/es/esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/kibana/config/certs
      - /home/tux/docker/volumes/es/kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - ENTERPRISESEARCH_HOST=http://enterprisesearch:${ENTERPRISE_SEARCH_PORT}
      - xpack.reporting.kibanaServer.hostname=localhost
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  enterprisesearch:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/enterprise-search/enterprise-search:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/enterprise-search/config/certs
      - /home/tux/docker/volumes/es/enterprisesearchdata:/usr/share/enterprise-search/config
    ports:
      - ${ENTERPRISE_SEARCH_PORT}:3002
    environment:
      - SERVERNAME=enterprisesearch
      - secret_management.encryption_keys=[${ENCRYPTION_KEYS}]
      - allow_es_settings_modification=true
      - elasticsearch.host=https://es01:9200
      - elasticsearch.username=elastic
      - elasticsearch.password=${ELASTIC_PASSWORD}
      - elasticsearch.ssl.enabled=true
      - elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/config/certs/ca/ca.crt
      - kibana.external_url=http://kibana:5601
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:3002 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  enterprisesearchdata:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local

Problem: Memory Allocation

In addition to the access rights challenge, we also hit a memory allocation issue: the default value of the kernel parameter vm.max_map_count (the maximum number of memory map areas a process may use) is too low for Elasticsearch, leading to errors during deployment.

Solution: Increasing Memory Allocation

To address the memory allocation problem, we followed the guide provided by Elastic on how to increase vm.max_map_count. Executing the command sysctl -w vm.max_map_count=262144 (as root) sets the value Elasticsearch requires. With this adjustment, Elastic Enterprise Search could start properly and use the resources it needs.
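
Note that sysctl -w only changes the value until the next reboot. To make the setting persistent we dropped it into a sysctl configuration file; the file name below is our own choice:

# Apply immediately
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots (the file name under /etc/sysctl.d is arbitrary)
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system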

Problem: Version 8 does not deploy correctly and fails to authenticate

When we tried to bring up the Docker containers, we got the following errors:

From ES01:
{"@timestamp":"2023-05-22T05:27:10.294Z", "log.level":"ERROR", "message":"failed to retrieve password hash for reserved user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#5]","log.logger":"org.elasticsearch.xpack.security.authc.esnative.ReservedRealm","trace.id":"8639207dbfa64551d7a302a1df3bda26","elasticsearch.cluster.uuid":"Zxd3v00YQyqJtFTsDw143w","elasticsearch.node.id":"v1WIc0TUReqqZFcTRzsyOw","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException",

From Kibana:
[2023-05-21T13:56:30.497+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
	Root causes:
		security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
[2023-05-21T13:56:30.868+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell

Solution: Downgrade to version 7

It appears that version 7 handles these access-management issues correctly, so we updated our .env file to the following:

STACK_VERSION=7.17.10
ELASTIC_PASSWORD=7XQ2aR7q2g2Q994msd942sLJoAJ7LUvD
KIBANA_PASSWORD=2M7nl2hCH11b7NLht2l8A8ZHBjAKTkxI
ES_PORT=9200
CLUSTER_NAME=es-cluster
LICENSE=basic
MEM_LIMIT=1073741824
KIBANA_PORT=5601
ENTERPRISE_SEARCH_PORT=3002
ENCRYPTION_KEYS=5df1f9f9590ac53b29db6f5eeb9f63be04a43018af3095ef2bfc8a043d38fc7c
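
After changing STACK_VERSION, the stack has to be recreated so the containers pick up the version 7 images. Roughly (assuming the compose file and .env live in the current directory; older installations use docker-compose instead of docker compose):

# Tear down the version 8 containers, pull the 7.17 images, and start again
docker compose down
docker compose pull
docker compose up -d

# Note: data directories already initialized by version 8 cannot be downgraded;
# for a fresh development setup they may need to be emptied first.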

Conclusion

While setting up Elastic Enterprise Search using Docker Compose can be a straightforward process, it is not uncommon to encounter challenges along the way. In this blog post, we discussed the problems we faced with access rights and memory allocation and the steps we took to resolve them.

By modifying the YAML file to ensure proper ownership and permissions for the required volumes and adjusting the memory allocation using the recommended command, we successfully overcame these issues. With our configuration now in place, we enjoy the benefits of Elastic Enterprise Search, leveraging its powerful capabilities for our organization’s search and discovery needs.

We hope that sharing our experience and solutions will assist others in overcoming similar obstacles and make their journey toward implementing Elastic Enterprise Search using Docker Compose smoother.


Bulk convert PNG images to JPG / JPEG

for i in *.png ; do convert "$i" "${i%.*}.jpg" ; done

The command “for i in *.png ; do convert "$i" "${i%.*}.jpg" ; done” is a Bash one-liner that converts all PNG files in the current directory to JPEG files.

Let’s break down this command:

  • “for i in *.png” loops over every PNG file in the current directory; “do … done” encloses the body that runs once per file.
  • “$i” is the name of the PNG file currently being processed.
  • “convert” is a command-line tool from the ImageMagick software suite, used for image conversion, resizing, and manipulation.
  • “${i%.*}.jpg” is the filename the PNG file will be converted to. The “${i%.*}” parameter expansion removes the trailing “.extension” from the original file name, leaving just the base name, to which “.jpg” is appended to indicate that the new file should be a JPEG.

In summary, this command converts each PNG file in the current directory to a JPEG file with the same base filename. For example, “example.png” would be converted to “example.jpg”. This command can be useful when you have a large number of PNG files that you need to convert to JPEG format quickly and easily.
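
For completeness, ImageMagick also ships mogrify, which can do the same batch conversion in a single command while leaving the original PNG files in place. A possible alternative, assuming ImageMagick is installed and you are in the directory containing the PNGs:

# Convert every PNG in the current directory to a JPEG with the same base name
mogrify -format jpg *.png

# Optionally control the JPEG quality (0-100)
mogrify -format jpg -quality 90 *.png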


How to monitor all outgoing requests/connections from your GNU/Linux machine

netstat -nputw;

The “netstat” command is a network utility tool used to display information about active network connections, including the protocol used (TCP or UDP), the local and remote addresses and port numbers, and the current state of the connection.

The options used in this command are as follows:

  • “n” displays addresses and port numbers in numerical form rather than converting them to hostnames and service names.
  • “p” shows the process ID (PID) and program name using the connection.
  • “u” displays UDP connections.
  • “t” displays TCP connections.
  • “w” displays raw sockets.
  • “;” is a shell command separator; it is optional here since no further command follows.

Therefore, the command netstat -nputw; will display all current network connections on the machine, including the corresponding processes and raw socket connections, in a numerical format without resolving hostnames and service names.
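
On newer distributions netstat (part of net-tools) is often not installed by default; ss from the iproute2 package provides equivalent information. A rough equivalent, plus a way to monitor connections continuously by refreshing the listing every second:

# Roughly equivalent output using ss (numeric, processes, UDP, TCP, raw sockets)
# Run with sudo to see process names for sockets owned by other users
ss -nputw

# Refresh every second to watch outgoing connections over time
watch -n 1 'ss -nputw'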