Troubleshooting Access Rights and Memory Allocation for Elastic Enterprise Search using Docker Compose

Introduction

Setting up Elastic Enterprise Search with Docker Compose provides a convenient and repeatable way to deploy Enterprise Search alongside Elasticsearch and Kibana. However, during our implementation of this setup, we encountered issues related to access rights and memory allocation. In this blog post, we walk you through the steps we took to resolve these challenges and successfully configure the system.

Problem: Access Rights

The initial hurdle we encountered was related to access rights when running Elastic Enterprise Search using Docker Compose. While following the official guide provided by Elastic, we realized that the permissions on the mounted volumes were causing problems: the containers could not access the files and directories required for proper functioning.

Solution: Modifying the YML and Adjusting Access Rights

We modified the YAML file used in the Docker Compose setup to overcome the access rights issue. Specifically, we relocated all volumes to a folder our user owns, ensuring we had complete control and ownership over these files. Additionally, we adjusted the access rights to 777, granting all users full read, write, and execute permissions. Please note that this is a development environment that will be destroyed later; such permissive rights should not be used in production.
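As a rough sketch of the host-side preparation (the paths match the bind mounts in the compose file that follows; the exact location is of course up to you):

# Create the host directories that back the bind mounts
mkdir -p /home/tux/docker/volumes/es/{certs,esdata01,kibanadata,enterprisesearchdata};

# Development only: grant every user full access so the containers can
# read and write regardless of the UID/GID they run under
chmod -R 777 /home/tux/docker/volumes/es;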

version: '2.2'

services:

  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/elasticsearch/config/certs
      - /home/tux/docker/volumes/es/esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/kibana/config/certs
      - /home/tux/docker/volumes/es/kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - ENTERPRISESEARCH_HOST=http://enterprisesearch:${ENTERPRISE_SEARCH_PORT}
      - xpack.reporting.kibanaServer.hostname=localhost
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  enterprisesearch:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/enterprise-search/enterprise-search:${STACK_VERSION}
    volumes:
      - /home/tux/docker/volumes/es/certs:/usr/share/enterprise-search/config/certs
      - /home/tux/docker/volumes/es/enterprisesearchdata:/usr/share/enterprise-search/config
    ports:
      - ${ENTERPRISE_SEARCH_PORT}:3002
    environment:
      - SERVERNAME=enterprisesearch
      - secret_management.encryption_keys=[${ENCRYPTION_KEYS}]
      - allow_es_settings_modification=true
      - elasticsearch.host=https://es01:9200
      - elasticsearch.username=elastic
      - elasticsearch.password=${ELASTIC_PASSWORD}
      - elasticsearch.ssl.enabled=true
      - elasticsearch.ssl.certificate_authority=/usr/share/enterprise-search/config/certs/ca/ca.crt
      - kibana.external_url=http://kibana:5601
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
            "CMD-SHELL",
            "curl -s -I http://localhost:3002 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120


Problem: Memory Allocation

In addition to the access rights challenge, we also encountered an issue related to memory allocation. The default value of the kernel setting vm.max_map_count (the maximum number of memory map areas a process may have) was insufficient for Elasticsearch, leading to errors during the deployment.

Solution: Increasing Memory Allocation

To address the memory allocation problem, we followed the guide provided by Elastic on how to increase vm.max_map_count. By executing the command sysctl -w vm.max_map_count=262144, we set the minimum value Elasticsearch requires. This adjustment allowed Elastic Enterprise Search to start properly and utilize the necessary resources.
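For reference, a value set with sysctl -w does not survive a reboot; the following sketch applies it immediately and also persists it:

# Apply the new value immediately (lost on reboot)
sudo sysctl -w vm.max_map_count=262144;

# Persist the setting across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf;
sudo sysctl -p;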

Problem: Version 8 does not deploy correctly and fails to authenticate

When we tried to bring up the Docker containers, we got the following errors:

From ES01:
{"@timestamp":"2023-05-22T05:27:10.294Z", "log.level":"ERROR", "message":"failed to retrieve password hash for reserved user [kibana_system]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#5]","log.logger":"org.elasticsearch.xpack.security.authc.esnative.ReservedRealm","trace.id":"8639207dbfa64551d7a302a1df3bda26","elasticsearch.cluster.uuid":"Zxd3v00YQyqJtFTsDw143w","elasticsearch.node.id":"v1WIc0TUReqqZFcTRzsyOw","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException",

From Kibana:
[2023-05-21T13:56:30.497+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception
	Root causes:
		security_exception: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]
[2023-05-21T13:56:30.868+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell

Solution: Downgrade to version 7

It appears that version 7 handles the access management correctly, so we updated our .env file to the following:

STACK_VERSION=7.17.10
ELASTIC_PASSWORD=7XQ2aR7q2g2Q994msd942sLJoAJ7LUvD
KIBANA_PASSWORD=2M7nl2hCH11b7NLht2l8A8ZHBjAKTkxI
ES_PORT=9200
CLUSTER_NAME=es-cluster
LICENSE=basic
MEM_LIMIT=1073741824
KIBANA_PORT=5601
ENTERPRISE_SEARCH_PORT=3002
ENCRYPTION_KEYS=5df1f9f9590ac53b29db6f5eeb9f63be04a43018af3095ef2bfc8a043d38fc7c
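After changing STACK_VERSION, the stack must be recreated so the containers pick up the new images. A minimal sketch, assuming the bind-mount paths from the compose file above and that the incompatible version-8 data can simply be discarded (again, development only):

docker-compose down;
# Wipe the version-8 data so the 7.x containers start with a clean state
sudo rm -rf /home/tux/docker/volumes/es/esdata01/* /home/tux/docker/volumes/es/kibanadata/*;
docker-compose up -d;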

Conclusion

While setting up Elastic Enterprise Search using Docker Compose can be a straightforward process, it is not uncommon to encounter challenges along the way. In this blog post, we discussed the problems we faced regarding access rights and memory allocation and the steps we took to resolve them.

By modifying the YAML file to ensure proper ownership and permissions for the required volumes and by raising vm.max_map_count as recommended, we successfully overcame these issues. With our configuration now in place, we enjoy the benefits of Elastic Enterprise Search, leveraging its powerful capabilities for our organization’s search and discovery needs.

We hope that sharing our experience and solutions will assist others in overcoming similar obstacles and make their journey toward implementing Elastic Enterprise Search using Docker Compose smoother.


Mount a Windows share on a GNU/Linux server

sudo mount -t cifs //$WINDOWS_FILE_SERVER/$SHARED_FOLDER /var/www/bytefreaks.net/shared/ -o username=remoteUser,password='123abc',domain=bytefreaks.net,file_mode=0777,dir_mode=0777;

The command sudo mount -t cifs //$WINDOWS_FILE_SERVER/$SHARED_FOLDER /var/www/bytefreaks.net/shared/ -o username=remoteUser,password='123abc',domain=bytefreaks.net,file_mode=0777,dir_mode=0777 is used to mount a shared folder from a Windows file server onto a Linux system.

To break down the command further, here is a detailed explanation of each component:

sudo – this command is used to run the following command as a superuser or root. It is required in this instance as mounting requires administrative privileges.

mount – the mount command is used to mount a file system onto a directory in the Linux file system hierarchy.

-t cifs – this option specifies the type of file system that is being mounted. In this case, it is the Common Internet File System (CIFS), which is used for file sharing between Windows and Linux systems.

//$WINDOWS_FILE_SERVER/$SHARED_FOLDER – this is the network path to the shared folder on the Windows file server. The $WINDOWS_FILE_SERVER and $SHARED_FOLDER are placeholders for the actual Windows file server name and shared folder name, respectively.

/var/www/bytefreaks.net/shared/ – this is the mount point, or the location in the Linux file system hierarchy where the shared folder will be mounted.

-o username=remoteUser,password='123abc',domain=bytefreaks.net,file_mode=0777,dir_mode=0777 – these are the mount options that are specified when mounting the shared folder.

The username option specifies the username of the remote user that has access to the shared folder. In this case, the remote user is named remoteUser.

The password option specifies the password for the remote user.

The domain option specifies the domain or workgroup that the Windows file server belongs to. In this case, the domain is bytefreaks.net.

The file_mode and dir_mode options specify the permissions that should be set on the files and directories within the mounted shared folder. In this case, both are set to 0777, which means that all users have full read, write, and execute permissions on all files and directories within the mounted shared folder.

In summary, the sudo mount -t cifs //$WINDOWS_FILE_SERVER/$SHARED_FOLDER /var/www/bytefreaks.net/shared/ -o username=remoteUser,password='123abc',domain=bytefreaks.net,file_mode=0777,dir_mode=0777 command is used to mount a shared folder from a Windows file server onto a Linux system with the specified mount options.
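One caveat: the password is visible in the command line and ends up in the shell history. A common alternative is to keep the credentials in a file that only root can read and point the credentials mount option at it; a sketch, assuming /root/.smbcredentials as the location:

# Contents of /root/.smbcredentials (protect it with: sudo chmod 600 /root/.smbcredentials)
# username=remoteUser
# password=123abc
# domain=bytefreaks.net

sudo mount -t cifs //$WINDOWS_FILE_SERVER/$SHARED_FOLDER /var/www/bytefreaks.net/shared/ -o credentials=/root/.smbcredentials,file_mode=0777,dir_mode=0777;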


How to empty the gnome tracker3 cache?

To empty the cache of gnome tracker3, you can follow the steps below:

Open a terminal window by pressing Ctrl + Alt + T.

Type the following command to stop the tracker daemon:

tracker3 daemon -t;

Type the following command to clear the tracker database:

tracker3 reset --filesystem;

This command removes all indexed data from the filesystem indexer database, clearing the tracker's cache.

Restart the tracker daemon by typing the following command:

tracker3 daemon -s;

This will start the tracker daemon again, and it will begin to rebuild its database and cache.

After following these steps, the cache of gnome tracker3 will be emptied.

Execution example:

$ tracker3 daemon -t
Found 1 PID…
  Killed process 13705 — “tracker-miner-fs-3”
$ tracker3 reset --filesystem
Found 1 PID…
  Killed process 13705 — “tracker-miner-fs-3”
$ tracker3 daemon -s
Starting miners…
  ✓ File System

5 packages can be upgraded. Run ‘apt list --upgradable’ to see them.

Updating an operating system is a critical task that ensures the system’s security, stability, and performance. In Ubuntu, updating the system involves running several commands in the terminal. In this technical post, we will explain the process of updating an Ubuntu Server installation and how to fix the issue of pending package upgrades.

The first command that needs to be executed is sudo apt update. This command refreshes the list of available packages from the Ubuntu repositories so that the following steps work with current version information.

The second command is sudo apt upgrade. This command upgrades all installed packages to their latest versions. The -y option instructs the system to answer “yes” to any prompts, ensuring the process runs uninterrupted.

The third command, sudo apt autoremove, removes dependencies that were installed automatically and are no longer required by any installed package.
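Put together, the basic update sequence looks like this:

sudo apt update;
sudo apt upgrade -y;
sudo apt autoremove -y;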

After executing these three commands, the system still displayed a message stating that five packages could be upgraded. To see the list of upgradable packages, we executed the command apt list --upgradable, which lists all the upgradable packages along with their versions.

The following command we executed was sudo apt-get dist-upgrade. Unlike a plain upgrade, this command also installs or removes packages when dependencies change, which is what usually allows held-back packages (such as kernel updates) to proceed. However, in this case, the command failed to upgrade the pending packages.

To fix the issue, we executed the command sudo apt-get install --only-upgrade $PACKAGE for each pending package. This command upgrades the specified package to its latest version. The $PACKAGE variable should be replaced with the package name that needs to be upgraded.
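If several packages are pending, a small loop saves retyping the command for each one; a sketch that feeds the names printed by apt list --upgradable into the upgrade command:

# Upgrade each held-back package individually
for PACKAGE in $(apt list --upgradable 2>/dev/null | awk -F/ 'NR>1 {print $1}'); do
  sudo apt-get install --only-upgrade -y "$PACKAGE";
done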

In summary, updating an Ubuntu Server installation involves running the sudo apt update, sudo apt upgrade, and sudo apt autoremove commands in the terminal. If there are any pending package upgrades, we can use the apt list --upgradable command to see the list of upgradable packages. If the sudo apt-get dist-upgrade command fails to upgrade the pending packages, we can use the sudo apt-get install --only-upgrade $PACKAGE command to upgrade each package individually.