Yearly Archives: 2021


How to Reset Password on an Ubuntu with LVM

A few days ago, a client tasked us with recovering the password of an Ubuntu Server 20.04 LTS machine. The machine's owner knew the username but had no idea about the password, not even its complexity. We asked the client whether it was OK for us to reset the password instead of recovering it (meaning that we would not even try to crack the mystery of what the previous password was and would just set a new one), and thankfully, the client accepted our request.

The client had set up the server using Ubuntu Server 20.04 LTS, and the disk partitions used LVM (Logical Volume Manager). Luckily for us, they were not using encrypted partitions. The procedure we followed to reset the password on that server was as follows:

First of all, we shut down the server and booted it from a live USB of Ubuntu Desktop 20.04 LTS. Then we opened a terminal and executed the following to get root access on the live system:

sudo su;

Then, we executed pvscan to list all physical volumes and identify which disk we needed to work on:

pvscan;
root@ubuntu:/home/ubuntu# pvscan
  /dev/sdc: open failed: No medium found
  PV /dev/sda3   VG ubuntu-vg       lvm2 [<3.64 TiB / 3.44 TiB free]
  Total: 1 [<3.64 TiB] / in use: 1 [<3.64 TiB] / in no VG: 0 [0   ]

Following that, we used vgscan to search for all volume groups:

vgscan;
root@ubuntu:/home/ubuntu# vgscan
  /dev/sdc: open failed: No medium found
  Found volume group "ubuntu-vg" using metadata type lvm2

From these two commands, it was clear that the partition /dev/sda3 contained an LVM physical volume belonging to the volume group ubuntu-vg. That volume group held the server’s filesystem, and it was the place we needed to access to change the user’s password.

So, we used vgchange to change the attributes of the volume group and activate it like so:

vgchange -a y;
root@ubuntu:/home/ubuntu# vgchange -a y
  /dev/sdc: open failed: No medium found
  /dev/sdc: open failed: No medium found
  1 logical volume(s) in volume group "ubuntu-vg" now active

Using lvscan, we were able to list all logical volumes in all volume groups and verify that we activated the volume group of interest successfully.

lvscan;
root@ubuntu:/home/ubuntu# lvscan
  /dev/sdc: open failed: No medium found
  ACTIVE            '/dev/ubuntu-vg/ubuntu-lv' [200.00 GiB] inherit

After these steps, we were finally ready to reset the user’s password. We proceeded to mount the logical volume like any other disk under the /mnt folder:

mount /dev/ubuntu-vg/ubuntu-lv /mnt/;
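
An optional sanity check at this point (not strictly part of the procedure) is to peek inside the mount point and confirm that it really is the server’s root filesystem:

ls /mnt/;
cat /mnt/etc/os-release;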

Then, we used chroot to change the apparent root directory for the currently running process (and its children). This command allowed our terminal to work inside the logical volume as if we had booted the server OS itself.

chroot /mnt/;
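
As a side note, if tools inside the chroot complain that /proc, /sys, or /dev are unavailable (we did not hit this), a common workaround is to exit the chroot, bind-mount the pseudo-filesystems from the live system, and chroot again:

exit;
mount --bind /dev /mnt/dev;
mount --bind /proc /mnt/proc;
mount --bind /sys /mnt/sys;
chroot /mnt/;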

Finally, using the passwd command, we changed the user’s password like so:

passwd bob;

To clean up, we exited the chroot environment:

exit;

Then, we unmounted the logical volume:

umount /mnt;

And finally, we deactivated the volume group:

vgchange -a n;

After the above steps, all changes were safely applied, so we rebooted the machine from its hard drive.
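
For quick reference, and assuming the same volume names and username (bob), the whole procedure from the live session boils down to the following:

sudo su;
vgchange -a y;
mount /dev/ubuntu-vg/ubuntu-lv /mnt/;
chroot /mnt/;
passwd bob;
exit;
umount /mnt;
vgchange -a n;
reboot;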


Playing the QMIX Two-step game on Ray

We are trying to expand the code of the Two-step game (an example from the QMIX paper) using the Ray framework. The changes we want to apply should extract the best checkpoint from a tune.run() trial, restore it on a new QMixTrainer, and then use that trainer on a new environment to compute the subsequent actions.

The code we tried to use is the following:

"""The two-step game from QMIX: https://arxiv.org/pdf/1803.11485.pdf

Configurations you can try:
    - normal policy gradients (PG)
    - contrib/MADDPG
    - QMIX

See also: centralized_critic.py for centralized critic PPO on this game.
"""

import argparse
from gym.spaces import Tuple, MultiDiscrete, Dict, Discrete
import os

import ray
from ray import tune
from ray.rllib.agents.qmix import QMixTrainer
from ray.tune import register_env, grid_search
from ray.rllib.env.multi_agent_env import ENV_STATE
from ray.rllib.examples.env.two_step_game import TwoStepGame
from ray.rllib.utils.test_utils import check_learning_achieved

import numpy as np

parser = argparse.ArgumentParser()
parser.add_argument("--run", type=str, default="QMIX")
parser.add_argument("--num-cpus", type=int, default=0)
parser.add_argument("--as-test", action="store_true")
parser.add_argument("--torch", action="store_true")
parser.add_argument("--stop-reward", type=float, default=7.0)
parser.add_argument("--stop-timesteps", type=int, default=50000)

if __name__ == "__main__":
    args = parser.parse_args()

    grouping = {
        "group_1": [0, 1],
    }
    obs_space = Tuple([
        Dict({
            "obs": MultiDiscrete([2, 2, 2, 3]),
            ENV_STATE: MultiDiscrete([2, 2, 2])
        }),
        Dict({
            "obs": MultiDiscrete([2, 2, 2, 3]),
            ENV_STATE: MultiDiscrete([2, 2, 2])
        }),
    ])
    act_space = Tuple([
        TwoStepGame.action_space,
        TwoStepGame.action_space,
    ])
    register_env(
        "grouped_twostep",
        lambda config: TwoStepGame(config).with_agent_groups(
            grouping, obs_space=obs_space, act_space=act_space))

    if args.run == "contrib/MADDPG":
        obs_space_dict = {
            "agent_1": Discrete(6),
            "agent_2": Discrete(6),
        }
        act_space_dict = {
            "agent_1": TwoStepGame.action_space,
            "agent_2": TwoStepGame.action_space,
        }
        config = {
            "learning_starts": 100,
            "env_config": {
                "actions_are_logits": True,
            },
            "multiagent": {
                "policies": {
                    "pol1": (None, Discrete(6), TwoStepGame.action_space, {
                        "agent_id": 0,
                    }),
                    "pol2": (None, Discrete(6), TwoStepGame.action_space, {
                        "agent_id": 1,
                    }),
                },
                "policy_mapping_fn": lambda x: "pol1" if x == 0 else "pol2",
            },
            "framework": "torch" if args.torch else "tf",
            # Use GPUs iff `RLLIB_NUM_GPUS` env var set to > 0.
            "num_gpus": int(os.environ.get("RLLIB_NUM_GPUS", "0")),
        }
        group = False
    elif args.run == "QMIX":
        config = {
            "rollout_fragment_length": 4,
            "train_batch_size": 32,
            "exploration_config": {
                "epsilon_timesteps": 5000,
                "final_epsilon": 0.05,
            },
            "num_workers": 0,
            "mixer": grid_search([None, "qmix", "vdn"]),
            "env_config": {
                "separate_state_space": True,
                "one_hot_state_encoding": True
            },
            # Use GPUs iff `RLLIB_NUM_GPUS` env var set to > 0.
            "num_gpus": int(os.environ.get("RLLIB_NUM_GPUS", "0")),
            "framework": "torch" if args.torch else "tf",
        }
        group = True
    else:
        config = {
            # Use GPUs iff `RLLIB_NUM_GPUS` env var set to > 0.
            "num_gpus": int(os.environ.get("RLLIB_NUM_GPUS", "0")),
            "framework": "torch" if args.torch else "tf",
        }
        group = False

    ray.init(num_cpus=args.num_cpus or None)

    stop = {
        "episode_reward_mean": args.stop_reward,
        "timesteps_total": args.stop_timesteps,
    }

    config = dict(config, **{
        "env": "grouped_twostep" if group else TwoStepGame,
    })

    results = tune.run(args.run, stop=stop, config=config, verbose=1, checkpoint_freq=1, checkpoint_at_end=True)

    if args.as_test:
        check_learning_achieved(results, args.stop_reward)

    best_checkpoint = results.get_best_checkpoint(results.trials[0], mode="max")
    print(f".. best checkpoint was: {best_checkpoint}")

    env = TwoStepGame(config).with_agent_groups(grouping, obs_space=obs_space, act_space=act_space)
    obs = env.reset()

    rllib_config = config.copy()
    rllib_config["mixer"] = "qmix"
    new_trainer = QMixTrainer(config=rllib_config)
    new_trainer.restore(best_checkpoint)

    a1 = new_trainer.compute_action(observation=obs['group_1'])
    a2 = new_trainer.compute_action(observation=np.concatenate([obs['group_1'], [1]]))

    ray.shutdown()

To make it easier to see the changes against the original example, here is the patch:

Index: main.py

<+>UTF-8
===================================================================
diff --git a/main.py b/main.py
--- a/main.py	(revision 80b3473ef3eede5f94e4805797556940bee91bc8)
+++ b/main.py	(date 1637485442837)
@@ -14,13 +14,16 @@
 
 import ray
 from ray import tune
+from ray.rllib.agents.qmix import QMixTrainer
 from ray.tune import register_env, grid_search
 from ray.rllib.env.multi_agent_env import ENV_STATE
 from ray.rllib.examples.env.two_step_game import TwoStepGame
 from ray.rllib.utils.test_utils import check_learning_achieved
 
+import numpy as np
+
 parser = argparse.ArgumentParser()
-parser.add_argument("--run", type=str, default="PG")
+parser.add_argument("--run", type=str, default="QMIX")
 parser.add_argument("--num-cpus", type=int, default=0)
 parser.add_argument("--as-test", action="store_true")
 parser.add_argument("--torch", action="store_true")
@@ -120,9 +123,23 @@
         "env": "grouped_twostep" if group else TwoStepGame,
     })
 
-    results = tune.run(args.run, stop=stop, config=config, verbose=1)
+    results = tune.run(args.run, stop=stop, config=config, verbose=1, checkpoint_freq=1, checkpoint_at_end=True)
 
     if args.as_test:
         check_learning_achieved(results, args.stop_reward)
 
+    best_checkpoint = results.get_best_checkpoint(results.trials[0], mode="max")
+    print(f".. best checkpoint was: {best_checkpoint}")
+
+    env = TwoStepGame(config).with_agent_groups(grouping, obs_space=obs_space, act_space=act_space)
+    obs = env.reset()
+
+    rllib_config = config.copy()
+    rllib_config["mixer"] = "qmix"
+    new_trainer = QMixTrainer(config=rllib_config)
+    new_trainer.restore(best_checkpoint)
+
+    a1 = new_trainer.compute_action(observation=obs['group_1'])
+    a2 = new_trainer.compute_action(observation=np.concatenate([obs['group_1'], [1]]))
+
     ray.shutdown()

When we execute, we get the following errors:

a1 = new_trainer.compute_action(observation=obs['group_1'])

Produces:

ValueError: ('Observation ({}) outside given space ({})!', [0, 3], Tuple(Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2])), Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2]))))

a2 = new_trainer.compute_action(observation=np.concatenate([obs['group_1'], [1]]))

Produces:

ValueError: ('Observation ({}) outside given space ({})!', array([0, 3, 1]), Tuple(Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2])), Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2]))))

We are currently trying to figure out how we should change the observation so that it gets accepted by the check_shape() function of the preprocessor:

def check_shape(self, observation: Any) -> None:
    """Checks the shape of the given observation."""
    if self._i % VALIDATION_INTERVAL == 0:
        if type(observation) is list and isinstance(
                self._obs_space, gym.spaces.Box):
            observation = np.array(observation)
        try:
            if not self._obs_space.contains(observation):
                raise ValueError(
                    "Observation ({}) outside given space ({})!",
                    observation, self._obs_space)
        except AttributeError:
            raise ValueError(
                "Observation for a Box/MultiBinary/MultiDiscrete space "
                "should be an np.array, not a Python list.", observation)
    self._i += 1

When calling the check_shape() function, these are the values that are processed:

observation:
value = [0, 3]
type = <class 'list'>

self._obs_space:
value = Tuple(Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2])), Dict(obs:MultiDiscrete([2 2 2 3]), state:MultiDiscrete([2 2 2])))
type = <class 'gym.spaces.tuple.Tuple'>

and this line fails:

if not self._obs_space.contains(observation)
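
In the meantime, here is a minimal sketch (an assumption on our part, not a verified fix) of an observation that, as far as we understand gym spaces, the grouped Tuple space would accept instead of the flat [0, 3] list we are currently passing:

import numpy as np
from gym.spaces import Dict, MultiDiscrete, Tuple

# Rebuild the grouped observation space exactly as in the script above
# ("state" is the value of the ENV_STATE constant in RLlib).
agent_space = Dict({
    "obs": MultiDiscrete([2, 2, 2, 3]),
    "state": MultiDiscrete([2, 2, 2]),
})
obs_space = Tuple([agent_space, agent_space])

# A candidate observation shaped the way we believe the space expects:
# a tuple with one dict per grouped agent.
candidate = (
    {"obs": np.array([0, 0, 0, 2]), "state": np.array([0, 0, 0])},
    {"obs": np.array([0, 0, 0, 2]), "state": np.array([0, 0, 0])},
)
print(obs_space.contains(candidate))  # Should print True if our reading is right.
print(obs_space.sample())             # Another quick way to inspect the expected layout.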

Any positive feedback is welcome!


Revisiting neural-style-tf in 2021

We decided to revisit this post (https://bytefreaks.net/applications/neural-style-tf-another-open-source-alternative-to-prisma-for-advanced-users) in 2021 and provide an updated installation manual for Ubuntu 20.04 LTS.

Setup

Conda / Anaconda

First of all, we installed and activated Anaconda on an Ubuntu 20.04 LTS desktop. To do so, we installed the following dependencies from the repositories:

sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6;

Then, we downloaded the 64-Bit (x86) Installer from (https://www.anaconda.com/products/individual#linux).

Using a terminal, we followed the instructions here (https://docs.anaconda.com/anaconda/install/linux/) and performed the installation.
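
For completeness, the installation itself comes down to running the downloaded script and accepting the prompts; the exact file name depends on the release you grabbed (the one below is only an example):

bash ~/Downloads/Anaconda3-2021.05-Linux-x86_64.sh;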

Python environment and OpenCV for Python

Following the previous step, we used the commands below to create a virtual environment for our code. We needed Python version 3.7 (even though Anaconda highlights version 3.9 at https://www.anaconda.com/products/individual#linux) and OpenCV for Python.

source ~/anaconda3/bin/activate;
# We need python 3.7 at max to support TensorFlow version 1
conda create --yes --name Style python=3.7;
conda activate Style;
# Version 1 of TensorFlow is needed for the project that we will clone, version 1.15 is the latest and greatest version of TensorFlow 1.
pip install tensorflow==1.15 tensorflow-gpu==1.15 scipy numpy opencv-python;

Cloning the project and all necessary files

git clone https://github.com/cysmith/neural-style-tf.git;
cd neural-style-tf/;
wget http://www.vlfeat.org/matconvnet/models/imagenet-vgg-verydeep-19.mat;
# After everything is complete, it is time to create our first 'artistic' image.
python neural_style.py --content_img "/home/bob/Pictures/Aphrodite Hills Golf Course - Paphos, Cyprus.jpg" --style_imgs "/home/bob/Pictures/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg" --max_size 400 --max_iterations 500 --device /cpu:0 --verbose;
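
As a side note, if a CUDA-capable GPU with a matching TensorFlow 1.15 GPU setup is available, the same render should be able to target it by changing the device flag (we ran on the CPU for this post, so treat the command below as untested):

python neural_style.py --content_img "/home/bob/Pictures/Aphrodite Hills Golf Course - Paphos, Cyprus.jpg" --style_imgs "/home/bob/Pictures/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg" --max_size 400 --max_iterations 500 --device /gpu:0 --verbose;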

Results

(The original post embedded three images here: the generated result, the original content image, and the adapted style input.)

Problems that you might get

If you get the following error:

ImportError: libGL.so.1: cannot open shared object file: No such file or directory

You will need to install some additional dependencies for OpenCV, as your Ubuntu installation might have been a minimal one. To fix this issue, install the following package from the repositories:

sudo apt-get update;
sudo apt-get install -y python3-opencv;


Testing free Text to Speech engines on Ubuntu GNU/Linux

Recently, we decided to test a few free text-to-speech (TtS) engines on GNU/Linux. We were curious about the capabilities currently available, as we wanted to create a few videos. To play around a bit, we tested espeak, festival, and pico. For this reason, we created a text file (called text.txt) and added the following content to it:

I triple E has a lot of scholarships, awards, and opportunities, but it doesn't have a centralized site where members can quickly identify the right ones.
Many problems arise as a result of the lack of this platform.
One crucial issue is that many people are unaware of specific opportunities until it is too late. Many projects are squandered each year because there is insufficient knowledge about these opportunities, resulting in low participation.
Another critical difficulty is having to start over with each application. Many people find it frustrating, and it prevents them from doing so.
The lack of real-time support to answer issues while an applicant is applying is critical, leading to discouragement and abandonment of the application process.
Providing references is a topic that many individuals are uncomfortable with. They are embarrassed to seek references that need to learn new systems and maybe answer the same questions posed in other ways.

Our solution is utilizing the Collabratec platform and storing all of these opportunities there:
Collabratec already has numerous key capabilities in place, lowering development costs.
Each application may have its own community or working group where an applicant can seek special clarifications or support. Collabratec will save money on development by repurposing existing technology. It will also give such features a new purpose.
Through those working groups, experienced members can share their knowledge and potentially coach applicants during their application process. Many members would be willing to help others attain their objectives, especially after they've gone through the process and understand the frustrations others are experiencing. We could utilize badges to reward individuals who aid others and those who apply for these possibilities, which is a frequent practice in Collabratec to make certain members stand out. This approach will assist members in getting to know one another and expanding their network outside their geographic zones, resulting in a genuinely global I triple E experience.
People who create opportunities can utilize the I triple E profile of a user to pre-populate elements of their application. As a result, the applicants' effort will be reduced because they will only fill in the questions related to that particular opportunity.
Without any additional work, the system may reuse earlier references. Assume that a reference has to be updated or validated to ensure that it is still valid. In that situation, the system may send an automatic notification to that person, asking them to approve, alter, or delete their earlier contribution.
Because users can readily share each application form and the corresponding working group information, Collabratec's capabilities as a social network will significantly enhance each opportunity's reach and all related documents, public comments, and discussions.

espeak

We started off with espeak and used the following commands to test it:

# Command to install espeak;
sudo apt install espeak;
# Command that reads the text.txt file and creates an audio file from its content.
espeak -f text.txt -w espeak.wav;

The result from espeak is below:

espeak definitely does not sound human-like. It is a fun tool if you need to create an audio file that sounds robotic! In our case, though, it was not a solution: we wanted to narrate a long text, and listening to a robotic voice for a long time can be tiring.
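
That said, espeak does have a few knobs that change how the output sounds; for example, -s sets the speaking rate in words per minute and -v selects a voice or variant (the values below are only an example):

espeak -f text.txt -s 140 -v en-us+f3 -w espeak-variant.wav;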

festival

After that, we tested the text2wave tool of festival as follows:

sudo apt install festival;
cat text.txt | text2wave -o festival.wav;

The results from festival/text2wave are the following:

festival does sound a bit better than espeak; it is almost smooth, but not quite. You can easily tell that this is a computer-generated voice.

pico

Finally, we tested the pico utilities. We set them up and used them as follows:

sudo apt install libttspico-utils;
pico2wave -l en-US -w test.wav "$(cat text.txt)";
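
As an aside, pico2wave only ships with a handful of voices; according to its --help output, the -l flag accepts a few locales (en-US, en-GB, de-DE, es-ES, fr-FR, it-IT), so switching the accent is a one-flag change:

pico2wave -l en-GB -w test-gb.wav "$(cat text.txt)";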

The results of pico2wave were pretty good! Not perfect but still good! The voice is nearly human-like and fairly smooth. Below is the result of our test:

Of the three utilities, pico was the most human-like and fit our needs best. With this tool, we will be able to create videos with narration that is not too annoying to listen to.

Other information

To create the videos, we used ffmpeg. As shown in the following commands, we combined each audio file with a static image that is looped for the duration of the audio.

ffmpeg -loop 1 -i TtS-pico2wave.png -i test.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-pico2wave.mp4;
ffmpeg -loop 1 -i TtS-espeak.png -i espeak.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-espeak.mp4;
ffmpeg -loop 1 -i TtS-festival.png -i festival.wav -c:a aac -c:v libx264 -pix_fmt yuv420p -shortest TtS-festival.mp4;