Informatics Olympiad


Installing the Task Translation System for IOI competitions (Linguist) on Ubuntu Server 14.04 LTS

First, install any missing dependencies (including font-related packages):

sudo apt-get update;
sudo apt-get install git unzip build-essential chrpath libssl-dev libxft-dev libfreetype6 libfreetype6-dev libfontconfig1 libfontconfig1-dev fontconfig fontconfig-config fonts-dejavu-core fonts-droid fonts-freefont-ttf fonts-kacst fonts-kacst-one fonts-khmeros-core fonts-lao fonts-liberation fonts-lklug-sinhala fonts-nanum fonts-opensymbol fonts-sil-abyssinica fonts-sil-padauk fonts-takao-pgothic fonts-thai-tlwg fonts-tibetan-machine fonts-tlwg-garuda fonts-tlwg-kinnari fonts-tlwg-loma fonts-tlwg-mono fonts-tlwg-norasi fonts-tlwg-purisa fonts-tlwg-sawasdee fonts-tlwg-typewriter fonts-tlwg-typist fonts-tlwg-typo fonts-tlwg-umpush fonts-tlwg-waree -y;

Then, get a copy of the repository:

git clone https://github.com/ioi/translation.git

Switch to the newly created directory:

cd translation/

Then, add the GPG key needed for the setup.
The gpg command in the setup contacts a public key server (hkp://keys.gnupg.net) and requests the key that the RVM project uses to sign each RVM release. To retrieve the key, we provide its ID, in this case 409B6B1796C275462A1703113804BB82D39DC0E3. Having the RVM project's public key allows us to verify the legitimacy of the RVM release we will be downloading, which is signed with the matching private key.

gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

If the above command fails with the following error

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
gpg: requesting key D39DC0E3 from hkp server keys.gnupg.net
?: [fd 4]: read error: Connection reset by peer
gpgkeys: HTTP fetch error 7: couldn't connect: eof
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0

Try using the following alternative:

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

Afterwards, perform the installation (this might take some time, depending on your connection to the servers from which the necessary packages are downloaded and on your CPU performance, as some compilation is performed locally):

./deploy.sh

While installing codemirror we got the following prompt:

replace public/codemirror-3.22/doc/upgrade_v3.html? [y]es, [n]o, [A]ll, [N]one, [r]ename:

We typed A and pressed Enter. We are not sure whether this is the intended answer, but everything worked properly later on.

Edit the ./config.yml file and set new values for the api_token and cookie_secret.

api_token: "4c0a6fe55f3d4aa9c5dbb9a59db7b20e"
cookie_secret: "SDfadadsf90u84oh23jnrwf"
  • api_token is a 32-character random key. It may contain only the characters a-f and the digits 0-9.
  • cookie_secret is a 23-character value. This example contains characters from a-z and A-Z and the digits 0-9.

You can generate random passwords here: http://bytefreaks.net/random-password-generator
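Alternatively, if you prefer the command line, something like the following should produce suitable values (a suggestion, not part of the original setup):

openssl rand -hex 16                                           # 32 hex characters, suitable for api_token
openssl rand -base64 48 | tr -dc 'a-zA-Z0-9' | head -c 23      # 23 alphanumeric characters, suitable for cookie_secret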

Then start the redis service using the following:

redis-server

 

In a new terminal, go to the DbInit folder:

cd translation/DbInit/

and update the files users.json and tasks.json to prepare the initial data to be imported into redis:

  • users.json: be sure to update the passwords of the users.
  • tasks.json: set the names of your tasks and the .md filenames of the original content.

The original content of tasks.json is:

[
    { "id": "1",    "title": "notice",     "filename": "notice.md" },
    { "id": "2",    "title": "gondola",    "filename": "gondola.md" },
    { "id": "3",    "title": "friends",    "filename": "friends.md" },
    { "id": "4",    "title": "holiday",    "filename": "holiday.md" }
]

Place the original .md files in this folder, and then initialize redis using the following command:

ruby dbinit.rb

Finally, go to the previous folder and start the translation system:

cd ..;
shotgun -o 0.0.0.0 -p 8080;

Visit http://SERVER_NAME_OR_IP:8080 to view the translation system.

You can use the admin account to make changes to users, to send notifications (if you omit the to field, the notification will be sent to all users) and to check out all tasks with their translations.

Staff accounts allow you to check out all tasks with their translations.

Note on the architecture

The deploy.sh script assumes your architecture and OS are 64-bit. To find which architecture you are using, execute uname -i. If the result is not x86_64, then phantomjs will not work for you and you will not be able to generate the PDFs. To fix this issue you need to download the correct version from https://bitbucket.org/ariya/phantomjs/downloads. At the time this tutorial was written, deploy.sh was installing https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.7-linux-x86_64.tar.bz2, so we installed https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.7-linux-i686.tar.bz2 using the following commands:

wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.7-linux-i686.tar.bz2;
tar xf phantomjs-1.9.7-linux-i686.tar.bz2;
sudo cp phantomjs-1.9.7-linux-i686/bin/phantomjs /usr/local/bin;
rm -rf phantomjs-1.9.7-linux-i686*;

The above commands follow the example in deploy.sh.

Updating logos and website look

To update the views of the website, edit the .erb files in the translation/views folder.

Examples:

  • login.erb: modify this file to change the login screen
  • tasks_index.erb: this file holds the structure of the main index the user sees after login (/tasks)
  • _navbar.erb: modify this file to change the navigation bar at the top of each page
  • ... and more

Installing CMS (Contest Management System) on multiple servers

For the needs of the 24th BOI (Balkan Olympiad in Informatics), which will be held in Cyprus in June 2016, we set up CMS (Contest Management System) on multiple servers.

We used Ubuntu 14.04 LTS (Trusty Tahr) as the OS of the servers; the installations were made with encrypted LVM.

We used 3 servers in total: the first (alpha) serves as a master that holds all services for the competition except for the workers that test the submissions. The other two (beta and gamma) were set up to hold the workers.

On Alpha:

We installed various packages that are needed for the installation of the system:

sudo apt-get install build-essential fpc postgresql postgresql-client gettext python2.7 python-setuptools python-tornado python-psycopg2 python-sqlalchemy python-psutil python-netifaces python-crypto python-tz python-six iso-codes shared-mime-info stl-manual python-beautifulsoup python-mechanize python-coverage python-mock cgroup-lite python-requests python-werkzeug python-gevent patool;

We downloaded the CMS and created the basic configuration.

#Download stable version from GitHub
wget https://github.com/cms-dev/cms/releases/download/v1.2.0/v1.2.0.tar.gz
#Extract the archive
tar -xf v1.2.0.tar.gz 
cd cms
#Copy the sample configuration files to be modified
cp config/cms.conf.sample config/cms.conf
cp config/cms.ranking.conf.sample config/cms.ranking.conf

cms.conf

Using a text editor we modified the file config/cms.conf. The changes we made are the following:

  • We replaced "database": "postgresql+psycopg2://cmsuser:password@localhost/database"
    With the username, the password and database name we will use for our database configuration in a while. e.g.
    "postgresql+psycopg2://myuser:myPassword@alpha/mydatabase". Note: we chose a username and database name with no capital letters.
  • We replaced "secret_key": "8e045a51e4b102ea803c06f92841a1fb" with another random 32 character string containing a-z and 0-9 (hex characters)
  • We replaced "rankings": ["http://usern4me:passw0rd@alpha:8890/"] to some other username password configuration. e.g. "rankings": ["http://myUsern4me:myPassw0rd@localhost:8890/"]
  • We changed the "_section": "AsyncLibrary", to hold the configuration of the services as follows
"_section": "AsyncLibrary",

"core_services":
{
    "LogService": [["alpha", 29000]],
    "ResourceService": [["alpha", 28000], ["beta", 28000], ["gamma", 28000]],
    "ScoringService": [["alpha", 28500]],
    "Checker": [["alpha", 22000]],
    "EvaluationService": [["alpha", 25000]],
    "Worker": [["beta", 26000], ["beta", 26001], ["beta", 26002], ["beta", 26003], ["gamma", 26000], ["gamma", 26001], ["gamma", 26002], ["gamma", 26003]],
    "ContestWebServer": [["alpha", 21000]],
    "AdminWebServer": [["alpha", 21100]],
    "ProxyService": [["alpha", 28600]],
    "PrintingService": [["alpha", 25123]]
},

"other_services":
{
    "TestFileCacher": [["alpha", 27501]]
},

We replaced all instances of localhost with alpha. We did not put any workers on alpha for fairness.

cms.ranking.conf

Using a text editor we modified the file config/cms.ranking.conf. The changes we made were to set

    "username":   "usern4me",
    "password":   "passw0rd",

to the values we set in the file cms.conf at the section rankings. In our example the values would be myUsern4me and myPassw0rd.
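In our example, the relevant lines of config/cms.ranking.conf would therefore read:

    "username":   "myUsern4me",
    "password":   "myPassw0rd",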

Setting up the system

After we were done with the configuration changes, we executed ./setup.py build so that the system would perform the basic checks and setup. Then, we executed sudo ./setup.py install to perform the installation.
Following that, we added the user we will use to start the competition system to the cmsuser group: sudo usermod -a -G cmsuser george.
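For quick reference, the same steps as a plain command sequence (george is the example user from above; use your own):

./setup.py build
sudo ./setup.py install
sudo usermod -a -G cmsuser george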

And we switched to the DB user to take the necessary actions.

Database Configuration

sudo su - postgres to switch to the DB user.
We created the DB user we defined earlier in the cms.conf file (myuser) with the password we defined, using the command createuser myuser -P; the password in our case is myPassword.
After that, we created our database (mydatabase) and assigned our user to be the owner of that database with full access rights, using the following commands: createdb -O myuser mydatabase, then psql mydatabase -c 'ALTER SCHEMA public OWNER TO myuser' and finally psql mydatabase -c 'GRANT SELECT ON pg_largeobject TO myuser'.
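Summarized, the database setup looks like this (using the example names from above):

sudo su - postgres
createuser myuser -P       # prompts for the password (myPassword in this example)
createdb -O myuser mydatabase
psql mydatabase -c 'ALTER SCHEMA public OWNER TO myuser'
psql mydatabase -c 'GRANT SELECT ON pg_largeobject TO myuser'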

Allowing connections from other machines

We needed to find the hba file for psql to allow incoming connections from other machines. We got the location of the file using psql -t -P format=unaligned -c 'show hba_file';. In our case the location was /etc/postgresql/9.3/main/pg_hba.conf. Using a text editor we added at the end of the file the following line:

host  mydatabase  myuser  0.0.0.0/0  md5

The above line instructs the system to allow our user to connect from any IP to our database.
For better security, you can use the following lines instead:

host  mydatabase  myuser  beta  md5
host  mydatabase  myuser  gamma  md5

Then we used psql -t -P format=unaligned -c 'show config_file';, which gave us the file /etc/postgresql/9.3/main/postgresql.conf. Using a text editor we changed #listen_addresses = 'localhost' to listen_addresses = '*' to enable listening on all network interfaces.
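If you prefer to make that change non-interactively, something like the following sed command would do it (adjust the path to the config_file reported on your own system):

sudo sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/9.3/main/postgresql.conf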

Back on the terminal, we exited the console of the database user, re-installed the CMS using sudo ./setup.py install to create all database tables, and rebooted the machine using sudo reboot to apply the group changes to the user as well.

Once the system reboots, execute cmsInitDB to create the tables for our database.

Create a file named updateHosts.sh and copy the following code into it. We will use it to update the hosts file of the machine so that the machine's hostname resolves to its external IP first.

#!/bin/bash
# Updates /etc/hosts so that the machine's hostname resolves to the IP
# of the given network device instead of the loopback address.

FILE="/etc/hosts";
BACKUP="$FILE.original";

# Keep a copy of the original hosts file; never overwrite an existing backup.
if [ -f "$BACKUP" ]; then
	echo "--- --- --- BackUp exists --- --- ---";
else
	cp "$FILE" "$BACKUP";
	if [ $? -ne 0 ]; then
		echo -e "--- --- --- Could not create backUp. Terminating! --- --- ---";
		exit -1;
	fi
	echo "--- --- --- BackUp Created --- --- ---";
fi

echo "--- --- --- Original '$FILE' --- --- ---";
cat "$BACKUP";

# The network device can be passed as the first argument; p7p1 is the default.
DEFAULT_DEVICE="p7p1";
DEVICE=${1:-$DEFAULT_DEVICE};
# Extract the IPv4 address of the device (ifconfig output format of Ubuntu 14.04).
IP=`ifconfig "$DEVICE" | grep "inet addr" | cut -d ':' -f 2 | cut -d ' ' -f 1`;
HOST=`hostname`;

# Rebuild the hosts file from the backup, replacing the IP on the line
# that contains our hostname with the IP of the network device.
> "$FILE";
while read LINE; do
	if [[ "$LINE" == *"$HOST"* ]]; then
		echo -ne "$IP\t" >> "$FILE";
		echo "$LINE" | cut -f2- >> "$FILE";
	else
		echo "$LINE" >> "$FILE";
	fi
done < "$BACKUP";

echo "--- --- --- Updated '$FILE' --- --- ---";
cat "$FILE";

exit 0;

Execute chmod +x updateHosts.sh to make the script executable.

The script needs to know the name of the network device it is supposed to modify. Type ifconfig in your terminal to find the available device names. The one you are most likely interested in will have an IP address similar to that of your PC or smartphone, e.g. inet addr:192.168.10.5. If the device is an ethernet device, you will find Link encap:Ethernet next to the device name. The name you need to copy could be eth0, p7p1 or something similar.
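For example, a quick way to list the device names together with their addresses (the exact output varies per machine) is:

ifconfig | grep -E "Link encap|inet addr"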

Once you get the name execute sudo ./updateHosts.sh DEVICE_NAME and replace DEVICE_NAME with the name you copied in the previous step.

Starting the system on Alpha:

In a terminal execute cmsAdminWebServer 0 to start the administration server and use it to create a new contest from http://alpha:8889/.

Once you have created your competition from http://alpha:8889/, open up 3 terminals to alpha.

On the first one execute cmsLogService 0 to start the logging service.

On the second execute cmsRankingWebServer to start the Ranking Server that can be reached at http://alpha:8890/.

And on the third one, execute cmsResourceService 0 -a 1 to start the competition we just created along with all the services that are needed for it on alpha. The competition can be reached at http://alpha:8888/.
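In short, the three commands (one per terminal) are:

cmsLogService 0             # logging service
cmsRankingWebServer         # ranking server, reachable at http://alpha:8890/
cmsResourceService 0 -a 1   # contest services, contest reachable at http://alpha:8888/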

Setting up the live replication of the database:

Switch to the postgres user with sudo su - postgres and create an ssh key with ssh-keygen.
Copy the key you just created to both servers using ssh-copy-id contest@beta and ssh-copy-id contest@gamma.
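As a plain command sequence (contest is the account name used on our servers; adjust to yours):

sudo su - postgres
ssh-keygen
ssh-copy-id contest@beta
ssh-copy-id contest@gamma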

Next we will create a user called replicator that can be used solely for the replication process and set a random password for it:

psql -c "CREATE USER replicator REPLICATION LOGIN CONNECTION LIMIT -1 ENCRYPTED PASSWORD 'randomPassword';"

Then edit the file /etc/postgresql/9.3/main/pg_hba.conf (we found the location of pg_hba.conf using the command psql -t -P format=unaligned -c 'show hba_file';) and add the following lines:

host    replication     replicator     beta   md5
host    replication     replicator     gamma   md5

This will allow user replicator to connect from the machines beta and gamma.
You could give the following line, instead of the two above:

host    replication     replicator     0.0.0.0/0   md5

This will allow user replicator to connect from ANY IP.
Also, if you know beforehand the IPs that beta and gamma are going to have, it is more secure to replace the above with two lines containing those IPs:

host    replication     replicator     IP_address_of_beta/32   md5
host    replication     replicator     IP_address_of_gamma/32   md5

Following, using psql -t -P format=unaligned -c 'show config_file'; we got the path to the file postgresql.conf (/etc/postgresql/9.3/main/postgresql.conf) which we edited using a text editor. The following changes were made to the file:

  • We uncommented and changed #wal_level = minimal to wal_level = hot_standby
  • We uncommented and changed #archive_mode = off to archive_mode = on
  • We uncommented and changed #archive_command = '' to archive_command = 'cd .'
  • We uncommented and changed #max_wal_senders = 0 to max_wal_senders = 10
  • We uncommented and changed #hot_standby = off to hot_standby = on
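After these edits, the relevant (now uncommented) lines in postgresql.conf read:

wal_level = hot_standby
archive_mode = on
archive_command = 'cd .'
max_wal_senders = 10
hot_standby = on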

Save the file and execute service postgresql restart to apply the changes.

Execute psql -c "ALTER USER replicator WITH CONNECTION LIMIT -1"; to remove the limit of connections for our replication user.
You can verify the settings using psql -c "SELECT rolname, rolconnlimit FROM pg_roles";, this command will show you the connection limits per user. If the value is -1 then it means that there is no restriction.

On Beta and Gamma:

We installed various packages that are needed for the installation of the system. We intended to use these machines both as the workers for the competition and as live backups in case alpha goes down. Later on, we will enable live replication of the database on alpha to these machines.

sudo apt-get install build-essential fpc postgresql postgresql-client gettext python2.7 python-setuptools python-tornado python-psycopg2 python-sqlalchemy python-psutil python-netifaces python-crypto python-tz python-six iso-codes shared-mime-info stl-manual python-beautifulsoup python-mechanize python-coverage python-mock cgroup-lite python-requests python-werkzeug python-gevent patool;

We switched to the postgres user with sudo su - postgres and stopped the postgres service with service postgresql stop.

Then edit the file /etc/postgresql/9.3/main/pg_hba.conf (we found the location of pg_hba.conf using the command psql -t -P format=unaligned -c 'show hba_file';) and add the following line:

host    replication     replicator     alpha  md5

Following, using psql -t -P format=unaligned -c 'show config_file'; we got the path to the file postgresql.conf (/etc/postgresql/9.3/main/postgresql.conf) which we edited using a text editor. The following changes were made to the file:

  • We uncommented and changed #listen_addresses = 'localhost' to listen_addresses = 'localhost,beta' for beta and to listen_addresses = 'localhost,gamma' for gamma
  • We uncommented and changed #wal_level = minimal to wal_level = hot_standby
  • We uncommented and changed #archive_mode = off to archive_mode = on
  • We uncommented and changed #archive_command = '' to archive_command = 'cd .'
  • We uncommented and changed #max_wal_senders = 0 to max_wal_senders = 1
  • We uncommented and changed #hot_standby = off to hot_standby = on

Execute pg_basebackup -h alpha -D /var/lib/postgresql/9.3/main/ -U replicator -v -P --xlog-method=stream to copy the database from alpha. If you get the error pg_basebackup: directory "/var/lib/postgresql/9.3/main/" exists but is not empty, delete the contents of the folder with the command rm -rf /var/lib/postgresql/9.3/main/* and try again. You will be prompted for a password; use the random password we assigned to the user replicator earlier, in this scenario randomPassword.
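In the usual case, the sequence (run as the postgres user) boils down to:

rm -rf /var/lib/postgresql/9.3/main/*    # only needed if pg_basebackup complains that the directory is not empty
pg_basebackup -h alpha -D /var/lib/postgresql/9.3/main/ -U replicator -v -P --xlog-method=stream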

Afterwards, create the file /var/lib/postgresql/9.3/main/recovery.conf and add the following content:

standby_mode = 'on'
primary_conninfo = 'host=alpha port=5432 user=replicator password=randomPassword sslmode=require'
trigger_file = '/tmp/postgresql.trigger'

Finally, start the service using service postgresql start and exit to stop using the postgres user.

Starting the workers on beta and gamma:

First, we downloaded the CMS:

#Download stable version from GitHub
wget https://github.com/cms-dev/cms/releases/download/v1.2.0/v1.2.0.tar.gz
#Extract the archive
tar -xf v1.2.0.tar.gz

Then, we copied the configuration files cms/config/cms.conf and cms/config/cms.ranking.conf from alpha using the following commands:

scp contest@alpha:~/cms/config/cms.conf ~/cms/config/cms.conf
scp contest@alpha:~/cms/config/cms.ranking.conf ~/cms/config/cms.ranking.conf

Afterwards, we entered the CMS folder with cd cms, built it with ./setup.py build and installed it with sudo ./setup.py install.
Following that, we added the user we will use to start the competition system to the cmsuser group: sudo usermod -a -G cmsuser george. We rebooted the machine using sudo reboot to apply all changes, including the group change to the user.
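The same steps as a command sequence:

cd cms
./setup.py build
sudo ./setup.py install
sudo usermod -a -G cmsuser george
sudo reboot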

Then we copied the updateHosts.sh file from alpha using scp contest@alpha:~/updateHosts.sh ~/updateHosts.sh and made it executable with chmod +x updateHosts.sh. Then we executed sudo ./updateHosts.sh DEVICE_NAME. As before, replace DEVICE_NAME with the name of the network device that will be used for the communication.

To start the workers, execute cmsResourceService 1 -a 1 on beta and cmsResourceService 2 -a 1 on gamma.

Next steps

In case you have DNS problems, modify the file /etc/hosts and add the following entries to it:

10.1.10.3       alpha
10.1.10.4       beta
10.1.10.5       gamma

In the end it should look similar to this:

127.0.0.1    localhost
10.1.10.3    alpha
10.1.10.4    beta
10.1.10.5    gamma

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Setting up the contest’s environment

The contest environment of the Balkan Olympiad in Informatics 2016 (BOI 2016) that will be hosted in Cyprus will have the following configuration.

For the contestants:

The operating system of the competition will be Ubuntu 16.04 LTS (Xenial Xerus) Desktop edition, 64-bit (x86_64) architecture.

On the system we have two accounts:

  • contestant – this is the account the contestants will use. It is set to auto-login, has no password and is a normal (non-administrative) account.
  • maintenance – this is the account the administrators will use. It is an administrative account.

Using the administrative account:

Before proceeding with any changes, we updated the whole system.

sudo apt-get -y update; 
sudo apt-get -y upgrade;

Later, some applications that are not needed for the competition were removed from the installation in an attempt to keep it under 5.5GB.

sudo apt-get remove transmission-* thunderbird* shotwell* rhythmbox* gnome-mines gnome-sudoku simple-scan remmina* gnome-mahjongg cheese* aisleriot libreoffice-*;

After that, we installed the additional software that is needed for the competition from the Ubuntu repositories.

sudo apt-get -y install build-essential codeblocks codeblocks-contrib ddd emacs geany gedit nano scite vim mc stl-manual valgrind fpc fp-docs lazarus terminator;

Some cleanup of the disk was needed at this point, which we did with the commands below.

#Please note that the following commands will remove applications and services; be sure to read what is about to be removed.
#You might want to keep some of the packages that are being deleted.
sudo apt autoremove;
sudo apt-get autoclean;
sudo apt-get clean;

Using the contestant’s account:

Next, we created desktop shortcuts for the applications the contestants should use, making them easier to find.

for name in codeblocks ddd emacs firefox geany gedit gnome-calculator gnome-terminal lazarus mc python SciTE terminator vim; do
	# copy each application's launcher to the contestant's desktop
	cp /usr/share/applications/$name*.desktop /home/contestant/Desktop;
done

As one last step to finish the setup of what a contestant needs, we started Firefox and set the homepages to http://alpha:8888/|file:///usr/share/doc/stl-manual/html/index.html|http://alpha:8890/. alpha is the hostname of our grading environment.

For maintainers:

On the contestant’s machine:

The machines have ssh servers enabled to allow administrative personnel to perform maintenance operations.

#The following command installs and enables ssh server on an Ubuntu 16.04 desktop installation.
sudo apt-get install openssh-server;
#We will create a read-only copy of the original configuration.
#Everybody should do this, so that if they do not manage to configure their sshd properly they can restore the default configuration.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.factory-defaults;
sudo chmod a-w /etc/ssh/sshd_config.factory-defaults;

On the administration machine:

We created a new public/private rsa key pair using the command ssh-keygen, which we uploaded to the contestant’s machine using ssh-copy-id maintenance@machine. We will use this key to connect to the contestant’s machine without using a password.
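As plain commands (machine is a placeholder for the contestant machine's hostname or IP):

ssh-keygen
ssh-copy-id maintenance@machine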

On the contestant’s machine:

Using a text editor (like gedit, nano, vi, etc.) we edited the /etc/ssh/sshd_config configuration file to apply some security changes.

  • We changed #PasswordAuthentication yes to PasswordAuthentication no to disable password logins. Only users holding our private ssh key will be able to log in.
  • At the end of the file we added AllowTcpForwarding no and X11Forwarding no to disable forwarding.
  • At the end of the file we added AllowUsers maintenance to whitelist maintenance on the ssh service while blocking everyone else from using it. In other words, only the user maintenance will be able to use the ssh service.
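After these edits, the relevant lines of /etc/ssh/sshd_config read:

PasswordAuthentication no
AllowTcpForwarding no
X11Forwarding no
AllowUsers maintenance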

When we were done with the changes, we saved the file and issued the following command to restart the ssh service.

sudo systemctl restart ssh

Next, we had to block any network activity the contestants should not have.

To do so, we installed squid on the contestant’s machine and configured it to allow access only to the STL documentation, the contest environment and the results page.

sudo apt-get install squid;
#We will create a read-only copy of the original configuration.
#Everybody should do this, so that if they do not manage to configure their squid properly they can restore the default configuration.
#We used move instead of copy here because the original file is HUGE (~8K lines).
#We moved the file so we can create a new one that contains only what we need.
sudo mv /etc/squid/squid.conf /etc/squid/squid.conf.factory-defaults;
sudo chmod a-w /etc/squid/squid.conf.factory-defaults;

Afterwards, we created a new configuration file /etc/squid/squid.conf and used the following as content.

acl Safe_ports port 8888	# competition
acl Safe_ports port 8890	# ranking

acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT all
http_access allow localhost manager
http_access deny manager
http_access allow localhost
acl whitelist dstdomain .alpha .beta
http_access allow whitelist
http_access deny all

http_port 3128 transparent

coredump_dir /var/spool/squid
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .		0	20%	4320

cache deny all

To redirect all outgoing traffic to our squid proxy server and complete the procedure, we used the following commands.

sudo squid -k reconfigure;
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner proxy --dport 1:65535 -j REDIRECT --to-port 3128;
#The simplest method to make the change permanent is to use iptables-save and iptables-restore to save the currently-defined iptables rules to a file and (re)load them (e.g., upon reboot).
sudo sh -c "iptables-save > /etc/iptables.conf";
#Then modify file /etc/rc.local and add right above the 'exit 0' command the following:
# Load iptables rules from this file
iptables-restore < /etc/iptables.conf
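For clarity, a minimal /etc/rc.local would then look similar to this (your file may contain additional comments shipped by the distribution):

#!/bin/sh -e
# Load iptables rules from this file
iptables-restore < /etc/iptables.conf

exit 0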

Next, we disabled the guest account, as it would cause trouble if used: it does not have permanent storage, so on restart all of the contestant's files would be deleted.

#Everybody should do this, so that if they do not manage to configure their lightdm properly they can restore the default configuration.
sudo cp /etc/lightdm/lightdm.conf /etc/lightdm/lightdm.conf.factory-defaults;
sudo chmod a-w /etc/lightdm/lightdm.conf.factory-defaults;

To disable the guest session, edit the file /etc/lightdm/lightdm.conf using a text editor and add the following at the end of the file: allow-guest=false. Save the file and close it.
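For reference, the appended setting is just the following; note that it has to fall under the seat configuration section of the file ([SeatDefaults] is shown here as an assumption, the section header in your file may differ):

[SeatDefaults]
allow-guest=false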
To make the change active you have to restart either the machine or lightdm itself; in any case all open graphical programs will close and you will lose unsaved work in all of them.

sudo systemctl restart lightdm;

 

Pending

disable mounting other disks

disable usb

back up data

block any connection outside the specific labs

To copy the flash drive

sudo dd if=/dev/sdd of=/dev/sdc bs=64K conv=noerror,sync
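Before running the above, double-check which device is which (the names /dev/sdd and /dev/sdc are specific to the machine the command was written on); mixing up if and of will overwrite the wrong drive. For example:

lsblk -o NAME,SIZE,MODEL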