mysqldump


A solution to running out of memory while executing mysqldump

Are you trying to perform a mysqldump on a large table and running out of memory every time? This can be a frustrating experience. Even if you try to use the --quick parameter, you may still run out of memory. In this blog post, we will discuss a solution to this problem.
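For reference, the --quick option tells mysqldump to retrieve rows from the server one at a time instead of buffering entire result sets in memory. A typical invocation for a single large table looks something like this (the user, database, and table names are placeholders):

mysqldump --quick -u myUser -p myDatabase myLargeTable > myLargeTable.sql;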

One option is to create a swap file to add more swap space. A swap file differs from a swap partition in that it lives on an existing filesystem, so it can be created, resized, or removed without repartitioning the disk. In the following steps, we will show you how to create a swap file.

First, create an empty file. This file will hold the swapped-out memory contents, so make sure it is big enough for your needs. The following command will create a 1GiB file, which adds 1GiB of swap space to your system:

dd if=/dev/zero of=/media/tux/bigdisk/swapfile.img bs=1024 count=1M;

If you want to create a 3GiB file, change the count value to count=3M, as shown in the example below. Refer to man dd for more information.
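For example, the following command would create a 3GiB swap file in the same location:

dd if=/dev/zero of=/media/tux/bigdisk/swapfile.img bs=1024 count=3M;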

Next, make a “swap filesystem” inside your new swap file and restrict its permissions so that only root can read and write it, using the following commands:

mkswap /media/tux/bigdisk/swapfile.img;
chmod 600 /media/tux/bigdisk/swapfile.img;
chown root:root /media/tux/bigdisk/swapfile.img;

To ensure that your new swap space is activated while booting up your computer, add it to the filesystem configuration file /etc/fstab. Add the following line to the end of the file:

/media/tux/bigdisk/swapfile.img swap swap sw 0 0

Adding the entry to /etc/fstab is recommended because the filesystem that contains the swap file must be mounted in read-write mode before the swap file can be accessed and activated.
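If you prefer to append the entry from the command line instead of opening /etc/fstab in an editor, a one-liner like the following (assuming you have sudo rights) will do it:

echo '/media/tux/bigdisk/swapfile.img swap swap sw 0 0' | sudo tee -a /etc/fstab;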

Finally, you can either reboot your computer or activate the new swap file manually with the following command:

swapon /media/tux/bigdisk/swapfile.img;

If everything goes well, you should see that more swap space is available for use. You can use the following commands to check your new swap and confirm that it is active:

cat /proc/swaps;

This should display something like:

Filename                           Type       Size    Used    Priority
/swapfile                          file       16777212 1048796    -2
/media/tux/bigdisk/swapfile.img    file       67108860 0          -3

You can also use the following command to check your swap usage:

grep 'Swap' /proc/meminfo;

This should display something like:

SwapCached:         132456 kB
SwapTotal:        83886072 kB
SwapFree:         82837276 kB
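You can also get a quick, human-readable summary of memory and swap usage with the free command; the Swap line should reflect the increased total:

free -h;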

Creating a swap file can be an effective solution to running out of memory while performing a mysqldump on a large table. It is a simple, dynamic solution that can be implemented easily on most Linux systems. Following the steps outlined in this post, you should be able to create a swap file and add more swap space to your system.


DBeaver: native client is not specified for connection

If you’re using DBeaver to perform a database dump, you may encounter an error that says, “native client is not specified for connection.” This error typically occurs when DBeaver can’t find the mysqldump executable on your system. Fortunately, there is a simple solution to this problem.

To resolve this issue, you need to specify the location of the mysqldump executable in DBeaver. Here are the steps you can follow:

  1. Click on the “Local Client …” button in the Export dialog of DBeaver. This will open a new pop-up window where you can specify the location of the mysqldump executable.
  2. From the drop-down menu in the pop-up window, select the “Browse …” option. This will allow you to navigate to the installation folder where mysqldump is located on your system.
  3. Once you’ve located the mysqldump executable, click OK on both windows. This will save your settings and allow you to perform a database dump using DBeaver.

To find the location of the mysqldump executable on your system, you can use the following command in a terminal window:

which mysqldump;

This command will display the full path to the mysqldump executable. Once you have this information, you can follow the steps above to specify the location of mysqldump in DBeaver.
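If which prints nothing, the MySQL client tools are probably not installed at all. Package names vary between distributions; on Debian- or Ubuntu-based systems, for example, something like the following should install a client package that provides mysqldump:

sudo apt-get install default-mysql-client;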

In summary, if you’re getting the “native client is not specified for connection” error when trying to perform a database dump in DBeaver, you can resolve it by specifying the location of the mysqldump executable using the steps outlined above. The “which mysqldump” command can be used to find the location of mysqldump on your system.


Compressing mysqldump with a pipe: MySQL

When working with MySQL databases, it’s common to create backups of the database using the mysqldump utility. However, these backups can often take up a significant amount of disk space, especially for large databases. One way to reduce the size of these backups is to compress them using a compression algorithm. In this post, we will explore how to compress a mysqldump using a pipe.

First, let’s review the basic syntax for creating a mysqldump:

mysqldump -u [username] -p [database_name] > [backup_file].sql

This command will create a plain-text backup file of the specified database, which can then be restored using the mysql command. However, this backup file can be quite large, especially for large databases.

To compress the backup file, we can use a pipe to redirect the output of the mysqldump command to a compression utility. One common compression utility is gzip, which uses the gzip algorithm to compress files. Here’s how we can use gzip to compress the mysqldump:

mysqldump -u [username] -p [database_name] | gzip > [backup_file].sql.gz

In this command, we use the | symbol to pipe the output of the mysqldump command to the gzip command. The > symbol is then used to redirect the compressed output to a file with a .sql.gz extension.
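Once the dump finishes, you can verify that the compressed file is intact without decompressing it to disk; gzip prints nothing if the archive is fine and reports an error otherwise:

gzip -t [backup_file].sql.gz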

The resulting backup file will be compressed using the gzip algorithm, which typically results in a significant reduction in file size. To restore the backup, we can use the following command:

gunzip < [backup_file].sql.gz | mysql -u [username] -p [database_name]

In this command, we use the gunzip command to decompress the compressed backup file, which is then piped to the mysql command to restore the database.
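gzip is only one option here; any compressor that reads from standard input and writes to standard output can be dropped into the same pipeline. For example, assuming the zstd tool is installed on your system, the same approach would look like this:

mysqldump -u [username] -p [database_name] | zstd > [backup_file].sql.zst

And to restore:

zstd -dc [backup_file].sql.zst | mysql -u [username] -p [database_name]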

In conclusion, compressing a mysqldump using a pipe is a simple and effective way to reduce the size of backup files. By using a compression utility such as gzip, we can significantly reduce the amount of disk space required to store backups, while still being able to restore the database using standard MySQL commands.


Copy all databases to another host

The following command will use mysqldump to create a dump of all databases on the OLD_HOST server that are accessible to the user OLD_USER.
The output is piped directly into mysql, which imports it into the new server.

OLD_USER="myUser"; OLD_PASS="myPASS"; OLD_HOST="myHost";
NEW_USER="myUserNEW"; NEW_PASS="myPASSNEW"; NEW_HOST="myHostNEW";
mysqldump -u "$OLD_USER" -p"$OLD_PASS" -h "$OLD_HOST" --all-databases | mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS";
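Once the command completes, you can confirm that the databases were copied by listing them on the new host:

mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS" -e "SHOW DATABASES;";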

The dumping user must have the LOCK TABLES privilege for the mysqldump command above to work; otherwise, you will get the following error:

mysqldump: Got error: 1044: "Access denied for user 'OLD_USER'@'OLD_HOST' to database 'DBNAME'" when using LOCK TABLES
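If you have administrative access to the source server, you can grant the missing privilege with a statement along these lines (adminUser is a placeholder for an account that is allowed to run GRANT, and you may need to repeat or widen the grant for every database being dumped):

mysql -h "$OLD_HOST" -u adminUser -p -e "GRANT LOCK TABLES ON DBNAME.* TO 'OLD_USER'@'%';";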

If you cannot grant the privilege to the user, use the --single-transaction parameter to work around the problem. The command changes as follows:

OLD_USER="myUser"; OLD_PASS="myPASS"; OLD_HOST="myHost";
NEW_USER="myUserNEW"; NEW_PASS="myPASSNEW"; NEW_HOST="myHostNEW";
mysqldump -u "$OLD_USER" -p"$OLD_PASS" -h "$OLD_HOST" --single-transaction --all-databases | mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS";

If you want to copy only specific databases, use the following. The --databases option makes mysqldump treat every name argument as a database name and include the CREATE DATABASE and USE statements in the output, so the databases are created on the new host if they do not exist yet.

OLD_USER="myUser"; OLD_PASS="myPASS"; OLD_HOST="myHost"; OLD_DBS=("DB1" "DB2");
NEW_USER="myUserNEW"; NEW_PASS="myPASSNEW"; NEW_HOST="myHostNEW";
mysqldump -u "$OLD_USER" -p"$OLD_PASS" -h "$OLD_HOST" --databases "${OLD_DBS[@]}" | mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS";

If you want to copy only specific tables from a database, use the following.

OLD_USER="myUser"; OLD_PASS="myPASS"; OLD_HOST="myHost"; OLD_DB="DB1"; OLD_TABLES=("TBL1" "TBL2");
NEW_USER="myUserNEW"; NEW_PASS="myPASSNEW"; NEW_HOST="myHostNEW"; NEW_DB="NewDB";
mysqldump -u "$OLD_USER" -p"$OLD_PASS" -h "$OLD_HOST" "$OLD_DB" "${OLD_TABLES[@]}" | mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS" "$NEW_DB";
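Note that the target database NEW_DB must already exist on the new host, because a table-level dump does not contain a CREATE DATABASE statement. If it does not exist yet, and the NEW_USER account is allowed to create databases, you can create it first with:

mysql -h "$NEW_HOST" -u "$NEW_USER" -p"$NEW_PASS" -e "CREATE DATABASE IF NOT EXISTS $NEW_DB;";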