Posts Tagged ‘dumps’

How to dump and restore a Zabbix database to make a monitoring database snapshot, and how to set up PostgreSQL continuous archiving / Point in Time Recovery

Monday, November 21st, 2022


If you have set up a Zabbix server that uses PostgreSQL as its database storage at home or at your company, then you will need to make sure you don't lose any data by producing regular PostgreSQL backups. This small article will show how to do the DB backup in the conventional way with pg_dump / pg_restore, as well as how to set up continuous archiving and Point-in-Time Recovery, the so-called PITR.
 

1. Connect to the database and some PostgreSQL basics

-bash-4.2$ psql DBNAME USERNAME
 
-bash-4.2$ psql zabbix zabbix
 

After you access a PostgreSQL database, you can run SQL queries and more. Here are some common psql commands:
 
•    To view help for psql commands, type \?.
•    To view help for SQL commands, type \h.
•    To view information about the current database connection, type \conninfo.
•    To list the database's tables and their respective owners, type \dt.
•    To list all of the tables, views, and sequences in the database, type \z.
•    To exit the psql program, type \q.

       Fundamental backup approaches with postgres

As with everything that contains valuable data, PostgreSQL databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions.
 
There are three fundamentally different approaches to backing up PostgreSQL data:
 
•    SQL dump
•    File system level backup
•    Continuous archiving

 
Each has its own strengths and weaknesses; each is discussed in turn in the following sections.
 

2. Creating an SQL dump of the target PostgreSQL database

2.1 Manual SQL dump (custom dump format)

 
Use pg_dump's custom dump format. If PostgreSQL was built on a system with the zlib compression library installed, the custom dump format will compress data as it writes it to the output file.
This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively.
The following command dumps a database using the custom dump format:
 

# sudo su - postgres
-bash-4.2$ cd /var/lib/pgsql/9.5/backups/
 
-bash-4.2$  pg_dump -Fc zabbix > /var/lib/pgsql/9.5/backups/zabbixdbdump`date "+%d%m%y%H%M%S"`.gz

2.2 Schedule backups with cron job

To automate the job, schedule a cron job to do the work, something like the one below:
 

# Minute   Hour   Day of Month       Month          Day of Week        Command
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)

 
@daily  pg_dump -Fc zabbix > /var/lib/pgsql/9.5/backups/Zabbix_db_dump`date "+\%d\%m\%y\%H\%M\%S"`.gz
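
Dumps produced daily pile up quickly, so it is usually a good idea to also schedule a clean-up of old copies. A minimal sketch, assuming the same backup directory as above and an arbitrary retention of 14 days (adjust both to your own setup):

@daily  find /var/lib/pgsql/9.5/backups/ -name '*.gz' -mtime +14 -delete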

3. Restore a database from the dump file

3.1. Restoring a database using pg_restore (usually used)

A custom-format dump is not a script for psql, but instead must be restored with pg_restore, for example:
 

# pg_restore -d zabbix /var/lib/pgsql/9.5/backups/Zabbix_db_dump281118151005.gz
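
Because the custom dump format keeps an internal table of contents, you can also restore only selected objects instead of the whole database. A small sketch; the table name history is only an example from the Zabbix schema:

# pg_restore -l /var/lib/pgsql/9.5/backups/Zabbix_db_dump281118151005.gz
# pg_restore -d zabbix -t history /var/lib/pgsql/9.5/backups/Zabbix_db_dump281118151005.gz

The first command lists the contents of the archive, the second restores just that single table.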

3.2. Second option: restoring a plain (SQL script) dump with psql after creating an empty database

 
# su postgres

-bash-4.2$ psql template1
 
CREATE DATABASE zabbix OWNER zabbix ENCODING 'UTF8';
\q

-bash-4.2$ gunzip -c /var/lib/pgsql/9.5/backups/zabbixdbdump291117151002.gz | psql zabbix
exit

 
NOTE: if you get a permission denied error when trying to restore, check the Unix permissions on the backup file and all the parent directories.
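
Keep in mind that psql can only replay plain (SQL script) dumps; it cannot read the custom -Fc format from section 2. A plain, gzip-compressed dump suitable for the psql restore above could be taken like this (the gzip compression and file name are just an assumption, you can also keep the dump uncompressed):

-bash-4.2$ pg_dump zabbix | gzip > /var/lib/pgsql/9.5/backups/zabbix_plain_dump`date "+%d%m%y%H%M%S"`.sql.gz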

4.    Continuous Archiving and Point-in-Time Recovery (PITR) backups

At all times, PostgreSQL maintains a write ahead log (WAL) in the pg_xlog/ subdirectory of the cluster's data directory (pg_wal/ on PostgreSQL 10 and newer). The log records every change made to the database's data files. This log exists primarily for crash-safety purposes: if the system crashes, the database can be restored to consistency by "replaying" the log entries made since the last checkpoint. However, the existence of the log makes it possible to use a third strategy for backing up databases: we can combine a file-system-level backup with backup of the WAL files. If recovery is needed, we restore the file system backup and then replay from the backed-up WAL files to bring the system to a current state. This approach is more complex to administer than either of the previous approaches, but it has some significant benefits, most notably the ability to recover the database to an arbitrary point in time.

           Setting Up the WAL Archiving

In an abstract sense, a running PostgreSQL system produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL segment files, which are normally 16MB a piece (although the segment size can be altered when building PostgreSQL). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then "recycles" them by renaming no-longer-needed segment files to higher segment numbers. It's assumed that segment files whose contents precede the checkpoint-before-last are no longer of interest and can be recycled.
 
When archiving WAL data, we need to capture the contents of each segment file once it is filled, and save that data somewhere before the segment file is recycled for reuse.
Depending on the application and the available hardware, there could be many different ways of "saving the data somewhere": we could copy the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (ensuring that you have a way of identifying the original name of each file), or batch them together and burn them onto CDs, or something else entirely. To provide the database administrator with flexibility, PostgreSQL tries not to make any assumptions about how the archiving will be done. Instead, PostgreSQL lets the administrator specify a shell command to be executed to copy a completed segment file to wherever it needs to go. The command could be as simple as a cp, or it could invoke a complex shell script — it's all up to you.
 
To enable WAL archiving, set the wal_level configuration parameter to archive or higher, archive_mode to on, and specify the shell command to use in the archive_command configuration parameter.

In practice these settings are placed in the postgresql.conf file, e.g. /etc/postgresql/12/main/postgresql.conf on Debian-based systems or /var/lib/pgsql/9.5/data/postgresql.conf on the Red Hat layout used above.

In archive_command, %p is replaced by the path name of the file to archive, while %f is replaced by only the file name. (The path name is relative to the current working directory, i.e., the cluster's data directory.)

Use %% if you need to embed an actual % character in the command.

The simplest useful configuration to enable PITR would be something like:
 

# - Archiving config section
 
archive_mode = on                    # enables archiving; off, on, or always
# (change requires restart)
 
archive_command = '/bin/xz -2 -z < %p > /var/lib/pgsql/9.5/archivedir/%f'               # command to use to archive a logfile segment
 
archive_timeout = 1h            # force a logfile segment switch after this
                                                   # number of seconds; 0 disables
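
Archived WAL segments alone are not enough for PITR; you also need a base backup of the cluster to replay them on top of. A minimal sketch of both sides, where the paths and the recovery target time are pure assumptions:

-bash-4.2$ pg_basebackup -D /var/lib/pgsql/9.5/basebackups/base_`date "+%d%m%y"` -Ft -z -P

To recover, unpack the base backup into an empty data directory and point the server at the archive. On PostgreSQL 9.5 these parameters go into a recovery.conf file inside the data directory (on 12 and newer the same parameters go into postgresql.conf together with an empty recovery.signal file):

restore_command = '/bin/xz -d -c /var/lib/pgsql/9.5/archivedir/%f > %p'
recovery_target_time = '2022-11-21 12:00:00'      # omit to recover to the end of the archived WAL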

Make daily Linux MySQL database backups with shell script

Thursday, May 23rd, 2013


Some time ago, I wrote a tiny shell script which makes dumps of complete (SQL script) MySQL databases. There are plenty of ways to back up a MySQL database and plenty of scripts on the net, but I like doing it my own way. I have a few backup scripts. I prefer scripted SQL dumps over keeping binary logs, or using some non-traditional backup methods like backing up all the binary data in /var/lib/mysql.

One was intended to back up the whole database with mysqldump and later upload it to a central server running tsh (shell). Using tsh is maybe not the best upload method, but the script can easily be modified to use passwordless SSH authentication for the upload instead.

I'm not a pro shell scripter, but the MySQLBackupper script can be useful for learning some simple bash shell scripting.

To use the script as intended you will have to build tsh from source. Tsh is in a very early development stage (ver 0.2), but as far as I tested it some years ago it does well what it is intended for. You can download the MySQLBackupper.sh script from here.
Earlier, I used MysqlBackupper.sh to upload all SQL dumps to the /backups directory on a central backup storage server, and thus had written a secondary script to classify the uploaded backups based on the backup archive name. The script used is mysqldumps-classify.sh and can be viewed here. Though this way of making backups needs a bit of custom work for managing backups of up to 10 / 20 servers, it worked well.

I have also written another mysqlbackup script which is much more simplistic and only dumps with mysqldump and stores copies on the hard disk in a tar.gz archive. You can download my other simple mysqkbackup.sh here.
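
For reference, a minimal sketch of the same idea (this is not the original script, just an illustration; the backup directory and the 7-day retention are assumptions, and the MySQL credentials are expected to live in /root/.my.cnf):

#!/bin/sh
# dump all MySQL databases and keep dated, gzipped copies on the local disk
BACKUP_DIR=/backups/mysql
DATE=$(date +%d%m%y%H%M%S)
mkdir -p "$BACKUP_DIR"
mysqldump --opt --all-databases | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"
# prune copies older than 7 days
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete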

The only inconvenient thing about the above scripts is that they dump all SQL databases. Hence, whenever I need to get the content of a single database out of the (complete) all-databases SQL script backup, I use a SED (stream editor) one-liner, as shown below.
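
For example, to carve a single database (here hypothetically named mydb) out of an all-databases dump, something along these lines can be used; it relies on the "-- Current Database:" marker lines that mysqldump writes before each database:

sed -n '/^-- Current Database: `mydb`/,/^-- Current Database: `/p' dump.sql > mydb.sql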

It is interesting to hear how others prepare their MySQL db backups.

Fixing MySQL server start up “ERROR 1045 (28000): Access denied for user ‘debian-sys-maint’@’localhost’ (using password: YES)”

Friday, September 7th, 2012

I'm still busy configuring the new Lenovo (Le-novo) 🙂 ThinkCenter server necessary for the migration of the old machines. I did a lot yesterday, but really, moving all this stuff takes time …

I moved the SQL databases from the old MySQL server host to the new Debian Linux host.

In order to move the databases, I did the usual SQL dump on the currently working host with:

mysql:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases -u root > dump.sql

After that I did the standard move of dump.sql to the new host with sftp:

mysql-new:~# sftp root@mysql-host
Connected to mysql-host.
sftp> get dump.sql
....
sftp> exit

and imported dump:

mysql-new:~# mysql -u root -p < dump.sql
Enter password:

The database dump had grown really a lot (7GB!), so I had to wait about 20 minutes for dump.sql to import (the host configuration is a Dual Core 6Ghz 3MB Cache CPU, 4GB DDR3 RAM and a 7200 RPM 500 GB Hitachi ExcelStor Techno V32O HDD).

The dump migration was between servers running an identical Debian Linux release (Squeeze 6.0.5) and identical versions of MySQL.

mysql-new:~# mysql --version
mysql Ver 14.14 Distrib 5.1.63, for debian-linux-gnu (i486) using readline 6.1

Because of that the whole db import worked like a charm.

Once moved, MySQL started / restarted normally, but there was an on-screen warning:

ERROR 1045 (28000): Access denied for user
'debian-sys-maint'@'localhost' (using password: YES)

The cause of this warning error lies in the way the /etc/init.d/mysql script is written, and in particular in the custom, Debian-specific MySQL start-up philosophy.

/etc/init.d/mysql is written in a way that on every restart a database consistency check is done. In the script, the user debian-sys-maint (a user with MySQL administrator (root) privileges) is used to do the quick consistency check. The debian-sys-maint password which is used on start-up is stored in /etc/mysql/debian.cnf:

mysql-new:~# less /etc/mysql/debian.cnf
# Automatically generated for Debian scripts. DO NOT TOUCH!
[client]
host = localhost
user = debian-sys-maint
password = pQFM9RetOHFjewwn
socket = /var/run/mysqld/mysqld.sock
[mysql_upgrade]
host = localhost
user = debian-sys-maint
password = pQFM9RetOHFjewwn
socket = /var/run/mysqld/mysqld.sock
basedir = /usr

The whole problem is that after the old SQL dump import, the password set for user debian-sys-maint is different, so once MySQL starts, the init script reads the password from debian.cnf and fails to log in to the SQL server.

The warning (error):

ERROR 1045 (28000): Access denied for user
'debian-sys-maint'@'localhost' (using password: YES)
hence appears on every MySQL start (including on every system boot). The error is generally harmless and MySQL seems to work fine with or without it. However, since the consistency check is then not done at start-up, skipping the start-up check is not a good idea if there happen to be some CORRUPT tables.

There are two options to get rid of the warning. The first and better one is to check /etc/mysql/debian.cnf for the password string and set that same password for debian-sys-maint with the mysql CLI, e.g.:

new-mysql:~# grep -i pass /etc/mysql/debian.cnf | uniq
password = pQFM9RetOHFjewwn

mysql> GRANT SELECT ON `mysql`.`user` TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'pQFM9RetOHFjewwn';
Query OK, 1 row affected (0.01 sec)

mysql> GRANT SELECT ON mysql.user TO 'debian-sys-maint'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
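
An equivalent way (assuming MySQL 5.1, as on the hosts above) is to simply reset the debian-sys-maint password so that it matches the one kept in /etc/mysql/debian.cnf:

mysql> SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('pQFM9RetOHFjewwn');
mysql> FLUSH PRIVILEGES;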

The second option (not recommended in terms of security) is to set the user/pass in /etc/mysql/debian.cnf to the root credentials.

That's all. N-joy 🙂

To Lumier with a bike the yesterday “action”

Monday, February 16th, 2009

Saturday was quite shaking. The morning started early, around 8:30. Anton, a friend of mine for whom I am working and for whom I am admining two servers, called early in the morning and informed me that winner.bg had not been working for several hours. I logged on to the server and tried to see what was wrong. It seemed that the whole MySQL was quite messy. It even refused to start. There was corrupted data; the system seemed to have been restarted twice. I won't go into much detail here, just put down some moral mostly.

I was desperate; it looked like everything was lost. The old SQL backups made by the automated script I use were completely useless because they were dumped with character set latin1 …. So everything which was in cp1251 appeared like "?" question marks. I thought there was an encoding problem and that it might be solved with iconv, however unfortunately that was not the case. The dumps were completely useless. So, as a prescription: if you use cp1251 or koi8r or any Cyrillic encoding and you have an explicit definition in /etc/my.cnf outlining that, be sure not to dump with --default-character-set=latin1! Never ever do this!

At a moment I felt completely forgotten by God, doubt came along for a second; nevertheless I started praying, even though only with hope and without enough faith, and I screamed "Lord Jesus Christ, Son of the Blessed God, have mercy on me the sinner!". Eventually, by 16:45 most of the problems were fixed. Praise the Lord! Hallelujah! I could see one more time clearly God fixing things for me. To be honest I was so messed up at a point, before everything came to its place, that I did not know exactly what I was doing. I followed a couple of steps, one of which was to completely uninstall the MySQL server and exchange it with 5.0.75 instead of 5.0.65. I had to switch to InnoDB recovery mode level 4 and dump some of the databases and import them back. A lot of the databases I simply copied in binary format to the newly created SQL server. The SQL server started working again! Blessed be God, my helper and refuge!

I did some shopping on Saturday from 5:30 till 18:00. Then I tried to recover the databases with the screwed-up Cyrillic letters. I had to contact Ganchev "shudder", a friend of mine who is pretty good at coding and worked at the same company I did for some years. He couldn't help, however he advised me to check the dumps with hexdump -C and see if the "?" question marks really are question marks. In my case they were, so the backups were completely useless. I was lucky that one of the databases my friend Tony had backed up, and the other one was for a website that was started just a few days before, so the data there could be recovered with a little effort and it's not going to be so fatal I guess.

Later on during the evening I updated a couple of services like Apache, PHP, eAccelerator and so on, on the two FreeBSD servers I take care of. On the Sunday morning I had to fix a little thing, a consequence of the nightly update. The php5-gd port didn't upgrade with portupgrade -ri, because according to portaudit it has a security flaw. However, quite flashily and luckily, I fixed the problem. The rest of Sunday I spent in talks with Paco, then we went to Sali and went to Lumiere, the coffee restaurant where Sali managed to arrange work for Papi. The idea of us going there was to negotiate, if possible, an increase of Papi's daily salary, because today he receives only 20 EUR per day for 10 hours of work. We went to Lumiere with bikes that Sali gave us.
On our way Papi fell off his bike and hurt his leg badly … 🙁 We went there and drank coffee; thank God they didn't charge me for the coffee, because the restaurant owners (Aidyn and Tazira) said I don't need to pay for it. After that we went back home; we had a small argument with Papi, which I deeply regret. I should thank God for granting me of his divine mercy and doing so much for me the sinner. In quite a few minutes I'll be praying a bit and going to bed. Let's hope that the Lord will be blessing me and helping me in my work and studies in the coming week just like he did so far. Just to conclude my post: Glory be to you, Almighty and all-merciful Lord, my stronghold and my refuge! Hallelujah!

How to Split files on Linux, FreeBSD, NetBSD and OpenBSD

Sunday, July 31st, 2011


Did you ever have the need to split an extra large SQL file into a few pieces in order to be able to upload it later via phpMyAdmin?
Did you need an extra large video or data file cut into a few pieces in order to transfer it piece by piece over a USB stick?
Or, to give you another scenario, I sometimes need to have an enormous file, let's say 3G, split into a few pieces in order to later read it in vim or mcedit.
I sometimes need to achieve this on FreeBSD and Linux hosts, thus I thought it would be helpful to somebody to give a very quick tutorial on the way large files can be cut into pieces on Linux and BSD hosts.

GNU/Linux and FreeBSD are equipped with the split command. The purpose of this command is exactly the cutting of a file to a number of pieces.

On Linux the split command comes installed by default with the coreutils package on most Debian-based (deb) and Red Hat-based (rpm) distributions; therefore Linux's version of split is GNU split, since it's part of the GNU coreutils package. An interesting fact about GNU split is that one of the two programmers who coded it is Richard Stallman 😉

On BSD Unix, split is the AT&T UNIX (BSD) split.

In the past splitting files into pieces was much more needed than today, as people used floppy drives to transfer data. Today, with the bloom of the Internet and the improvement of data carriers, transferring even extra large files from one place to another is a far more trivial task, but still on many occasions splitting a file into pieces is needed.

Even though today splitting a file is very rarely required, there are still times when being able to split a file into X number of parts is very much needed.
Maybe the most common case for splitting a file today is when a large SQL dump file, let's say a 200 MByte database, needs to be moved from one hosting provider to another.
Many hosting providers disallow direct access to the database with standard MySQL client programs and only allow a user to connect via phpMyAdmin or some other web interface like cPanel to import data into the SQL or PostgreSQL server.

In such times, having knowledge on the Unix split command is a priceless asset.

Even though on Linux and BSD the code for the split command is not identical, and GNU split and BSD split have some basic differences, the use of split on both of these Unices is identical.
Splitting a file into a few pieces on both Linux and BSD OSes is done with one and the same command, here is how:

1. Splitting file in size of 40 mb On Linux

linux:~# split -b 40m SQL-Backup-Data.sql SQL-Backup-Data_split

2. Splitting file in size of 40mb on BSD (FreeBSD, OpenBSD, NetBSD)

freebsd# split -b 40m SQL-Backup-Data.sql SQL-Backup-Data_split

The second argument the split command takes is actually called a prefix; the prefix is used as the base name for the newly generated pieces cut out of SQL-Backup-Data.sql.

As I said, the identical command will split the SQL-Backup-Data.sql file into a number of parts, each of which will be sized 40 megabytes.

The command will generate a few output files, like:

freebsd# ls -1 SQL-Backup-Data_split*
SQL-Backup-Data_splitaa
SQL-Backup-Data_splitab
SQL-Backup-Data_splitac
SQL-Backup-Data_splitad
SQL-Backup-Data_splitae

As you can see, the SQL-Backup-Data.sql file with a size of 200MB is split into five files, each of which is sized 40 mbytes.

After the files are transferred to another Linux or BSD host, they can easily be united again into the original file with the command:

linux:~# for i in $(ls -1 SQL-Backup-Data_split*); do cat $i >> SQL-Backup-Data.sql; done

Alternatively, on most Unices simply using cat is enough to collect the pieces back into the original file, like so:

freebsd# cat SQL-Backup-Data_split* >> SQL-Backup-Data.sql
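
To make sure nothing got corrupted during the transfer, it's a good idea to compare a checksum of the original and of the re-assembled file on both ends, e.g.:

linux:~# md5sum SQL-Backup-Data.sql
freebsd# md5 SQL-Backup-Data.sql

If the two checksums match, the join was successful.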

Enjoy splitting