Archive for the ‘Linux Backup tools’ Category

Play Midis on Linux / Make Linux MIDI Ready for the Future – Enable embedded MIDI music to play in a Browser, Play MIDIs with VLC and how to enjoy MIDIs in the Text Console

Wednesday, October 4th, 2017

how-to-play-midi-on-gnu-linux-in-graphic-environment-console-and-browser-midi-synthesizer-and-linux-tux-together

 


 

Playing MIDI has historically been quite a lot of fun.

If you grew up in the days when personal computers were still young and a Sound Blaster was a luxury, before the rise of the MP3 format, you certainly enjoyed the beeping of the PC Speaker and, later on the 386 and 486 / 586 machines, the joy of playing tracked music such as S3M and MOD.

In those good old days playing MIDI music was the only alternative for PC maniacs who didn't own a CD drive (which itself was another luxury), and even those who had a CD-ROM device were mainly playing music in CD audio format (.CDA).
Anyhow, MIDI was a cheap and CPU-light way to listen to the equivalent of favourite popular songs and, as those who still remember know, many songs were recreated in MIDI format, just with a number of synthesized instruments and without any vocals (as MIDI usually is).

The same was true in the good old days of the rise of mobile phones, when polyphonic ringtones were the standard: CPU power was low, so MIDI was a perfect substitute for the CPU-heavy encoded MP3s / OGGs and other formats that required a then-modern Intel CPU running at 50+ MHz (usually 100 / 166 MHz was enough to play MP3s), but even on those PCs we listened to MIDI songs.

Therefore, if you're one of those people like me who still enjoy playing some MIDI music in the year 2017 and feeling a bit like in the Back to the Future movie, and you're a Free Software fan and user, especially a novice GNU / Linux user, you will be unpleasantly surprised that most of today's default Linux distributions can't play the MIDI music format out of the box right after install.

Hence the article below aims to give you an understanding of

How you can play Midi Music on GNU / Linux Operating System

First, let's prepare to load the necessary Linux kernel modules to make sure MIDI can be played by the sound card.

In /etc/modules make sure you have the following list of modules loaded:
 

linux-desktop:~# cat /etc/modules
3c59x
snd-emu10k1
snd-pcm-oss
snd-mixer-oss
snd-seq-oss

Note: these modules are working as of the time of writing and may change in time to some other modules, depending on how the development of ALSA (Advanced Linux Sound Architecture) goes and whether the developers decide to rename the above-mentioned modules.

If you have just added the modules to /etc/modules with vim / nano, load them into the running Linux kernel with:

 

linux-desktop:~# modprobe -a snd-emu10k1 snd-pcm-oss snd-mixer-oss snd-seq-oss
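Alternatively, a small shell loop can load everything listed in /etc/modules – a minimal sketch that simply skips comment lines:

grep -v '^#' /etc/modules | while read -r mod args; do
    [ -n "$mod" ] && modprobe "$mod" $args   # load each listed module with its optional parameters
done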


Secondly, installing a whole bunch of MIDI music related program tools can be achieved on Debian by installing the multimedia-midi package, e.g.:

 

linux-desktop:~# apt-get install --yes multimedia-midi

 

1. Playing Midi in Graphical environment with a double click using VLC


How to make MIDI easily playable in a Linux graphical environment like the GNOME / KDE / XFCE desktop?

 

If you want to make playing MIDI music as easy as just clicking on a .MIDI file on Linux, you can do that with a MIDI extension available for VLC (VideoLAN Client), the universal multi-platform media player.

To install it on Debian / Ubuntu GNU / Linux:
 

# apt-get install --yes vlc-plugin-fluidsynth

 

Need to get 6,754 B of archives.
After this operation, 35.8 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 vlc-plugin-fluidsynth amd64 2.2.6-1~deb9u1 [6,754 B]
Fetched 6,754 B in 0s (33.6 kB/s)
Selecting previously unselected package vlc-plugin-fluidsynth:amd64.
(Reading database … 382976 files and directories currently installed.)
Preparing to unpack …/vlc-plugin-fluidsynth_2.2.6-1~deb9u1_amd64.deb …
Unpacking vlc-plugin-fluidsynth:amd64 (2.2.6-1~deb9u1) …
Setting up vlc-plugin-fluidsynth:amd64 (2.2.6-1~deb9u1) …
Processing triggers for libvlc-bin:amd64 (2.2.6-1~deb9u1) …


Besides making MIDI playback in the GUI environment as easy as a point and click, VLC will also be able to play MIDIs on GNU / Linux from your favourite browser (no matter whether Firefox / Chrome or Opera); even though the player opens in a new pop-up window, it is easy to select a MIDI file from a random website – for example, here is a directory listing of a webserver with the Doom II soundtrack in MIDI format – click on any file from the list and choose the option for VLC to always remember that MIDI files have to be opened with the VLC player.
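If your desktop doesn't offer such a "remember this choice" option, a hedged alternative is to set the association from the command line with xdg-mime (assuming VLC's desktop file is named vlc.desktop, as it is on Debian):

xdg-mime default vlc.desktop audio/midi
xdg-mime query default audio/midi    # should now report vlc.desktop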
 


2. Enable Firefox / IceWeasel browser to Support Website embedded MIDI files

 

 

So VLC can let you listen to downloadable MIDIs from web pages, but
 

What if you have stumbled upon an old website built with very OLD HTML code that plays some nice music (or even a different MIDI song) for each part of the website (for each webpage), and you want websites created with embedded MIDIs to automatically play on Linux once you visit the site?


Sadly, default browser support for MIDI on all the GNU / Linux distributions I've used so far has never worked out of the box; not that anyone is still developing modern websites with MIDIs, but for the sake of backward compatibility and interactivity it is worth enabling embedded MIDI support on Linux.

But with a couple of tunings, as usual, GNU / Linux can do almost everything, so here is how to enable embedded browser support for MIDI on Linux (it should work with minor modifications not only on Debian / Ubuntu / Arch Linux but also on Fedora, CentOS etc.).
If you try it on any of these distributions, please drop a short comment and tell me in a few lines how you got embedded MIDI working on that distro.

 

apt-get install --yes timidity mozplugger

Next, restart Firefox.

Sometimes, for this to work, you might need to delete /home/[YOUR_USERNAME]/.mozilla/pluginreg.dat and restart Firefox again, e.g. make a backup and give it a try:

 

cp -rpf /home/hipo/.mozilla/pluginreg.dat /home/hipo/.mozilla/pluginreg.dat.bak
rm -f /home/hipo/.mozilla/pluginreg.dat
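For reference, mozplugger drives embedded content through entries in /etc/mozpluggerrc; a minimal MIDI entry would look something like the sketch below. This follows the same flag syntax as the PDF example later in this post – double-check the stock MIDI entry your mozplugger version ships, as the exact flags may differ:

audio/midi: mid,midi: MIDI audio
audio/x-midi: mid,midi: MIDI audio
        repeat noisy: timidity "$file"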

 

Another good tip, while we're talking about embedding support, is to embed XPDF to render PDF pages inside the browser. By default this is done by GNOME's Evince PDF reader, but as it is sometimes buggy and might crash, it is generally a good idea to switch to xpdf instead. If for some reason PDFs are not displaying directly in the browser, or this suddenly stopped working after some distro upgrade, you might want to do the below as well:
 

apt-get install xpdf

vim /etc/mozpluggerrc

Find and comment out the line starting with:

 repeat swallow(documentShell) fill: acroread …

It should look like this afterwards:
 

text/x-pdf: pdf: PDF file
#      repeat swallow(documentShell) fill: acroread -geometry +9000+9000 +useFrontEndProgram "$file"
        repeat noisy swallow(Xpdf) fill: xpdf -g +9000+9000 "$file"
        repeat noisy swallow(gv) fill: gv --safer --quiet --antialias -geometry +9000+9000 "$file"


 

3. Play Midi music in Linux text console / terminal


There is a console tool that, as far as I remember, has historically been the Linux standard for playing MIDIs over the years: it's called timidity.

 


To install timidity on .Deb based Linux:
 

linux-desktop:~$ su root
Password:
linux-desktop:~# apt-get install --yes timidity

Need to get 0 B/580 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database … 382981 files and directories currently installed.)
Preparing to unpack …/timidity_2.13.2-40.5_amd64.deb …
Unpacking timidity (2.13.2-40.5) over (2.13.2-40.5) …
Processing triggers for menu (2.1.47+b1) …
Processing triggers for man-db (2.7.6.1-2) …
Setting up timidity (2.13.2-40.5) …
Processing triggers for menu (2.1.47+b1) …

 

To test your new MIDI synthesizer tool and make the enjoyment complete, you can download the extracted Doom 2 MIDI soundtrack from here.
 

Once you have downloaded the above metal old-school DOOM MIDI soundtrack, untar it into your home directory, say ~/doom-midis.
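For example (the archive file name below is just a placeholder – use whatever name the download was actually saved as):

mkdir -p ~/doom-midis
tar -xzvf ~/Downloads/doom-midis.tar.gz -C ~/doom-midis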

A remark to make here: timidity is quite CPU intensive, but on modern dual and quad-core PC notebooks the CPU load is not a big concern.

To test and play with timidity:
 

linux-desktop~$ timidity ~/mp3/midis/*


timidity-playing-doom-midi-bunny-song-on-debian-stretch-gnome-terminal-screenshot
 

hipo@jericho:~/mp3/midis$ aplaymidi -l
 Port    Client name                      Port name
 14:0    Midi Through                     Midi Through Port-0
128:0    TiMidity                         TiMidity port 0
128:1    TiMidity                         TiMidity port 1
128:2    TiMidity                         TiMidity port 2
128:3    TiMidity                         TiMidity port 3

 


There is also playmidi (a simple MIDI text console / terminal player), which historically worked quite decently and which I used in the past on my RedHat 6.0 and RedHat 7.0 to listen to my .MID files, but unfortunately as of the time of writing something is wrong with it, so when I try to play MIDIs with it instead of timidity I get this error:

 

$ playmidi *.mid
Playmidi 2.4 Copyright (C) 1994-1997 Nathan I. Laredo, AWE32 by Takashi Iwai
This is free software with ABSOLUTELY NO WARRANTY.
For details please see the file COPYING.
open /dev/sequencer: No such file or directory

Even though I tried hard to resolve that error by loading various MIDI related modules and following a lot of the online suggestions on how to make /dev/sequencer work again, I had no luck.
 

Some people back in the distant year 2005 reported the problem was solved by simply loading snd-seq.

But as of the time of writing this doesn't help:

 

# modprobe snd-seq

 

Some people said in the Arch Linux forum that the

/dev/sequencer: No such file or directory

 

error is solved by loading the snd-seq-oss kernel module, but on my Debian Linux 9.1 Stretch this doesn't work either:

 

root@jericho:/home/hipo/mp3/midis# modprobe snd-seq-oss
modprobe: FATAL: Module snd-seq-oss not found in directory /lib/modules/4.9.0-3-amd64
root@jericho:/home/hipo/mp3/midis# uname -a;
Linux jericho 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux


Another idea of mine was to try to link /dev/snd/seq to /dev/sequencer, but this produced no positive result either:

 

# ln -sf /dev/snd/seq /dev/sequencer
# ls -al /dev/sequencer
lrwxrwxrwx 1 root root 12 окт  4 16:48 /dev/sequencer -> /dev/snd/seq


Note that after linking in that way I got the following error when attempting to play MIDIs with playmidi:

# playmidi *.mid
Playmidi 2.4 Copyright (C) 1994-1997 Nathan I. Laredo, AWE32 by Takashi Iwai
This is free software with ABSOLUTELY NO WARRANTY.
For details please see the file COPYING.
there is no soundcard


Anyhow, on some other Linux distributions (especially those with older kernel versions), some of the above 3 suggested fixes might work perfectly fine, so if you have some time please give them a try and drop me a comment on how it went – you will help the GNU / Linux community out there that way.

Well never mind the bollocks, so

Now back to where I started: timidity, even though it plays fine, will not give any indication of the length of the MIDI song (precious information such as how much time is left until the song is over).

Hence, if you prefer a player that gives you an indicator of how much is left of each played MIDI file, you can give wildmidi a try:

 

linux-desktop:~$ apt-cache show wildmidi|grep -i description -A 2

Description-en: software MIDI player
 Minimal MIDI player implementation based on the wildmidi library that
 can either dump to WAV or playback over ALSA. It is intended to

Description-md5: b4b34070ae88e73e3289b751230cfc89
Homepage: http://www.mindwerks.net/projects/wildmidi/
Tag: implemented-in::c, role::program, sound::midi, sound::player,

Description: software MIDI player
Description-md5: 4673a7051f104675c73eb344bb045607
Homepage: http://wildmidi.sourceforge.net/
Bugs: https://bugs.launchpad.net/ubuntu/+filebug


If not yet installed, install it after becoming the admin user:

 

linux-desktop:~$ su root
Password:

linux-desktop:~# apt-get install --yes wildmidi


wildmidi is much less CPU intensive (it uses GStreamer – the open source multimedia framework – for playback).

And next give it a try by running:

 

linux-desktop:~$ wildmidi ~/mp3/midis/*

 

wildmidi-midi-lenght-status-text-console-player-for-linux-ubuntu-debian-fedora-suse

 

 

4. Editing MIDI files with Free Software and Proprietary MIDI Editor Programs

 


If you want professional software that can play MIDI in a fancy interactive GUI way and has some extra possibilities to edit MIDIs and other formats, give the MusE Sequencer a try:
 

 

linux-desktop:~$ sudo apt-get install --yes muse

The following NEW packages will be installed:
  muse
0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
Need to get 5814 kB of archives.
After this operation, 21.0 MB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 muse amd64 2.1.2-3+b1 [5814 kB]
Fetched 5814 kB in 2s (2205 kB/s)                             
Preconfiguring packages …
Selecting previously unselected package muse.
(Reading database … 382981 files and directories currently installed.)
Preparing to unpack …/muse_2.1.2-3+b1_amd64.deb …
Unpacking muse (2.1.2-3+b1) …
Processing triggers for mime-support (3.60) …
Processing triggers for desktop-file-utils (0.23-1) …
Processing triggers for doc-base (0.10.7) …
Processing 1 added doc-base file…
Registering documents with scrollkeeper…
Processing triggers for man-db (2.7.6.1-2) …
Processing triggers for shared-mime-info (1.8-1) …
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Processing triggers for gnome-menus (3.13.3-9) …
Setting up muse (2.1.2-3+b1) …
Processing triggers for hicolor-icon-theme (0.15-1) …


 

Below is short description what Muse can do for you:

 

MusE is a MIDI/audio sequencer with recording and editing capabilities.
 Some Highlights:
 

  * Standard midifile (smf) import-/export.
  * Organizes songs in tracks and parts which you can arrange with
    the part editor.
  * MIDI editors: pianoroll, drum, list, controller.
  * Score editor with high quality postscript printer output.
  * Realtime: editing while playing.
  * Unlimited number of open editors.
  * Unlimited undo/redo.
  * Realtime and step-recording.
  * Multiple MIDI devices.
  * Unlimited number of tracks.
  * Sync to external devices: MTC/MMC, Midi Clock, Master/Slave.
  * Audio tracks, LADSPA host for master effects.
  * Multithreaded.
  * Uses raw MIDI devices.
  * XML project file.
  * Project file contains complete app state (session data).
  * Application spanning Cut/Paste Drag/Drop.

 

linux-desktop~:$ muse

muse-advanced-midi-editor-free-software-for-linux

 

Below is another, non-free, program that you might try if MusE doesn't fit your needs (i.e. its editing capabilities are not rich enough) – bitwig (though I don't recommend it, since it is not free software).

bitwig – Bitwig Studio is a multi-platform music-creation system for production, performance and DJing, with a focus on flexible editing tools and a super-fast workflow.
 


bitwig-midi-and-audio-non-free-software-advanced-useful-sound-editor-for-linx


 

5. Some examples of text editing and MIDI conversion to the CSV and ABC file formats

For the MIDI extremists, or people who create MIDIs and want to learn how a MIDI is made (its content etc.), I suggest you take a look at these 3 command line MIDI editing / conversion tools (see the example commands after the list):
 

  • midi2abc – a little tool to convert MIDI files to the ABC format
  • midi2csv – convert your favourite MIDI files to CSV for educational purposes, to see which channels, tracks and time intervals a MIDI song is made of
  • midicopy – copy a selected track, channel or time interval of a MIDI file to another MIDI file
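A couple of quick, hedged illustration commands (the .mid / output file names are placeholders; on Debian midi2abc comes from the abcmidi package, and the CSV conversion tool is usually installed as midicsv – check the exact names on your system):

midi2abc doom2-map01.mid -o doom2-map01.abc    # MIDI -> ABC notation
midicsv doom2-map01.mid doom2-map01.csv        # dump tracks / channels / timing as CSV
csvmidi doom2-map01.csv rebuilt.mid            # and back to MIDI after editing the CSV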

 

Well, that's all folks; now enjoy your MIDIs and don't forget to donate, as I'm jobless at the moment and the only profit I make is just a few bucks from the advertisements on this blog.
 


How to fix unfixable broken package dependencies on Debian GNU / Linux – Fix package mismatch

Wednesday, September 27th, 2017

how-to-fix-unfixable-broken-package-dependency-on-debian-ubuntu-linux-icon

I just tried to upgrade my Debian 7 Wheezy to the latest stable Debian 9 Stretch without thinking too much, by just changing the word wheezy to stretch in /etc/apt/sources.list, so afterwards it looked like so:
 

cat /etc/apt/sources.list

 

deb http://ftp.bg.debian.org/debian/ stretch main contrib non-free
deb-src http://ftp.bg.debian.org/debian/ stretch main

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main 

# stretch-updates, previously known as 'volatile'
##deb http://deb.debian.org/debian/ stretch-updates main
deb-src http://deb.debian.org/debian/ stretch-updates main

 

I also checked all the Google Chrome / Opera / Skype and Squeeze Backports repositories defined in the /etc/apt/sources.list.d directory files, which in my case were like so:

 

root@noah:/etc/apt/sources.list.d# ls
google-chrome.list  opera-stable.list  squeeze-backports.list
opera.list          skype-stable.list


These were commented out, because they were producing extra apt update errors …

And afterwards ran as usual:

 

apt-get update
apt-get --yes upgrade


The upgrade command executed fine and a lot of packages got downloaded and reinstalled without much issue, so I thought everything would be fine and just proceeded with the attempt to finalize the major release 7 to major release 9 upgrade by running:

 

apt-get --yes dist-upgrade


But guess what: now I got dependency errors with cron and other installed packages that depend on package versions that are not going to be installed, as the apt-get tool informed me.

I tried to outsmart the dpkg dependency system and removed all the packages reported to have missing dependencies with a short bash loop, after dumping all the problematic packages showing dependency issues with commands such as:

apt-get -f dist-upgrade >> out.txt
while read line; do echo "$line" | awk '{ print $1 }' >> to_delete.txt; done < out.txt


Before proceeding further I had to manually edit a few lines in a text editor to remove some of the junk left from apt-get too.

So I was brave and just removed the packages with missing dependencies with the following for loop:

 

for i in $(cat to_delete.txt); do dpkg -r --force-all $i; done


Now I was hoping that rerunning:

 

apt-get autoremove

dpkg --configure -a

apt-get update -f
apt-get dist-upgrade -f


would no longer complain, and I would just install the removed packages in another for shell loop once every other package got installed.

But guess what, I was wrong … the system entered into another bunch of terrible dependency issues and got messed up so badly that there were at least 50 packages reporting a missing / broken or uninstallable deb version dependency …
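At a point like this, a few standard read-only diagnostic commands give an overview of how bad the damage actually is (purely informational, not a fix by themselves):

dpkg --audit                 # list packages that are only partially installed / configured
apt-get check                # let apt verify the dependency tree and report what's broken
dpkg -l | grep -v '^ii'      # rough list of packages not in the normal installed state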

I got totally angry; I knew already from experience that skipping a major release – e.g. upgrading Debian 7 to Debian 9 instead of first upgrading to Debian 8 and then upgrading Debian 8 to Debian 9 – has always produced the same mess, but I was lame and stupid enough to f**k it up again and was swearing out of my mind (a truly bad habit I'm not proud of) …

So, as the notebook had so far been working perfectly with Debian 7, had tons of old installed software, and was in a state where, if I restarted the system, it was very likely my ThinkPad R61 laptop wouldn't boot at all, I googled around to find a solution, unfortunately without any luck. Finally I used the good old and tested method to DO IT MYSELF and find the fix without Uncle Google's help, and by God's grace I did. After experimenting for a while with the aptitude package install / remove / update tool without much success, I finally found the solution to the totally messed up Debian package dependencies, and it all came down to simply reverting my /etc/apt/sources.list to look like the following:

 

# deb cdrom:[Debian GNU/Linux 7.0.0 _Wheezy_ – Official amd64 CD Binary-1 20130504-14:44]/ wheezy main

##deb cdrom:[Debian GNU/Linux 7.0.0 _Wheezy_ – Official amd64 CD Binary-1 20130504-14:44]/ wheezy main

deb http://ftp.bg.debian.org/debian/ wheezy main contrib non-free
deb-src http://ftp.bg.debian.org/debian/ wheezy main

deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main

# wheezy-updates, previously known as 'volatile'
##deb http://deb.debian.org/debian/ wheezy-updates main
deb-src http://deb.debian.org/debian/ wheezy-updates main
##deb http://www.deb-multimedia.org wheezy main non-free
#deb http://ftp.debian.org/debian/ wheezy-backports main
###deb http://ftp.debian.org/debian/ wheezy-backports main contrib non-free
##deb http://dl.google.com/linux/chrome/deb/ wheezy main
#deb http://ftp2.de.debian.org/debian-volatile wheezy/volatile main
###deb http://www.deb-multimedia.org wheezy main non-free


and a run of the following two dependency fix commands !!!!

 

aptitude upgrade --full-resolver

aptitude full-upgrade --full-resolver


After a while a Debian Linux system downgrade was initiated, the missing packages were found and downloaded from the correct wheezy repositories, and all broken and missing dependency packages were fixed !!! HOORAY IT WORKS AGAIN!!

 


Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

resume-sftp-scp-cancelled-interrupted-file-transfer-download-upload-network-transfer-continue-large-partially-downloaded-file-howto-linux-windows
I recently had a task to transfer some huge, long-stored Application server data (about 70GB), after archiving it, between an old Linux host server and a new one, where the new Tomcat Application (Linux) server will be installed to cope with the increased site accessibility (server hardware overload).

The two systems are in a paranoid DMZ network and do not have access to each other via SSH / FTP / FTPS, and there is not even Web access on port 80 or SSL (443) between the two hosts, so in order to move the data I had to use a third HOP station, a Windows server which has a huge SAN network attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I'm using MobaXterm, so the basic UNIX commands such as sftp / scp are already available on the Windows system through it.
Thus, to transfer the Chronos Tomcat application's stored files, archived as .tar.gz, I sftp-ed into the Linux host and used the get command to retrieve the archive, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network seemed to have a network shaper limiting my get / secured SCP download to 2.5 MBytes / sec, thus the overall file transfer seemed to require about 08:30 hours to complete. As it was the middle of the day, about 13:00, and my work day ends at 18:00 (meaning I would be able to keep the file retrieval session open for a maximum of 5 hrs), the file transfer would be cancelled when I logged out of the HOP station (after 18:00). However, I had already left the file transfer running for 2 hrs and about 23% of the file was retrieved, so I wondered whether SCP / SFTP protocol file downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself; however, neither of them has a resume option. Then I thought for a while about what I could use to continue the interrupted download, and I remembered that good old rsync (a versatile remote and local file copying tool), which I often use when creating customer backup strategies, has the ability to resume partially downloaded files. I wondered whether this partial-download resume could only be done if the file transfer had been initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the file retrieval (rsync can continue a transfer cancelled / failed due to network problems or user activity). That turned out to be pretty easy: to continue the failed download from where it was interrupted, I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


the --partial option is the one that does the file resume trick, the -a option stands for --archive and turns on archive mode (equal to the -rlptgoD (no -H,-A,-X) arguments), and the -v option shows a file transfer percentage status line and an average estimated time for the transfer to complete. An easier to remember rsync resume is like so:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or has been cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work, the remote Linux system must have the rsync package installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to resume large giga- or terabyte file archive downloads easily between two Windows hosts, use cwRsync.


Fix “tar: Error exit delayed from previous errors” and its cause and solution

Monday, August 18th, 2014

fix-solve-tar-error-delayed-exit-from-previous-errors-tarball-error

tar: Error exit delayed from previous errors

is a very common error encountered when creating archives (or backing up server configurations / websites / SQL binary data). The error is quite unhelpful, and when creating archives verbosely in order to see the files added to the archive in "real time" with, let's say:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


it is pretty hard to track exactly on which file the backup is producing the Error exit delayed from previous errors; this is especially the case when adding to the archive directories containing millions of tiny, few-kilobyte sized files. Many novice or incautious Linux admins might simply ignore the warning if they're in a hurry / have excessive work to be done, as a .tar.gz backup will still be produced and, when uncompressed, most of the files are there, so the backup error would seem not to be a big issue.

However, as backing up files is vital stuff, especially when moving files from a server about to be decommissioned, you have to be extra careful and make the backup properly, i.e. figure out the cause of the error. To do so, log the full output of the tar operation with the tee command, like so:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites/ /home/sql | tee /tmp/backup_tar_full_output.log

Then you will have to review the file and look for errors with less's search (the / (slash) key) – look for the "error" and "permission den" keywords, and this should point you to what is causing the error. In cases when millions of files are to be archived, the log might grow really big and become hard to process, therefore a much quicker way to understand what's happening is to redirect only the file listing to the log with > (shell redirect), so that only the errors show up on the shell's standard output:
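(A quick way to pull those keywords out of the tee log, before resorting to the redirect trick below, is a simple grep:)

grep -iE 'error|permission denied|cannot open' /tmp/backup_tar_full_output.log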
 

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql > /tmp/backup_failure-cause.log

 

tar: www.ur-website.com-http/2.0.63/conf/tnsnames.ora.20080918: Cannot open: Permission denied
tar: Removing leading `/' from member names

The error clearly indicates the cause: a lack of permissions to read the file tnsnames.ora.20080918. The solution is to either grant permissions to the non-root user with the chmod / chown commands (in my case, grant perms to the user hipo with which tar is run), or run the website backup again as the superuser. I usually just run it as the root user to prevent tampering with the original permissions, e.g. to solve the error, either:

$ su root
# tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql

Or even better, if sudo is installed and the user is added to the /etc/sudoers file:

$ sudo tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


Though permission errors are the most frequent reason for:

tar: Error exit delayed from previous errors, you should keep in mind that in some cases the error might be caused by a failing RAID member disk drive, or by a single HDD failure on systems that are not in a RAID array.
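If you suspect the disk rather than permissions, a quick health check with smartmontools (assuming the package is installed and the drive really is /dev/sda) looks like:

smartctl -H /dev/sda                                              # overall SMART health verdict
smartctl -A /dev/sda | grep -iE 'reallocated|pending|uncorrect'   # suspicious sector counters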

 


Save data from failing hard disk on Linux – Rescuing data from failing disk with bad blocks

Wednesday, April 16th, 2014

save-data-from-failing-hard-drive-data-recovery-badblocks-linux_1.jpg
Sooner or later your Linux desktop or Linux server hard drive will start breaking up. If you have a hardware or software RAID 1, 6 or 10 and good hard disk health monitoring software, you can react in time, but sometimes as admins we have to take care of old servers which either have RAID 0 or no RAID configuration, and / or whose disk firmware is unable to recognize failing blocks in time and remap them. Thus it is quite useful to have techniques to save data from failing hard disk drives with physical badblocks.

With the ddrescue tool there is still hope for your Linux data, even if the disk is full of unrecoverable I/O errors.

apt-cache show ddrescue
 

apt-cache show ddrescue|grep -i description -A 12

Description: copy data from one file or block device to another
 dd_rescue is a tool to help you to save data from crashed
 partition. Like dd, dd_rescue does copy data from one file or
 block device to another. But dd_rescue does not abort on errors
 on the input file (unless you specify a maximum error number).
 It uses two block sizes, a large (soft) block size and a small
 (hard) block size. In case of errors, the size falls back to the
 small one and is promoted again after a while without errors.
 If the copying process is interrupted by the user it is possible
 to continue at any position later. It also does not truncate
 the output file (unless asked to). It allows you to start from
 the end of a file and move backwards as well. dd_rescue does
 not provide character conversions.

 

To use ddrescue for saving data, the first thing is to shut down the Linux host and boot the system with a rescue LiveCD like SystemRescueCD (a Linux system rescue disk), Knoppix (the most famous bootable LiveCD / LiveDVD), Ubuntu Rescue Remix or the BackTrack LiveCD (a security centered "hackers" distro which can also be used for forensics and data recovery), then mount the failing disk (I assume the disk is still mountable :). Note that it is very important to mount the disk read-only, because any write operation on the hard drive increases the chance that it becomes completely unusable before you save your data!

To make a backup of your whole hard disk's data to a secondary disk mounted under /mnt/second_disk:

# mkdir /mnt/second_disk/rescue
# mount /dev/sda2 /mnt/second_disk/rescue
# dd_rescue -d -r 10 /dev/sda1 /mnt/second_disk/rescue/backup.img

# mkdir /mnt/backup_img
# mount -o loop /mnt/second_disk/rescue/backup.img /mnt/backup_img

In the above example, change /dev/sda2 to whatever your hard drive device is named (/mnt/backup_img is just an arbitrary empty directory used as a mount point for the recovered image).

If you already have an identical secondary drive attached to the Linux host and you would like to copy the whole failing Linux disk (/dev/sda) to the identical drive (/dev/sdb), issue:

ddrescue -d -f -r3 /dev/sda /dev/sdb /media/PNY_usb/rescue.logfile

If you have just a few unreadable files and you would like to recover only them, then run ddrescue just on the damaged files:

ddrescue -d -R -r 100 /damaged/disk/some_dir/damaged_file /mnt/secondary_disk/some_dir/recoveredfile

-d instructs to use direct I/O
-R retrims the error area on each retry
-r 100 sets the retry limit to 100 (tries to read the data 100 times before giving up)

Of course this does not always work, as on some HDDs recovery is impossible due to hard physical damage; if the above command can't recover a file after that many attempts, it is very likely that it never will …
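For whole-disk rescues, the usual GNU ddrescue workflow is to always pass a mapfile (so interrupted runs can be resumed) and to do a fast first pass before retrying the bad areas – a sketch, with the device and paths to adapt to your setup:

ddrescue -d -n /dev/sda /mnt/second_disk/rescue/backup.img /mnt/second_disk/rescue/backup.map    # quick pass, skip the slow scraping
ddrescue -d -r 3 /dev/sda /mnt/second_disk/rescue/backup.img /mnt/second_disk/rescue/backup.map  # retry only the remaining bad areas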

A small note to make here is that there is another tool, dd_rescue (make sure you don't confuse them), which is also for recovery, but GNU ddrescue performs better at recovery.
The way ddrescue works is that it keeps track of the bad sectors and goes back to try a slow read of that data in order to recover it.
By the way, BSD users will be happy to know there is already a ddrescue port, so data recovery on the BSDs' *NIX filesystems is possible too, and if you're a Windows user you can also use ddrescue to recover data via Cygwin.
Of course the final data recovery is also very much in God's hands, so before launching ddrescue don't forget to say a prayer 🙂


Creating data backups on Debian and Ubuntu servers with Bacula professional backup tool

Wednesday, April 17th, 2013

Bacula professional GNU Linux Freebsd Netbsd backup software logo with bat

1. Install Bacula Backup System

root@pcfreak:~# apt-cache show bacula |grep -i description -A 5

Description: network backup, recovery and verification – meta-package
 Bacula is a set of programs to manage backup, recovery and verification
 of computer data across a network of computers of different kinds.
 .
 It is efficient and relatively easy to use, while offering many advanced
 storage management features that make it easy to find and recover lost or
 damaged files. Due to its modular design, Bacula is scalable from small
 single computer systems to networks of hundreds of machines.
 .

root@pcfreak:~# apt-get install bacula

 

Reading package lists… Done
Building dependency tree      
Reading state information… Done
The following extra packages will be installed:
  bacula-client bacula-common bacula-common-sqlite3 bacula-console bacula-director-common bacula-director-sqlite3 bacula-fd bacula-sd
  bacula-sd-sqlite3 bacula-server bacula-traymonitor libsqlite0 mt-st mtx sqlite sqlite3
Suggested packages:
  bacula-doc dds2tar scsitools sg3-utils kde gnome-desktop-environment sqlite-doc sqlite3-doc
The following NEW packages will be installed:
  bacula bacula-client bacula-common bacula-common-sqlite3 bacula-console bacula-director-common bacula-director-sqlite3 bacula-fd bacula-sd
  bacula-sd-sqlite3 bacula-server bacula-traymonitor libsqlite0 mt-st mtx sqlite sqlite3
0 upgraded, 17 newly installed, 0 to remove and 0 not upgraded.
2 not fully installed or removed.
Need to get 2,859 kB of archives.
After this operation, 6,992 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://security.debian.org/ squeeze/updates/main bacula-common amd64 5.0.2-2.2+squeeze1 [637 kB]
Get:2 http://security.debian.org/ squeeze/updates/main bacula-common-sqlite3 amd64 5.0.2-2.2+squeeze1 [102 kB]
Get:3 http://security.debian.org/ squeeze/updates/main bacula-console amd64 5.0.2-2.2+squeeze1 [67.6 kB]
Get:4 http://security.debian.org/ squeeze/updates/main bacula-director-common amd64 5.0.2-2.2+squeeze1 [56.6 kB]
Get:5 http://security.debian.org/ squeeze/updates/main bacula-director-sqlite3 amd64 5.0.2-2.2+squeeze1 [308 kB]
Get:6 http://security.debian.org/ squeeze/updates/main bacula-sd amd64 5.0.2-2.2+squeeze1 [459 kB]
Get:7 http://security.debian.org/ squeeze/updates/main bacula-sd-sqlite3 amd64 5.0.2-2.2+squeeze1 [435 kB]
Get:8 http://security.debian.org/ squeeze/updates/main bacula-server all 5.0.2-2.2+squeeze1 [48.5 kB]
Get:9 http://security.debian.org/ squeeze/updates/main bacula-fd amd64 5.0.2-2.2+squeeze1 [124 kB]
Get:10 http://security.debian.org/ squeeze/updates/main bacula-client all 5.0.2-2.2+squeeze1 [48.5 kB]
Get:11 http://security.debian.org/ squeeze/updates/main bacula all 5.0.2-2.2+squeeze1 [1,030 B]
Get:12 http://security.debian.org/ squeeze/updates/main bacula-traymonitor amd64 5.0.2-2.2+squeeze1 [70.0 kB]
Get:13 http://ftp.uk.debian.org/debian/ squeeze/main sqlite3 amd64 3.7.3-1 [100 kB]
Get:14 http://ftp.uk.debian.org/debian/ squeeze/main libsqlite0 amd64 2.8.17-6 [188 kB]
Get:15 http://ftp.uk.debian.org/debian/ squeeze/main sqlite amd64 2.8.17-6 [22.0 kB]
Get:16 http://ftp.uk.debian.org/debian/ squeeze/main mtx amd64 1.3.12-3 [154 kB]
Get:17 http://ftp.uk.debian.org/debian/ squeeze/main mt-st amd64 1.1-4 [35.6 kB]                                                            
Fetched 2,859 kB in 6s (471 kB/s)                                                                                                           
Selecting previously deselected package bacula-common.
(Reading database … 86693 files and directories currently installed.)
Unpacking bacula-common (from …/bacula-common_5.0.2-2.2+squeeze1_amd64.deb) …
Adding user 'bacula'… Ok.
Selecting previously deselected package bacula-common-sqlite3.
Unpacking bacula-common-sqlite3 (from …/bacula-common-sqlite3_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package bacula-console.
Unpacking bacula-console (from …/bacula-console_5.0.2-2.2+squeeze1_amd64.deb) …
Processing triggers for man-db …
Setting up bacula-common (5.0.2-2.2+squeeze1) …
Selecting previously deselected package bacula-director-common.
(Reading database … 86860 files and directories currently installed.)
Unpacking bacula-director-common (from …/bacula-director-common_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package sqlite3.
Unpacking sqlite3 (from …/sqlite3_3.7.3-1_amd64.deb) …
Selecting previously deselected package libsqlite0.
Unpacking libsqlite0 (from …/libsqlite0_2.8.17-6_amd64.deb) …
Selecting previously deselected package sqlite.
Unpacking sqlite (from …/sqlite_2.8.17-6_amd64.deb) …
Selecting previously deselected package bacula-director-sqlite3.
Unpacking bacula-director-sqlite3 (from …/bacula-director-sqlite3_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package mtx.
Unpacking mtx (from …/mtx_1.3.12-3_amd64.deb) …
Selecting previously deselected package bacula-sd.
Unpacking bacula-sd (from …/bacula-sd_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package bacula-sd-sqlite3.
Unpacking bacula-sd-sqlite3 (from …/bacula-sd-sqlite3_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package bacula-server.
Unpacking bacula-server (from …/bacula-server_5.0.2-2.2+squeeze1_all.deb) …
Selecting previously deselected package bacula-fd.
Unpacking bacula-fd (from …/bacula-fd_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package bacula-client.
Unpacking bacula-client (from …/bacula-client_5.0.2-2.2+squeeze1_all.deb) …
Selecting previously deselected package bacula.
Unpacking bacula (from …/bacula_5.0.2-2.2+squeeze1_all.deb) …
Selecting previously deselected package bacula-traymonitor.
Unpacking bacula-traymonitor (from …/bacula-traymonitor_5.0.2-2.2+squeeze1_amd64.deb) …
Selecting previously deselected package mt-st.
Unpacking mt-st (from …/archives/mt-st_1.1-4_amd64.deb) …
Processing triggers for man-db …
Setting up acct (6.5.4-2.1) …
Setting up bacula-director-common (5.0.2-2.2+squeeze1) …
Setting up bacula-director-sqlite3 (5.0.2-2.2+squeeze1) …
config: Running dbc_go bacula-director-sqlite3 configure
Stopping Bacula Director…:.
 *** Checking type of existing DB at /var/lib/bacula/bacula.db: None
 *** Will create new database at this location.
dbconfig-common: writing config to /etc/dbconfig-common/bacula-director-sqlite3.conf

Creating config file /etc/dbconfig-common/bacula-director-sqlite3.conf with new version
creating database bacula.db: success.
verifying database bacula.db exists: success.
populating database via sql…  done.
Processing configuration…Ok.
Starting Bacula Director…:.
Setting up bacula-sd (5.0.2-2.2+squeeze1) …
Starting Bacula Storage daemon…:.
Setting up acct (6.5.4-2.1) …
insserv: warning: script 'K02courier-imap' missing LSB tags and overrides
insserv: script iptables: service skeleton already provided!
insserv: warning: script 'courier-imap' missing LSB tags and overrides
Turning on process accounting, file set to '/var/log/account/pacct'.
Done..
Setting up bacula-sd-sqlite3 (5.0.2-2.2+squeeze1) …
Setting up bacula-server (5.0.2-2.2+squeeze1) …
Setting up bacula-fd (5.0.2-2.2+squeeze1) …
Starting Bacula File daemon…:.
Setting up bacula-client (5.0.2-2.2+squeeze1) …
Setting up bacula (5.0.2-2.2+squeeze1) …
Setting up proftpd-basic (1.3.3a-6squeeze6) …
Starting ftp server: proftpd.
Setting up mt-st (1.1-4) …
update-alternatives: using /bin/mt-st to provide /bin/mt (mt) in auto mode.
 

 

Once installed, you will have 3 processes running in the background, used by the Bacula backup system (bacula-dir, bacula-sd and bacula-fd):
root@pcfreak:~# ps ax |grep -i bacula|grep -v grep
6044 ? Ssl 0:00 /usr/sbin/bacula-dir -c /etc/bacula/bacula-dir.conf -u bacula -g bacula
6089 ? Ssl 0:00 /usr/sbin/bacula-sd -c /etc/bacula/bacula-sd.conf -u bacula -g tape
6167 ? Ssl 0:00 /usr/sbin/bacula-fd -c /etc/bacula/bacula-fd.conf

Here is what each of them does:

a) Bacula-dir or Bacula Director is the main Bacula backup system component. Bacula-dir controls the whole backup system and the other 2 daemons, Bacula-FD and Bacula-SD.

b) Bacula-fd (Bacula File Daemon) acts as the interface between the Bacula network backup system and the filesystems to be backed up: it is responsible for reading / writing / verifying the files to be backed up / verified / restored. Network transfer can optionally be compressed.

c) Bacula-sd (Bacula Storage Daemon) acts as the interface between the Bacula network backup system and the tape drive or filesystem where backups will be stored.

Each of the 3 processes bacula-dir, bacula-fd and bacula-sd has its own init script in /etc/init.d/, e.g.:

# /etc/init.d/bacula-director
# /etc/init.d/bacula-fd
# /etc/init.d/bacula-sd

2. Configuring Bacula Backup System

Configuring Bacula is done via configuration files located in /etc/bacula

root@pcfreak:~# cd /etc/bacula
root@pcfreak:/etc/bacula# ls -1
bacula-dir.conf
bacula-fd.conf
bacula-fd.conf.dist
bacula-sd.conf
bacula-sd.conf.dist
bconsole.conf
common_default_passwords
scripts/
tray-monitor.conf

3. Defining what needs to be backed up

Here is a short description of the most important configuration blocks in Bacula's main config, bacula-dir.conf:
 

1. Director resource defines the Director's parameters. Name, Password, WorkingDirectory, and PidDirectory must be set. QueryFile specifies where the Director can find the SQL queries.

2. Job defines a backup or restore to perform. You will need at least one job per client. To simplify configuration of similar clients, create a common JobDefs resource and refer to it from within a Job (see the example after this list). For example, if you have one set of defaults for desktops and another set for servers, you can create a Desktop and a Server JobDefs (these names are arbitrary and set with the Name attribute) and refer to those two collections of settings from a Job.

3. Schedule resource is referred to within a Job to allow it to occur automatically.

4. FileSet resource defines which files are to be backed up. You can both Include and Exclude files.

5. Each Client resource details a client that this Director can back up.

6. Storage resource specifies the storage daemon available to the Director.

7. Pool identifies a set of storage volumes (tapes/files) that Bacula can write data to. Each Pool can be configured to use different sets of tapes for different jobs.

8. Catalog resource defines the Bacula catalog (database) to be used.

9. Messages resource captures where to send messages and which messages to send.
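To make items 2 and 3 above more concrete, here is a minimal JobDefs / Job pair in the spirit of the Debian default bacula-dir.conf (the names DefaultJob, pcfreak-fd, "Full Set", WeeklyCycle, File and Standard are illustrative and must match resources actually defined in your config):

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = pcfreak-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
}

Job {
  Name = "BackupLocalhost"
  JobDefs = "DefaultJob"
}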
 

a) Defining directories to be backed up

Defining what needs to be backed up is done through bacula-dir.conf (/etc/bacula/bacula-dir.conf). In the file there is a FileSet section where the dirs to be backed up have to be included; the config below defines a backup of the /usr/sbin, /etc, /root, /usr and /var directories:
 

# List of files to be backed up
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
#   
#  Put your list of files here, preceded by 'File =', one per line
#    or include an external list with:
#
#    File = <file-name
#
#  Note: / backs up everything on the root partition.
#    if you have other partitions such as /usr or /home
#    you will probably want to add them too.
#
#  By default this is defined to point to the Bacula binary
#    directory to give a reasonable FileSet to backup to
#    disk storage during initial testing.
#
    File = /usr/sbin
    File = /root
    File = /etc
    File = /usr
    File = /var

  }
}
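The same FileSet can also carry an Exclude block, placed next to Include before the FileSet's closing brace; a typical sketch of paths you normally want left out of the backup:

  Exclude {
    File = /var/lib/bacula
    File = /proc
    File = /sys
    File = /tmp
  }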

b) Defining where to store backups

All configuration of where Bacula will store created backups is done through /etc/bacula/bacula-sd.conf

There are a few configurations that need to be tuned according to custom user purposes; below I paste them from the config:
 

Storage {                             # definition of myself
  Name = pcfreak-sd
  SDPort = 9103                  # Director's port     
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 127.0.0.1
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /nonexistant/path/to/file/archive/dir
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Messages {
  Name = Standard
  director = pcfreak-dir = all

}

 

Storage sets the working directory where temporary data is kept at backup creation time – the default is /var/lib/bacula.

Device – defines the exact directory where backups will be stored once created – usually this is a directory on a hard disk mounted specially for backups. The Bacula default is /nonexistant/path/to/file/archive/dir.

Messages – configures where and what kind of messages are sent on Bacula operations.
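In practice, for the Device resource shown above, that usually means changing just the Archive Device line to point to the real backup location, e.g. (the mount point below is only a placeholder):

  Archive Device = /mnt/backup_disk/bacula-backups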

c) Configuring Bacula to create backups via network

Configuring whether Bacula will act just on the server's localhost, or will bind to and be visible on a network IP to store backups over the network, is done from Bacula-FD (Bacula File Daemon).

By default it listens on localhost (127.0.0.1). Bacula-FD configuration is done in /etc/bacula/bacula-fd.conf. The most important section, configuring where Bacula listens, is named FileDaemon.
 

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = pcfreak-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  FDAddress = 127.0.0.1
}
 

 

By commenting out FDAddress, Bacula will automatically listen on the external IP configured on the LAN interface (e.g. eth0).
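I.e. the FileDaemon block from above would then look like this (and don't forget to restart bacula-fd afterwards):

FileDaemon {                          # this is me
  Name = pcfreak-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
#  FDAddress = 127.0.0.1
}

# /etc/init.d/bacula-fd restart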

4. Managing Bacula's Command Line Interface – bconsole

Managing Bacula interactively is done through the bconsole (Bacula's Management Console) command.

root@pcfreak:~# bconsole

Connecting to Director localhost:9101
1000 OK: pcfreak-dir Version: 5.0.2 (28 April 2010)
Enter a period to cancel a command.
*
*help
  Command       Description
  =======       ===========
  add           Add media to a pool
  autodisplay   Autodisplay console messages
  automount     Automount after label
  cancel        Cancel a job
  create        Create DB Pool from resource
  delete        Delete volume, pool or job
  disable       Disable a job
  enable        Enable a job
  estimate      Performs FileSet estimate, listing gives full listing
  exit          Terminate Bconsole session
  gui           Non-interactive gui mode
  help          Print help on specific command
  label         Label a tape
  list          List objects from catalog
  llist         Full or long list like list command
  messages      Display pending messages
  memory        Print current memory usage
  mount         Mount storage
  prune         Prune expired records from catalog
  purge         Purge records from catalog
  python        Python control commands
  quit          Terminate Bconsole session
  query         Query catalog
  restore       Restore files
  relabel       Relabel a tape
  release       Release storage
  reload        Reload conf file
  run           Run a job
  status        Report status
  setdebug      Sets debug level
  setip         Sets new client address — if authorized
  show          Show resource records
  sqlquery      Use SQL to query catalog
  time          Print current time
  trace         Turn on/off trace to file
  unmount       Unmount storage
  umount        Umount – for old-time Unix guys, see unmount
  update        Update volume, pool or stats
  use           Use catalog xxx
  var           Does variable expansion
  version       Print Director version
  wait          Wait until no jobs are running

When at a prompt, entering a period cancels the command.

You have messages.
*
 

When run, bconsole launches another process, bacula-console:

root@pcfreak:~# ps ax |grep -i bacula-console|grep -v grep
13959 pts/5 Sl+ 0:00 /usr/sbin/bacula-console -c /etc/bacula/bconsole.conf

Communication between the Bacula processes is done over 3 TCP/IP ports:

a) Communication from bconsole to Bacula is through port number 9101
b) Communication from bacula-dir to bacula-sd is done using port number 9103
c) bacula-dir talks to bacula-fd via port number 9102
d) Messages between bacula-fd and bacula-sd go via port number 9103

All of these ports listen only on 127.0.0.1 (localhost), and thus there is no security risk of external malicious users reaching Bacula remotely.

a) some essential commands while in bconsole shell

*show pools
Pool: name=Default PoolType=Backup
      use_cat=1 use_once=0 cat_files=1
      max_vols=0 auto_prune=1 VolRetention=1 year
      VolUse=0 secs recycle=1 LabelFormat=*None*
      CleaningPrefix=*None* LabelType=0
      RecyleOldest=0 PurgeOldest=0 ActionOnPurge=0
      MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0
      MigTime=0 secs MigHiBytes=0 MigLoBytes=0
      JobRetention=0 secs FileRetention=0 secs
Pool: name=File PoolType=Backup
      use_cat=1 use_once=0 cat_files=1
      max_vols=100 auto_prune=1 VolRetention=1 year
      VolUse=0 secs recycle=1 LabelFormat=*None*
      CleaningPrefix=*None* LabelType=0
      RecyleOldest=0 PurgeOldest=0 ActionOnPurge=0
      MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=53687091200
      MigTime=0 secs MigHiBytes=0 MigLoBytes=0
      JobRetention=0 secs FileRetention=0 secs
Pool: name=Scratch PoolType=Backup
      use_cat=1 use_once=0 cat_files=1
      max_vols=0 auto_prune=1 VolRetention=1 year
      VolUse=0 secs recycle=1 LabelFormat=*None*
      CleaningPrefix=*None* LabelType=0
      RecyleOldest=0 PurgeOldest=0 ActionOnPurge=0
      MaxVolJobs=0 MaxVolFiles=0 MaxVolBytes=0
      MigTime=0 secs MigHiBytes=0 MigLoBytes=0
      JobRetention=0 secs FileRetention=0 secs
You have messages.

*status
Status available for:
     1: Director
     2: Storage
     3: Client
     4: All
Select daemon type for status (1-4):

*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name:

*messages

b) Restoring Backups with bconsole

Restoring from backups is done with the restore command:

*restore
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"

First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.

To select the JobIds, you have the following choices:
     1: List last 20 Jobs run
     2: List Jobs where a given File is saved
     3: Enter list of comma separated JobIds to select
     4: Enter SQL list command
     5: Select the most recent backup for a client
     6: Select backup for a client before a specified time
     7: Enter a list of files to restore
     8: Enter a list of files to restore before a specified time
     9: Find the JobIds of the most recent backup for a client
    10: Find the JobIds for a backup for a client before a specified time
    11: Enter a list of directories to restore for found JobIds
    12: Select full restore to a specified Job date
    13: Cancel
Select item:  (1-13):

 

Bacula can create backups on tapes as well, as tapes are still heavily used for backing up data in some banks, airports and other organizations where data is crucial.

Bacula is not among the easiest systems for creating backups, but for backup administrators who work with Linux and FreeBSD it is great. Its scalability allows you to build very robust and complex backup schemes which are hardly achievable with other, less professional backup tools like rsnapshot or rsync.
 


Create Easy Data Backups with Rsnapshot back-up tool on GNU / Linux

Monday, April 15th, 2013

 

rsnapshot Linux and FreeBSD easy data backup tool logo
Backing up information on Linux servers is an essential part of the routine system administrator's job. Thus I decided to write for those interested in how one can easily create backups of important data through a tiny tool called rsnapshot, which I have previously used to make periodic incremental data backups on a few of the Debian Linux servers I manage. In case you wonder why to use rsnapshot and not just rsync – there are 2 reasons:
a. Rsnapshot is very easy to configure and use, and you don't need a deep understanding of rsync's numerous options to use it.
b. Rsnapshot supports incremental data backups – saving a lot of disk space on the backup host.

 

 

 

Mentioning incremental data backups – for some this term might be news, so I will explain in short here what incremental data backups are.

Incremental data backups are backups which only store new data when there are changes in the files scheduled for backup, or when new files are added to the directory / directories set to be routinely backed up. Incremental backups are often desirable, as they consume minimal storage space and are quicker to perform than normal periodic whole-data archiving (full backups). rsync also has support for incremental backups, but configuring it to do so takes extra time spent reading and understanding how they work, so I personally prefer the simplicity rsnapshot brings.
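Under the hood rsnapshot achieves this with hard links (as the package description below also notes); a minimal manual equivalent of one incremental snapshot, assuming /backups/daily.1 holds the previous snapshot, would be:

rsync -a --delete --link-dest=/backups/daily.1 /home/ /backups/daily.0/
# unchanged files become hard links into daily.1, so only changed or new files consume extra space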

1. Installing rsnapshot with apt-get

Here is rsnapshot debian package description;

debian:~#  apt-cache show rsnapshot|grep -i description -A 5

 

Description: local and remote filesystem snapshot utility
 rsnapshot is an rsync-based filesystem snapshot utility. It can take
 incremental backups of local and remote filesystems for any number of
 machines. rsnapshot makes extensive use of hard links, so disk space is
 only used when absolutely necessary.
Homepage: http://www.rsnapshot.org/

As you can read from description, rsnapshot is a frontend command using rsync to make data backups.

Installing rsnapshot is done with:

 debian:~# apt-get install --yes rsnapshot

Reading package lists… Done
Building dependency tree      
Reading state information… Done
The following NEW packages will be installed:
  rsnapshot
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/140 kB of archives.
After this operation, 598 kB of additional disk space will be used.
Selecting previously deselected package rsnapshot.
(Reading database … 87026 files and directories currently installed.)
Unpacking rsnapshot (from …/rsnapshot_1.3.1-1_all.deb) …
Processing triggers for man-db …
Setting up rsnapshot (1.3.1-1) …

2. Rsnapshot  package content and Documentation

Once installed, here is the file content of the rsnapshot deb package:

debian:~# dpkg -L rsnapshot

 

/.
/usr
/usr/share
/usr/share/doc-base
/usr/share/doc-base/rsnapshot
/usr/share/doc
/usr/share/doc/rsnapshot
/usr/share/doc/rsnapshot/TODO
/usr/share/doc/rsnapshot/changelog.gz
/usr/share/doc/rsnapshot/Upgrading_from_1.1.gz
/usr/share/doc/rsnapshot/examples
/usr/share/doc/rsnapshot/examples/rsnapshot.conf.default.gz
/usr/share/doc/rsnapshot/examples/utils
/usr/share/doc/rsnapshot/examples/utils/backup_mysql.sh
/usr/share/doc/rsnapshot/examples/utils/mysqlbackup.pl
/usr/share/doc/rsnapshot/examples/utils/random_file_verify.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapreport.pl.gz
/usr/share/doc/rsnapshot/examples/utils/make_cvs_snapshot.sh
/usr/share/doc/rsnapshot/examples/utils/backup_pgsql.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/CHANGES.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.pl.gz
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/INSTALL.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/TODO.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.xsd
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.conf.sample
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/README.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshot-copy
/usr/share/doc/rsnapshot/examples/utils/backup_rsnapshot_cvsroot.sh
/usr/share/doc/rsnapshot/examples/utils/backup_dpkg.sh
/usr/share/doc/rsnapshot/examples/utils/sign_packages.sh
/usr/share/doc/rsnapshot/examples/utils/mkmakefile.sh
/usr/share/doc/rsnapshot/examples/utils/rsnaptar
/usr/share/doc/rsnapshot/examples/utils/rsnapshot_invert.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapshot_if_mounted.sh
/usr/share/doc/rsnapshot/examples/utils/README
/usr/share/doc/rsnapshot/examples/utils/debug_moving_files.sh
/usr/share/doc/rsnapshot/examples/utils/backup_smb_share.sh
/usr/share/doc/rsnapshot/README.gz
/usr/share/doc/rsnapshot/changelog.Debian.gz
/usr/share/doc/rsnapshot/copyright
/usr/share/doc/rsnapshot/README.Debian
/usr/share/doc/rsnapshot/html
/usr/share/doc/rsnapshot/html/rsnapshot-HOWTO.en.html
/usr/share/doc/rsnapshot/NEWS.Debian.gz
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/rsnapshot
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/rsnapshot.1.gz
/usr/share/man/man1/rsnapshot-diff.1.gz
/usr/bin
/usr/bin/rsnapshot-diff
/usr/bin/rsnapshot
/var
/var/cache
/var/cache/rsnapshot
/etc
/etc/cron.d
/etc/cron.d/rsnapshot
/etc/rsnapshot.conf
/etc/logrotate.d
/etc/logrotate.d/rsnapshot

To get a basic idea of rsnapshot, how it can be configured and run manually, as well as how it can be set up to run periodically via a cronjob, the README shipped with the package is a good starting point.

debian:~# zless /usr/share/doc/rsnapshot/README.gz
....

It is also useful to check the program documentation in HTML, if you have some text browser installed – e.g. lynx or links:

debian:~# links /usr/share/doc/rsnapshot/html/rsnapshot-HOWTO.en.html

Note that much of the information in rsnapshot-HOWTO relates to installing rsnapshot manually from source, so Deb based distro users can safely skip those sections. Hence for Debian users it is useful to read the howto from section 4.A onwards. man rsnapshot's Example section is very good reading too, as it gives a lot of usage scenarios needed in more complicated backup situations.

3. Configuring Rsnapshot – Setting Data Directories to Backup

Configuration of Rsnapshot is done through the /etc/rsnapshot.conf file. There are plenty of comments in the file, so opening it in a text editor and taking a few minutes to read the commented lines is worthwhile. As with most Linux tool config files, the actual configuration is done through the uncommented config directives.

debian:~# cat /etc/rsnapshot.conf |grep -v "#"|uniq

 

 

config_version    1.2

snapshot_root    /var/cache/rsnapshot/

cmd_rm        /bin/rm

cmd_rsync    /usr/bin/rsync

cmd_logger    /usr/bin/logger

interval    hourly    6
interval    daily    7
interval    weekly    4

verbose        2

loglevel    3

lockfile    /var/run/rsnapshot.pid

backup    /home/        localhost/
backup    /etc/        localhost/
backup    /usr/local/    localhost/

 

 

The above config options are easy to understand: there are the backup intervals to set (hourly, daily, weekly), the verbosity level of the rsnapshot backup operation log, a lockfile which rsnapshot uses to prevent duplicate runs, and finally the backup directives in which you specify what needs to be backed up. In the config file there is also a commented variable for creating an rsnapshot backup once a month:

#interval   monthly 3

If you need to create backups once a month, uncomment it.
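For example, with monthly backups enabled the interval block of /etc/rsnapshot.conf would look like this (Tab delimited; the counts below are just the defaults plus 3 monthly snapshots kept):

interval    hourly    6
interval    daily    7
interval    weekly    4
interval    monthly    3

Newer rsnapshot releases also document an equivalent retain keyword in place of interval, but interval works as well.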

In the backup directives add all directories from the filesystem which need routine backups. For example, I keep my Apache Web server files in /var/www/, store various installed software in /root/ and keep a backup of old Qmail (Vpopmail) emails in /var/vpopmail.

To make rsnapshot back those up I add, after the rest of the backup directives:

backup  /var/www/   localhost/
backup  /var/vpopmail/  localhost/
backup  /root/  localhost/


It is good practice to change the snapshot_root directive to /root/.backups, or if you prefer to keep snapshot_root at the default /var/cache/rsnapshot, at least symlink /var/cache/rsnapshot to /root/.backups with the ln command:

debian:~# ln -sf /var/cache/rsnapshot /root/.backups

If you change snapshot_root to /root/.backups, don't forget to create /root/.backups and chmod the directory permissions so it is readable only by its owner, i.e.:

debian:~# mkdir /root/.backups
debian:~# chmod -R 700 /root/.backups

Note that it is important to use Tab delimiters everywhere in /etc/rsnapshot.conf; if you use the space key as a delimiter instead of Tab you will end up with errors preventing rsnapshot from running.
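A quick way to double check that no spaces sneaked in (just a generic shell check, nothing rsnapshot specific) is to print non-printing characters with cat -A, where Tabs show up as ^I:

debian:~# cat -A /etc/rsnapshot.conf | grep '^backup'
backup^I/home/^I^Ilocalhost/$
backup^I/etc/^I^Ilocalhost/$
backup^I/usr/local/^Ilocalhost/$

If you see ordinary spaces between the fields instead of ^I, that line will trigger the configtest error shown in the next section.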

4. Testing rsnapshot configuration and launching it first time

I will say it once again: use the Tab key for delimiters in the config. On my first Rsnapshot launch my mistake was to use spaces to delimit my config options, thus when testing my configuration rsnapshot printed an error and failed:

debian:~# rsnapshot configtest

 

----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot configtest
----------------------------------------------------------------------------
ERROR: /etc/rsnapshot.conf on line 199:
ERROR: backup /var/www/ localhost/
ERROR: ----------------------------------------------------------------------------
ERROR: Errors were found in /etc/rsnapshot.conf,
ERROR: rsnapshot can not continue. If you think an entry looks right, make
ERROR: sure you don't have spaces where only tabs should be.

After replacing the space delimiters with Tabs and re-running rsnapshot configtest, if all is fine you get:

debian:~# rsnapshot configtest
Syntax OK

Once all is good with the config, before launching Rsnapshot to do its first complete incremental data backup, you can display what rsnapshot will back up and the exact rsync invocations it will use (a dry run) by typing:


debian:~# rsnapshot -t hourly

echo 5644 > /var/run/rsnapshot.pid
mv /var/cache/rsnapshot/hourly.2/ /var/cache/rsnapshot/hourly.3/
mv /var/cache/rsnapshot/hourly.1/ /var/cache/rsnapshot/hourly.2/
native_cp_al("/var/cache/rsnapshot/hourly.0", \
    "/var/cache/rsnapshot/hourly.1")
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /home \
    /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc \
    /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /var/www /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /var/vpopmail /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /root \
    /var/cache/rsnapshot/hourly.0/localhost/
touch /var/cache/rsnapshot/hourly.0/

To launch backup first time manually:

debian:~# rsnapshot hourly

Depending on the size of the backed up data (Mega/Giga/Terabytes) and the number of files to back up, the backup takes from minutes to hours.
Note that it is always a good idea to create backups on a separate hard disk, preferably configured in some kind of RAID array (RAID 1 or RAID 5). Creating backups on a separate hard disk has numerous advantages; the most important one is that it doesn't put extra Input / Output (I/O) stress on the main hard disk and thus will not cause downtime on high traffic / busy servers with slow old hard disks or servers with a big amount of HDD I/O reads / writes.
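A minimal sketch of dedicating a separate disk to the snapshots (the device name /dev/sdb1 below is only an example, use your real backup disk or RAID device) is to format it, mount it over the snapshot_root directory and add it to /etc/fstab so it survives reboots:

debian:~# mkfs.ext4 /dev/sdb1
debian:~# mkdir -p /var/cache/rsnapshot
debian:~# mount /dev/sdb1 /var/cache/rsnapshot
debian:~# echo '/dev/sdb1 /var/cache/rsnapshot ext4 defaults,noatime 0 2' >> /etc/fstab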

5. Enabling Rsnapshot to create backups via scheduled cron job

On package install Rsnapshot creates a skeleton cron file for running via cronjob in /etc/cron.d/rsnapshot.

debian:~# cat /etc/cron.d/rsnapshot

 

 

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

# 0 */4        * * *        root    /usr/bin/rsnapshot hourly
# 30 3      * * *        root    /usr/bin/rsnapshot daily
# 0  3      * * 1        root    /usr/bin/rsnapshot weekly
# 30 2      1 * *        root    /usr/bin/rsnapshot monthly
 

To make hourly, daily, weekly or monthly backups uncomment one or more of the above 4 lines. For paranoid admins scared to lose even a bit of data, hourly backups are a good solution. I personally prefer configuring weekly backups, for the reason that I routinely monitor the servers – regularly keeping an eye on dmesg and checking the Linux smartd / smartmontools logs to find out whether a hard disk or RAID has bad blocks.
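For instance, following my weekly-only preference the cron file could keep just this single uncommented line (a sketch based on the shipped example):

0  3      * * 1        root    /usr/bin/rsnapshot weekly

Keep in mind that in rsnapshot only the smallest interval defined in /etc/rsnapshot.conf actually runs rsync (the bigger ones merely rotate older snapshots), so for a weekly-only scheme either remove the hourly / daily interval lines from the config or schedule the lowest remaining interval here.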

6. Checking backup size / backup difference and backup structure

Checking the size of backups can be done using the standard du command on the backup directory:

debian:~# du -hsc /var/cache/rsnapshot/*
4.3G /var/cache/rsnapshot/hourly.0
4.5M /var/cache/rsnapshot/hourly.1
68M /var/cache/rsnapshot/hourly.2
4.4G total

rsnapshot also has a du argument via which the backup size can be viewed:

debian:~# rsnapshot du
4.3G /var/cache/rsnapshot/hourly.0/
4.5M /var/cache/rsnapshot/hourly.1/
68M /var/cache/rsnapshot/hourly.2/
4.4G total

As you can see, each incremental backup gets a number suffix – hourly.{0,1,2} etc., hourly.0 being the most recent one.

To check the difference between two backups:

debian:~# rsnapshot diff /var/cache/rsnapshot/hourly.0/ /var/cache/rsnapshot/hourly.1/
Comparing /var/cache/rsnapshot/hourly.1 to /var/cache/rsnapshot/hourly.0
Between /var/cache/rsnapshot/hourly.1 and /var/cache/rsnapshot/hourly.0:
660 were added, taking 3728377727 bytes;
492 were removed, saving 17623 bytes;

The structure of the backed up files is identical to a normal copy of the files, without any compression:

debian:~# cd /root/.backups/hourly.0/localhost/
debian:~/.backups/hourly.0/localhost# ls

etc/ home/ root/ usr/ var/

 

7. Restoring files or directories from rsnapshot backup

To restore, let's say, the /var directory, cd into it:

debian:~/.backups/hourly.0/localhost# cd var
debian:~/.backups/hourly.0/localhost/var#

Then use rsync as follows:

debian:~/.backups/hourly.0/localhost/var# rsync -avr * /
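If only a single file or directory is needed, it can be copied back the same way without touching the rest of the filesystem (the path below is just an example):

debian:~# rsync -av /root/.backups/hourly.0/localhost/etc/ssh/sshd_config /etc/ssh/sshd_config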
 

 

8. Creating rsnapshot backups from remote server via SSH protocol

In /etc/rsnapshot.conf you should set the SSH port on which the remote server accepts SSH connections. The standard port is 22, however it is wise to configure SSH on the backed up servers to listen on some other, non standard port.

The config variables to look at are:

ssh_args    -p 22

and cmd_ssh.

To enable remote login via ssh, change in /etc/rsnapshot.conf:

# cmd_ssh /usr/bin/ssh

to

cmd_ssh /usr/bin/ssh

Before starting rsnapshot to create backups of remote host2 you need to configure automatic SSH passwordless login by generating a DSA or RSA key pair between host1 and host2, where host1 is the machine on which rsnapshot is run and to which backups will be copied from host2.
Once passwordless ssh to the remote host is active, to make rsnapshot create backups of host2 on host1 you will need to add near the end of /etc/rsnapshot.conf:

backup  root@host2.com:/root/ host2.com/

The same way you can add a number of remote hosts from which periodic backups will be created on the central host1. The only condition is that passwordless SSH login is configured for each node – host3, host4, host5:

backup  root@host3.com:/root/root/ host3.com
backup  root@host4.com:/home/ host4.com
backup  root@host4.com:/var/ host4.com

To create the public key (id_dsa.pub) file on host1 and copy it to a node, use the commands:

debian:~# ssh-keygen -t dsa
...
....
debian:~# ssh-copy-id -i ~/.ssh/id_dsa.pub root@host3

Once all hosts that need to get backed up are reachable passwordlessly from the central backup host – host1, to test whether the backups get created, manually issue:

debian:~# rsnapshot -v hourly
...

Rsnapshot ships a number of other scripts which can be easily integrated with it, located in /usr/share/doc/rsnapshot/examples/utils.
Inside you can find example scripts on how to create MySQL / PostgreSQL database backups, Samba share backups, backups of CVS repositories and so on. The scripts can be easily modified to work with almost any data or protocol with a bit of tweaking. A short description of each example script can be found in /usr/share/doc/rsnapshot/examples/utils/README.
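For instance, a minimal sketch of a MySQL dump wrapper in the spirit of the shipped backup_mysql.sh (the script path, credentials file and exact options below are my assumptions, not the content of the packaged example) could be:

#!/bin/sh
# /usr/local/bin/rsnapshot_mysql.sh - dump all MySQL databases into the current
# directory; rsnapshot runs backup_script commands inside a temporary directory
# and then moves the produced files into the snapshot tree.
umask 077
mysqldump --defaults-extra-file=/root/.my.cnf --all-databases --single-transaction \
    | gzip > all-databases.sql.gz

The script is then hooked into /etc/rsnapshot.conf with a backup_script directive (Tab delimited again):

backup_script    /usr/local/bin/rsnapshot_mysql.sh    localhost/mysql/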
