Posts Tagged ‘rsync’

How to synchronize with / from Remote FTP server using LFTP like with rsync

Sunday, October 15th, 2017


Have you ever needed to easily synchronize with a remote host that only runs an FTP server?

Or are you on a local network and need to mirror a directory or two in a fast and easy-to-remember way?

If so, you'll be happy with the LFTP command below, which does pretty much the same job as rsync, with the difference that it mirrors files over FTP (the old but gold File Transfer Protocol).
 

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror REMOTE_DIRECTORY LOCAL_DIRECTORY' FTP_SERVER_HOSTNAME
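
If you need to push local changes up to the FTP server instead (the "to" direction from the title), lftp's mirror command also has a reverse mode. A minimal sketch, assuming the same placeholder credentials and directory names as above:

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror -R LOCAL_DIRECTORY REMOTE_DIRECTORY; quit' FTP_SERVER_HOSTNAME

The -R (reverse) switch tells mirror to upload the local directory to the remote one, and the trailing quit closes the session once the mirror finishes.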


Enjoy, and thanks to my dear friend Amridikon for the tip! 🙂

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

I recently had the task to transfer some huge, long-stored Application server data (about 70GB, archived) from an old Linux host to a new one, where a new Tomcat Application (Linux) server will be installed to cope with the increased site traffic (the old server hardware was overloaded).

The two systems sit in a paranoid DMZ network and have no access to each other via SSH / FTP / FTPS, and not even Web access on port 80 or SSL (443), so in order to move the data I had to use a third HOP station, a Windows server with a huge 150 TB SAN network-attached storage (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I use MobaXterm, so the basic UNIX commands such as sftp / scp are already available on the Windows system through it.
Thus, to transfer the Chronos Tomcat application files (archived as .tar.gz) I sftp-ed into the Linux host and used the get command to retrieve the archive, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The Secured DMZ network seemed to have a traffic shaper limiting my get / SCP download to about 2.5 MBytes/sec, so the overall file transfer was going to need about 08:30 hours to complete. It was the middle of the day, about 13:00, and my work day ends at 18:00 (meaning I could keep the file retrieval session open for at most 5 hours), so the transfer would be cancelled when I logged out of the HOP station after 18:00. I had already let the transfer run for 2 hours and about 23% of the file had been retrieved, so I wondered whether SCP / SFTP downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself, however neither of them has a resume option.

Then I remembered that good old rsync (the versatile remote and local file copying tool), which I often use to build customer backup strategies, has the ability to resume partially downloaded files. I wondered whether this resume would only work if the transfer had been initiated by rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the retrieval (rsync can continue transfers that were cancelled or failed due to network problems or user activity). Continuing the failed download from where it was interrupted turned out to be pretty easy. First I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


The --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode, which equals the -rlptgoD arguments (no -H, -A, -X); the -v option turns on verbose output. An easier-to-remember rsync resume uses -P, which combines --partial with --progress and also prints a transfer percentage status line and an estimated time for the transfer to complete:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work, the remote Linux system must have rsync (the package) installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to resume large gigabyte or terabyte archive downloads easily between two Windows hosts use cwRsync.
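
A quick way to check that beforehand, assuming you have shell access to the remote host, is something like:

ssh username@Linux-server.net 'command -v rsync && rsync --version | head -1'

If the command prints nothing, rsync is missing and has to be installed from the distribution's package manager first.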

luckyBackup Linux GUI back-up and synchronization tool

Wednesday, May 14th, 2014

If you're using GNU / Linux on the desktop and you're already tired of creating backups with your own terminal hacks, and you want to make your life a little easier by automating the backup of your important files through a GUI program, take a look at luckyBackup.

luckyBackup is a GUI frontend to the famous rsync command line backup tool. luckyBackup is available as a package in almost all modern Linux distributions, it is very easy to set up, and it can save you a lot of time, especially if you have to manage a number of workplace desktop Linux computers.
luckyBackup is an absolute must-have program for Linux desktop beginners. If you're migrating from the Microsoft Windows realm and you're used to BackupPC, luckyBackup is probably the de facto Linux BackupPC substitute.

The sad news for Linux GNOME desktop users is that luckyBackup is written in Qt, so using it will load up your notebook a bit.
It is not installed by default, so once a new Linux desktop is installed you will have to install it manually; on Debian and Ubuntu based Linuxes, apt-get it:

debian:~# apt-get install --yes luckybackup
...

On Fedora and CentOS Linux, install luckyBackup via the yum rpm package manager:

[root@centos ~]# yum -y install luckybackup
.

luckyBackup is also ported to OpenSuSE, Slackware, Gentoo, Mandriva and ArchLinux. In 2009 luckyBackup won the Sourceforge Community Choice Award for "best new project".

luckyBackup copies over only the changes you've made to the source directory and nothing more.
You will be surprised when your huge source is backed up in seconds (after the first backup).

Whatever changes you make to the source (adding, moving, deleting or modifying files / directories, etc.) will have the same effect on the destination.
Owner, group, time stamps, links and permissions of files are preserved (unless stated otherwise).

luckyBackup creates multiple backup "snapshots". Each snapshot is an image of the source data at a specific date and time.
Easy rollback to any of the snapshots is possible. Besides that, luckyBackup supports sync (just like rsync) of any two directories, keeping the files that were most recently modified on both of them.

This is useful if you modify files on more than one PC (using a flash drive) and don't want to bother remembering which one you used last. luckyBackup is also capable of excluding certain files or directories from backups: you can exclude any file, folder or pattern from the backup transfer.

After each operation a logfile is created in your home folder. You can have a look at it any time you want.

luckyBackup can also run from the command line if you prefer not to use the GUI, but you first have to create the profile that is going to be executed.
Type "luckybackup --help" at a terminal to see usage and supported options.
There is also tray notification: visual feedback in the tray area informs you about what is going on.

How to copy / clone installed packages from one Debian server to another

Friday, April 13th, 2012

1. Dump all installed server packages from Debian Linux server1

First it is necessary to dump a list of all installed packages on the server from which the installed deb package 'selection' will be replicated.

debian-server1:~# dpkg --get-selections \* > packages.txt

The produced packages.txt file will have only two columns: column 1 holds the installed package name and column 2 the status of the package, e.g. install or deinstall.
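
Just for illustration, a few lines of a typical packages.txt look roughly like this (the package names here are only examples):

apache2                                         install
bash                                            install
some-old-package                                deinstall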

Note that the selection operations which modify the dpkg status area require root privileges; trying to run dpkg --set-selections with a non-privileged user I got:

hipo@server1:~$ dpkg --set-selections > packages.txt
dpkg: operation requires read/write access to dpkg status area

2. Copy packages.txt file containing the installed deb packages from server1 to server2

There are many ways to copy the packages.txt package description file; one can use ftp, sftp, scp, rsync … lftp, or even fetch it via wget if it is placed in some Apache-served directory on server1.

A quick and convenient way to copy the file from Debian server1 to server2 is with scp, as it can also easily be used in an automated script to do the packages.txt copying (if, for instance, you have to implement package cloning on multiple Debian Linux servers).

root@debian-server1:~# scp ./packages.txt hipo@server-hostname2:~/packages.txt
The authenticity of host '83.170.97.153 (83.170.97.153)' can't be established.
RSA key fingerprint is 38:da:2a:79:ad:38:5b:64:9e:8b:b4:81:09:cd:94:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '83.170.97.153' (RSA) to the list of known hosts.
hipo@83.170.97.153's password:
packages.txt

As this is the first time I make connection to server2 from server1, I'm prompted to accept the host RSA unique fingerprint.
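
If the copy has to run unattended from a script, that interactive prompt would block it; one possible workaround (at the cost of skipping the host key verification, so use it only on a trusted network) is to pass the StrictHostKeyChecking option to scp, e.g.:

root@debian-server1:~# scp -o StrictHostKeyChecking=no ./packages.txt hipo@server-hostname2:~/packages.txt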

3. Install the copied selection from server1 on server2 with apt-get or dselect

debian-server2:/home/hipo# apt-get update
...
debian-server2:/home/hipo# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
debian-server2:/home/hipo# dpkg --set-selections < packages.txt
debian-server2:/home/hipo# apt-get -u dselect-upgrade --yes

The first apt-get update command ensures the server has the latest package lists, and the upgrade brings the currently installed packages up to date; this will save you from running outdated versions of the installed packages on debian-server2.

Bear in mind that using apt-get might sometimes create dependency issues, depending on the exact package names being replicated between the servers.

Therefore it is sometimes better to use another approach, a bash for loop, to "replicate" the installed packages between the two servers, like so:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude install $i; done

If you want to suppress aptitude's confirmation questions, pass the -y flag:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude -y install $i; done

Be cautious if -y is passed, as sometimes packages might be removed from the server to resolve dependency issues; if you need those packages you will have to install them again manually.
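
If you want to preview what aptitude would do before committing to any changes, its -s (simulate) switch can be added to the same loop; nothing is actually installed or removed in this mode:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude -s install $i; done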

4. Mirroring package selection from server1 to server2 using one liner

A quick one liner, that does replicate a set of preselected packages from server1 to server2 is also possible with either a combination of apt, ssh, awk and dpkg or with ssh + dpkg + dselect :

a) One-liner code with apt-get unifying the installed packages between 2 or more servers

debian-server2:~# apt-get --yes install `ssh root@debian-server1 "dpkg -l | grep -E ^ii" | awk '{print $2}'`
...

If it is necessary to install on more than just debian-server2, copy-paste the above code on all servers you want to have identical installed packages as debian-server1, or use a short for loop to run the commands for each host of a group of servers (see the sketch below).
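
A rough sketch of such a loop, using scp together with the dpkg --set-selections / dselect-upgrade steps from above and purely hypothetical hostnames, might look like this:

root@debian-server1:~# for host in debian-server2 debian-server3 debian-server4; do \
scp ./packages.txt root@$host:/root/packages.txt; \
ssh root@$host 'apt-get update && dpkg --set-selections < /root/packages.txt && apt-get -u dselect-upgrade --yes'; \
done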

In some cases it might be better to use dselect instead, as in some situations apt-get might not correctly solve the package dependencies; if you encounter problems with dependencies, better run:

debian-server2:/home/hipo# ssh root@debian-server1 'dpkg --get-selections' | dpkg --set-selections && dselect install

As you can see, this second dselect-based "package" mirroring is also much easier to read and understand than the prior "cryptic" method with apt-get, hence I personally think the dselect method is the better one.

Well, that's basically it. If you also need to synchronize configurations, either an rsync/scp shell script should be used with all the server1 config files defined, or, in case a cloning of packages between identical server machines is necessary, dd or some other tool like Norton Ghost could be used.
Hope this helps someone.

How to exclude files on copy (cp) on GNU / Linux / Linux copy and exclude files and directories (cp -r) exclusion

Saturday, March 3rd, 2012

I recently had to make a copy of the /usr/local/nginx directory under /usr/local/nginx-bak, in order to have a working copy of nginx, just in case my nginx update to a new version from source messes up.

I did not check the size of /usr/local/nginx , so just run the usual:

nginx:~# cp -rpf /usr/local/nginx /usr/local/nginx-bak
...

Execution took more than 20 seconds, so I checked the size and figured out /usr/local/nginx/logs had grown to 120 gigabytes.
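
By the way, a quick way to see which sub-directory holds the bulk of the data is du, e.g.:

nginx:~# du -sh /usr/local/nginx/*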

I didn't want to put extra load on the production server by copying so many gigabytes, so I asked myself whether this is possible with the normal Linux copy (cp) command. I checked the cp manual (man cp), but there is no argument like --exclude or anything similar.

Even though the cp command does not implement an exclude feature, there are a couple of ways to copy a directory while excluding some of its sub-directories or files on GNU / Linux.

Here are the 3 major ones:

1. Copy directory recursively and exclude sub-directories or files with GNU tar

Maybe the quickest way to copy and exclude directories is through a little 'hack' with GNU tar:

nginx:~# mkdir /usr/local/nginx-new
nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# tar cvf - . --exclude='./logs/*' \
| (cd /usr/local/nginx-new; tar -xvf - )

Copying that way, however, is slow; in my case it fit me perfectly, but for copying large chunks of data it is better not to use a pipe and instead use a regular tar operation plus mv:

# cd /source_directory
# tar cvf test.tar --exclude='dir_to_exclude/*' --exclude='dir_to_exclude1/*' .
# mv test.tar /destination_directory
# cd /destination_directory
# tar xvf test.tar

2. Copy folder recursively excluding some directories with rsync

People who have experience with rsync already know how invaluable this tool is. Rsync can be used as a complete substitute for cp; the general form for copying with exclusions is:

# rsync -av --exclude='path1/to/exclude' --exclude='path2/to/exclude' source destination

This example can also be used as a solution to my copy-nginx-and-exclude-the-logs-directory case, like so:

nginx:~# rsync -av --exclude='/logs/' /usr/local/nginx/ /usr/local/nginx-new

As you can see for yourself, this is way more readable than the tar approach (note that the --exclude pattern is relative to the source directory, /usr/local/nginx/ in this case). However, it will not work on servers where rsync is not installed, and it is unusable if you have to do the operation as a regular user on such a host; for that case the GNU tar hack is surely more 'portable' across systems.
rsync also has a Windows version, and therefore the same methodology should work on MS Windows and be good for batch scripting.
I've not tested it myself yet, as I've never used rsync on Windows; if someone has tried it and it works, please drop me a short message in the comments.

3. Copy directory and exclude sub directories and files with find

Find in collaboration with cp can also be used to exclude certain directories while copying. Actually this method is better than the GNU tar hack and surely more efficient. For machines where rsync is not installed, it is just a perfect way to copy files from one location to another while excluding some directories. Here is an example use of find and cp for the above nginx case:

nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# mkdir /usr/local/nginx-bak
nginx:/usr/local/nginx# find . -type d \( ! -name logs \) -print -exec cp -rpf '{}' /usr/local/nginx-bak \;

This will find all directories inside /usr/local/nginx (except logs) with the find command, print them on the screen, and then execute a recursive copy of each found directory into /usr/local/nginx-bak.

This example works fine in the nginx case because /usr/local/nginx does not contain any files, only sub-directories. In other cases, where the directory does contain some files besides the sub-directories, those files have to be copied as well, e.g.:

# for i in $(ls -l | egrep -v '^d|^total' | awk '{ print $NF }'); do \
cp -pf "$i" /destination/directory; \
done

This will copy the files from the source directory (for instance /usr/local/nginx/my_file.txt, /usr/local/nginx/my_file1.txt, etc.) which don't belong to a sub-directory.

The cmd expression:

# ls -l | egrep -v '^d'

lists only the files while excluding all the directories, and in the for loop each of those files is copied to /destination/directory.
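
For completeness, the directory and file passes can also be collapsed into a single find invocation by pruning the excluded path and relying on GNU cp's --parents option to recreate the directory structure. A sketch for the nginx case (assumes GNU coreutils; empty sub-directories will not be recreated):

nginx:~# mkdir -p /usr/local/nginx-bak
nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# find . -path ./logs -prune -o -type f -exec cp -p --parents '{}' /usr/local/nginx-bak \;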

If someone has better ideas, please share with me 🙂

Use rsync to copy files from destination host to source host (rsync reverse copy) / a few words on rsync

Monday, January 9th, 2012

I recently had to set up a backup system to synchronize backup archive files between two remote servers and, as I usually do in this situation, I just set up a crontab job to periodically execute rsync to copy data from the source server to the destination server. Copying SRC to DEST is the default behaviour rsync uses; however, in this case I had to copy from the destination server to the source server host (in other words, sync the files in reverse).

The usual way to copy with rsync via SSH (from SRC to DEST) is using a cmd line like:

debian:~$ /usr/bin/rsync -avz -e ssh backup-user@xxx.xxx.xxx.xxx:/home/backup-user/my-directory .

Where xxx.xxx.xxx.xxx is my remote server IP with which the files are synced.
According to the rsync manual, the documented SYNOPSIS is in the format:
Local: rsync [OPTION…] SRC… [DEST]

Obviously the default way to use rsync is to copy source to destination, which is what I had used until now; but in this case I had to do it the other way around and copy files from the destination host to the source server. It was logical that swapping the SRC and DEST arguments would complete my required task. Anyway, I consulted some rsync gurus on irc.freenode.net, just to make sure it is proper to simply swap the SRC and DEST arguments.
I was told this is possible, so I swapped the args:

debian:~$ /usr/bin/rsync -avz -e ssh . backup-user@xxx.xxx.xxx.xxx:/home/backup-user/my-directory
...

Surprisingly this worked 😉 Anyway, I was advised by a good guy nicknamed scheel that putting -e ssh on the command line is generally unnecessary, except when an uncommon SSH port is used for the transfer. An example case in which '-e ssh' is necessary would be transferring via, let's say, SSH port 1234:

rsync -avz -e 'ssh -p1234' /source user@host:/dest

In all other cases omitting '-e ssh' is better, as ssh is rsync's default transport. Therefore the final swapped line I put in cron to copy from the destination to the source host with rsync looked like so:

05 03 2 * * /usr/bin/ionice -c 3 /usr/bin/rsync -avz my-directory backup-user@xxx.xxx.xxx.xxx:/home/backup-user/ >/dev/null 2>&1
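
If such a night-time copy competes with production traffic, rsync's --bwlimit option can additionally cap the transfer rate; for example, roughly the same cron line limited to about 5 MB/s (the value is given in KBytes per second):

05 03 2 * * /usr/bin/ionice -c 3 /usr/bin/rsync -avz --bwlimit=5000 my-directory backup-user@xxx.xxx.xxx.xxx:/home/backup-user/ >/dev/null 2>&1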
 

Using rsync to copy / synchronize files or backups between Linux / BSD / Unix servers

Monday, November 21st, 2011


Many of us have already taken advantage of the powerful rsync program, however I'm quite sure there are still people who have never used rsync to transfer files between servers. That's why I wrote this small post, to introduce rsync to my blog readers.
Why rsync and not scp or SFTP? Well, rsync is designed from the start for transferring large files and is optimized to do the file copying job really efficiently. Some tests of scp against rsync will clearly show rsync's superiority.
Rsync is also handy for continuing the copy of half-copied files or backups, and thus in many cases saves bandwidth and machine HDD I/O operations.
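
If you want to see the difference on your own data, a rough comparison is as simple as timing both tools over the same directory (the hostnames and paths below are placeholders):

server:~# time scp -r remoteuser@remotehost:/remote/directory /tmp/scp-test
server:~# time rsync -az remoteuser@remotehost:/remote/directory/ /tmp/rsync-test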

The most simple way to use rsync is:

server:~# rsync -avz -e ssh remoteuser@remotehost:/remote/directory /local/directory/

Where remoteuser@remotehost is the username and hostname of the remote server from which the files are copied,
/remote/directory is the remote directory from which the files will be copied,
and /local/directory is the local directory into which the copied files will be stored.

If passwordless SSH key (RSA / DSA) authentication has not been configured on the remote server beforehand, the above command will prompt for a password; otherwise rsync will start doing the transfer straight away.

If one wants passwordless (public / private key) SSH authentication, a key first has to be generated and copied over to the remote server, like so:

server:~# ssh-keygen -t dsa
...
server:~# ssh-copy-id -i ~/.ssh/id_dsa.pub root@remotehost
...

That's all folks, enjoy rsyncing 😉