Posts Tagged ‘command’

Improve Apache Load Balancing with mod_cluster – Get Better Load Balancing from Apache to Tomcat Application Servers

Thursday, March 31st, 2016


Earlier I've blogged on how to set up Apache to serve as a Load Balancer for 2, 3, 4, etc. Tomcat / other backend application servers with mod_proxy and mod_proxy_balancer. Though the default Apache-provided mod_proxy_balancer works fine most of the time, if you want more precise and sophisticated balancing with better load distribution, you will probably want to install and use mod_cluster instead.


So what is Mod_Cluster and why use it instead of Apache proxy_balancer ?

Mod_cluster is an innovative Apache module for HTTP load balancing and proxying. It implements a communication channel between the load balancer and back-end nodes to make better load-balancing decisions and redistribute loads more evenly.

Why use mod_cluster instead of a traditional load balancer such as Apache's mod_balancer and mod_proxy or even a high-performance hardware balancer?

Thanks to its unique back-end communication channel, mod_cluster takes into account back-end servers' loads, and thus provides better and more precise load balancing tailored for JBoss and Tomcat servers. Mod_cluster also knows when an application is undeployed, and does not forward requests for its context (URL path) until its redeployment. And mod_cluster is easy to implement, use, and configure, requiring minimal configuration on the front-end Apache server and on the back-end servers.

So what is the advantage of mod_cluster vs mod_proxy_balancer ?

Well, here are a few things that tip the scales in favour of mod_cluster:


  •     advertises its presence via multicast so that workers can join without any configuration
  •     workers report their available contexts
  •     mod_cluster creates proxies for these contexts automatically
  •     if you want to, you can still fine-tune this behaviour, e.g. so that .gif images are served from httpd and not from workers…
  •     most importantly: unlike pure mod_proxy or mod_jk, mod_cluster knows exactly how much load there is on each node, because the nodes report their load back to the balancer via special messages
  •     default communication goes over AJP, but you can use HTTP and HTTPS


1. How to install mod_cluster on Linux ?

You can use mod_cluster either with JBoss or Tomcat back-end servers. We'll install and configure mod_cluster with Tomcat under CentOS; using it with JBoss or on other Linux distributions is a similar process. I'll assume you already have at least one front-end Apache server and a few back-end Tomcat servers installed.

To install mod_cluster, first download the latest mod_cluster httpd binaries. Make sure to select the correct package for your hardware architecture – 32- or 64-bit.
Unpack the archive to create four new Apache module files: mod_advertise.so, mod_manager.so, mod_proxy_cluster.so, and mod_slotmem.so. We won't need mod_advertise.so; it advertises the location of the load balancer through multicast packets, but we will use a static address on each back-end server.

Copy the other three .so files to the default Apache modules directory (/etc/httpd/modules/ for CentOS).
Before loading the new modules in Apache you have to remove the default proxy balancer module (mod_proxy_balancer.so), because it is not compatible with mod_cluster.

Edit the Apache configuration file (/etc/httpd/conf/httpd.conf) and remove the line


LoadModule proxy_balancer_module modules/mod_proxy_balancer.so


Create a new configuration file and give it a name such as /etc/httpd/conf.d/mod_cluster.conf. Use it to load mod_cluster's modules:




LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so

In the same file add the rest of the settings you'll need for mod_cluster, something like the following management virtual host (substitute your load balancer's own IP for the VirtualHost address if you want to bind it, and adjust the allowed network to yours):

Listen 9999

<VirtualHost *:9999>

    <Directory />
        Order deny,allow
        Deny from all
        Allow from 192.168
    </Directory>

    ManagerBalancerName mymodcluster
    EnableMCPMReceive

</VirtualHost>

ProxyPass / balancer://mymodcluster/
ProxyPassReverse / balancer://mymodcluster/

The above directives create a new virtual host listening on port 9999 on the Apache server you want to use for load balancing, on which the load balancer will receive information from the back-end application servers. In this example, for security reasons, the virtual host accepts connections only from the local 192.168 network.
The directive ManagerBalancerName defines the name of the cluster – mymodcluster in this example. The directive EnableMCPMReceive allows the back-end servers to send updates to the load balancer. The standard ProxyPass and ProxyPassReverse directives instruct Apache to proxy all requests to the mymodcluster balancer.
That's all you need for a minimal configuration of mod_cluster on the Apache load balancer. At the next server restart Apache will automatically load the file mod_cluster.conf from the /etc/httpd/conf.d directory. To learn about more options that might be useful in specific scenarios, check mod_cluster's documentation.

While you're changing the Apache configuration, you should probably set the log level in Apache to debug when you're getting started with mod_cluster, so that you can trace the communication between the front- and back-end servers and troubleshoot problems more easily. To do so, edit Apache's configuration file and add the line LogLevel debug, then restart Apache.

2. How to set up Tomcat appserver for mod_cluster ?

Mod_cluster works with Tomcat versions 6, 7 and 8. To set up the Tomcat back ends you have to deploy a few JAR files and make a change in Tomcat's server.xml configuration file.
The necessary JAR files extend Tomcat's default functionality so that it can communicate with the proxy load balancer. You can download the JAR file archive by clicking on "Java bundles" on the mod_cluster download page. It will be saved under the name mod_cluster-parent-1.2.6.Final-bin.tar.gz.

Create a new directory such as /root/java_bundles and extract the files from mod_cluster-parent-1.2.6.Final-bin.tar.gz there. Inside the directory /root/java_bundles/JBossWeb-Tomcat/lib/ you will find all the necessary JAR files for Tomcat, including two Tomcat version-specific JAR files – mod_cluster-container-tomcat6-1.2.6.Final.jar for Tomcat 6 and mod_cluster-container-tomcat7-1.2.6.Final.jar for Tomcat 7. Delete the one that does not correspond to your Tomcat version.

Copy all the files from /root/java_bundles/JBossWeb-Tomcat/lib/ to your Tomcat lib directory – thus if you have installed Tomcat in /srv/tomcat, run the command:


cp -rpf /root/java_bundles/JBossWeb-Tomcat/lib/* /srv/tomcat/lib/.


Then edit your Tomcat's server.xml file


After the default listeners add the following line:


<Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" proxyList="balancer_address:9999"/>

This instructs Tomcat to send its mod_cluster-related information to the load balancer's IP (balancer_address above is a placeholder; put your Apache host's address there) on TCP port 9999, which is what we set up as Apache's dedicated vhost for mod_cluster.
While that's enough for a basic mod_cluster setup, you should also configure a unique, intuitive JVM route value on each Tomcat instance so that you can easily differentiate the nodes later. To do so, edit the server.xml file and extend the Engine property to contain a jvmRoute, like this:



<Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">

Assign a different value, such as node2, to each Tomcat instance. Then restart Tomcat so that these settings take effect.

To confirm that everything is working as expected and that the Tomcat instance connects to the load balancer, grep Tomcat's log for the string "modcluster" (case-insensitive).
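Assuming Tomcat logs to catalina.out under the /srv/tomcat install used above (the exact log path is an assumption, adjust it to your setup):

grep -i modcluster /srv/tomcat/logs/catalina.out

You should see output similar to: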

Mar 29, 2016 10:05:00 AM org.jboss.modcluster.ModClusterService init
INFO: MODCLUSTER000001: Initializing mod_cluster ${project.version}
Mar 29, 2016 10:05:17 AM org.jboss.modcluster.ModClusterService connectionEstablished
INFO: MODCLUSTER000012: Catalina connector will use /

This shows that mod_cluster has been successfully initialized and which connector (the configured IP address and port of the main listener) the Catalina back end will use to talk to the balancer.
Also check Apache's error log. You should see confirmation about the properly working back-end server:

[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2026): proxy: ajp: has acquired connection for (
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2082): proxy: connecting ajp:// to
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2209): proxy: connected / to
[Tue Mar 29 10:05:00 2013] [debug] mod_proxy_cluster.c(1366): proxy_cluster_try_pingpong: connected to backend
[Tue Mar 29 10:05:00 2013] [debug] mod_proxy_cluster.c(1089): ajp_cping_cpong: Done
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2044): proxy: ajp: has released connection for (

This Apache error log shows that an AJP connection with the back-end node was successfully established, confirms the working state of the node, and then shows that the load balancer released the connection after the successful attempt.

You can start testing by opening in a browser the example servlet SessionExample, which is available in a default installation of Tomcat.
Access this servlet through a browser at the URL http://balancer_address/examples/servlets/servlet/SessionExample. In your browser you should see first a session ID that contains the name of the back-end node that is serving your request – for instance, Session ID: 5D90CB3C0AA05CB5FE13121E4B23E670.node2.

Next, through the servlet's web form, create different session attributes. If you have a properly working load balancer with sticky sessions you should always (that is, until your current browser session expires) access the same node, with the previously created session attributes still available.
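If you prefer to check from the command line, a quick sketch (assuming the examples webapp is deployed and the balancer answers on plain HTTP) is to watch which back-end node issues the session cookie:

curl -s -I http://balancer_address/examples/servlets/servlet/SessionExample | grep -i Set-Cookie

The JSESSIONID value in the Set-Cookie header should carry the jvmRoute suffix (e.g. .node2) of the node that served the request.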

To test further to confirm load balancing is in place, at the same time open the same servlet from another browser. You should be redirected to another back-end server where you can conduct a similar session test.
As you can see, mod_cluster is easy to use and configure. Give it a try to address sporadic single-back-end overloads that cause overall application slowdowns.


No space left on device with free disk space / Why no space left on device while there is plenty of disk space on drive – Running out of Inodes

Tuesday, November 17th, 2015



On one of the servers I'm administrating, the websites started showing some MySQL database table corruption errors like:


Table './database_name/site_news_list_com' is marked as crashed and last (automatic?) repair failed

The server is using Oracle MySQL server community stable edition on Debian GNU / Linux 6.0, so I first thought the server crashed during work either due to some bug in MySQL or due to some PHP cron job that did something messy. Thus, to solve the crashed tables, I tried using the mysqlcheck tool, which has helped me pretty well many times whenever there were database / table corruptions. I've run the following set of mysqlcheck commands as root (superuser) in a bash shell after logging in through SSH:


server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

In order for the above commands to work, I've created /root/.my.cnf containing my root (MySQL CLI) username and password.
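For reference, a minimal /root/.my.cnf looks something like this (the user and password values below are placeholders, substitute your own):

[client]
user=root
password=your_mysql_root_password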




Btw, a good note here: it's generally a good idea (if you want to have consistent MySQL databases) to execute the checks automatically via a cron job a few times a month. I have the following in root's crontab:


crontab -u root -l |grep -i mysqlcheck
04 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
07 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
12 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
17 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

Strangely, I got a lot of errors that some .MYI / .MYD / .frm temp files, necessary for the MySQL table recovery, couldn't be written inside /home/mysql/database_name.

That was pretty weird, and I thought there might be some issue with permissions causing the inability to write, due to some bug or something, so I went straight and checked the /home/mysql/database_name permissions, e.g.:


server:/home/mysql/database_name# ls -ld soccerfame
drwx------ 2 mysql mysql 36864 Nov 17 12:00 soccerfame
server:/home/mysql/database_name# ls -al1|head -n 10
total 1979012
drwx------ 2 mysql mysql 36864 Nov 17 12:00 .
drwx------ 36 mysql mysql 4096 Nov 17 11:12 ..
-rw-rw---- 1 mysql mysql 8712 Nov 17 10:26 1_campaigns_diez.frm
-rw-rw---- 1 mysql mysql 14672 Jul 8 18:57 1_campaigns_diez.MYD
-rw-rw---- 1 mysql mysql 1024 Nov 17 11:38 1_campaigns_diez.MYI
-rw-rw---- 1 mysql mysql 8938 Nov 17 10:26 1_campaigns.frm
-rw-rw---- 1 mysql mysql 8738 Nov 17 10:26 1_campaigns_logs.frm
-rw-rw---- 1 mysql mysql 883404 Nov 16 22:01 1_campaigns_logs.MYD
-rw-rw---- 1 mysql mysql 330752 Nov 17 11:38 1_campaigns_logs.MYI

As seen from above output, all was perfect with permissions, so it should have been something else, so I decided to try to create a random file with touch command inside /home/mysql/database_name directory:


server:~# touch /home/mysql/database_name/somefile-to-test-writtability.txt
touch: cannot touch ‘/scr1/data/somefile-to-test-writtability.txt’: No space left on device

Then logically I thought the /home/mysql/ mounted ext4 partition might have got filled because of the crashed SQL database or a bug, thus I checked with the disk free command df whether there is enough space on the server:

server:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 20G 7.6G 11G 42% /
udev 10M 0 10M 0% /dev
tmpfs 13G 1.3G 12G 10% /run
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md2 256G 134G 110G 55% /home

Well, that's weird? Obviously only 55% of the disk space is used and 110G is still available, which was more than enough, so I was totally puzzled why files couldn't be written.

Then, very logically, I thought it might be that the /home directory had been remounted read-only because the server's SSD disk is failing, and checked for errors in dmesg, i.e.:


server:~# dmesg|grep -i error

I also checked how exactly the partition was mounted, to see whether it is (RO) read-only:


server:~# mount -l|grep -i /home
/dev/md2 on /home type ext4 (rw,relatime,discard,data=ordered)

Now everything became even weirder, as obviously the kernel continued to claim there was no space left on device, while in reality there was plenty of disk space.

Then, after a quick research on the internet for "no space left on device with free disk space", I've come across this great thread which made me realize the partition had run out of inodes, so no new file inodes could be assigned, and therefore the Linux kernel was refusing to write the file on the ext4 partition.

For those who haven't heard of Linux partition inodes, here is a link to Wikipedia and a quick quote:


In a Unix-style file system, the inode is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.[1] Filesystem object attributes may include manipulation metadata (e.g. change,[2] access, modify time), as well as owner and permission data (e.g. group-id, user-id, permissions).[3]
Directories are lists of names assigned to inodes. The directory contains an entry for itself, its parent, and each of its children.

Once I understood it was the inodes, I checked how many of them were occupied with the command:


server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 17006592 0 100% /home

You see, there were 0 (zero) free file inodes on the server, and that was the reason for "no space left on device" while there was actually free disk space.

To clean up (free) some inodes on the partition, the first thing I did was delete all old logs inside /home and files I positively knew were not necessary. Then, to find which directories were allocating the most inodes, I used:


server:~# find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n

If you're on a regular old-fashioned IDE hard drive and not an SSD, or you have too many files inside, this command will take really long…

Therefore a better solution might be to first:

a) Try to find the root folders with a large inode count:

for i in /home/*; do echo $i; find $i |wc -l; done
b) Then, once you know the directory allocating the most inodes, run the command again to see which sub-directories hold the most files (eating partition inodes):


for i in /home/webservice/*; do echo $i; find $i |wc -l; done


One usually large folder which could free you some inodes is the Linux kernel source headers directory, but in my case it was simply a lot of tiny old logs that had been written on the system for a few years without cleaning:

After deleting the log dirs and cache folder in my case /home/new_website/{log,cache}:

server:~# rm -rf /home/new_website/log/*
server:~# rm -rf /home/new_website/cache/*
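Note that when such directories hold hundreds of thousands of files, the shell glob in rm -rf /path/* can fail with an "Argument list too long" error; a safe alternative that avoids the glob expansion is to let find delete the files directly:

server:~# find /home/new_website/log -type f -delete
server:~# find /home/new_website/cache -type f -delete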



a) Then, stop the Apache webserver to prevent Apache from using the MySQL databases while the database repair is running, and restart MySQL:

server:~# /etc/init.d/apache2 stop
server:~# /etc/init.d/mysql restart

b) And re-issuing MySQL Check / Repair / Optimize database commands:


mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

c) And finally starting the Apache Webserver again:

server:~# /etc/init.d/apache2 start

Some inodes got freed up:

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 16797196 209396 99% /home

And hooray, by God's Grace and with the help of the prayers of the Most Holy Theotokos (Virgin) Mary, the websites started again!


How to update Mac OS X from terminal / Check and update Mac OS X software remotely from console

Friday, October 23rd, 2015


If you happen to have to deal with a Mac OS X (Apple) notebook or desktop PC (Hackintosh) etc., and you're a sysadmin or console freak who is pissed off by the Mac's GUI App Store update interface and wants to "keep it simple stupid" (KISS) in a Debian Linux apt-get manner, then you can use the Mac's command line terminal (cli) to do the updates manually with the softwareupdate tool:

softwareupdate

To get help about softwareupdate, pass it the -h flag:

softwareupdate -h

1. Get a list of available Mac OS updates

Though not a very likely scenario, of course, before installing it is always wise to see what is going to be updated, to make sure you will not upgrade something that you don't want to.
This is done with:

softwareupdate -l

However, in most cases you can simply skip this step, as directly updating every package installed on the Mac with a new version from Apple will usually not harm your PC.
Anyway, it is always a good idea to keep a backup image of your OS before proceeding with updates, with, let's say, the Time Machine Mac OS backup app.

2. Install only recommended Updates from Apple store

softwareupdate -irv

The above will download all updates that are critical and thus a must-have in order to keep Mac OS security adequate.
Translated into Debian / Ubuntu Linux language, the command does pretty much the same as Linux’s:

apt-get update && apt-get --yes upgrade

3. Install All Updates available from AppleStore

To install absolutely all updates provided by Apple’s package repositories run:

softwareupdate -iva

One note to make here: whenever you update, always make sure your notebook is plugged in to the electricity grid, because if it shuts off due to battery discharge in the middle of an update, your Mac can end up in a very crappy, hard-to-recover state that might even cost you a complete re-install or a trip to the Mac Store technical support guy, so beware, you're warned!

4. Installing all updates except specific software from Terminal

Often, if you have a cracked software or a software whose GUI interface changed too much and you don't want to upgrade it, but an update is offered by the Apple repos, you can use the --ignore option:

softwareupdate --ignore [update_name(s)]

For example:

softwareupdate --ignore Safari-version-XXXX
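If you later change your mind, older Mac OS X releases can also clear the ignore list again (check softwareupdate -h on your version to confirm the flag is available):

softwareupdate --reset-ignored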

5. View Mac OS Software Update History

The quickest way to see the update history from the GUI is with the System Information app, e.g.:
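If you want to stay in the terminal for this too, the same install history is available via system_profiler (the data type name below should work on most Mac OS X releases, but verify with system_profiler -listDataTypes):

system_profiler SPInstallHistoryDataType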



Fix “Secure Connection Failed” – An error occurred: SSL received a record that exceeded the maximum permissible length howto

Monday, September 14th, 2015

When I was trying to establish a new internal business SSL certificate on one of the 6-months-planned SPLIT projects (e.g. duplicating a range of systems environment to another one), I stumbled upon a very odd SSL issue. Once I had set up all the virtualhost SSL configurations properly (identical SSL configuration directives and Apache webserver version to the other host) and tested in a browser, I was getting the following error:

Secure Connection Failed

An error occurred during a connection to

SSL received a record that exceeded the maximum permissible length.

(Error code: ssl_error_rx_record_too_long)

Below is a screenshot:

The page you are trying to view can not be shown because the authenticity of the received data could not be verified. Please contact the web site owners to inform them of this problem. Alternatively, use the command found in the help menu to report this broken site.

The first logical thing to do was to check the error.log, but there were no errors there that pointed me to anything meaningful; besides that, the queries I was making to the domain didn't show up as requests in either Apache's access.log or error.log, so this was puzzling.
I thought I might have messed up something during Key file / CSR generation, so I revoked the old certificate and reissued it.


$ openssl x509 -text -in <certificate-file> | less
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:

This shows that all is fine with the certificate. Then I tried to test the remote certificate with the openssl s_client command:


openssl s_client -CApath -connect

There was an error. After plenty of research on Google I came to the conclusion that something was either wrong with the Listen httpd.conf directive, or NameVirtualHost was bound to port 80 or some other port different from 443; however, surprisingly, I did not use NameVirtualHost at all in my Apache config. After a lot of pondering I finally spotted it. The whole certificate issue was caused by:

< – Less than sign

which I had missed and forgot to clean up from the template during the IP paste (obtained from /sbin/ifconfig |grep -i xx.xx.xx.xx). So finally, in order to fix the SSL error, I just had to delete the extra <, e.g.:

<VirtualHost <xx.xx.xx.xx:443>

had to become:

<VirtualHost xx.xx.xx.xx:443>

Such a minor thing took me 3 hours of pondering to resolve, and thankfully it is finally fixed! Then of course I had to restart Apache to make the fixed Vhost settings work:

# apachectl stop; sleep 2; apachectl start
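In hindsight, a configuration syntax check would most likely have flagged the stray character right away, so it's worth running before every restart:

# apachectl configtest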

So now the SSL works again, thank God!


Check Windows load average command – Get CPU usage from Windows XP / 7 / 8 / 2012 server cmd prompt

Wednesday, August 19th, 2015


If you have been a Linux / UNIX sysadmin for long years and you suddenly have to also administer a bunch of Windows hosts via RDP (Remote Desktop Protocol) / Teamviewer etc., and you need to document the load average of Windows XP / 7 / 8 servers, you might be puzzled how to get an overall load average of a Windows host via a command, in a UNIX way, like with the good old uptime Linux / BSD command, e.g.:

 ruth:$ uptime
 11:43  up 713 days 22:44,  1 user,  load average: 0.22, 0.17, 0.15

Then it's time for you to get used to WMIC. WMIC extends WMI for operation from several command-line interfaces and through batch scripts. wmic is a wonderful command for command-line addicted Linux guys and gives a lot of opportunities to query and conduct various sysadmin tasks from the Windows command prompt.

To get a CPU load percentage with wmic use:

C:\>wmic cpu get loadpercentage



or, to print just the value (with a percent sign appended) without the column header, wrap it in a for /f loop:

@for /f "skip=1" %p in ('wmic cpu get loadpercentage') do @echo %p%


On Windows 7 / 8 and 10, as well as Windows Server 2008 and Windows Server 2012, for more precise CPU load average results you can also use:

C:\> typeperf "\processor(_total)\% processor time"

"(PDH-CSV 4.0)","\\Win-Host\processor(_total)\% processor time"
"08/19/2015 12:52:53.343","0.002288"
"08/19/2015 12:52:54.357","0.000000"
"08/19/2015 12:52:55.371","0.000000"
"08/19/2015 12:52:56.385","0.000000"
"08/19/2015 12:52:57.399","0.000799"
"08/19/2015 12:52:58.413","0.000000"
"08/19/2015 12:52:59.427","0.000286"
"08/19/2015 12:53:00.441","0.000000"
"08/19/2015 12:53:01.455","0.000000"
"08/19/2015 12:53:02.469","0.008678"
"08/19/2015 12:53:03.483","0.000000"
"08/19/2015 12:53:04.497","0.002830"
"08/19/2015 12:53:05.511","0.000621"
"08/19/2015 12:53:06.525","0.768834"
"08/19/2015 12:53:07.539","0.000000"
"08/19/2015 12:53:08.553","1.538296"



How to Remove / Add a SuSE Linux startup service from the command line

Thursday, July 2nd, 2015

If you happen to administer SUSE LINUX Enterprise Server 9 (x86_64) and you need to add or remove an already existing /etc/init.d script or a custom-created Apache / Tomcat etc. service, and you're already familiar with Fedora's / RHEL's chkconfig, then the good news is that chkconfig is also available on SuSE, and you can use chkconfig in the same way to start / stop / enable / disable boot time services.

To list all available boot time init.d services use:

suse-linux:/etc # chkconfig --list


SuSEfirewall2_final       0:off  1:off  2:off  3:off  4:off  5:off  6:off
SuSEfirewall2_init        0:off  1:off  2:off  3:off  4:off  5:off  6:off
SuSEfirewall2_setup       0:off  1:off  2:off  3:off  4:off  5:off  6:off
Tivoli_lcfd1.bkp          0:off  1:off  2:off  3:off  4:off  5:off  6:off
activate_web_all          0:off  1:off  2:off  3:on   4:off  5:on   6:off
alsasound                 0:off  1:off  2:on   3:on   4:off  5:on   6:off
apache2                   0:off  1:off  2:off  3:off  4:off  5:off  6:off
apache2-eis               0:off  1:off  2:off  3:on   4:off  5:off  6:off
atd                       0:off  1:off  2:off  3:off  4:off  5:off  6:off
audit                     0:off  1:off  2:off  3:off  4:off  5:off  6:off
autofs                    0:off  1:off  2:off  3:off  4:off  5:off  6:off
autoyast                  0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.clock                0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.crypto               0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.device-mapper        0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.evms                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.idedma               0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.ipconfig             0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.isapnp               0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.klog                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.ldconfig             0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.loadmodules          0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.localfs              0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.localnet             0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.lvm                  0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.md                   0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.multipath            0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.proc                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.restore_permissions  0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.rootfsck             0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.sched                0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.scpm                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.scsidev              0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.shm                  0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.swap                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.sysctl               0:off  1:off  2:off  3:off  4:off  5:off  6:off
boot.udev                 0:off  1:off  2:off  3:off  4:off  5:off  6:off
coldplug                  0:off  1:on   2:on   3:on   4:off  5:on   6:off


To then stop a service (here the custom gtiweb service):

suse-linux:/etc # chkconfig gtiweb off
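Enabling the service back for its default runlevels is just as easy, and SuSE's chkconfig also accepts an explicit runlevel list (here runlevels 3 and 5):

suse-linux:/etc # chkconfig gtiweb on
suse-linux:/etc # chkconfig gtiweb 35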

If you prefer to do it the SuSE way and learn a bit more about the SuSE boot time process, check out:


suse-linux:/etc # man insserv

Removing an already existing SuSE start-up script from init.d startup with insserv is done with:

suse-linux:/etc # cd /etc/init.d/
suse-linux:etc/init.d # insserv -r gtiweb
insserv: script ipmi.hp: service ipmidrv already provided!
insserv: script boot.multipath.2008-10-29: service boot.multipath already provided!

To add a new custom-written script placed into /etc/init.d/ to SuSE's server boot sequence with insserv:


suse-linux:/etc/init.d/ # insserv your_custom_script_name


How to Copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes through a third host

Monday, April 27th, 2015


In a web application data migration project, I've come across a situation where I have to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, and hence I cannot use sshfs to mount the remote host dir via ssh and copy files like local ones.

As this is a data migration project, it is however necessary to migrate the data by finding some way… The normal way companies do it is to copy the data to an external hard disk storage and send it via some country post service, or to send an employee to the data center to attach the SAN to the new server where the data is being migrated. However, in my case this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully, there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host, from which you can access via the SSH protocol Server1 (from where data is migrated) and Server2 (to where data will be migrated).

If the hopping / jump server from which you're allowed to access Linux servers Server1 and Server2 is not Linux and is missing an SSH client, and you don't have access on the Win host to install anything, just use the portable mobaxterm (as it has a Cygwin SSH client embedded).

Here is how:

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xzf -"

As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, and this is passed via the UNIX pipe mechanism to another opened SSH session to server2, where the TAR archiver is used a second time to unarchive the previously passed archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%
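By the way, if the data being moved is already compressed (e.g. mostly .tar.gz archives or media files), you can skip gzip's CPU overhead on both ends by dropping the z flag from both tar invocations:

jump-host:~$ ssh server1 "tar cf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xf -"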

P.S. If you don't have pv installed, install it either with apt-get on Debian:


debian:~# apt-get install --yes pv


Or on CentOS / Fedora / RHEL etc.


[root@centos ~]# yum -y install pv


Below is a small chunk of PV manual to give you better idea of what it does:

       pv – monitor the progress of data through a pipe

       pv [OPTION] [FILE]…
       pv [-h|-V]

       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

cat file | pv -s 12345 | nc -w 1 somewhere.com 3000

Note that with very big file transfers, using pv will slow the data transfer slightly, because everything has to pass through two more pipes; however, for file transfers up to a few gigabytes it's really nice to include it.

If you only need to transfer a huge .tar.gz archive and you don't care about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, you don't want to put the overhead of encrypting the data on both systems, and you have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and push the .tar.gz content to netcat's port like so:

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345

Another way to transfer large data without a direct connection between server1 and server2, but with a connection to a third host PC, is to use rsync and good old SSH tunneling, like so:

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"


Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

I recently had a task to transfer some huge, long-stored application server data (about 70GB, archived) between an old Linux host server and a new one where the new Tomcat application (Linux) server will be installed, to accommodate the increased site traffic (the old server hardware was overloaded).

The two systems are in a paranoid DMZ network and do not have access to each other via SSH / FTP / FTPS, and not even web access on port 80 or SSL (443) between the two hosts, so in order to move the data I had to use a third HOP station, a Windows server with a huge SAN network-attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I'm using mobaxterm, so I have the basic UNIX commands such as sftp / scp already available on the Windows system through it.
Thus, to transfer the archived Chronos Tomcat application files (.tar.gz), I sftp-ed into the Linux host and used the get command to retrieve the archive, e.g.:


Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz


The secured DMZ network seemed to have a network shaper limiting my get / secured SCP download to about 2.5 MBytes / sec, so the overall file transfer seemed to require about 08:30 hours to complete. As it was the middle of the day, about 13:00, and my work day ends at 18:00, I would be able to keep the file retrieval session going for a maximum of 5 hrs, and the file transfer would cancel when I logged out of the HOP station (after 18:00).

However, I had already left the file transfer running for 2 hrs, and thus about 23% of the file had been retrieved, so I wondered whether SCP / SFTP protocol downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself; however, neither has a resume option.

Then I thought for a while about what I could use to continue the interrupted download, and I remembered good old rsync (a versatile remote and local file copying tool), which I often use to create customer backup strategies and which has the ability to resume partially downloaded files. I wondered whether this partial-download resume would work only if the file transfer had been initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the file retrieval (rsync can continue transfers cancelled / failed due to network problems or user activity). It turned out pretty easy to continue the failed download from where it was interrupted: I had to change to the directory where the file is located:

cd /path/to/interrupted_file/

and issue the command (the remote host and file path below are placeholders for the ones used in the original transfer):

rsync -av --partial user@remote-host:/path/to/file .

The --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode (equal to -rlptgoD, no -H, -A, -X), and the -v option shows a file transfer percentage status line and an average estimated time for the transfer to complete. An easier-to-remember rsync resume is like so:

rsync -avP user@remote-host:/path/to/file .
receiving incremental file list
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or was cancelled), just swap the source and destination:

rsync -avP chronos_application_23_04_2015.tar.gz user@remote-host:/path/to/destination/

Of course, for the rsync resume to work, the remote Linux system must have the rsync package installed; if rsync were not available on the remote system, this would not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to easily resume large gigabyte or terabyte file archive downloads between two Windows hosts, use cwRsync.
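A related rsync option worth knowing here is --partial-dir, which keeps the partially transferred file in a dedicated directory instead of leaving a half-written file in place (see man rsync; the remote host and path below are again placeholders):

rsync -avP --partial-dir=.rsync-partial user@remote-host:/path/to/file .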


How to find and Delete Duplicate files in directory on Linux server with find and fdupes command

Monday, March 16th, 2015


The Linux / UNIX find command is very helpful for a lot of admin tasks, such as deleting empty directories to free up occupied inodes, or finding and printing only empty files within a root file system across all sub-directories.
There are too many uses of find to list; however, one rarely known find use among sysadmins is searching for duplicate files on a Linux server:

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate

If you're curious how the duplicate file finding works: duplicates are found by comparing file sizes and MD5 signatures, followed by a byte-by-byte comparison.

The most common application of the above command is when you want to search for and get rid of some old obsolete files which you forgot to delete, such as old /etc/ configurations, old SQL backups, and PHP / Java / Python programming code files, etc.

If you have to do a regular duplicate file search on multiple Linux servers, perhaps you should install and use the fdupes command.
On Debian Linux, to install it:

root@pcfreak:/# apt-cache show fdupes|grep -i descr -A 4
Description: identifies duplicate files within given directories
 FDupes uses md5sums and then a byte by byte comparison to find
 duplicate files within a set of directories. It has several useful
 options including recursion.
root@pcfreak:/# apt-get install --yes fdupes

To search for duplicate files with fdupes in, let's say, /etc/, just run fdupes with the directory as its argument:


root@pcfreak:/# fdupes /etc/



If you want to look up all duplicate files recursively within a whole directory tree:

root@pcfreak:/# fdupes -r /etc/
Building file list /


You can also find duplicate files across multiple directories by just passing all the directories as arguments to fdupes.


root@pcfreak:/# fdupes -r /etc/ /usr/ /root /disk /nfs_mount /nas

The -r argument makes a recursive subdirectory search for duplicates; if you want to also see the size of the duplicate files found, add the -S option:


fdupes -r -S /etc/ /usr/ /root /disk /nfs_mount /nas


If you want to delete all duplicate files within, let's say, /etc/:


root@pcfreak:/# fdupes -d /etc/
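Beware that -d prompts you interactively for every set of duplicates found; combined with -N (--noprompt) it keeps the first file of each set and deletes the rest without asking, so handle it with care:

root@pcfreak:/# fdupes -d -N /etc/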

fdupes is also available and installable on RPM-based Linux distros Fedora / RHEL / CentOS etc.; install it on CentOS with:

[root@centos~ ]# yum -y install fdupes

There is also a port available for those who want to run it on FreeBSD; install it from ports:


freebsd# cd /usr/ports/sysutils/fdupes
freebsd# make install clean

If you have a GUI environment installed on the server and you don't want to bother with the command line to search for all duplicate files under the main filesystem and other lint (junk) files, take a look at FSlint.


If you're looking for a GUI cross-platform duplicate file finder tool that runs on all major Operating Systems (Mac OS X / Windows / Linux), take a look at dupeGuru.

