Posts Tagged ‘root root’

How to check Linux OS install date / How long ago was Linux installed

Sunday, October 22nd, 2017

If you're a sysadmin who inherited a few hundred Linux machines from a previous admin and you're in the process of investigating how things were configured, one of the crucial things to find out might be:

How Long ago was Linux installed?

Here is how to check the Linux OS install date.

The universal way, no matter the Linux distribution, is to use the following command:


root@pcfreak:~# tune2fs -l /dev/sda1 | grep 'Filesystem created:'
Filesystem created:       Thu Sep  6 21:44:22 2012



The above command assumes the Linux root partition / is installed on /dev/sda1; however, if your case is different, e.g. the primary root partition is on /dev/sda2, /dev/sdb1, /dev/sdb2 etc., just place the right partition into the command.

If the primary root partition is /dev/sdb1, for example:

root@pcfreak:~# tune2fs -l /dev/sdb1 | grep 'Filesystem created:'


To find out which is the root partition of the Linux server, use the fdisk command:




root@pcfreak:~# fdisk -l


Disk /dev/sda: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00051eda

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 965193727 965191680 460,2G 83 Linux
/dev/sda2       965195774 976771071  11575298   5,5G  5 Extended
/dev/sda5       965195776 976771071  11575296   5,5G 82 Linux swap / Solaris

Disk /dev/sdb: 111,8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
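If you don't want to eyeball the whole fdisk output, a quick alternative sketch (assuming the findmnt tool from util-linux is present, as it is on any recent distro) resolves the root mount straight to its backing device:

root@pcfreak:~# findmnt -n -o SOURCE /
/dev/sda1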


Other ways to check the Linux OS install date on Debian / Ubuntu / Mint and other deb-based GNU / Linux distributions


Debian-based Linux distributions create an initial /var/log/installer directory containing various install information such as a hardware summary, partitioning data, the initially installed deb packages, the exact version of the Linux distribution, and the way it was installed – whether from an ISO image, a network install, etc.


root@pcfreak:~# ls -al /var/log/installer/
total 1228
drwxr-xr-x  3 root root   4096 sep  6  2012 ./
drwxr-xr-x 72 root root  12288 окт 22 06:26 ../
drwxr-xr-x  2 root root   4096 sep  6  2012 cdebconf/
-rw-r--r--  1 root root  17691 sep  6  2012 hardware-summary
-rw-r--r--  1 root root    163 sep  6  2012 lsb-release
-rw-------  1 root root 779983 sep  6  2012 partman
-rw-r--r--  1 root root  51640 sep  6  2012 status
-rw-------  1 root root 363674 sep  6  2012 syslog


If that directory is missing or was wiped out by the previous administrator to clear up traces of his work before leaving the job, another possible way to find the exact install date is to check the timestamp of the /lost+found directory:

root@pcfreak:~# ls -ld /lost+found/
drwx------ 2 root root 16384 sep  6  2012 /lost+found/
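The same timestamp can be read more explicitly with stat – a small sketch assuming GNU coreutils; the output below is illustrative:

root@pcfreak:~# stat -c '%y %n' /lost+found/
2012-09-06 21:44:22.000000000 +0300 /lost+found/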


Check the Linux OS install date on Fedora, CentOS, Scientific Linux, Oracle Linux and other Red Hat RPM-based distros


[root@centos: ~]# rpm -qi basesystem
Name        : basesystem
Version     : 10.0
Release     : 7.el7
Architecture: noarch
Install Date: Mon 02 May 2016 19:20:58 BST
Group       : System Environment/Base
Size        : 0
License     : Public Domain
Signature   : RSA/SHA256, Tue 01 Apr 2014 14:23:16 BST, Key ID     199e2f91fd431d51
Source RPM  : basesystem-10.0-7.el7.src.rpm
Build Date  : Fri 27 Dec 2013 17:22:15 GMT
Build Host  :
Relocations : (not relocatable)
Packager    : Red Hat, Inc. <>
Vendor      : Red Hat, Inc.
Summary     : The skeleton package which defines a simple Red Hat Enterprise Linux system
Description :
Basesystem defines the components of a basic Red Hat Enterprise Linux
system (for example, the package installation order to use during
bootstrapping). Basesystem should be in every installation of a system,
and it should never be removed.
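Another handy trick on RPM-based distros is to list all packages sorted by install time; the entries at the bottom of the list normally date back to the OS installation itself (a sketch, not an exact science if the base packages were ever reinstalled):

[root@centos: ~]# rpm -qa --last | tail -n 3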



Check Linux install date / How do I find out how long ago a Linux server OS was installed?

Wednesday, March 30th, 2016


To find out the Linux install date, there is no single universal solution – it depends on the Linux distribution type and version – but there are some common ways to get the Linux OS install age.
Perhaps the most popular way to get the OS installation date and time is to check when the root filesystem ( / ) was created; this can be done with the tune2fs command:


server:~# tune2fs -l /dev/sda1 | grep 'Filesystem created:'
Filesystem created:       Thu Sep  6 21:44:22 2012


server:~# ls -alct /|tail -1|awk '{print $6, $7, $8}'
sep 6 2012


The root home directory is also created at install time, so its timestamp is another hint:


server:~# ls -alct /root


root@server:~# ls -lAhF /etc/hostname
-rw-r--r-- 1 root root 8 sep  6  2012 /etc/hostname


For Debian / Ubuntu and other deb-based distributions, the /var/log/installer directory is created during the OS install, so on Debian the best way to check the Linux OS creation date is with:

root@server:~# ls -ld /var/log/installer
drwxr-xr-x 3 root root 4096 sep  6  2012 /var/log/installer/
root@server:~# ls -ld /lost+found
drwx------ 2 root root 16384 sep  6  2012 /lost+found/


On Red Hat / Fedora / CentOS and other Red Hat based Linux distributions, you can use:


rpm -qi basesystem | grep "Install Date"


basesystem is the package containing the basic Linux binaries, many of which should never change; however, in some cases (e.g. after security updates) the package might change, so it is also good to check the root filesystem creation time and compare whether the two match.
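If you want just the date without grep, rpm's query format can print the install time of a package directly – a quick one-liner sketch:

rpm -q basesystem --qf '%{INSTALLTIME:date}\n'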


How to split / rar in parts large data archive files on Linux and Windows – Transfer big files across servers located in DMZ restricted areas

Friday, November 28th, 2014


I was working on an Application Migration Project whose goal was to install a business application called Asset Guardian and then move the current company data from the old server to the new AppServer.
For that purpose the company vendor of Asset Guardian shipped to a public access FTP a huge (12GB) ZIP archive file which had to be transferred into a well secured DMZ-ed corporate network with various Traffic Shaping network policies and a restrictive firewall allowing access only to the internal network and to a few (restrictively configured) proxy server IPs on ports 80 and 8080.

One of the proxy servers allowed access to the Internet, so I set this one up and tried downloading the huge archive file with the Windows 2012 server's default browser, Internet Explorer 10. Though the download started, it stayed slow at around 300 – 500KB/sec, and when it reached 3.4GB the download failed. I tried resuming the download, but the remote public FTP server where the file resides doesn't support the FTP RESUME function.
I thought it might be that Internet Explorer was badly managing the download, so I went forward and installed Portable Firefox (mirrored version 33.1.1 is here). Re-running the download with Firefox also failed, so the next logical step was to try downloading with the Windows version of Wget and with Portable Free Download Manager (mirrored here); using both of them I was unable to complete the download (probably due to the firewall or the proxy mangling the inspected traffic), thus I had to look for another way to copy the enormous archive into the company network.

To get around the issue I downloaded the file from the FTP to another server running Apache and tried re-downloading the big archive from the Apache webserver via the HTTP protocol. This download method didn't work either – the plain HTTP download also died once it reached 3.4GB – so I realized this is due to the restrictive proxy servers dropping file downloads bigger than 3.4GB.

Then, to be able to transfer the huge 12GB file, it seemed the only option left was to chop the big file into smaller chunks and transfer them one by one.
In my case I had the file already transferred to the Apache (webserver) host, which is running Linux, so basically the task was to transfer the big archive file between a SuSE Linux Enterprise Server (SLES) 11 and a Windows 2012 Server.

The quickest way to do that is by using the UNIX split command, i.e.:

split -b 1024m Asset_Guardian_Files.zip

The output files, each 1GB in size, get names like xaa, xab, xac, xad, xae, xaf, xag etc. in the same folder where the split command is run.

To later merge the files on the Windows 2012 server, the (copy) command is used:

copy /b file1 + file2 + file3 + file4 filetogether

In my case the command to issue on the Win 2012 server was:

copy /b xaa + xab + xac + xad + xae + xaf + xag + xah + xai + xaj + xak Asset_Guardian_Files.zip

This method of chopping and transferring the file is the simplest one, and it doesn't require the two servers to have WinRAR or console RAR / unrar installed.
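Whichever transfer method you pick, it is good practice to verify the merged file is not corrupted by comparing checksums on both sides. A small sketch, assuming md5sum on the Linux side and certutil (shipped with Windows 2012) on the receiving side:

# on the Linux host, before splitting
md5sum Asset_Guardian_Files.zip

rem on the Windows 2012 server, after the copy /b merge
certutil -hashfile Asset_Guardian_Files.zip MD5

If the two hashes match, the merge went fine.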

If instead of copying a huge file from a Linux host to a Windows host you need to copy a too-big file (let's say 100GB) between 2 Windows servers (Windows server host A and Windows server host B, both situated in different firewalled corporate networks), you will need to download to Win host A the Windows equivalent of UNIX split – a tool called sfk (The Swiss File Knife). sfk also has a port for Mac OS, so in case of need for migrating a huge archive file from a Mac OS X host it will serve as Linux's split – I've made an SFK (current version) mirror here.

Another way to cut the 12GB file in parts and transfer them to the destination host via HTTP was to use rar (on the Linux host), then download the parts on the Win 2012 server and use WinRAR Portable Free to extract the multiple files:

To split a huge file into rar parts of a certain size on Linux use:

cd /var/www
rar a -v1000000k Asset_Guardian_Files.splitted.rar /var/www/

1000000 KB = 1000000 / 1024 = ~976 MB, hence the produced rar parts will each be sized ~976 MB.

To see the produced archives, check for *splitted*.rar in your /var/www:

ls -al /var/www/*splitted*.rar
-rw-r--r-- 1 root root 1048576 ное 28 18:34 Asset-Guardian-Files.splitted.part1.rar
-rw-r--r-- 1 root root 1048576 ное 28 18:34 Asset-Guardian-Files.splitted.part2.rar
-rw-r--r-- 1 root root 1048576 ное 28 18:34 Asset-Guardian-Files.splitted.part3.rar
-rw-r--r-- 1 root root 1048576 ное 28 18:34 Asset-Guardian-Files.splitted.part4.rar


Then download the part files on the MS Win 2012 server with IE (or any other browser) and extract them.
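When extracting the first part, WinRAR automatically picks up the remaining .partN.rar volumes from the same folder; with console unrar the equivalent sketch is:

unrar x Asset-Guardian-Files.splitted.part1.rar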

Thanks God, Problem Solved 🙂


Preserve domain name after redirect with mod_rewrite and some useful mod_rewrite redirect examples – Redirect domain without changing URL

Friday, July 11th, 2014

If you're a webhosting company sysadmin, sooner or later you will be asked by an application developer or some client to redirect from an Apache webserver to some other webserver / URL in a way that the original domain name stays preserved in the browser after the redirect.

I'm aware of two major ways to do the redirect on webserver level:

1. To redirect From Apache host A to Webserver on host B using ReverseProxy mod_proxy

2. To use Mod Rewrite to redirect all client requests on host A to host B.

There is quite a lot said and written online about using mod_rewrite to redirect URLs.
So in this article I will not say anything new, but just present some basic scenarios on redirecting with mod_rewrite and some use cases.
Hope these examples will help some colleague sysadmin to solve his crazy boss's redirection tasks 🙂 I'm saying crazy boss because I already worked for a start-up company which was into internet marketing, and the CEO had insane SEO ideas, often impossible to achieve …

a) Dynamic URL redirect from Apache host A to host B without changing the domain name in the browser URL, keeping everything after the query

Let's say you want to redirect incoming traffic for DomainA to DomainB keeping the whole user browser request, i.e. pass the whole request including /whole/a/lot/of/sub/directory/query.php, so that when Apache redirects http://domainA.com/whole/a/lot/of/sub/directory/query.php, the content gets served from domainB.com while the browser URL bar still shows domainA.com (domainA.com / domainB.com below are placeholders for your real domains).

To do it with mod_rewrite, add in .htaccess the mod_rewrite rules:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^domainA.com [OR]
RewriteCond %{HTTP_HOST} ^www.domainA.com
RewriteRule ^(.*) http://domainB.com/$1 [P]

or include these rules somewhere in the VirtualHost configuration of your domain.

The above mod_rewrite rules will make any request to domainA.com be forwarded to domainB.com while preserving the old domain's hostname in the browser URL bar; the content, however, will actually be served by domainB.com.

WARNING !! If you're concerned about your good SEO positioning in search engines, never ever use such redirects. Making such redirects will cause the two domains to show up with duplicate content
and will make search engines (Google, Yahoo, Yandex etc.) reduce your PageRank !!

Besides that, such a redirect will invoke mod_rewrite on each and every request, so from a performance standpoint it is a CPU killer (for such redirects the native mod_proxy ProxyPass is much more efficient – on websites with hundreds of thousands of requests daily, such rewrite redirects will tax your hardware badly …)

P.S. ! The mod_rewrite and proxy modules need to be enabled beforehand.
On Debian Linux, make sure the following links exist and point to the proper existing files from /etc/apache2/mods-available/ to /etc/apache2/mods-enabled:

debian:~#  ls -al /etc/apache2/mods-available/*proxy*
-rw-r--r-- 1 root root  87 Jul 26  2011 /etc/apache2/mods-available/proxy_ajp.load
-rw-r--r-- 1 root root 355 Jul 26  2011 /etc/apache2/mods-available/proxy_balancer.conf
-rw-r--r-- 1 root root  97 Jul 26  2011 /etc/apache2/mods-available/proxy_balancer.load
-rw-r--r-- 1 root root 803 Jul 26  2011 /etc/apache2/mods-available/proxy.conf
-rw-r--r-- 1 root root  95 Jul 26  2011 /etc/apache2/mods-available/proxy_connect.load
-rw-r--r-- 1 root root 141 Jul 26  2011 /etc/apache2/mods-available/proxy_ftp.conf
-rw-r--r-- 1 root root  87 Jul 26  2011 /etc/apache2/mods-available/proxy_ftp.load
-rw-r--r-- 1 root root  89 Jul 26  2011 /etc/apache2/mods-available/proxy_http.load
-rw-r--r-- 1 root root  62 Jul 26  2011 /etc/apache2/mods-available/proxy.load
-rw-r--r-- 1 root root  89 Jul 26  2011 /etc/apache2/mods-available/proxy_scgi.load

debian:/etc/apache2/mods-enabled:~# ls *proxy*
proxy.conf@  proxy_connect.load@  proxy_http.load@  proxy.load@

If proxy support is not enabled in Apache on Debian / Ubuntu Linux, either create the symbolic links as you see them in the above paste or issue as root:

a2enmod proxy_http
a2enmod proxy
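To double-check the modules actually got loaded, a quick sketch listing the enabled Apache modules and restarting:

apache2ctl -M | egrep 'rewrite|proxy'
/etc/init.d/apache2 restart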


b) Redirect Main Domain requests to other Domain specific URL

RewriteEngine On
RewriteCond %{HTTP_HOST} ^domainA.com
RewriteRule ^(.*) http://domainB.com/some/landing/page.html [P]

Note that no matter what kind of subdirectory you request on domainA.com (let's say you type in domainA.com/whatever/path), it will get redirected to:

http://domainB.com/some/landing/page.html

Sometimes this is convenient for SEO, because it can redirect any request (including mistakenly typed requests by users or bot crawlers) to a real existing landing page.

c) Redirecting an IP address to a Domain Name

This is probably a rare thing to do, as usually a domain name is redirected to an IP; however, if you ever need to redirect an IP to a domain name:

RewriteCond %{HTTP_HOST} ^##\.##\.##\.##
RewriteRule (.*) http://your-domain.com/$1 [R=301,L]

Replace ## with the digits of your IP address; the \ is used to escape the dots, which are otherwise interpreted as wildcards by mod_rewrite.

d) Rewriting URL extensions from .htm to .php, .doc to .docx etc.

Let's say you're updating an old website serving .htm or .html files so that it serves .php files with the same names as the old .htmls – use the following rewrite rules. The same applies if all your old .doc files were converted and replaced with .docx and you need Apache to redirect all .doc requests to .docx.

Options +FollowSymlinks
RewriteEngine on
RewriteRule ^(.*)\.html$ $1.php [NC]

Options +FollowSymlinks
RewriteEngine on
RewriteRule ^(.*)\.doc$ $1.docx [NC]

The [NC] flag at the end means "No Case", or "case-insensitive", meaning it will not matter whether files are requested with capital or small letters; they will be served as long as a file under the requested name is matched.

Using such a redirect will not stop Apache from serving the old .html, .htm and .doc files – they will still be accessible, again creating duplicate content which will have a negative impact on Search Engine Optimization.

The better way to redirect the old-extension files is by using:

Options +FollowSymlinks
RewriteEngine on
RewriteRule ^(.+)\.htm$ http://www.your-domain.com/$1.php [R,NC]

The [R] flag tells mod_rewrite to send an HTTP "MOVED TEMPORARILY" redirect, aka "302", to the browser. This causes search engines and other spidering entities to automatically update their links to the new locations.
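If instead you want search engines to treat the move as permanent and update their indexes accordingly, the same rule can send a "301 Moved Permanently" – a sketch, reusing the hypothetical www.your-domain.com placeholder from above:

Options +FollowSymlinks
RewriteEngine on
RewriteRule ^(.+)\.htm$ http://www.your-domain.com/$1.php [R=301,NC,L]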

e) Grabbing content from URL with Mod Rewrite and passing it to another domain

Let's say you want zip files contained in the directory files/ to be redirected from your current webserver on domainA to domainB's download.php script (www.domainB.com below is a placeholder), with the directory and file name passed as arguments to the script:

Options +FollowSymlinks
RewriteEngine on
RewriteRule ^files/([^/]+)/([^/]+)\.zip$ http://www.domainB.com/download.php?dir=$1&file=$2 [R,NC]

f) Shortening URLs with mod_rewrite

This is useful if you have a long URL address accessible via some fuzzy, hard to remember URL and you want to make it accessible via a shorter URL without physically moving the files into a short-named directory:

Options +FollowSymlinks
RewriteEngine On
RewriteRule ^james-brown /james-brown/files/download/download.php

The above rule would make requests coming to the short URL http://mysite/james-brown be opened via http://mysite/public/james-brown/files/download/download.php

g) Get rid of the www in your domain name

Nowadays many people are used to typing www.your-domain.com; if this annoys you and you want them not to see the annoying www nonsense in served URLs, use this:

Options +FollowSymlinks
RewriteEngine on
RewriteCond %{http_host} ^www.your-domain.com [NC]
RewriteRule ^(.*)$ http://your-domain.com/$1 [R=301,NC]
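When a rule doesn't behave as expected, mod_rewrite's own logging helps a lot in debugging. A sketch for Apache 2.2 (on Apache 2.4 RewriteLog was replaced by LogLevel rewrite:traceN) – put this inside the VirtualHost, and remember to remove it on production, as it is very verbose:

RewriteLog /var/log/apache2/rewrite.log
RewriteLogLevel 3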

Those are most of the common uses of mod_rewrite redirection; there are thousands of nice ones out there. If you know others, please share.

References and thanks to:

How to redirect domain without changing the URL

More .htaccess tips and tricks – part 2




Linux: how to show all users crontab – List all cronjobs

Thursday, May 22nd, 2014

I'm doing another server services decommissioning, and part of the decommissioning plan is: remove the application and all related scripts from the related machines (FTP, RSYNC, …). In the project documentation I found a list of cron-enabled shell scripts:

# Crontab excerpt:
1,11,21,31,41,51 * * * * /webservices/tools/scripts/

that had to be deleted; however, it was nowhere mentioned under what kind of credentials (what user) the cron scripts were running. Hence I had to look up all users that have cronjobs and check inside each user's cronjobs whether the respective script is set to run. Herein I will explain shortly how I did that.

Cronjobs by default live in a few locations, depending on their run time schedule. The first place I checked for the scripts is /etc/crontab:

# cat /etc/crontab
SHELL=/bin/sh
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
*/15 * * * * root test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1
59 * * * * root rm -f /var/spool/cron/lastrun/cron.hourly
14 4 * * * root rm -f /var/spool/cron/lastrun/cron.daily
29 4 * * 6 root rm -f /var/spool/cron/lastrun/cron.weekly
44 4 1 * * root rm -f /var/spool/cron/lastrun/cron.monthly

I was not really sure under which user the shell script was run; therefore I first looked to see whether someone had set the script to run via crontab's standard locations for daily, hourly, weekly and monthly cronjobs:

a) Daily set cron jobs are in:

/etc/cron.daily

b) Hourly set cron jobs:

/etc/cron.hourly

c) Weekly cron jobs are in:

/etc/cron.weekly

d) Monthly cron jobs:

/etc/cron.monthly

There is also a location read by crontab for all software (package distribution) specific cronjobs – all run under root user privileges:

e) Software specific script cron jobs are in:

/etc/cron.d

As the system has about 327 users in /etc/passwd, checking each user's cronjob manually with:

# crontab -u UserName -l

was too time consuming, thus it is quicker to list the /var/spool/cron/ directory and see which users have cron jobs defined:


# ls -al /var/spool/cron/*
-rw------- 1 root root 11 2007-07-09 17:08 /var/spool/cron/deny

/var/spool/cron/lastrun:
total 0
drwxr-xr-x 2 root root 80 2014-05-22 11:15 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw-r--r-- 1 root root 0 2014-05-22 04:15 cron.daily

/var/spool/cron/tabs:
total 8
drwx------ 2 root root 72 2014-04-03 03:43 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw------- 1 root root 4901 2014-04-03 03:43 root

/var/spool/cron is crond's spool directory.

# ls -al /var/spool/cron/tabs/
total 8
drwx------ 2 root root 72 2014-04-03 03:43 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw------- 1 root root 4901 2014-04-03 03:43 root

The above output shows that only the root superuser has crontabs defined.

An alternative way to check all user crontabs is via a quick Linux one-liner shell script showing all users' cron jobs:

for i in $(cat /etc/passwd | sed -e "s#:# #g" | awk '{ print $1 }'); do
echo "user $i --- crontab ---";
crontab -u $i -l 2>/dev/null;
echo '----------';
done
Note that the above short script has to be run as the root user. Enjoy 🙂
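To directly answer the original question – which user's crontab runs the script being decommissioned – you can also grep all spool crontabs at once. A sketch, assuming the SUSE-style /var/spool/cron/tabs location from above (on Debian / Ubuntu the path is /var/spool/cron/crontabs); the printed file names are the user names:

# grep -l '/webservices/tools/scripts/' /var/spool/cron/tabs/*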


Debian Linux: Installing and monitoring servers with Icinga (a Nagios fork)

Monday, June 3rd, 2013


There is plenty of software for monitoring how a server performs and whether servers are correctly up and running. There is probably no Debian Linux admin who hasn't already worked with, or at least tried, Nagios and Munin to monitor and get notified whether a server is unreachable or how server services operate. Nagios and Munin play well together to prevent possible upcoming problems with Web / DB / E-mail services, or to notify when they are completely inaccessible. One similar "next-generation" and less known piece of software is Icinga.
The reason to use Icinga instead of Nagios is its extra features; a list of what Icinga supports beyond Nagios is on its site here.
I recently heard of it and decided to try it myself. To try Icinga I followed Icinga's install tutorial on Wiki.Icinga.Org here.
In Debian Wheezy, Icinga is already part of the official repositories, so installing it, unlike on Squeeze and Lenny, does not require the external Debian Backports repositories.

1. Install Icinga pre-requirement packages

debian:# apt-get --yes install php5 php5-cli php-pear php5-xmlrpc php5-xsl php5-gd php5-ldap php5-mysql

2. Install the icinga-web package

debian:~# apt-get --yes install icinga-web

Here you will be prompted a number of times to answer a few dialog questions important for security, as well as to fill in the MySQL server root user / password and the SQL password that the icinga_web MySQL user will use.





Setting up icinga-idoutils (1.7.1-6) …
dbconfig-common: writing config to /etc/dbconfig-common/icinga-idoutils.conf
granting access to database icinga for icinga-idoutils@localhost: success.
verifying access for icinga-idoutils@localhost: success.
creating database icinga: success.
verifying database icinga exists: success.
populating database via sql…  done.
dbconfig-common: flushing administrative password
Setting up icinga-web (1.7.1+dfsg2-6) …
dbconfig-common: writing config to /etc/dbconfig-common/icinga-web.conf

Creating config file /etc/dbconfig-common/icinga-web.conf with new version
granting access to database icinga_web for icinga_web@localhost: success.
verifying access for icinga_web@localhost: success.
creating database icinga_web: success.
verifying database icinga_web exists: success.
populating database via sql…  done.
dbconfig-common: flushing administrative password

Creating config file /etc/icinga-web/conf.d/database-web.xml with new version
database config successful: /etc/icinga-web/conf.d/database-web.xml

Creating config file /etc/icinga-web/conf.d/database-ido.xml with new version
database config successful: /etc/icinga-web/conf.d/database-ido.xml
enabling config for webserver apache2…
Enabling module rewrite.
To activate the new configuration, you need to run:
  service apache2 restart
`/etc/apache2/conf.d/icinga-web.conf' -> `../../icinga-web/apache2.conf'
[ ok ] Reloading web server config: apache2 not running.
root password updates successfully!
Basedir: /usr Cachedir: /var/cache/icinga-web
Cache already purged!

3. Enable Apache mod_rewrite


debian:~# a2enmod rewrite
debian:~# /etc/init.d/apache2 restart

4. Icinga documentation files

Some key hints on enabling a few more nice Icinga features are mentioned in the Icinga README files; all doc files included with the separate Icinga packages are in:

debian:~# ls -ld *icinga*/
drwxr-xr-x 3 root root 4096 Jun  3 10:48 icinga-common/
drwxr-xr-x 3 root root 4096 Jun  3 10:48 icinga-core/
drwxr-xr-x 3 root root 4096 Jun  3 10:48 icinga-idoutils/
drwxr-xr-x 2 root root 4096 Jun  3 10:48 icinga-web/

debian:~# less /usr/share/doc/icinga-web/README.Debian
debian:~# less /usr/share/doc/icinga-idoutils/README.Debian

5. Configuring Icinga

Icinga configurations are separated in two directories:

debian:~# ls -ld *icinga*

drwxr-xr-x 4 root root 4096 Jun  3 10:50 icinga
drwxr-xr-x 3 root root 4096 Jun  3 11:07 icinga-web


/etc/icinga/ – contains the configuration files defining the exact Icinga backend server behavior

/etc/icinga-web – contains all kinds of Icinga Apache configurations

The main configuration worth looking into after install is /etc/icinga/icinga.cfg.

6. Accessing newly installed Icinga via web

To access the just installed Icinga, open in a browser the URL – http://localhost/icinga-web

icinga web login screen in browser debian gnu linux

logged in inside Icinga / Icinga web view and control frontend


7. Monitoring host services with Icinga (NRPE)

As a fork of Nagios, Icinga has a similar modular architecture and uses a number of external plugins to monitor external host services; a list of existing plugins is on Icinga's wiki here.
Just like Nagios, Icinga supports the NRPE protocol (Nagios Remote Plugin Executor). To set up NRPE, the nrpe plugin from Nagios is used (nagios-nrpe-server).

To install NRPE on any of the nodes to be tracked:

debian:~# apt-get install --yes nagios-nrpe-server

Then, to configure NRPE, edit /etc/nagios/nrpe_local.cfg.
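A minimal nrpe_local.cfg sketch might look like the below – the allowed_hosts IP, plugin paths and thresholds are assumptions to adjust for your Icinga server and monitored host:

# IP of the Icinga server allowed to query this NRPE daemon (example IP)
allowed_hosts=192.168.0.50
# standard checks shipped with nagios-plugins
command[check_load]=/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6
command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -e

After editing, restart the NRPE daemon so the new commands are picked up:

debian:~# /etc/init.d/nagios-nrpe-server restart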


Once NRPE is supported in Icinga, you can install NRPE clients on Windows or Linux hosts just like with Nagios, to report on server process states and easily monitor whether a server's disk space / load or a service is in critical state.


Captured cracker's sslog / mysqljackpot MySQL bruteforcer tool and exploit – Xzibit Rootkit and "HIDDEN Processes Found: 1" false positive reports

Monday, October 29th, 2012

XZibit false positive, .depend.boot, the mysqljackpot script kiddie MySQL admin user bruteforcer tool, and 3 scenarios on how a server could have been hacked.
I've noticed some kind of script kiddie somehow gained access to one of the servers I administer. A scanner tool called sslog was downloaded in /tmp and run with root user credentials.

The cracked victim host is running Debian Linux Squeeze; I made the last security update a few months ago. Inside the /tmp/.a/ directory I found a 0day MySQL scanner called mysqljackpot. Maybe the tool is still a private exploit scanner, because I couldn't find it anywhere on the Internet:

# ls -al /tmp/.a
total 52
drwxr-xr-x 5 root root 4096 Oct 29 01:10 ./
drwxrwxrwt 10 root root 36864 Oct 29 14:46 ../
drwxr-xr-x 3 root root 4096 Oct 27 21:46 mysqljackpot/
drwxr-xr-x 3 root root 4096 Oct 28 16:58 new/
drwxr-xr-x 3 root root 4096 Oct 29 12:48 pass-multe/
# ls -al /tmp/.a/new/
total 12
drwxr-xr-x 3 root root 4096 Oct 28 16:58 ./
drwxr-xr-x 5 root root 4096 Oct 29 01:10 ../
drwxr-xr-x 3 root root 4096 Oct 29 00:58 mysqljackpot/

After further investigations, I've realized ./sslog is actually a frontend scanner program (Synscan 5.02):

root@host:/tmp/.a/new/mysqljackpot/scanner# ./sslog
Synscan 5.02 (
by John Anderson ,
Neil Kettle .
./sslog: getuid(): UID or EUID of 0 required

As you see, in order to run, the scanner requires root superuser privileges (UID or EUID of 0).

mysqljackpot is actually a brute force tool, as explained in a file (README.mysql) found in its directory.

Here is the content of README.mysql:

MySQL Login Scanner
By Kingcope

Scans for open mysql servers with the following credentials:
root <nopass>
root mysql
root root
admin <nopass>
admin admin
admin mysql
mysql <nopass>
mysql mysql

Runs on linux.
Requirements: mysql development libraries and headers
Compile (try one of the following depending on your system):
$ ./configure LIBS=-lmysqlclient
$ ./configure LIBS="-L/usr/lib/mysql" -lmysqlclient
$ ./configure LIBS="-L/usr/lib64/mysql" -lmysqlclient
$ ./configure CFLAGS="-lmysqlclient"

afterwards type

$ make linux

terminal 1:
./sslog -v
terminal 2:
./synscan -b <ip block> -p 3306

Inspect Logfile "mysqljack.pot" for open servers.

There is one other README in /tmp/.a/new/mysqljackpot/README, here is what I found in it:

Oracle MySQL on Windows Remote SYSTEM Level Exploit zeroday
All owned By Kingcope

Installation Instructions

1. Install mysql client libraries and headers (UNIX)
RedHat based (e.g. CentOS):
yum install mysql mysql-devel

2. Compile the standalone exploit
issue commands:
gcc mysqljackpot.c -o mysqljackpot -L/usr/lib/mysql -lmysqlclient

3. Compile the reverse shell payload (this is required!)
required because the connect back ip and port are hardcoded in the dll:
use mingw on windows or wine
change REVERSEIP and REVERSEPORT to suit your needs. If you change REVERSEPORT you have
to change the port in mysqljackpot.c too (default port: 443).
issue commands:
set PATH=%PATH%;c:\MinGW\bin\
gcc -c payload.c
gcc -shared -o payload.dll payload.o -lws2_32
copy the payload.dll into the mysqljackpot exploit folder

4. Run The Exploit
./mysqljackpot -u root -p "" -t
A valid database admin user and his password are required
for the exploit to work properly.
This exploit is especially useful when used in connection
to a MySQL login scanner, see scanner/README.mysql inside this package.
Be sure to have the firewall open on the desired reverse port
on the attacking machine.

5. Enjoy your SYSTEM Shell!!!

Yours Sincerely,

— Kingcope


Here is also the header from mysqljackpot.c, the MySQL username brute force tool:

/* Oracle MySQL on Windows Remote SYSTEM Level Exploit zeroday
 * Copyright (C) 2012 Kingcope
 * Thanks to danny.

After thinking over the security breach, I came up with a few scenarios for how the attacker entered and ran things as the root superuser. The first one is:

  •   The cracker entered directly via SSH after somehow sniffing the root password.

However, after a review of the last command output, I've concluded this case is not very likely, e.g.:

# last |grep -i root

did not show any unusual root logins, and neither does there seem to be any unusual login activity with other non-root users. Of course it is possible someone logged in as root and cleaned his tracks with some kind of user log-cleaner tool, like the one I've written in bash in the past, but this doesn't seem very likely, because the /tmp/.a/ directory appears to be created by some amateur script kiddie. A professional would create a smarter directory name, for example just a few empty spaces, i.e. would create it with, let's say:

# mkdir "   "

instead of the so trivial

# mkdir /tmp/.a/

Also the name of the directory containing the script kiddie tool, /tmp/.a, is not selected intelligently, but just done in a hurry; hence I even assume /tmp/.a was created by some automated SK tool written in a hurry by some Romanian SK cracker 🙂

On the host, webmin and usermin were running. So:

  • my second assumption was that someone could have sniffed a login password via the encrypted SSL connection when root logged in via webmin, or somehow exploited usermin (which listens by default on port number 20000).

TCP port 20000, on which usermin listens by default, is filtered by iptables rules for all incoming connections, while webmin logins are permitted only from a few IP addresses. Thus this scenario, though more possible than a direct SSH login with a sniffed root password, still seems not very probable to me.

  • Therefore, as a third scenario (most likely what happened), I assume some of the PHP forms on the server, or some other PHP executable reachable via Apache, had missing variable definitions.


Indeed, I saw in /var/log/apache2/error.log plenty of re-occurring warnings about undefined variables:

[Mon Oct 29 16:30:43 2012] [error] [client] PHP Notice:  Undefined variable: not_assign in /home/site_dir/www/modules/start.mod.php on line 121, referer:
[Mon Oct 29 16:30:43 2012] [error] [client] PHP Notice:  Undefined variable: counter_cookie in /home/site_dir/www/modules/start.mod.php on line 130, referer:
[Mon Oct 29 16:30:43 2012] [error] [client] PHP Notice:  Undefined variable: campaign_cukie in /home/site_dir/www/modules/start.mod.php on line 135, referer:
[Mon Oct 29 16:30:43 2012] [error] [client] PHP Notice:  Undefined index: actions in /home/site_dir/www/counter/count.php on line 11, referer: http://site-domain-name/start?qid=3&answered_id=4
[Mon Oct 29 16:30:43 2012] [error] [client] PHP Notice:  Undefined variable: flag2 in /home/site_dir/www/counter/count.php on line 52, referer:

Taking this into consideration, I assume the attacker entered the system by abusing the undefined variables, somehow achieving access to the www-data Apache user shell, and through this shell running some 0day Linux kernel exploit to gain root access, then downloading and installing the mysqljackpot exploit scanner tool.

Logically, as is common in situations like this, I used the rkhunter, chkrootkit and unhide tools to check whether the server's main binaries and kernel modules were compromised and whether there is a rootkit installed (I've earlier written a post on that here).

In short, to do the checks, I installed rkhunter, chkrootkit and unhide with apt-get (as this is a Debian Squeeze server):

apt-get install --yes rkhunter unhide chkrootkit


Afterwards I ran in a row:

# for i in $(echo proc sys brute); do unhide $i; done
# chkrootkit
# rkhunter --check

The reports of the three tools look like so:

# for i in $(echo proc sys brute); do unhide $i; done

Unhide 20100201

[*]Searching for Hidden processes through /proc scanning

Unhide 20100201

[*]Searching for Hidden processes through kill(..,0) scanning

[*]Searching for Hidden processes through  comparison of results of system calls

[*]Searching for Hidden processes through getpriority() scanning

[*]Searching for Hidden processes through getpgid() scanning

[*]Searching for Hidden processes through getsid() scanning

[*]Searching for Hidden processes through sched_getaffinity() scanning

[*]Searching for Hidden processes through sched_getparam() scanning

[*]Searching for Hidden processes through sched_getscheduler() scanning

[*]Searching for Hidden processes through sched_rr_get_interval() scanning

[*]Searching for Hidden processes through sysinfo() scanning

HIDDEN Processes Found: 1

Unhide 20100201

[*]Starting scanning using brute force against PIDS with fork()

Found HIDDEN PID: 4994
Found HIDDEN PID: 13374
Found HIDDEN PID: 14931
Found HIDDEN PID: 18292
Found HIDDEN PID: 19199
Found HIDDEN PID: 22651

[*]Starting scanning using brute force against PIDS with Threads

Found HIDDEN PID: 3296
Found HIDDEN PID: 30790

# chkrootkit -q

/usr/lib/pymodules/python2.5/.path /usr/lib/pymodules/python2.6/.path /lib/init/rw/.ramfs

# rkhunter --check

System checks summary

File properties checks…
    Files checked: 137
    Suspect files: 0

Rootkit checks…
    Rootkits checked : 245
    Possible rootkits: 2
    Rootkit names    : Xzibit Rootkit, Xzibit Rootkit

Applications checks…
    All checks skipped

The system checks took: 1 minute and 5 seconds

All results have been written to the log file (/var/log/rkhunter.log)

One or more warnings have been found while checking the system.
Please check the log file (/var/log/rkhunter.log)


The reports from unhide and chkrootkit don't seem troubling; however, I was concerned about the report from rkhunter – Rootkit names : Xzibit Rootkit, Xzibit Rootkit.

To get some more info on why rkhunter thinks the system is infected with Xzibit (which, by the way, is the artistic alias of a rap singer 🙂 ), I checked in /var/log/rkhunter.log:


# grep -i xzibit /var/log/rkhunter.log
[16:52:48] Checking for Xzibit Rootkit...
[16:52:48] Xzibit Rootkit                                    [ Not found ]
[16:52:56]          Found string 'hdparm' in file '/etc/init.d/hdparm'. Possible rootkit: Xzibit Rootkit
[16:52:56]          Found string 'hdparm' in file '/etc/init.d/.depend.boot'. Possible rootkit: Xzibit Rootkit
[16:53:01] Rootkit names    : Xzibit Rootkit, Xzibit Rootkit

Onwards I checked the content of hdparm and .depend.boot and saw nothing irregular there. They both are files from a legitimate Debian install. I checked whether they belong to deb packages, and whether they exist on other Debian Squeeze servers I administer as well as on my Debian desktop notebook. They're present everywhere: hdparm is part of the hdparm deb, and .depend.boot is loaded by the /etc/init.d/rc script and contains boot script ordering references:

# grep -rli .depend.boot *

# dpkg -S /etc/init.d/hdparm
hdparm: /etc/init.d/hdparm
# dpkg -S /etc/init.d/.depend.boot
dpkg: /etc/init.d/.depend.boot not found.
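Beyond checking package ownership with dpkg -S, the actual file contents can be verified against the distribution's MD5 sums with debsums – a quick sketch (assumes the debsums package; -s keeps it silent unless a file was modified):

# apt-get install --yes debsums
# debsums -s hdparm && echo 'hdparm package files unmodified'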


The other troubling thing was unhide's report:

HIDDEN Processes Found: 1


After a close examination of the system as well as research on the Internet, I figured out this is also a false positive. For the sake of not distributing script kiddie tools, which might put other system administrators in danger, I will not post a public download link to mysqljackpot. Anyways, if someone wants it for study purposes, just drop me a mail and I will send you a temporary download link.


Also, as webmin and usermin are not frequently used, I've decided to completely stop them and disable them from loading on boot.

I've also done a clamav scan (with lowered priority) over the whole file system with:

# nice -19 clamscan -r /*

in order to determine whether there is a PHPShell or some other kind of remote admin script kiddie script in perl / php etc. installed.
Tomorrow I will continue investigating what is happening, and hopefully, once I find out how the abuser entered the server, I will update this post.


How to change Debian GNU / Linux console (tty) language to Bulgarian or Russian Language

Wednesday, April 25th, 2012

Debian has a package called language-env. I hadn't used my Linux console for a long time, so I couldn't exactly remember how I used to make the Linux console support the Cyrillic language (CP1251, bg_BG.UTF-8) etc.

I found out about language-env's existence from the Debian Book hosted on OpenFMI, the Bulgarian Faculty of Mathematics and Informatics website.
The package info as displayed by apt-cache show looks like this:

hipo@noah:~/Desktop$ apt-cache show language-env|grep -i -A 3 description
Description: simple configuration tool for native language environment
This tool adds basic settings for natural language environment such as
LANG variable, font specifications, input methods, and so on into
user's several dot-files such as .bashrc and .emacs.

What is really strange is that the package maintainer is not Bulgarian, Russian or Ukrainian but Japanese – Kenshi Muto. What is even more interesting is that it is another Japanese developer who actually wrote the set-language-env script contained within the package; checking the script header one can see him, Tomohiro KUBOTA.

Before I found out about language-env's existence, I knew I needed to have the respective locales installed on the system with:

# dpkg-reconfigure locales

So I ran dpkg-reconfigure to check that the locales for adding Bulgarian language support exist.
Checking whether the Bulgarian locale is installed is also possible with /bin/ls:

# ls -al /usr/share/i18n/locales/*|grep -i bg
-rw-r--r-- 1 root root 8614 Feb 12 21:10 /usr/share/i18n/locales/bg_BG
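Once the locales are (re)generated, a quick sketch to confirm the Bulgarian locale is actually compiled and usable (output illustrative):

# locale -a | grep -i bg_BG
bg_BG
bg_BG.utf8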

The language-env package contains a perl script called set-language-env which does the Debian Bulgarization / Cyrillization.

Actually, the set-language-env script does not do the Bulgarization itself, but is a wrapper script that uses a number of "hacks" to make the console support Cyrillic.

Further on, to make the console support Cyrillic, execute:

hipo@noah:~$ set-language-env
Setting up users' native language environment
by modifying their dot-files.
Type "set-language-env -h" for help.
1 : be (Bielaruskaja,Belarusian)
2 : bg (Bulgarian)
3 : ca (Catala,Catalan)
4 : da (Dansk,Danish)
5 : de (Deutsch,German)
6 : es (Espanol,Spanish)
7 : fr (Francais,French)
8 : ja (Nihongo,Japanese)
9 : ko (Hangul,Korean)
10 : lt (Lietuviu,Lithuanian)
11 : mk (Makedonski,Macedonian)
12 : pl (Polski,Polish)
13 : ru (Russkii,Russian)
14 : sr (Srpski,Serbian)
15 : th (Thai)
16 : tr (Turkce,Turkish)
17 : uk (Ukrajins'ka,Ukrainian)
Input number > 2

There are many questions in the Cyrillic list that need to be answered to define exactly whether you need Cyrillic language support for GNOME, pine, mutt, the console, etcetera.
The script will create or append commands to a number of files on the system, like ~/.bash_profile.
The script uses the cyr command, part of the Debian console-cyrillic package, for the actual Bulgarian Linux localization.

As said, in the past it was supposed to also do localization of many graphical environment programs, as well as include Bulgarian support for the GNOME desktop environment. Since GNOME nowadays is already almost completely translated through its native language files, it's preferable that localization be done at Linux install time by selecting a country language, instead of doing it later with set-language-env. If you failed to set the GNOME language during the Linux install, using set-language-env will still work. I've tested it, and even though a lot of time has passed since set-language-env was heavily used for Bulgarization, the GUI environment Bulgarization still works.

If set-language-env is run in gnome-terminal, the whole set of question dialogs will pop up in a new xterm, and due to a bug the questions posed will be unreadable, as you can see in the screenshot below:

set-language-env command screenshot in Debian GNU / Linux gnome-terminal

If you want to remove the Bulgarization later at a certain point – let's say you don't want the Cyrillic console or program support anymore – use:

# set-language-env -r
Setting up users' native language environment

For anyone who wishes to know more in depth how set-language-env works, check the README files in /usr/share/doc/language-env/ – one readme, written by the author of the Bulgarian localization part of the package, Anton Zinoviev, is in /usr/share/doc/language-env/


Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

Wednesday, March 28th, 2012

On many busy servers you might encounter in /var/log/syslog or the dmesg kernel log messages like

nf_conntrack: table full, dropping packet

appearing repeatedly:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen / OpenVZ VPS (Virtual Private Servers)
2. ISPs – Internet providers with heavy traffic NAT network routers

I. What is the meaning of the nf_conntrack: table full, dropping packet error message

In short, this message is logged because the kernel's assigned maximum nf_conntrack value gets reached.
The common reason for that is heavy traffic passing through the server, or very often a DoS or DDoS (Distributed Denial of Service) attack. Sometimes encountering the error is a result of bad server planning (incorrect data about the expected traffic load by a company / companies) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max

– Alternative way to check the current kernel values for nf_conntrack is through:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current sysctl nf_conntrack active connections

To check the presently open connection tracking entries on a system:


linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742

The shown connections are assigned dynamically for each new successful TCP / IP NAT-ted connection. Btw, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers which are encountering nf_conntrack: table full, dropping packet error, you can see, when issuing lsmod, extra modules related to nf_conntrack are shown as loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables
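To keep an eye in real time on how close the tracked connections count gets to the limit, a simple sketch:

linux:~# watch -n1 'sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max'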


II. Completely remove nf_conntrack support if it is not really necessary

It is good practice to limit, or omit completely, the use of any iptables NAT rules, to prevent yourself from ending up with the kernel log flooded with these messages and your system respectively dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptables_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation use:

/sbin/rmmod iptable_nat
/sbin/rmmod ipt_MASQUERADE
/sbin/rmmod nf_nat
/sbin/rmmod nf_conntrack_ipv4
/sbin/rmmod nf_conntrack
/sbin/rmmod nf_defrag_ipv4

Once the modules are removed, be sure not to use iptables -t nat .. rules. Even an attempt to list NAT rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.

Btw, the nf_conntrack: table full, dropping packet message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or Linux kernel (distro) customization.

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many not-so-heavily loaded servers, just raising net.netfilter.nf_conntrack_max to a high enough value like 131072 will be enough to resolve the hassle.

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores the lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

The rule to calculate the right value to set is:

hashsize = nf_conntrack_max / 4

– To permanently store the changes made: a) put into /etc/sysctl.conf:

linux:~# echo 'net.netfilter.nf_conntrack_max = 131072' >> /etc/sysctl.conf
linux:~# /sbin/sysctl -p

b) put in /etc/rc.local (before the exit 0 line):

echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

Note: Be careful with this variable; according to my experience, raising it to a too-high value (especially on XEN-patched kernels) can freeze the system.
Raising the value too high can also freeze a regular Linux server running on old hardware.

– For diagnosing nf_conntrack behaviour there is the /proc/sys/net/netfilter kernel directory, where you can find dynamically stored values giving info on nf_conntrack operations in "real time":

linux:~# cd /proc/sys/net/netfilter
linux:/proc/sys/net/netfilter# ls -al nf_log/

total 0
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ./
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ../
-rw-r--r-- 1 root root 0 Mar 23 23:02 0
-rw-r--r-- 1 root root 0 Mar 23 23:02 1
-rw-r--r-- 1 root root 0 Mar 23 23:02 10
-rw-r--r-- 1 root root 0 Mar 23 23:02 11
-rw-r--r-- 1 root root 0 Mar 23 23:02 12
-rw-r--r-- 1 root root 0 Mar 23 23:02 2
-rw-r--r-- 1 root root 0 Mar 23 23:02 3
-rw-r--r-- 1 root root 0 Mar 23 23:02 4
-rw-r--r-- 1 root root 0 Mar 23 23:02 5
-rw-r--r-- 1 root root 0 Mar 23 23:02 6
-rw-r--r-- 1 root root 0 Mar 23 23:02 7
-rw-r--r-- 1 root root 0 Mar 23 23:02 8
-rw-r--r-- 1 root root 0 Mar 23 23:02 9


IV. Decreasing other nf_conntrack NAT time-out values to protect the server against DoS attacks

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, for large flows of traffic, even if you increase nf_conntrack_max, you can still shortly end up with an nf_conntrack table overflow resulting in dropped server connections. To prevent this, check and decrease the other nf_conntrack connection tracking timeout values:

linux:~# sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.ipv4.netfilter.ip_conntrack_generic_timeout = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent2 = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_udp_timeout = 30
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 180
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 30

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you can see, is quite high – 600 secs (10 minutes).
A value like this means any non-responding NAT-ted connection can stay hanging for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!).
If these values are not lowered, the server will be an easy target for anyone who would like to flood it with excessive connections; once this happens, the server will quickly reach even the raised net.nf_conntrack_max value and the initial connection dropping will re-occur again …

With all that said, to protect the server from malicious users behind the NAT plaguing you with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000:

linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_generic_timeout=120
linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf:

linux:~# echo 'net.ipv4.netfilter.ip_conntrack_generic_timeout = 120' >> /etc/sysctl.conf
linux:~# echo 'net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000' >> /etc/sysctl.conf


How to encrypt files with GPG and OpenSSL on GNU / Linux

Friday, November 25th, 2011

Encrypt files and directories with OpenSSL and GPG (GNUPG)

I have just recently found out that it is possible to use openssl to encrypt files to tighten your security.
Why would I want to encrypt files? Well, very simply, I have plain text files where I write down my passwords for servers or account logins for services I use on the Internet.

Until this very day I used gpg to encrypt and decrypt my sensitive information files and archives. The way to encrypt files with GPG is very simple; here is an example:

server:~# ls -al test.txt
-rw-r--r-- 1 root root 12 Nov 25 16:50 test.txt
server:~# gpg -c test.txt > test.txt.gpg
Enter passphrase:
Repeat passphrase:

Typing the same password twice produces the encrypted file test.txt.gpg. In order to later decrypt the gpg password-protected file, I use:

server:~# gpg -d test.txt.gpg >test.txt
Enter passphrase:
gpg: CAST5 encrypted data
gpg: encrypted with 1 passphrase
gpg: WARNING: message was not integrity protected

As one can see from the above output, by default gpg uses the CAST5 algorithm to encrypt the data. For all those curious what kind of encryption CAST5 provides and where it originates: in short, CAST5 (also known as CAST-128) is a cryptographic algorithm designed by Carlisle Adams and Stafford Tavares, used as the default symmetric cipher in GnuPG; a short description of the algorithm is as follows:

“…a DES-like Substitution-Permutation Network (SPN) cryptosystem which appears to have good resistance to differential cryptanalysis, linear cryptanalysis, and related-key cryptanalysis. This cipher also possesses a number of other desirable cryptographic properties, including avalanche, Strict Avalanche Criterion (SAC), Bit Independence Criterion (BIC), no complementation property, and an absence of weak and semi-weak keys.”
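If CAST5 doesn't suit you, gpg can be told explicitly which symmetric cipher to use for the same style of encryption – a sketch (run gpg --version to see which cipher algorithms your build supports):

server:~# gpg --cipher-algo AES256 -c test.txt
Enter passphrase:
Repeat passphrase: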

Anyways, for all those who trust DES encryption more as the algorithm to keep their data secret, the openssl command line tool provides another means to encrypt sensitive data.
To encrypt a file using openssl's DES encryption capabilities:

server:~# openssl des -salt -in test.txt -out test.txt.des
enter des-cbc encryption password:
Verifying - enter des-cbc encryption password:

As you can see, to encrypt with DES-CBC it's necessary to type the secret password twice; the -salt option adds a random salt to strengthen the password-derived encryption key.

To later decrypt the DES encrypted file, the command is:

server:~# openssl des -d -salt -in file.des -out file

In order to encrypt a whole directory, first compressed with tar and gzip:

server:~# tar -czf - directory | openssl des -salt -out directory.tar.gz.des

Where directory is the name of the directory which will be tarred and encrypted.

To later decrypt with openssl the above encrypted tar.gz.des file:

server:~# openssl des -d -salt -in directory.tar.gz.des | tar -xz
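DES is considered weak by today's standards; openssl supports stronger ciphers with the exact same syntax, so a hedged alternative sketch using AES-256 in CBC mode would be:

server:~# tar -czf - directory | openssl aes-256-cbc -salt -out directory.tar.gz.aes
server:~# openssl aes-256-cbc -d -salt -in directory.tar.gz.aes | tar -xz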
