Posts Tagged ‘hosts’

Why du and df report different usage on a filesystem / How to fix the inconsistency between used space on the FS and a disk that shows up full

Wednesday, July 24th, 2019

linux-why-du-and-df-shows-different-result-inconsincy-explained-filesystem-full-oddity

If you're a sysadmin in a large server environment with a couple of hundred Virtual Machines running Linux OS, either on physical hosts or as OpenXen / VMware guest Virtual Machines, you might sometimes end up in an odd case where a mounted partition reports its used space differently when checked with the df command than when checked with the du command, like for example:
 

root@sqlserver:~# df -hT /var/lib/mysql
Filesystem   Type  Size Used Avail Use% Mounted On
/dev/sdb5      ext4    19G  3,4G    14G  20% /var/lib/mysql

Here the '-T' argument is used to also show us the filesystem type.

root@sqlserver:~# du -hsc /var/lib/mysql
0K    /var/lib/mysql/
0K    total

 

1. Simple debugging of what might be the root cause of the df / du inconsistent reporting

 

Of course, the basic thing to do when in that weird situation (after being totally shocked that this is even possible) is to investigate a bit which first-level sub-directories eat up the most space on the mounted location, with du:

 

# du -hkx --max-depth=1 /var/lib/mysql/|uniq|sort -n
4       /var/lib/mysql/test
8       /var/lib/mysql/ezmlm
8       /var/lib/mysql/micropcfreak
8       /var/lib/mysql/performance_schema
12      /var/lib/mysql/mysqltmp
24      /var/lib/mysql/speedtest
64      /var/lib/mysql/yourls
144     /var/lib/mysql/narf
320     /var/lib/mysql/webchat_plus
424     /var/lib/mysql/goodfaithair
528     /var/lib/mysql/moonman
648     /var/lib/mysql/daniel
852     /var/lib/mysql/lessn
1292    /var/lib/mysql/gallery

The given output is in kilobytes, so it is a little bit hard to read; if you're more used to megabytes instead, do:

 

 # du -hmx --max-depth=1 /var/lib/mysql/|uniq|sort -n|less

 

I also investigated the complete /var directory contents sorted by size with:

 

 # du -akx ./ | sort -n
5152564    ./cache/rsnapshot/hourly.2/localhost
5255788    ./cache/rsnapshot/hourly.2
5287912    ./cache/rsnapshot
7192152    ./cache


Even after finding out the bottleneck dirs and trying to clear up a bit, I kept facing the inconsistency between the two commands. If you're as stunned as I was, you'll probably be tempted to move some files to a different filesystem to free up space or assigned inodes, hoping the inconsistent output is caused by some kernel / FS caching and that this will eventually make the mounted FS refresh.

Unfortunately, if you try it, you'll figure out that clearing up a couple of Megabytes or Gigabytes makes no difference in the command output.

In my exact case /var/lib/mysql is a separately mounted ext4 filesystem; however, the same issue was also present on a Network Filesystem (NFS), so my first thought that this was caused by a network failure or an NFS bug turned out to be wrong.

A further short investigation of the inodes on the filesystem made it clear that enough inodes were available:
 

# df -i /var/lib/mysql
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb5      1221600  2562 1219038   1% /var/lib/mysql

 

So the assumption that the inode count was filled up has also been rejected.
P.S. (if you're not well familiar with inodes, read the manual, i.e. – man 7 inode).
 

– Remounting the mounted filesystem

To make sure the inconsistency between du and df was not due to some hanging network mount or bug, the first logical thing I did was to remount the filesystem showing the differing size; in my case this was done with:
 

# mount -o remount,rw -t ext4 /var/lib/mysql

For machines with NFS remote mounted storage locations, used:

# mount -o remount,rw -t nfs /var/www


The FS remount did not solve it, so I continued to ponder the oddity, and of course I thought of a workaround: in case the issue is caused by a kernel bug or an OS library issue, a reboot might be the solution. Unfortunately, restarting the VMs was not an easy or desirable solution, thus I continued investigating what was wrong …

The next check, of course, was to see what kind of network connections were opened to the affected hosts with:
 

# netstat -tupanl


This did not show anything that might point me to the differently reported Megabytes, so the next step was to check the situation with the files currently opened by running processes on the weirdly reporting df / du systems with lsof, and boom, there I observed an oddity: multiple deleted files still being held open.

 

# lsof -nP | grep '(deleted)'

COMMAND   PID   USER   FD   TYPE DEVICE    SIZE NLINK  NODE NAME
mysqld   2588  mysql    4u   REG 253,17      52     0  1495 /var/lib/mysql/tmp/ibY0cXCd (deleted)
mysqld   2588  mysql    5u   REG 253,17    1048     0  1496 /var/lib/mysql/tmp/ibOrELhG (deleted)
mysqld   2588  mysql    6u   REG 253,17       777884290     0  1497 /var/lib/mysql/tmp/ibmDFAW8 (deleted)
mysqld   2588  mysql    7u   REG 253,17       123667875     0 11387 /var/lib/mysql/tmp/ib2CSACB (deleted)
mysqld   2588  mysql   11u   REG 253,17       123852406     0 11388 /var/lib/mysql/tmp/ibQpoZ94 (deleted)

 

Notice that there were plenty of files shown in '(deleted)' state still held open in memory, an overall of 438:

 

# lsof -nP | grep '(deleted)' |wc -l
438


As I learned a bit about the problem online, I found it is also possible to list only the deleted but still-open (unlinked) files without any greps, with lsof arguments alone:

 

# lsof +L1|less


The SIZE/OFF field shows that a number of the files kept open are really large, and since they are unlinked yet still held open on the filesystem and in memory, they totally mess up the reported usage. In my case these are temp files created by the MySQL daemon (mysqld), but depending on the service the server provides, this might be Apache's www-data, some custom perl / bash script executed via a cron job, stalled rsync jobs etc.
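
To get a rough idea how much space these zombie files pin down, you can sum up the SIZE/OFF column of the lsof +L1 output. This is only a sketch: the SIZE/OFF value is assumed to be the 7th field here, which can differ between lsof versions, so check your output header first:

# lsof -nP +L1 | awk 'NR>1 {sum+=$7} END {printf "%.1f MB held by deleted but still open files\n", sum/1024/1024}'

If that number roughly matches the gap between the df and du reports, you've found your culprit.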
 

2. Check the list of open files held by the mysql / root user, as part of the server filesystem inconsistency debugging:

 

– Grep opened files on server by user

# lsof |grep mysql
mysqld    1312                       mysql  cwd       DIR               8,21       4096          2 /var/lib/mysql
mysqld    1312                       mysql  rtd       DIR                8,1       4096          2 /
mysqld    1312                       mysql  txt       REG                8,1   20336792   23805048 /usr/sbin/mysqld
mysqld    1312                       mysql  mem       REG               8,21      24576         20 /var/lib/mysql/tc.log
mysqld    1312                       mysql  DEL       REG               0,16                 29467 /[aio]
mysqld    1312                       mysql  mem       REG                8,1      55792   14886933 /lib/x86_64-linux-gnu/libnss_files-2.28.so

 

# lsof | grep root
COMMAND    PID   TID TASKCMD          USER   FD      TYPE             DEVICE   SIZE/OFF       NODE NAME
systemd      1                        root  cwd       DIR                8,1       4096          2 /
systemd      1                        root  rtd       DIR                8,1       4096          2 /
systemd      1                        root  txt       REG                8,1    1489208   14928891 /lib/systemd/systemd
systemd      1                        root  mem       REG                8,1    1579448   14886924 /lib/x86_64-linux-gnu/libm-2.28.so

Another command that helped track down the discrepancy between the df and du reported usage on the FS is:
 

# du -hxa  / | egrep '^[[:digit:]]{1,1}G[[:space:]]*'
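
Note that the pattern above only catches directories whose size starts with a single digit followed by 'G'; a slightly broader variant (just a sketch, adjust the decimal separator to your locale) that also matches 10G+ and fractional sizes would be:

# du -hxa / | egrep '^[0-9.,]+G[[:space:]]'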
 

 

3. Fixing the problem of large deleted files kept open on the filesystem


What is the real reason for ending up with these file handles kept open by backgrounded programs running on the Linux OS?
There could be multiple reasons, but most likely it is broken server / client interactions, a service hammering RAM or the HDD by endlessly writing plenty of logs to files that were already deleted (so the space stays occupied), or bugs in a programming library used by a hung service that leave the file handles open on storage.

What is the solution to the problem of deleted files kept open on the filesystem?

The best solution is to first fix the custom script or hung service and then, if possible, simply restart the server so the kernel / services reload, or, if this is not possible, just restart the processes creating the problem.

Once the process is identified, as in my case this was MySQL, on newer systemd enabled OS distros just do:

 

 

# systemctl restart mysqld.service


or on older init.d system V ones:

# /etc/init.d/service restart


For custom hung scripts listed in ps axuwef you can grep the PID and do a kill -HUP (if the script is written in a good way to recognize -HUP and restart its sub-processes properly. BE EXTRA CAREFUL IF YOU'RE RESTARTING BROKEN SCRIPTS, as this might cause disruptions to your running services …).

# pgrep -l script.sh
7977 script.sh


# kill -HUP PID
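
If neither a restart nor a HUP is possible at that moment, there is a riskier last-resort trick sometimes used: truncating the deleted file through its /proc file descriptor entry, which releases the space while the process keeps running. The PID and FD below are placeholders you take from the lsof listing; do this ONLY for files you are absolutely sure the process no longer needs (an already rotated log file, for example). Truncating something like the live mysqld temp files shown earlier would most likely crash or corrupt the service.

# ls -l /proc/PID/fd/FD
# : > /proc/PID/fd/FD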

 

Now, finally, this should either mitigate or, in the best case, completely solve the reported disagreement between df and du, after which the calculated / reported disk space should be back to normal and both commands should show approximately the same value (note that the size changes a bit between the two checks, as the mysql service keeps writing data).

 

# df -hk /var/lib/mysql; du -hskc /var/lib/mysql
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb5        19097172 3472744 14631296  20% /var/lib/mysql
3427772    /var/lib/mysql
3427772    total

 

What did we learn?

What I've explained in this article is why and how it comes about that 'zombie' (deleted but still open) files reside on a filesystem, appearing to eat disk space on a mounted local or network partition, giving strange inconsistent reports, leading to system service disruptions and to the impossibility of getting correctly shown information on the used disk space of the mounted drive.

I went through some standard logic for debugging service / filesystem / inode issues, which led me to the finding that deleted files kept open on the filesystem were producing the strange sizes: the filesystem showed up as incorrect / filled even after it was extended with tune2fs and was supposed to have an extra 50GB.

Finally, it was shortly explained how to HUP / restart a hanging script / service to fix it.

Some good readings that helped to fix the issue:

What to do when du and df report different usage
df in Linux not showing correct free space after file removal
Why do “df” and “du” commands show different disk usage?
 

Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

linux-jail-nginx-webserver-increase-security-by-putting-it-and-its-data-into-jail-environment

If you're sysadmining a large number of shared hosted websites which use the Nginx Webserver to interpret PHP scripts and serve HTML, Javascript, CSS … whatever data, you realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker.

Compromising the Nginx Webserver would automatically mean that not only all users' web data gets compromised, but the attacker would also get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not such a common thing to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course, the best thing to do is to isolate each and every website into a separate Virtual Container; however, as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx as a Load Balancer, Reverse Proxy server etc., keep in mind that even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by some script kiddie freak who just got a 0day exploit from the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

linux-jail-simple-explained-diagram-chroot-jail

For those who hear about a Linux Jail for the first time:
a chroot() jail is a way to isolate a process / processes and their forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (administrator user), because of the fact that the superuser could break out of (escape) the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and was first implemented in the BSD Operating System ver. 4.2 by Bill Joy (a notorious computer scientist and co-founder of Sun Microsystems). Its original use was for the creation of so-called HoneyPots – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems. A honeypot appears to be a completely legitimate service or part of a website, whose only goal is to track, isolate, and monitor intruders, very similar to police sting (baiting) operations against a suspect. It is pretty much like a bait set to collect the fish (which in this case is the possible cracker).

linux-chroot-jail-environment-explained-jailing-hackers-and-intruders-unix

BSD Jails nowadays became very popular through the iPhone environment, where applications are deployed inside a custom-created chroot jail; the principle is exactly the same as on Linux.

But anyway, enough talk; let's create a new jail and deploy a set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to set up a directory where a copy of base Linux system binaries such as /bin/ls, /bin/bash, /bin/cat …, the /usr/bin binaries, /lib and the rest will reside.

 

server:~# mkdir -p /usr/local/chroot/nginx
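
If you also want a few basic binaries available inside the jail for debugging (purely optional, nginx itself does not need them to run), a minimal sketch would be:

server:~# mkdir -p /usr/local/chroot/nginx/bin
server:~# cp -p /bin/bash /bin/ls /bin/cat /usr/local/chroot/nginx/bin/

Keep in mind that each of these binaries also needs its shared libraries inside the jail; the ldd technique from step 5 below covers that.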

 


2. You need to create the isolated environment backbone structure /etc/ , /dev, /var/, /usr/, /lib64/ (in case if deploying on 64 bit architecture Operating System).

 

server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/

 

3. Create required device files for the new chroot environment

 

server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0666 $DIR_N/dev/urandom c 1 9

 

The mknod command is used instead of the usual /bin/touch command because we are creating character special (device) files, not regular files.

Once created, the permissions of /usr/local/chroot/nginx/{dev/null, dev/random, dev/urandom} have to look like so:

 

server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
crw-rw-rw- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom

 

4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)
 

If your NGINX webserver was installed from source (to keep it up to date)
and is installed in, let's say, directory location /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:

 

server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx

 


5. Copy necessery Linux system libraries to newly created jail
 

The NGINX webserver binary is linked against various libraries from the Linux system root, e.g. /lib/* and /lib64/*; therefore, in order for the server to work inside the chroot-ed environment, you need to transfer these libraries to the jail folder /usr/local/chroot/nginx.

If you are curious to find out exactly which libraries the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx

        linux-vdso.so.1 (0x00007ffe3e952000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2b4762c000)
        libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f2b473f4000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f2b47181000)
        libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8 (0x00007f2b46ddf000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2b46bc5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2b46826000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2b47849000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2b46622000)


For best security, the best way is to copy only the libraries in the list output by the ldd command, like so:

 

server: ~# mkdir -p /usr/local/chroot/nginx/lib/x86_64-linux-gnu
server: ~# cp -rpf /lib/x86_64-linux-gnu/libpthread.so.0 /usr/local/chroot/nginx/lib/x86_64-linux-gnu/
server: ~# cp -rpf library chroot_location

etc.
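
To avoid copying each library by hand, a small loop can do it for you. This is only a sketch, assuming bash, GNU cp and the /usr/local/nginx source install plus the jail root used above; it reads the ldd output and recreates each library path under the chroot with cp --parents:

server:~# for lib in $(ldd /usr/local/nginx/sbin/nginx | grep -o '/[^ ]*'); do cp -v --parents "$lib" /usr/local/chroot/nginx/; done

cp --parents recreates the full source path under the destination, so /lib/x86_64-linux-gnu/libz.so.1 ends up as /usr/local/chroot/nginx/lib/x86_64-linux-gnu/libz.so.1, exactly where the chrooted dynamic loader will look for it.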

 

However, if you're in a hurry (not a recommended practice) and you don't care for maximum security anyway (you don't worry that the jail could be exploited through some of the many lib files not used by nginx and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

 

server: ~# cp -rpf /lib/ /usr/local/chroot/nginx/lib

 

NOTE! Once again, copying the whole /lib directory is a very bad practice, but for time-pressed activities sometimes you can do it …


6. Copy some base /etc files and the ld.so.conf.d, prelink.conf.d directories to the jail environment
 

 

server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf}  \
/usr/local/chroot/nginx/etc

 

server:~# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} /usr/local/chroot/nginx/etc


7. Copy the HTML, CSS, Javascript website data from the root filesystem to the chrooted nginx environment

 

server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/


This could take really long if the websites are multiple gigabytes with millions of files, but anyway the nice command should reduce the load on the server a little. It is best practice to set some kind of temporary maintenance page to show on the websites' index, in order to prevent the clients accessing the server from seeing interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).
 

 

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail


a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script in /etc/init.d/nginx (if you have such installed) or directly kill the running webserver with:

 

server:~# killall -9 nginx

 

b) Test the chrooted nginx installation is correct and ready to run inside the chroot environment

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx

 

c) Restart the chrooted nginx webserver – when necessery later

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload

 

d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm saying this in case you by mistake try to edit the old config that usually sits under /usr/local/nginx/conf/nginx.conf).

 

 

MySQL: How to check user privileges and allowed hosts to connect with mysql cli

Wednesday, April 2nd, 2014

how-to-check-user-privileges-and-allowed-hosts-to-connect-with-mysql-cli

On a project there were some issues with the root admin user being unable to access the MySQL server from a remote host, and the most probable reason was that there was no access to the server from that host. Thus it was necessary to check the mysql root user privileges and the hosts allowed to connect; here is the SQL query to do it:
 

mysql> select * from `user` where  user like 'root%';
(output trimmed for readability: only the first privilege column is shown; the remaining privilege columns are all 'Y' for the first four rows and all 'N' for the last two)
+-----------------------------+------+-------------------------------------------+-------------+-----+
| Host                        | User | Password                                  | Select_priv | ... |
+-----------------------------+------+-------------------------------------------+-------------+-----+
| localhost                   | root | *5A07790DCF43AC89820F93CAF7B03DE3F43A10D9 | Y           | ... |
| server737                   | root | *5A07790DCF43AC89820F93CAF7B03DE3F43A10D9 | Y           | ... |
| 127.0.0.1                   | root | *5A07790DCF43AC89820F93CAF7B03DE3F43A10D9 | Y           | ... |
| server737.server.myhost.net | root | *5A07790DCF43FC89820A93CAF7B03DE3F43A10D9 | Y           | ... |
| server4586                  | root | *5A07790DCF43AC89820F93CAF7B03DE3F43A10D9 | N           | ... |
| server4586.myhost.net       | root | *5A07790DCF43AC89820F93CAF7B03DE3F43A10D9 | N           | ... |
+-----------------------------+------+-------------------------------------------+-------------+-----+
6 rows in set (0.00 sec)

mysql> exit


Here is the query explained:

select * from `user` where user like 'root%'; means:

select * – select all columns
from `user` – from the user table (inside the mysql database)
where user like 'root%' – only rows where the user column matches any string starting with 'root'
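
If you only care about which hosts root may connect from, or want the privileges in a more readable form, two shorter checks run straight from the shell (assuming you can already connect as root locally) would be:

# mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User LIKE 'root%';"
# mysql -u root -p -e "SHOW GRANTS FOR 'root'@'localhost';"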
 

How to configure Apache to serve as load balancer between 2 or more Webservers on Linux / Apache basic cluster

Monday, October 28th, 2013

Apache doing load balancer between Apache servers Apache basic cluster howto

Any admin somehow involved in the sphere of UNIX Webhosting knows Apache pretty well. I've personally used Apache for about 10 years now, and until now I have always used it as a single installation on a Linux host. So far, whenever the requirements for more client connections went up, the web hosting companies I worked for migrated the website / websites to newer, better (quicker) server hardware.

Everyone knows that keeping a site on a single Apache server poses a great RISK: if the machine hangs for some reason or gets DoSed, the websites become unavailable until reboot, causing unwanted downtime. Though I know the concept of load balancing pretty well, until today I had never configured Apache to serve as a load balancer between two or more identical machines set up to interpret PHP / Perl scripts.

Amazingly, load balancing users' web traffic happened to be much easier than I supposed. All that is necessary is a single Apache configured with mod_proxy_balancer, which acts as a proxy and ships HTTP requests between the two Apache servers. Logically, it is very important that the entry traffic host with Apache mod_proxy_balancer is configured to run only mod_proxy_balancer and the modules it needs, otherwise it will be eating unnecessary server memory, as memory usage rises with each unnecessarily loaded Apache module.

The scenario of my load balancer and 2 webserver hosts behind it goes like this:

a. Apache load balancer with external IP address – i.e. (83.228.93.76) – with a DNS record, for example www.mybalanced-webserver.com
b. A normally configured Apache running PHP scripts with an internal IP address behind NAT (Network Address Translation) – 10.10.10.1 – known under the hostname JEREMIAH
c. A second Apache, identical to the above host, with internal IP 10.10.10.2 and hostname ISSIAH.

N.B.! All 3 hosts are running latest  Debian GNU / Linux 7.2 Wheezy
 
With this in mind, I proceeded with installing Apache on 83.228.93.76 and removing all unnecessary modules.

!!! An important note: if you reuse some already existing Apache configured to run PHP or any other unnecessary stuff – make sure you remove that, otherwise expect severe performance issues !!!
1. Install Apache webserver

loadbalancer:~# apt-get install --yes apache2

2. Enable mod proxy proxy_balancer and proxy_http
On Debian Linux modules are enabled with a2enmod command;

loadbalancer:~# a2enmod proxy
loadbalancer:~# a2enmod proxy_balancer
loadbalancer:~# a2enmod proxy_http

Actually, what the a2enmod command does is create symbolic links in /etc/apache2/mods-enabled/{proxy,proxy_balancer,proxy_http} pointing to the corresponding files in /etc/apache2/mods-available/.
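
A quick way to double-check that the modules really got enabled is to list those symlinks:

loadbalancer:~# ls -l /etc/apache2/mods-enabled/ | grep proxy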

3. Configure Apache mod proxy to load balance traffic between JEREMIAH and ISSAIAH webservers

loadbalancer:~# vim /etc/apache2/conf.d/proxy_balancer


Paste inside:

<Proxy balancer://mycluster>
BalancerMember http://10.10.10.1
BalancerMember http://10.10.10.2
</Proxy>
ProxyPass / balancer://mycluster
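
A slightly extended variant of the same config, written straight into the file from the shell (just a sketch: the loadfactor weights and the ProxyPassReverse line are illustrative extras, not part of the minimal setup above):

loadbalancer:~# cat > /etc/apache2/conf.d/proxy_balancer << 'EOF'
<Proxy balancer://mycluster>
BalancerMember http://10.10.10.1 loadfactor=2
BalancerMember http://10.10.10.2 loadfactor=1
</Proxy>
ProxyPass / balancer://mycluster
ProxyPassReverse / balancer://mycluster
EOF

loadfactor lets you send a bigger share of the requests to the stronger machine, and ProxyPassReverse rewrites the Location headers of redirects coming back from the backends.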


4. Configure Apache Proxy to accept traffic from all hosts (by default it is configured to Deny from all)

loadbalancer:~# vim /etc/apache2/mods-enabled/proxy.conf

Change the 'Deny from all' line there to 'Allow from all'.

5. Restart Apache

loadbalancer:~# /etc/init.d/apache2 restart

Once again I have to say that the above configuration is a basic Apache cluster, so the hosts behind the Apache load balancer should be machines configured to interpret scripts identically. If one Apache server of the cluster dies, the other Apache + PHP host will continue to serve and deliver the webserver content, so no interruption will happen. With the default settings the requests are distributed evenly between the two BalancerMembers; if you want a weighted ratio, set a loadfactor per BalancerMember.
Well, that's all, the load balancer is configured! Now to test it, open www.mybalanced-webserver.com in a browser or try to access it by IP, in my case: 83.228.93.76


Saving multiple passwords in Linux with Revelation and Keepass2 – Keeping track of multiple passwords

Thursday, October 17th, 2013

System Administrators who use MS Windows to access multiple hosts in big companies like HP or IBM certainly use some kind of multiple password manager like PasswordSafe.

Keep multiple passwords safe in Microsoft Windows 7 passwordsafe with masterpassword

When the number of passwords you have to keep in mind grows significantly, using something like PasswordSafe becomes mandatory. The same is valid for system administrators who use GNU / Linux as a Desktop environment to administer thousands or hundreds of servers. I'm one of those admins who have used Linux for years, and until recently I kept all my passwords in a separate directory full of text files created with vim (text editor). As the number of passwords and accesses to servers and web interfaces grew dramatically, and my requirements for security rose, I wanted to have my passwords secured by keeping them encrypted on my hard drive. For those who have never used PasswordSafe, the idea of the program is to store all the passwords you have in an encrypted database which can only be opened through PasswordSafe by providing a master password.

passwordsafe on microsoft windows keep in order multiple passwords manager

Of course, having one master password imposes other security risks, as someone who knows the MasterPass can easily access all your passwords, but for now such a level of security perfectly fits my needs.

PasswordSafe has recently become Open Source, so there is a Linux port, but the port is still in beta, and though I tried hard to install it using the provided .deb binaries as well as to compile it from source, I finally gave up and decided to review what kind of password managers are available in Debian Wheezy's repositories.

Here is what I found with:

apt-cache search password|grep -i manager

cpm – Curses based password manager using PGP-encryption
fpm2 – password manager with GTK+ 2.x GUI
gringotts – secure password and data storage manager
kedpm – KED Password Manager
kedpm-gtk – KED Password Manager
keepass2 – Password manager
keepassx – Cross Platform Password Manager
kwalletmanager – secure password wallet manager
password-gorilla – cross-platform password manager
revelation – GNOME2 Password manager

I didn't have the time to test each one of them, so I installed and checked only those which seemed more reliable, i.e.:
keepass2 and revelation

# apt-get install --yes fpm2 keepass2 revelation

Below is screenshot of each one of managers:

Revelation Linux Gnome graphic password manager program

Revelation – GNOME Password Manager

keepass2 Linux gui password manager screenshot Debian - graphic manager for storing passwords

kde password safe gui program Linux Debian screenshot

KDE QT Interface Linux GUI Password Manager (KeePass2)

With one of these tools an admin's life is much easier, as you don't have to go crazy trying to remember thousands of passwords.
Hope this helps some admin out there! Enjoy! 🙂
 

Linux: Add routing from different class network A (192.168.1.x) to network B (192.168.10.x) with ip route command

Friday, July 12th, 2013

adding routing from one network to other linux with ip route

I had a Linux router which does NAT for a local network located behind a CISCO router; the CISCO receives the Internet on its WAN interface and routes the traffic to the Linux box with IP 192.168.1.235. The Linux router has a few network interfaces and routes traffic for the networks 192.168.1.0/24 and 192.168.10.0/24. Another Linux host with IP 192.168.1.8 had to talk to 192.168.10.0/24 (because it was necessary to be able to access the CISCO router's web interface, reachable via a local network interface with IP 192.168.10.1). Access to 192.168.10.1 wasn't possible from 192.168.1.8, because the routing to the 192.168.10.0/24 network was missing on the NAT-ting Linux (192.168.1.235). To make the 192.168.1.8 Linux communicate with 192.168.10.1, I had to add the following routing rules with the ip command on both the Linux with IP 192.168.1.235 and the Linux host behind NAT (192.168.1.8).

1. On Server (192.168.1.235) run in root shell and add to /etc/rc.local

# /sbin/ip r add 192.168.10.0/24 via 192.168.1.235
Then copy-paste the same line before the exit 0 line in /etc/rc.local so it survives a reboot.

It's always a good idea to check the routing after adding anything new; here is mine:
 

# ip r show

192.168.5.0/24 dev eth0  proto kernel  scope link  src 192.168.5.1
192.168.4.0/24 dev eth0  proto kernel  scope link  src 192.168.4.1
192.168.3.0/24 dev eth0  proto kernel  scope link  src 192.168.3.1
192.168.2.0/24 dev eth0  proto kernel  scope link  src 192.168.2.1
192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.235
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.1
192.168.10.0/24 dev eth1  proto kernel  scope link  src 192.168.10.2
default via 192.168.10.1 dev eth1 
 

2. And also on Second Linux host (192.168.1.8) 

# /sbin/ip r add 192.168.10.0/24 via 192.168.1.235
To make the routing permanent, again paste the line in /etc/rc.local before exit 0.
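
An alternative to /etc/rc.local on Debian / Ubuntu (a sketch, assuming eth0 is managed through the classic /etc/network/interfaces and gets its address via DHCP) is to attach the route to the interface stanza with a post-up line, so it is re-added every time the interface comes up:

# vi /etc/network/interfaces

iface eth0 inet dhcp
        # route to the 192.168.10.0/24 network via the NAT-ting Linux router
        post-up /sbin/ip route add 192.168.10.0/24 via 192.168.1.235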

After the above rules, I can normally ping and access hosts on the class C network 192.168.10.1-255 from 192.168.1.8.

ClamTK Linux Desktop Anti-Virus program – Checking Windows mapped drives with ClamTK

Thursday, June 20th, 2013

Linux desktop graphical program to scan for-viruses ClamTK clamav frontend application

In general Linux is famous for being a Virus Free Operating System. During the last 13 years as a dedicated GNU / Linux user, I've seen Linux servers with binaries infected with Viruses; however, those were severely messed-up hosts, because no one updated them on time and script kiddie crackers had "hacked" them multiple times. Once, one of my old testing computers got infected with a Virus because of my mistake of running "suspicious" pre-compiled "cracker" software binaries with no MD5 verification, downloaded from "questionable" websites…
I share this story because I want to bust the Myth that Linux cannot have Viruses. It CAN, it is just not very likely to happen 🙂

As a Desktop user over the last 10 years, even though I installed plenty of packages from third-party sources, I never happened to infect my computer with a Virus – or at least if I did, I never knew it. A lot of popular MS-Windows Anti-Virus programs already have ports for Linux. Just to mention a few non-free Linux AV products providing install binaries:

  • Avast

  • BitDefender

  • AVG

  • Dr. Web

Though the risk of Viruses on Linux is tiny, it is useful to have Anti-Virus software to check files received via Skype or E-mail and ones downloaded with a Browser. Until now I used the ClamAV Antivirus to periodically keep an eye on my Desktop Linux host and on servers running mail services (those who run Mail Servers know how useful ClamAV is in stopping tons of E-mail attached Malware, Viruses and Trojans).

I mostly use Debian Linux, so on every new server or Desktop one of the first things I do is install it, i.e.:

# apt-get --yes install clamav
...

I knew the ClamAV port for Windows has a GUI, but until recently I didn't know whether there is some kind of free software AV graphical frontend for Linux. I just found out about ClamTK.

Linux Free Antivirus ClamTk clamav Virus Scanner graphical frontend

ClamTK is available in most Linux distributions from default package repositories

On Debian and Ubuntu, to install it run the command:

debian:~# apt-get --yes install clamtk
...

On Fedora and CentOS Linux to install:

[root@fedora ~]# yum -y install clamtk
...

It's best to run it as the root superuser (or via sudo), to make ClamTK able to read all files and mounts on the system:

hipo@debian:~$ sudo clamtk

ClamTK is very simple to use and there are only few configuration options;
clamtk desktop linux free antivirus startup preferences

clamtk scan for viruses linux gui proxy

linux Anti-Virus Desktop graphics  easy to use AntiVirus ClamTK preferences screenshot

ClamTK is very useful for scanning mounted Samba shared (Mapped) Windows drives for Viruses and malware, i.e. after mounting a share using a cmd like:

# smbmount //192.168.2.28/projects /mnt/projects -o user=USERNAME
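
If you prefer to kick off the scan from a terminal instead of the ClamTK GUI (a sketch; adjust the mount point to your own), the underlying clamscan scanner can be pointed at the mounted share directly:

# clamscan -r -i /mnt/projects

-r recurses into sub-directories and -i prints only the infected files found.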

How to change hostname permanently on Debian and Ubuntu Linux

Thursday, March 14th, 2013

Change hostname on Debian and Ubuntu Linux terminal hostname screenshot

I had to configure a newly purchased dedicated server from UK2. New servers come shipped with some randomly assigned node hostname like server42803. This is pretty annoying and has to be changed, especially if your company has a server naming policy in some format like: company-s1#, company-s2#, company-sN#.

Changing the hostname via the hosts definition file /etc/hosts (to map the host's IP address to the new hostname) is not enough to change the hostname shown in the shell on SSH user login.
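
Still, it is a good idea to keep /etc/hosts in sync with the new name; a sketch of such an entry (the IP address and domain here are placeholders, use your server's real ones):

server42803:~# echo '192.168.0.5   company-s5.mycompany.com   company-s5' >> /etc/hosts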

To display the full hostname on Debian and Ubuntu, type:

server42803:~# hostname
server42803.uk2net.com

To change the server hostname permanently to, let's say, company-s5:

server42803:~# sed -i 's#server42803.uk2net.com#company-s5#' /etc/hostname

To change it for the currently logged-in SSH session:

server42803:~# hostname company-s5
company-s5:~#

Finally, because the old hostname has already been read by sshd, you also have to restart sshd for the new hostname to be visible on user ssh login:

company-s5:~# /etc/init.d/ssh restart
...

As well as run script:

company-s5:~# /etc/init.d/hostname.sh

Mission change hostname accomplished, Enjoy 🙂

Set ISP provider default DNS to overwrite DHCP settings on Debian / Ubuntu Linux

Monday, February 11th, 2013

dhcp linux ovewrite dns settings from console and terminal Debian Ubuntu Fedora CentOS Linux
 

These days, almost every home wireless ISP network router, ADSL modem etc. runs its own local DNS service. Generally this is very good, as it takes the burden off the Internet Service Provider's DNS servers and "saves" a multitude of users from the all-too-common overloads of the ISP DNS Servers – caused by the ISP DNS service being unable to handle the incoming user DNS (domain resolve) traffic. A common scenario where ISP DNS servers are unable to handle the DNS traffic is when a few thousand of the ISP's users get infected with a Worm, Trojan horse or Virus doing plenty of DNS spoofing and distributed DDoS attacks.

Though a local DNS service (daemon) on local cable and wireless network routers is designed to be a good thing, it becomes another bottleneck and source of DNS resolve problems. Calling the ISP tech support for help is often a waste of time, as in ISPs it is rare to find someone who understands Linux networking.

In my observation, the periodic issues with DNS resolving on home routers have 3 main causes:

  • Cheap local wireless routers with slow hardware (CPU) and little memory are unable to handle the DNS requests because of torrent downloads
     
  • The router can't handle the DNS requests to its local DNS service, because a small local network of wired and wireless computers (let's say 5 to 10) is trying to access the Internet (browsing) – again, due to its low hardware parameters the router CPU heats up because of the multitude of DNS requests

     

  • Something is wrong with the general network topology of the PCs behind the router. Often people buy a router and share it with their neighbours – tampering with the router settings messes it up.

DNS resolving problems are even harder to track down when the Internet provider delivers Internet via automated IP assignment (DHCP).

A very common scenario I've seen is Internet coming via an ISP ADSL / network router installed at home and mis-configured, either due to a custom user installation, or because the ISP technician who installed the router was in a hurry or lacked competency and messed up the router's network configuration.

Over the years I have installed various Linux distributions for Desktop use in networks located behind such mis-configured network hubs. Because of the mis-configured DNS, even though the Linux hosts successfully grab the IP address, gateway and DNS via DHCP, they occasionally have problems with the Internet connection, leaving the user with the impression that Linux is not ready for Desktop use or that it is somehow the Linux distro's fault.

After this introduction, I will continue to the exact problem I faced with one such mis-configured router just today. The same issue has happened in my sysadmin practice over and over again, so I finally decided to write this small story explaining the whole scenario, its causes and the fix.

I'm writing this little post from another such Linux installation, which lives on a small local network served by a Vivacom ISP through an ADSL Commtrend SmartAX MT882 router.

The Commtrend does NAT (Network Address Translation) for the whole local network, auto-assigning a DNS server to the NAT-ted local network PCs in the IP range 172.16.0.0-255. The DNS the router hands out for the Internet is 172.16.0.1, where in reality the DNS service on the router runs on a network interface with IP 192.168.1.1, in other words belonging to the router on another network. Thus PCs connected via a UTP landline cable connection do not see 192.168.1.1 at all – meaning domain name resolving does not work.
The solution is to assign a static DNS IP address of Google Public DNS or OpenDNS, while leaving the Linux host to automatically assign the LAN IP and gateway using DHCP (Dynamic Host Configuration Protocol).

By default most Linux distributions use the DNS servers configured in /etc/resolv.conf; however, as the CommTrend network router provides DNS server settings along with the other DHCP settings, on each boot the Linux host's /etc/resolv.conf gets overwritten with the unreachable (from 172.16.0.0/24) nameserver 192.168.1.1.

Thus, to work around this on most Linux distributions, you can have /etc/resolv.conf rewritten by adding these lines to the /etc/rc.local script (before its last line – exit 0):

echo 'nameserver 8.8.8.8' > /etc/resolv.conf
echo 'nameserver 8.8.4.4' >> /etc/resolv.conf

This method is universal, but a problem arises if the Linux host is planned to run 24 hours a day. The DHCP server on the router has a configured DHCP lease expiry time, which differs between routers but is usually a few hours, i.e. 4 hrs. So in 4 hours, due to the DHCP lease expiry, the Linux host will ask the DHCP server again for an IP, getting together with the IP and gateway settings also a DNS IP (overwriting /etc/resolv.conf again with the locally running ISP router IP – 192.168.1.1). One stupid solution, of course, is to use the good old Windows philosophy (reboot it and it will work).

Another, slightly more intelligent but not very efficient solution to the problem is to set a cronjob to run every minute and overwrite the /etc/resolv.conf DNS settings:

# crontab -u root -e

*/1 * * * * printf 'nameserver 8.8.8.8\nnameserver 8.8.4.4\n' > /etc/resolv.conf 2>/dev/null

Since the cronjob that overwrites the DNS IPs runs every minute, it is possible for the host to end up without Internet for a few seconds up to a minute; this happens quite rarely, so for a desktop it is OK. The other inconvenience is that it puts a tiny load on the system every minute.

The final and best solution is to configure the DNS servers in /etc/dhcp/dhclient.conf for the Ethernet interface eth0. Inside /etc/dhcp/dhclient.conf make sure you have:

# vi /etc/dhcp/dhclient.conf

interface "eth0" {
prepend domain-name-servers 8.8.8.8;
prepend domain-name-servers 8.8.4.4;
prepend domain-name-servers 208.67.222.222;
prepend domain-name-servers 208.67.220.220;
}
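
For the new dhclient.conf settings to take effect, the DHCP lease has to be renewed; a sketch, assuming ifupdown manages eth0 (alternatively just release the lease and run dhclient again):

# ifdown eth0 && ifup eth0

or

# dhclient -r eth0 && dhclient eth0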