Posts Tagged ‘conf’

Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

linux-jail-nginx-webserver-increase-security-by-putting-it-and-its-data-into-jail-environment

If you're sysadmining a large number of shared hosted websites which use the Nginx Webserver to interpret PHP scripts and serve HTML, Javascript, CSS … whatever data, you realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker.

Compromising the Nginx Webserver would automatically mean that not only all users' web data gets compromised, but also that the attacker gets immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not so common to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate Virtual Container. However, as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx either as a Load Balancer, Reverse Proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie freak that just got some 0day exploit from the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

linux-jail-simple-explained-diagram-chroot-jail

For those who hear about a Linux Jail for the first time,
a chroot() jail is a way to isolate a process / processes and its forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (administrator user), because a superuser could break out (escape) of the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented into the BSD Operating System ver. 4.2 by Bill Joy (a notorious computer scientist and co-founder of Sun Microsystems). Its original use was the creation of so-called HoneyPots – a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems, which appears to be a completely legitimate service or part of a website but whose only goal is to track, isolate, and monitor intruders – very similar to police sting operations (baiting) of a suspect. It is pretty much like a bait set to catch the fish (which in this case is the possible cracker).

linux-chroot-jail-environment-explained-jailing-hackers-and-intruders-unix

BSD Jails nowadays became very popular through the iPhone, where the environment applications are deployed into is a custom-created chroot jail; the principle is exactly the same as in Linux.

But anyways, enough talk. Let's create a new jail and deploy a set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to have a directory where a copy of /bin/ls, /bin/bash, /bin/cat … the /usr/bin binaries, /lib and the other base Linux system binaries will reside.

 

server:~# mkdir -p /usr/local/chroot/nginx

 


2. You need to create the isolated environment backbone structure /etc/, /dev/, /var/, /usr/, /lib64/ (in case you're deploying on a 64-bit architecture Operating System).

 

server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/

 

3. Create required device files for the new chroot environment

 

server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0666 $DIR_N/dev/urandom c 1 9

 

The mknod command is used instead of the usual /bin/touch command, because device files are block or character special files.

Once created, the permissions of /usr/local/chroot/nginx/{dev/null, dev/random, dev/urandom} have to look like so:

 

server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
crw-rw-rw- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom

 

4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)
 

If your NGINX webserver was installed from source (to keep it the latest version)
and resides in, let's say, directory location /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:

 

server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx

 


5. Copy the necessary Linux system libraries to the newly created jail
 

The NGINX webserver is compiled to depend on various libraries from the Linux system root, e.g. /lib/* and /lib64/*, therefore in order for the server to work inside the chroot-ed environment you need to transfer these libraries to the jail folder /usr/local/chroot/nginx.

If you are curious to find out exactly which libraries the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx

        linux-vdso.so.1 (0x00007ffe3e952000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2b4762c000)
        libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f2b473f4000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f2b47181000)
        libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8 (0x00007f2b46ddf000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2b46bc5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2b46826000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2b47849000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2b46622000)


For best security, the recommended way is to copy only the libraries in the list printed by the ldd command, like so:

 

server: ~# mkdir -p /usr/local/chroot/nginx/lib/x86_64-linux-gnu
server: ~# cp -rpf /lib/x86_64-linux-gnu/libpthread.so.0 /usr/local/chroot/nginx/lib/x86_64-linux-gnu/
server: ~# cp -rpf library chroot_location

etc.
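If the dependency list is long, a small shell loop can do the copying for you. Below is a minimal sketch (assuming the $DIR_N variable exported in step 2 and the source-built nginx path from above) that parses the ldd output and recreates each library's path, including the /lib64 dynamic loader, inside the jail:

server:~# for lib in $(ldd /usr/local/nginx/sbin/nginx | grep -o '/[^ ]*'); do mkdir -p "$DIR_N$(dirname $lib)"; cp -pf "$lib" "$DIR_N$lib"; done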

 

However, if you're in a hurry (not a recommended practice) and you don't care for maximum security anyway (you don't worry that the jail could be exploited through some of the many lib files not used by nginx, and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

 

server: ~# cp -rpf /lib/ /usr/local/chroot/nginx/lib

 

NOTE! Once again, copying the whole /lib directory is a very bad practice, but for time-pressing activities sometimes you can do it …


6. Copy some base /etc/ files and the ld.so.conf.d, prelink.conf.d directories to the jail environment
 

 

server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf}  \
/usr/local/chroot/nginx/etc

 

server:~# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} /usr/local/chroot/nginx/etc


7. Copy the HTML, CSS, Javascript websites' data from the root directory to the chrooted nginx environment

 

server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/


This could take really long if the websites are multiple gigabytes and millions of files, but anyway the nice command should reduce the load on the server a little bit. It is a best practice to set some kind of temporary server maintenance page to show on the websites' index, in order to save the clients accessing the server from interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).
 

 

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail


a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script /etc/init.d/nginx (if you have such installed) or directly kill the running webserver with:

 

server:~# killall -9 nginx

 

b) Test the chrooted nginx installation is correct and ready to run inside the chroot environment

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx

 

c) Restart the chrooted nginx webserver – whenever necessary later

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload

 

d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm saying that in case you by mistake forget and try to edit the old config that is usually under /usr/local/nginx/conf/nginx.conf).
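Since the chroot invocations are long and easy to mistype, you could also drop a tiny wrapper script somewhere like /usr/local/sbin/nginx-jail (the script name and location are just a suggestion following the layout above):

#!/bin/sh
# nginx-jail - convenience wrapper around the chrooted nginx
JAIL=/usr/local/chroot/nginx
NGINX=/usr/local/nginx/sbin/nginx
case "$1" in
  test)   /usr/sbin/chroot $JAIL $NGINX -t ;;
  start)  /usr/sbin/chroot $JAIL $NGINX ;;
  reload) /usr/sbin/chroot $JAIL $NGINX -s reload ;;
  stop)   /usr/sbin/chroot $JAIL $NGINX -s stop ;;
  *)      echo "Usage: $0 {test|start|reload|stop}"; exit 1 ;;
esac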

 

 

Install postgresql on Debian Squeeze / How to install PostGreSQL on Obsolete Debian installation

Friday, June 10th, 2016

how-to-install-postgresql-on-obsolete-old-debian-squeeze-tutorial

If you're in a position like me, running an old version of Debian (Squeeze), and you need to install PostgreSQL, you will notice that the Debian 6.0 standard repositories are no longer active and apt-get update && apt-get upgrade return errors. Because this Debian release is already too old and even the LTS repositories are inactive, it is impossible to install postgresql the usual way.

To get around the situation, the first thing I did was to try to add the following Debian repositories to /etc/apt/sources.list:
 

deb http://ftp.debian.net/debian-backports squeeze-backports-sloppy main
deb http://archive.debian.org/debian-archive/debian/ squeeze main contrib non-free
deb http://archive.debian.org/debian-archive/debian/ squeeze-lts main contrib non-free

After adding them I continued getting missing package errors while trying:
 

# apt-get update && apt-get install postgresql postgresql-client
….
…..

 

E: Some index files failed to download. They have been ignored, or old ones used instead.


Thus I googled a bit and found the following PostgreSQL instructions working for Debian 7.0 Wheezy, and decided to try them one-to-one, just changing the repository keyword wheezy with squeeze.
In the original tutorial PostgreSQL's deb repository is:

 

deb http://apt.postgresql.org/pub/repos/apt/ wheezy-pgdg main


I've only changed that one with:

 

deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main

 

I guess that as this worked for Debian Squeeze, on current versions such as Debian 8.0 Jessie and newer it wouldn't be a problem either, if you just change the Debian version keyword with the distribution for which you need the postgresql package.


Here are all the consecutive steps I took to make PostgreSQL 9.5 run on my old and unsupported Debian 6.0 Squeeze.

Create /etc/apt/sources.list.d/pgdg.list. The distributions are called codename-pgdg; in my case the codename is squeeze:

# vim /etc/apt/sources.list.d/pgdg.list

 

deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main

debian:~# apt-get --yes install wget ca-certificates
debian:~# wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
debian:~# apt-get update
debian:~# apt-get upgrade
debian:~# apt-get --yes install postgresql-9.5 pgadmin3

The next step is to connect to PostgreSQL:

# su - postgres
$ psql

Create a new database user and a database:

postgres=# CREATE USER mypguser WITH PASSWORD 'mypguserpass';
postgres=# CREATE DATABASE mypgdatabase OWNER mypguser;

 

or

# createuser mypguser    # from a regular shell
# createdb -O mypguser mypgdatabase

Quit from the database

postgres=# \q

Connect as user mypguser to new database

# su - mypguser
$ psql mypgdatabase

or

# psql -d mypgdatabase -U mypguser

If you get errors like:

psql: FATAL: Ident authentication failed for user "mypguser"

edit pg_hba.conf in /etc/postgresql/9.5/main/pg_hba.conf:

 

local all all trust # replace ident or peer with trust
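Keep in mind that trust lets any local system user connect as any database user with no password at all; a less permissive alternative (just a suggestion – pick what fits your setup) is password-based authentication:

local all all md5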

reload postgresql

/etc/init.d/postgresql reload …

 


To make sure that PostgreSQL is running on the system, check that the following processes are present on the server:

 

 

 

root@pcfreak:/var/www/images# ps axu|grep -i post
postgres  9893  0.0  0.0 318696 16172 ?     S  15:20 0:00 /usr/lib/postgresql/9.5/bin/postgres -D /var/lib/postgresql/9.5/main -c config_file=/etc/postgresql/9.5/main/postgresql.conf
postgres  9895  0.0  0.0 318696  1768 ?     Ss 15:20 0:00 postgres: checkpointer process
postgres  9896  0.0  0.0 318696  2700 ?     Ss 15:20 0:00 postgres: writer process
postgres  9897  0.0  0.0 318696  1708 ?     Ss 15:20 0:00 postgres: wal writer process
postgres  9898  0.0  0.0 319132  2564 ?     Ss 15:20 0:00 postgres: autovacuum launcher process
postgres  9899  0.0  0.0 173680  1652 ?     Ss 15:20 0:00 postgres: stats collector process
root     14117  0.0  0.0 112404   924 pts/1 S+ 16:09 0:00 grep -i post

 

 


Well that's all folks, now you will have postgresql running – in this setup on port 5433:

 

debian:/etc/postgresql/9.5/main# grep -i port postgresql.conf
port = 5433 # (change requires restart)
# supported by the operating system:
# supported by the operating system:
# ERROR REPORTING AND LOGGING # %r = remote host and port
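Since 5433 is not the stock PostgreSQL port (that would be 5432), remember to pass the port explicitly when connecting, for example:

# psql -p 5433 -d mypgdatabase -U mypguser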

 

 


Well that's it folks, by The Lord Jesus Christ's grace and the prayers of John The Baptist and Saint Sergij Radonezhki it works 🙂

 

 

Remove \r (Carriage Return) from string with standard bash shell / sed / tr / vim or awk – Replace \r hidden messy characters from files

Tuesday, February 10th, 2015

remove_r_carriage_return_from_string_with-standard-bash_shell_sed_tr_or_awk_replace_annoying_hidden_messy_characters_from_files

I've been recently writing an Apache webserver / Tomcat / JBoss / Java decommissioning bash script. Part of the script includes extracting from httpd.conf the DocumentRoot variable configured for an Apache host.
I was using the following one-liner to grep and store the DocumentRoot directory into a new variable:

documentroot=$(grep -i documentroot /usr/local/apache/conf/httpd.conf | awk '{ print $2 }' |sed -e 's#"##g');

The above line greps for documentroot, prints the 2nd column of the match (which is the Apache-configured docroot) and then removes any " chars.

However I faced the issue that the parsed string contained in the $documentroot variable was mysteriously containing \r – the Carriage Return (CR) as sent by Mac OS and Apple computers. For those who don't know, the End of Line in files on UNIX / Linux OS-es is LF – often abbreviated as \n (translated as return new line), while Windows PCs use for EOL CR + LF – the infamous \r\n. I was running the script on a server running SuSE SLES 11 Linux, where the plain LF end of line is standardly used; however it seems someone had edited the httpd.conf earlier with a text editor from Mac OS X (Terminal). Thus I needed a way to remove the \r CR character out of the variable, because otherwise I couldn't use it to properly exec tar to archive the documentroot directory, as the documentroot directory was showing nonexistent.

Opening the httpd.conf in a standard editor didn't show the \r at the end of the
"directory", e.g. I could see in the file when opened with vim:

DocumentRoot "/usr/local/apache/htdocs/site/www"

However the \r character was obviously there; to visualize it I had to use the cat command's -v option (--show-nonprinting):

cat -v /usr/local/apache/conf/httpd.conf

DocumentRoot "/usr/local/apache/htdocs/site/www^M"


1. Remove the \r CR with bash

To solve that with bash, I had to use another quick bash expansion that strips the trailing \r from $documentroot; here is how:

documentroot=${documentroot%$'\r'}

It is also possible to use the same approach to remove "broken" Windows \r\n Carriage Returns after a file is migrated from Windows to a Linux / FreeBSD host:

documentroot=${documentroot%$'\r\n'}

 

2. Remove the \r Carriage Return character with sed

Another way to remove (delete) the Windows / Mac OS Carriage Returns, in case of migrating to UNIX, is with sed (the stream editor):

sed -e 's/\r//g' filename > filename_out.txt


3. Remove the \r CR character with tr

There is also a third way to do it with tr – the old-school *nix translate-or-delete-characters command:

tr -d '\r' < file_with_carriagereturns > file_without_carriage_returns

 

4. Remove \r CRs with awk (pattern scanning and processing language)

awk '{ sub("\r$", ""); print }' inputf_with_crs.txt > outputf_without_crs.txt


5. Delete the \r CR with the VIM editor

:%s/\r//g


6. Converting files between DOS / UNIX OSes with the dos2unix and unix2dos command line tools

For sysadmins who don't want to bother with writing code to convert CRs when moving files between Windows and UNIX hosts, there are the installable dos2unix and unix2dos commands.
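Usage is trivial – both tools convert the given file in place (assuming the packages are installed from your distribution's repositories):

# dos2unix filename
# unix2dos filename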

All done Cheers ! 🙂

Fix to “413 Request Entity Too Large” error in Nginx webserver and what causes it

Friday, November 14th, 2014

nginx_413_request_entity_too_large-fix

If you administer an NGINX caching server serving static file content and redirecting some requests to Apache, you may end up with '413 Request Entity Too Large' errors when uploading big files (using the HTTP PUT method), even though in Apache's PHP upload_max_filesize is set to a relatively high number – upload_max_filesize = 60M.

Here is what happens during the handshake of the web-browser -> server interaction till a status is returned:
 

A web browser or webcrawler robot goes through the following phases while talking to the Web server:

 

1. Obtain an IP address from the IP name of the site (based on the site URL without the leading 'http://').
This is provided by the domain name servers (DNSs) configured for the PC.
2. Open an IP socket connection to that IP address.
3. Write an HTTP data stream through that socket.
4. Receive an HTTP data stream back from the Web server in response.
This data stream contains status codes whose values are determined by the HTTP protocol
and indicate whether the request was successful.

 

In our case the too-large request body is recognized by the server, and the 413 'Request Entity Too Large' status code is returned in that response stream to the client 'web browser', causing the error.

The fix is to also increase the max file upload limit in NGINX. This is done via the client_max_body_size directive in /usr/local/nginx/conf/nginx.conf (or /etc/nginx/nginx.conf if Nginx is installed from a package).

Here is an extract from nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

 

    server {
        client_max_body_size 60M;
        listen       80;
        server_name  localhost;

        # Main location
        location / {
            proxy_pass         http://127.0.0.1:8000/;
        }
    }
}


To make the new configuration active, restart Nginx:

/etc/init.d/nginx restart
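To quickly verify the change took effect, push a file bigger than the old limit at the server – a small sketch with curl (the URL and file name are placeholders for your setup):

curl --upload-file ./bigfile.bin http://your-server.com/upload/bigfile.bin

Before the fix NGINX would answer with HTTP/1.1 413 Request Entity Too Large; after the restart the request should reach the backend.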

Apache increase loglevel – Increasing Apache logged data for better statistic analysis

Tuesday, July 1st, 2014

apache-increase-loglevel-howto-increasing-apache-logged-data-for-better-statistic-analysis
In the case of development (QA) systems, where developers deploy new untested code exposing Apache or related Apache modules to unexpected bugs, it is often necessary to increase the Apache loglevel to log everything. This is done with:

 

LogLevel debug

LogLevel warn is the common logging option for Apache production webservers.

LogLevel warn

in httpd.conf is the default Apache log setting. For some servers that produce too many logs, this setting could be changed to LogLevel crit, which will make the web-server log only errors of critical importance. The LogLevel debug setting is very useful when you have to debug issues with non-working (failing) SSL certificates, as it will give you a whole dump of the SSL handshake and the reason for its failure.

You should be careful before deciding to increase the server log level, especially on production servers.
An increased logging level puts a higher load on the Apache webserver, and produces many gigabytes of mostly useless logs that could quickly fill up all the free disk space.

If you would like to increase the logged data in access.log / error.log – because you would like to perform versatile statistical analysis on daily hits, unique visits, top landing pages etc. with Webalizer, Analog or Awstats – change the LogFormat and CustomLog variables from common to combined.

By default Apache logs with the following LogFormat and CustomLog:
 

LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common


Which will be logging into access.log in the format:

 

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326


Change it to something like:

 

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
CustomLog log/access_log combined


This would produce logs like:

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

 

Using the Combined Log Format produces all the information logged by CustomLog … common, and also logs the Referer and User-Agent headers, which indicate where users were before visiting your Web site page and which browsers they used. You can read more on tailoring custom Apache logging on Apache's website.

Use apt-get with Proxy howto – Set Proxy system-wide in Linux shell and Gnome

Friday, May 16th, 2014

linux-apt-get-configure-proxy-howto-set-proxy-systemwide-in-linux

I just set up a VMWare Virtual Machine on my HP notebook and installed Debian 7.0 stable Wheezy. Though VMWare identified my Office Internet connection and configured NAT automatically, I couldn't access the internet from a browser until I remembered that all HP traffic goes through a default set browser proxy.
After setting a proxy in Iceweasel, Internet pages started opening normally; however, as every kind of traffic was only accessible via HP's proxy, package management with apt-get (apt-get update, apt-get install etc.) was failing with errors:


# apt-get update

Ign cdrom://[Debian GNU/Linux 7.2.0 _Wheezy_ – Official i386 CD Binary-1 20131012-12:56] wheezy Release.gpg
Ign cdrom://[Debian GNU/Linux 7.2.0 _Wheezy_ – Official i386 CD Binary-1 20131012-12:56] wheezy Release
Ign cdrom://[Debian GNU/Linux 7.2.0 _Wheezy_ – Official i386 CD Binary-1 20131012-12:56] wheezy/main i386 Packages/DiffIndex
Ign cdrom://[Debian GNU/Linux 7.2.0 _Wheezy_ – Official i386 CD Binary-1 20131012-12:56] wheezy/main Translation-en_US
Err http://ftp.by.debian.org wheezy Release.gpg
  Could not connect to ftp.by.debian.org:80 (86.57.151.3). - connect (111: Connection refused)
Err http://ftp.by.debian.org wheezy-updates Release.gpg
  Unable to connect to ftp.by.debian.org:http:
Err http://security.debian.org wheezy/updates Release.gpg
  Cannot initiate the connection to security.debian.org:80 (2607:ea00:101:3c0b:207:e9ff:fe00:e595). - connect (101: Network is unreachable) [IP: 2607:ea00:101:3c0b:207:e9ff:fe00:e595 80]
Reading package lists…

 

This error is caused because apt-get is trying to directly access the above http URLs, and since port 80 is filtered out from the HP Office network, it fails. To make it work I had to configure apt-get to use the proxy host – here is how:

a) Create the /etc/apt/apt.conf.d/02proxy file (if not already existing)
and place inside:
 

Acquire::http::Proxy "https://web-proxy.cce.hp.com";
Acquire::ftp::Proxy "ftp://web-proxy.cce.hp.com";


To do it from a console / gnome-terminal issue:
echo 'Acquire::http::Proxy "https://web-proxy.cce.hp.com:8088";' >> /etc/apt/apt.conf.d/02proxy
echo 'Acquire::ftp::Proxy "ftp://web-proxy.cce.hp.com:8088";' >> /etc/apt/apt.conf.d/02proxy

That's all! Now apt-get will tunnel all traffic via the HTTP and FTP proxy host web-proxy.cce.hp.com and apt-get works again.
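To double-check which proxy apt actually picked up, you can dump its effective configuration with apt-config (shipped with apt itself); it should print back the Acquire lines set above:

# apt-config dump | grep -i proxy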

Talking about proxyfying Linux's apt-get, it is also possible to set proxy shell variables, which are read and understood by many console programs like the console browsers lynx, links and elinks, as well as the wget and curl commands, e.g.:

 

export http_proxy=http://192.168.1.5:5187/
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export rsync_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"

For proxies protected with a username and password, the exported variables should look like so:

echo -n "username:"
read -e username
echo -n "password:"
read -es password
export http_proxy="http://$username:$password@proxyserver:8080/"
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export rsync_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"

To make these Linux proxy settings system-wide on Debian / Ubuntu there is the /etc/environment file; add to it:
 

http_proxy=http://proxy.server.com:8080/
https_proxy=http://proxy.server.com:8080/
ftp_proxy=http://proxy.server.com:8080/
no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"
HTTP_PROXY=http://proxy.server.com:8080/
HTTPS_PROXY=http://proxy.server.com:8080/
FTP_PROXY=http://proxy.server.com:8080/
NO_PROXY="localhost,127.0.0.1,localaddress,.localdomain.com"


To make the proxy global (systemwide) for most (non-Debian specific) Linux distributions' shell environments, create a new file /etc/profile.d/proxy.sh and place in it something like:

function proxy(){
echo -n "username:"
read -e username
echo -n "password:"
read -es password
export http_proxy="http://$username:$password@proxyserver:8080/"
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export rsync_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"
echo -e "\nProxy environment variable set."
}
function proxyoff(){
unset HTTP_PROXY
unset http_proxy
unset HTTPS_PROXY
unset https_proxy
unset FTP_PROXY
unset ftp_proxy
unset RSYNC_PROXY
unset rsync_proxy
echo -e "\nProxy environment variable removed."
}
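After a re-login (or after sourcing the file by hand) the two helper functions become available in any shell – proxy prompts for the credentials and exports the variables, proxyoff unsets them again:

# source /etc/profile.d/proxy.sh
# proxy
# proxyoff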

To set a Global Proxy (make the Proxy Systemwide) for a user in the GNOME Desktop environment, launch gnome-control-center

And go to Network -> Network Proxy

/images/gnome-configure-systemwide-proxy-howto-picture1

/images/gnome-configure-systemwide-proxy-howto-picture2

To make the proxy settings also system-wide for some GUI Gnome GTK3 applications:

gsettings set org.gnome.system.proxy mode 'manual'
gsettings set org.gnome.system.proxy.http host 'your-proxy.server.com'
gsettings set org.gnome.system.proxy.http port 8080

Preserve Session IDs of Tomcat cluster behind Apache reverse proxy / Sticky sessions with mod_proxy and Tomcat

Wednesday, February 26th, 2014

apache_and_tomcat_merged_logo_prevent_sticky_sessions
Having a combination of an Apache webserver Reverse Proxy invisibly redirecting traffic to a number of Tomcat servers positioned in a DMZ is a classic task in the big companies' Corporate world.
Hence if you work for a company like IBM or HP, sooner or later you will need to configure an Apache Webserver cluster with a few running Jakarta Tomcat Application servers behind it. A scenario where a java based application accessed via Tomcat requires logging in (authentication), relying on establishing and keeping a session ID, is probably one of the most common ones, and if you do it for the first time you will probably end up with Session ID issues. Session ID issues are hard to capture at first, as at first glimpse the application will seem to be working, but users will have to re-login all the time even though the programmers might have coded the session to expire in 30 minutes or so.

Not having sticky sessions configured for the Tomcats will cause random authentication session expiries, and users of the Tomcat app will be unable to normally access the application with authenticated credentials. The solution to this is known under the term "Sticky Sessions".
To configure Sticky Sessions you need to already have configured the Apache/s with the following minimum configuration:

  • enabled mod_proxy, proxy_balancer_module, proxy_http_module and / or mod_proxy_ajp (in the Apache config)

  LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so

  • And configured and tested Tomcats running an Application reachable via AJP protocol

The below example assumes there is a Reverse Proxy Load Balancer Apache which has to forward all traffic to 2 tomcats. The config can easily be extended for as many as necessary by adding more BalancerMembers.

In the Apache webserver (apache2.conf / httpd.conf) you need to have the JSESSIONID stickysession configured. This JSESSIONID cookie, set once on authentication to the first Tomcat node, is checked on each client request, so the Reverse Proxy keeps routing requests to the same Tomcat node where the session was established.

<Proxy balancer://mycluster>
BalancerMember ajp://10.16.166.53:11010/ route=delivery1
BalancerMember ajp://10.16.166.66:11010/ route=delivery2
</Proxy>

ProxyRequests Off
ProxyPass / balancer://mycluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://mycluster/

The two variables route=delivery1 and route=delivery2 are route identifiers that also have to be present in the Tomcat server configurations.
In the Tomcat App server First Node (server.xml):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="delivery1">

In Tomcat App server Second Node (server.xml)

<Engine name="Catalina" defaultHost="localhost" jvmRoute="delivery2">

Once Sticky Sessions are configured, it is useful to be able to verify that they work fine; this is possible by logging each of the established JSESSIONIDs. To do so add in httpd.conf:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"\"%{JSESSIONID}C\"" combined

After the modifications restart Apache and Tomcat to load the new configs. In Apache's access.log the proof that sessions are preserved via JSESSIONID should appear as log lines like:
 

127.0.0.1 - - [18/Sep/2013:10:02:02 +0800] "POST /examples/servlets/servlet/RequestParamExample HTTP/1.1" 200 662 "http://localhost/examples/servlets/servlet/RequestParamExample" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130807 Firefox/17.0""B80557A1D9B48EC1D73CF8C7482B7D46.server2"

127.0.0.1 - - [18/Sep/2013:10:02:06 +0800] "GET /examples/servlets/servlet/RequestInfoExample HTTP/1.1" 200 693 "http://localhost/examples/servlets/" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130807 Firefox/17.0""B80557A1D9B48EC1D73CF8C7482B7D46.server2"
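If you also want to watch the balancer members, their routes and their health live, mod_proxy_balancer ships a balancer-manager handler you can expose on the Reverse Proxy. A minimal sketch (the allowed network below is just an assumption – restrict it to your admin hosts; the exclusion line must come before the general ProxyPass so the manager URL isn't proxied itself):

ProxyPass /balancer-manager !
<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 10.16.166.
</Location>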

That should solve problems with mysterious session expiries 🙂

VIM and VI UNIX text editor syntax highlighting and howto add remove code auto indent

Tuesday, February 4th, 2014

vim-vi-linux-text-editor-logo-vim-highlighting

For my daily system administration job I have to log in to many SuSE Linux servers and do various configuration edits.
The systems are configured in different ways and the only text editors available across all the servers are VI and VIM (VI Improved).

As I usually have to edit configuration files and scripts and I'm on an SSH color terminal, it's rather annoying that on some of the servers a file opened with VIM is not displayed with SYNTAX HIGHLIGHTING. Not having syntax highlighting makes editing ugly and unreadable.
Thus it is useful to enable VI syntax highlighting straight in the file being edited. I suspect many novice sysadmins might not know how to turn on syntax highlighting in vi, so here is how.
 

Turn Syntax Highlighting in VIM

 

1. Open a file with vim, let's say the Apache configuration

# vim /etc/apache2/apache2.conf

2. Press Escape (Esc), type ":" and then type in syntax on:

:syntax on

vim-syntax-highlighting-howto-syntax-on-picture-screenshot-apache-config

To turn VI Syntax Highlighting on permanently, add "syntax on" (without the leading ":") into ~/.vimrc.

The ~/.vimrc file is read automatically on VIM start, so right after syntax on is appended to it, on relaunch vim will start showing everything colorfully.
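The post title also promises adding / removing code auto-indent – these are plain standard vim options you can toggle from the same ":" prompt or keep in ~/.vimrc:

:set autoindent    " new lines inherit the indentation of the previous line
:set noautoindent  " turn automatic indentation off again
:set paste         " paste mode - stops vim from re-indenting pasted code
:set nopaste       " leave paste mode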

Enjoy ! 🙂

 

Apache SSLCertificateChainFile adding SSL with Certificate Chain / What is Certificate Chain

Friday, January 31st, 2014

configure-apache-ssl-certificate-chain-ssl-certificate-keychain-each-signing-each-other

If you work in a big company with a large network infrastructure that has to deal with SSL Certificates, sooner or later you will have to learn about the existence of SSL Certificate Chains.
It is thus worth knowing what SSL Certificate Chains are and how such a chain is configured in Apache.

Personal SSL certificates (certificates issued to an individual or a company) can be used by clients to uniquely identify themselves when they are involved in starting an SSL connection.
The SSL Certificate file contains an X.509 certificate, which, in turn, contains a public key used for encryption.
Each personal certificate has zero or more certificate chains of certification authority certificates that extend back to the root certification authority.
 

Certificate R (Root Certification Authority)
    |
    | represents issuer of
    V
Certificate I1 (Intermediate Certification Authority)
    |
    | represents issuer of
    V
Certificate I2 (A subsidiary Intermediate Certification Authority)
    |
    | represents issuer of
    V
Certificate I3 (A further subsidiary Intermediate Certification Authority)
    |
    | represents issuer of
    V
Certificate P (A personal certificate that is used to identify its owner 
               on an SSL handshake)

Certificate chains are used to verify the authenticity of each certificate in that chain, including the personal certificate. Each certificate in the chain is validated using its 'parent' certificate, which in turn is validated using the next certificate up the chain, and so on, from the personal certificate up to the root certification authority certificate.

Now after explaining thoroughly what an SSL Certificate Chain is, here is how to configure an SSL Certificate in the Apache Webserver.

Open apache2.conf or httpd.conf (depending on the GNU / Linux distribution) and add to it:

  SSLEngine On
   SSLCertificateFile conf/cert/webserver-host.crt
   SSLCertificateKeyFile conf/cert/webserver-host.key
   SSLCertificateChainFile conf/cert/internet-v4.crt
   # SSLCertificateChainFile conf/cert/intranet-v3.crt
   SSLOptions +StdEnvVars +OptRenegotiate +ExportCertData

SSLCertificateChainFile conf/cert/chain-cert.crt
loads a chain of separate CA certificates, each one signed by the one above it, with the chain leading up to the top ROOT CA (Certificate Authority).
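To check that the chain really completes the path from the personal certificate up to the root, you can let openssl walk it – a quick sketch with the file names used above (assuming the chain file also contains the self-signed root certificate):

# openssl verify -CAfile conf/cert/internet-v4.crt conf/cert/webserver-host.crt

If everything is in place it answers with webserver-host.crt: OK.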

How to configure Apache to serve as load balancer between 2 or more Webservers on Linux / Apache basic cluster

Monday, October 28th, 2013

Apache doing load balancer between Apache servers Apache basic cluster howto

Any admin somehow involved in the sphere of UNIX Webhosting knows Apache pretty well. I've personally used Apache for about 10 years now and until now I have always used it as a single installation on a Linux host. So far, whenever the requirements for more client connections raised up, the web hosting companies I worked for did a migration of the website / websites to newer, better (quicker) server hardware. Everyone knows keeping a site on a single Apache server poses a great RISK: if the machine hangs up for some reason or gets DoSed, the websites become unavailable until reboot, causing unwanted downtime.

Though I know the concept of load balancing pretty well, until today I had never configured Apache to serve as a load balancer between two or more identical machines set up to interpret PHP / Perl scripts. Amazingly, load balancing users' web traffic happened to be much easier than I supposed. All that is necessary is a single Apache configured with mod_proxy_balancer, which acts as a proxy and ships HTTP requests between two Apache servers. Logically, it is very important that the entry traffic host with Apache mod_proxy_balancer is configured to run only mod_proxy_balancer and the modules it needs, otherwise it will be eating unnecessary server memory, as with each unnecessarily loaded Apache module the usage of memory resources raises up.

The scenario of my load balancer and the 2 webserver hosts behind it goes like this:

a. An Apache load balancer with external IP address – i.e. (83.228.93.76) – with DNS record, for example, www.mybalanced-webserver.com
b. A normally configured Apache that runs PHP scripts, with an internal IP address behind NAT (Network Address Translation) – on 10.10.10.1 – known under the hostname JEREMIAH
c. A second Apache, identical to the above host, with internal IP 10.10.10.2 – internal hostname ISSIAH.

N.B.! All 3 hosts are running latest  Debian GNU / Linux 7.2 Wheezy
 
With this in mind, I proceeded with installing Apache on 83.228.93.76 and removing all unnecessary modules.

!!! An important note: if you reuse some already existing Apache configured to run PHP or any other unnecessary stuff – make sure you remove that, otherwise expect severe performance issues !!!
1. Install Apache webserver

loadbalancer:~# apt-get install --yes apache2

2. Enable mod proxy proxy_balancer and proxy_http
On Debian Linux modules are enabled with the a2enmod command:

loadbalancer:~# a2enmod proxy
loadbalancer:~# a2enmod proxy_balancer
loadbalancer:~# a2enmod proxy_http

Actually what the a2enmod command does is make symbolic links in /etc/apache2/mods-enabled/{proxy,proxy_balancer,proxy_http} pointing to /etc/apache2/mods-available/{proxy,proxy_balancer,proxy_http}

3. Configure Apache mod_proxy to load balance traffic between the JEREMIAH and ISSIAH webservers

loadbalancer:~# vim /etc/apache2/conf.d/proxy_balancer


Paste inside:


<Proxy balancer://mycluster>
BalancerMember http://10.10.10.1
BalancerMember http://10.10.10.2
</Proxy>
ProxyPass / balancer://mycluster
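By default both members get an equal share of the requests; as a variation (a sketch only – tune the numbers to your hardware), the loadfactor parameter lets you skew how many requests each member receives, here roughly two thirds to the first host:

<Proxy balancer://mycluster>
BalancerMember http://10.10.10.1 loadfactor=2
BalancerMember http://10.10.10.2 loadfactor=1
</Proxy>
ProxyPass / balancer://mycluster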



4. Configure Apache Proxy to accept traffic from all hosts (by default it is configured to Deny from all)


loadbalancer:~# vim /etc/apache2/mods-enabled/proxy.conf

Change there Deny from all to Allow from all.

5. Restart Apache

loadbalancer:~# /etc/init.d/apache2 restart

Once again I have to say that the above configuration is actually a basic Apache cluster, so the hosts behind the load balancer Apache should be machines configured to interpret scripts identically. If one Apache server of the cluster dies, the other Apache + PHP host will continue to serve and deliver webserver content, so no interruption will happen. By default mod_proxy_balancer distributes the requests evenly between the two BalancerMembers; set a per-member loadfactor (as in the sketch above) if you want one host to take a bigger share of the load.
Well, that's all, the load balancer is configured! Now to test it open www.mybalanced-webserver.com in a browser, or try to access it by IP – in my case: 83.228.93.76
