Posts Tagged ‘HTTP’

Improve haproxy logging with custom log-format for better readability

Friday, April 12th, 2024

Haproxy logging is a very big topic, worthy of many articles, but unfortunately not enough is written on it, perhaps because haproxy is free software and many of the people who use it don't follow the free software philosophy of sharing, but prefer to keep the acquired knowledge for themselves and, in the capitalist world most of us live in, use it for paid Load Balancer / haproxy consultancy or in their daily job as system administrators (web and middleware), cloud specialists etc. 🙂

Having good haproxy logging is very important, as you need it to debug issues with backend machines or other devices throwing traffic at the HAProxy.
Thus it is important to build the haproxy logging in a way that provides the most important information as simply as possible, so everyone can understand what is in the log without much effort, while at the same time containing enough debug information to help you if you want to feed the output logs into Graylog filters or process the data with some advanced monitoring tool such as Prometheus.

In our effort to optimize the way haproxy logs via a configured handler that sends the haproxy output to rsyslog, we did some experiments with the logging arguments and came up with a few variants that we liked. The idea of this article is to share this set of logging parameters, in the hope of helping someone else who is starting out with haproxy to build a good, readable log output that is also easy to process with scripts.

The criteria used for a decent haproxy logging format are:

1. Log should be simple but not dumb
2. Should be concrete (and not overly complicated)
3. Should be easy to read for the novice and advanced sysadmin

Before starting, I have to say that building the logging format may seem a tedious task, but making it fit your preference can take a lot of time, especially as the logging parameter names are hard to remember; thus the log-format description table from the haproxy documentation comes in really handy:

Haproxy log-format parameters ASCII table
 

 Please refer to the table below for the log-format defined variables:
 

+---+------+-----------------------------------------------+-------------+
| R | var  | field name (8.2.2 and 8.2.3 for description)  | type        |
+---+------+-----------------------------------------------+-------------+
|   | %o   | special variable, apply flags on all next var |             |
+---+------+-----------------------------------------------+-------------+
|   | %B   | bytes_read           (from server to client)  | numeric     |
| H | %CC  | captured_request_cookie                       | string      |
| H | %CS  | captured_response_cookie                      | string      |
|   | %H   | hostname                                      | string      |
| H | %HM  | HTTP method (ex: POST)                        | string      |
| H | %HP  | HTTP request URI without query string (path)  | string      |
| H | %HQ  | HTTP request URI query string (ex: ?bar=baz)  | string      |
| H | %HU  | HTTP request URI (ex: /foo?bar=baz)           | string      |
| H | %HV  | HTTP version (ex: HTTP/1.0)                   | string      |
|   | %ID  | unique-id                                     | string      |
|   | %ST  | status_code                                   | numeric     |
|   | %T   | gmt_date_time                                 | date        |
|   | %Ta  | Active time of the request (from TR to end)   | numeric     |
|   | %Tc  | Tc                                            | numeric     |
|   | %Td  | Td = Tt - (Tq + Tw + Tc + Tr)                 | numeric     |
|   | %Tl  | local_date_time                               | date        |
|   | %Th  | connection handshake time (SSL, PROXY proto)  | numeric     |
| H | %Ti  | idle time before the HTTP request             | numeric     |
| H | %Tq  | Th + Ti + TR                                  | numeric     |
| H | %TR  | time to receive the full request from 1st byte| numeric     |
| H | %Tr  | Tr (response time)                            | numeric     |
|   | %Ts  | timestamp                                     | numeric     |
|   | %Tt  | Tt                                            | numeric     |
|   | %Tw  | Tw                                            | numeric     |
|   | %U   | bytes_uploaded       (from client to server)  | numeric     |
|   | %ac  | actconn                                       | numeric     |
|   | %b   | backend_name                                  | string      |
|   | %bc  | beconn      (backend concurrent connections)  | numeric     |
|   | %bi  | backend_source_ip       (connecting address)  | IP          |
|   | %bp  | backend_source_port     (connecting address)  | numeric     |
|   | %bq  | backend_queue                                 | numeric     |
|   | %ci  | client_ip                 (accepted address)  | IP          |
|   | %cp  | client_port               (accepted address)  | numeric     |
|   | %f   | frontend_name                                 | string      |
|   | %fc  | feconn     (frontend concurrent connections)  | numeric     |
|   | %fi  | frontend_ip              (accepting address)  | IP          |
|   | %fp  | frontend_port            (accepting address)  | numeric     |
|   | %ft  | frontend_name_transport ('~' suffix for SSL)  | string      |
|   | %lc  | frontend_log_counter                          | numeric     |
|   | %hr  | captured_request_headers default style        | string      |
|   | %hrl | captured_request_headers CLF style            | string list |
|   | %hs  | captured_response_headers default style       | string      |
|   | %hsl | captured_response_headers CLF style           | string list |
|   | %ms  | accept date milliseconds (left-padded with 0) | numeric     |
|   | %pid | PID                                           | numeric     |
| H | %r   | http_request                                  | string      |
|   | %rc  | retries                                       | numeric     |
|   | %rt  | request_counter (HTTP req or TCP session)     | numeric     |
|   | %s   | server_name                                   | string      |
|   | %sc  | srv_conn     (server concurrent connections)  | numeric     |
|   | %si  | server_IP                   (target address)  | IP          |
|   | %sp  | server_port                 (target address)  | numeric     |
|   | %sq  | srv_queue                                     | numeric     |
| S | %sslc| ssl_ciphers (ex: AES-SHA)                     | string      |
| S | %sslv| ssl_version (ex: TLSv1)                       | string      |
|   | %t   | date_time      (with millisecond resolution)  | date        |
| H | %tr  | date_time of HTTP request                     | date        |
| H | %trg | gmt_date_time of start of HTTP request        | date        |
| H | %trl | local_date_time of start of HTTP request      | date        |
|   | %ts  | termination_state                             | string      |
| H | %tsc | termination_state with cookie status          | string      |
+---+------+-----------------------------------------------+-------------+
R = Restrictions : H = mode http only ; S = SSL only


The custom log-format we built to fulfill our needs is this:

log-format %ci:%cp\ %H\ [%t]\ [%f\ %fi:%fp]\ [%b/%s\ %si:%sp]\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%sq/%bq
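For reference, here is a rough sketch of where such a line can live in haproxy.cfg – the log target, mode and section layout below are just assumptions, adapt them to your own setup:

global
    log /dev/log local0

defaults
    log     global
    mode    tcp
    option  dontlognull
    log-format %ci:%cp\ %H\ [%t]\ [%f\ %fi:%fp]\ [%b/%s\ %si:%sp]\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%sq/%bq

The same log-format line can instead be placed only inside a specific frontend / listen section if you don't want it as a global default.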


Once you place the log-format as a default for all haproxy frontends / backends, or only for certain custom defined ones, the output you will get when tailing the log is:

# tail -f /var/log/haproxy.log

Apr  5 21:47:19  10.42.73.83:23262 haproxy-fqdn-hostname.com [05/Apr/2024:21:46:23.879] [ft_FRONTEND_NAME 10.46.108.6:61310] [bk_BACKEND_NAME/bk_appserv3 10.75.226.88:61310] 1/0/55250 55 sD 4/2/1/0/0/0
Apr  5 21:48:14  10.42.73.83:57506 haproxy-fqdn-hostname.com [05/Apr/2024:21:47:18.925] [ft_FRONTEND_NAME 10.46.108.6:61310] [bk_BACKEND_NAME//bk_appserv1 10.35.242.134:61310] 1/0/55236 55 sD 4/2/1/0/0/0
Apr  5 21:49:09  10.42.73.83:46520 haproxy-fqdn-hostname.com [05/Apr/2024:21:48:13.956] [ft_FRONTEND_NAME 10.46.108.6:61310] [bk_BACKEND_NAME//bk_appserv2 10.75.226.89:61310] 1/0/55209 55 sD 4/2/1/0/0/0


If you don't mind the extra space and the logs being filled with more naming, another variant of the above log-format, which makes it even more readable even for the most novice sysadmin or programmer, would look like this:

log-format [%t]\ %H\ [IN_IP]\ %ci:%cp\ [FT_NAME]\ %f:%fp\ [FT_IP]\ %fi:%fp\ [BK_NAME]\ [%b/%s:%sp]\ [BK_IP]\ %si:%sp\ [TIME_WAIT]\ {%Tw/%Tc/%Tt}\ [CONN_STATE]\ {%B\ %ts}\ [STATUS]\ [%ac/%fc/%bc/%sc/%sq/%bq]

Once you apply the config, test the haproxy.cfg to make sure no syntax errors slipped in during the copy / paste from this page:

haproxy-serv:~# haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid


Next, restart haproxy gracefully

haproxy-serv:~# /usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)


Once you reload haproxy gracefully without losing the established connections (instead of restarting it completely via systemctl restart haproxy), the log output will look something like this:

 

2024-04-05T21:46:03+02:00 localhost haproxy[1897731]: 193.200.198.195:50714 haproxy-fqdn-hostname.com [05/Apr/2024:21:46:03.012] [FrotnendProd 10.55.0.20:27800] [BackendProd/<NOSRV> -:-] -1/-1/0 0 — 4/1/0/0/0/0
2024-04-05T21:46:03+02:00 localhost haproxy[1897731]: 193.100.193.189:54290 haproxy-fqdn-hostname.com
[05/Apr/2024:21:46:03.056] [FrotnendProd 10.55.0.20:27900] [BackendProd/<NOSRV> -:-] -1/-1/0 0 — 4/4/3/0/0/0
2024-04-05T21:46:03+02:00 localhost haproxy[1897731]: 193.100.193.190:26778 haproxy-fqdn-hostname.com
[05/Apr/2024:21:46:03.134] [FrotnendProd 10.55.0.20:27900] [BackendProd/tsefas02s 10.35.242.134:27900] 1/-1/0 0 CC 4/4/3/0/0/0

Note that in that log the 'localhost haproxy[pid]' part is written by rsyslog; you can filter it out by modifying the rsyslogd configuration.
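For example, one possible rsyslog snippet (RainerScript syntax; the file name and log path here are just assumptions) that writes only the timestamp and the haproxy message part, dropping the 'localhost haproxy[pid]' prefix, might look like:

# /etc/rsyslog.d/49-haproxy.conf
template(name="HaproxyMsgOnly" type="string" string="%timegenerated% %msg%\n")
if $programname == 'haproxy' then {
    action(type="omfile" file="/var/log/haproxy.log" template="HaproxyMsgOnly")
    stop
}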

The only problem with this log-format is that not everyone wants so much repeating information about which field is what, but I personally liked this one as well, because even though it occupies much more space, it makes the log much easier to process with perl or python scripting for data visualization, and very useful for programs doing data or even "big data" analysis.
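To illustrate that, below is a small Python sketch (the regular expression and field names are my own assumptions, written against the first log-format shown above, not anything haproxy ships) that could serve as a starting point for such processing:

import re

# rough regex for the first custom log-format above; field names are assumptions
LOG_RE = re.compile(
    r'(?P<client_ip>\S+):(?P<client_port>\d+)\s+(?P<hostname>\S+)\s+'
    r'\[(?P<accept_date>[^\]]+)\]\s+\[(?P<frontend>\S+)\s+(?P<fe_addr>\S+)\]\s+'
    r'\[(?P<backend>\S+)\s+(?P<srv_addr>\S+)\]\s+'
    r'(?P<Tw>-?\d+)/(?P<Tc>-?\d+)/(?P<Tt>-?\d+)\s+(?P<bytes>\d+)\s+'
    r'(?P<term_state>\S+)\s+(?P<counters>\S+)'
)

with open('/var/log/haproxy.log') as logfile:
    for line in logfile:
        match = LOG_RE.search(line)
        if match:
            # print a few interesting fields per request
            print(match.group('client_ip'), match.group('term_state'), match.group('Tt'))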

Speeding up Apache through apache2-mpm-worker and php5-cgi on Debian / How to improve Apache performance and decrease server memory consumption

Friday, March 18th, 2011

speeding up apache through apache2-mpm-worker and php5-cgi on Debian Linux / how to improve apache performance and decrease server response time
By default most Apache Linux servers on the Internet are configured to use the MPM prefork Apache module.
Historically the prefork module is the predecessor of the worker module, and is therefore believed to be way more tested and reliable if you need a critically reliable webserver configuration.

However from my experience so far with the Apache MPM worker I can boldly say that many of the rumors concerning the unreliability of apache2-mpm-worker are just myths.

The old way Apache handles connections, e.g. mod prefork, is the well known forking model that a large number of daemons on Linux and BSD still rely on.
When prefork is used by Apache, every new TCP/IP connection arriving at your Linux server on the Apache configured port, let's say port 80, is served by having the parent (mother) Apache process fork a new copy of itself in order to serve the new request.
Thus when using prefork, Apache needs to fork a new process (if it doesn't already have an idle forked one waiting for connections) to serve the HTTP request of the new client; after the request of the client is completed, the newly forked Apache usually dies (even though this again depends on the way the Apache server is configured via its configuration – apache2.conf / httpd.conf etc.).

Now you can imagine how slow and memory consuming it is that the parent Apache process spawns new processes and kills old ones all the time in order to fulfill the client requests.

Now just to compare, the Apache MPM worker does not use the old forking model, but relies on a few Apache processes, each serving many requests with threads, without constantly being destroyed and recreated like with the prefork module.
This saves operations and system resources; threaded programming has already proven to be a more efficient way to handle tasks and is heavily adopted in GUI programming, for instance in Microsoft Windows, Mac OS X, Linux GNOME, KDE etc.

There is plenty of information and statistical data online comparing Apache running with the prefork and worker modules respectively.
As the goal of this article is not to go into depth with this kind of information, I will not say more on it, but let you explore a bit more online in case you're interested.

The purpose of this article is to explain in short how to substitute Apache2-MPM-Prefork and how your server performance could benefit from the use of Apache2-MPM-Worker.
On Debian the default Apache processing module in Apache 1.3.x, 2.0.x and 2.2.x is prefork, thus installing apache2-mpm-worker is not "the standard way" to install Apache.

Deciding to switch from the default Debian apache2-mpm-prefork to apache2-mpm-worker is quite a serious and responsible decision and in some cases might cause troubles; if you have decided to follow this article, be sure to consider all the possible negative consequences of switching to the Apache worker!

Now, after having said a bunch of info which might not be necessary for the experienced system admin, I'll continue with the steps to install apache2-mpm-worker.

1. Install the apache2-mpm-worker

debian:~# apt-get install apache2-mpm-worker php5-cgi
Reading state information... Done
The following packages were automatically installed and are no longer required:
The following packages will be REMOVED: apache2-mpm-prefork libapache2-mod-php5
The following NEW packages will be installed: apache2-mpm-worker
0 upgraded, 1 newly installed, 2 to remove and 46 not upgraded.
Need to get 0B/259kB of archives.
After this operation, 6193kB disk space will be freed.

As you can notice in the confirmation text above, apt will remove the apache2-mpm-prefork and libapache2-mod-php5 packages before it can proceed to install apache2-mpm-worker.

You might ask yourself: if I remove my installed libapache2-mod-php5, how will I be able to use my Apache with my PHP based websites? And why does the apt package manager require libapache2-mod-php5 to be removed?
The explanation is simple: libapache2-mod-php5 (mod_php) is not thread safe; in other words PHP scripts and many PHP extensions may not work correctly inside the threaded worker MPM and will probably lead to PHP and Apache crashes.
Therefore in order to install the Apache worker MPM it's necessary that no libapache2-mod-php5 is present on the system.
In order to have PHP on the server again you will have to use the php5-cgi deb package; this is the reason that in the above apt-get command I'm also requesting apt to install the php5-cgi package next to apache2-mpm-worker.

2. Enable the cgi and cgid apache modules

debian:~# a2enmod cgi
debian:~# a2enmod cgid

3. Activate the mod_actions apache modules

debian:~# cd /etc/apache2/mods-enabled
debian:~# ln -sf ../mods-available/actions.load
debian:~# ln -sf ../mods-available/actions.conf
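(On most Debian / Ubuntu systems the same can presumably be achieved with the a2enmod helper instead of the manual symlinks:)

debian:~# a2enmod actions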

4. Add configuration options in order to enable mod worker to use the newly installed php5-cgi

Edit /etc/apache2/mods-available/actions.conf with vim, mcedit or nano (i.e. your editor of choice) and add inside:

<IfModule mod_actions.c>
Action application/x-httpd-php /cgi-bin/php5
</IfModule>
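Note that the /cgi-bin/php5 path referenced by the Action directive has to actually resolve to the php5 CGI binary. On Debian the php5-cgi package traditionally ships /usr/lib/cgi-bin/php5, so a matching configuration sketch (the paths and directives below are assumptions for an Apache 2.2 era setup – double check them on your system) would be something like:

ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
    Options +ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
# map .php files to the MIME type handled by the Action directive above
AddType application/x-httpd-php .php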

After completing all the above instructions, you might also need to edit your /etc/apache2/apache2.conf to tune how your Apache MPM worker will serve client requests.
Configuring the <IfModule mpm_worker_module> section in apache2.conf is necessary to optimize your newly installed mpm_worker module for performance.

5. Configure the mpm_worker_module in apache2.conf

One example configuration for the mpm worker is:

<IfModule mpm_worker_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>

Consider the fact that this configuration is just a sample and is by no means tuned for serving high load Apache servers; you will need to further play with the values to get good results on your server.

6. Check that all is fine with your Apache configurations and no syntax errors are encountered

debian:~# /usr/sbin/apache2ctl -t
Syntax OK

If you get something different from Syntax OK, track down the error and fix it before you restart the Apache server.

7. Now restart the Apache server

debian:~# /etc/init.d/apache2 restart

All should run fine and hopefully your PHP scripts will now be interpreted just fine through php5-cgi instead of libapache2-mod-php5.
Using /usr/bin/php5-cgi will increase your server CPU load by some percentage, but on the other hand will drastically decrease the webserver memory consumption.
That's quite logical, because libapache2-mod-php5 is loaded once when the Apache server starts, whereas a new instance of /usr/bin/php5-cgi is invoked for each Apache request served via the worker module.

There is one serious security flaw coming with php5-cgi: a DoS against a server processing scripts through php5-cgi is much easier to achieve.
An example of a denial of service attack which could affect a website running with mod worker and php5-cgi could be simulated by a simple user with a web browser who holds down the F5 or Ctrl + R page refresh keys.
In that case, with php5-cgi in use, the CPU load will rise drastically; one possible solution to this denial of service issue is installing and using libapache2-mod-evasive like so:

8. Install libapache2-mod-evasive

debian:~# apt-get install libapache2-mod-evasive
The Apache mod_evasive module is a nice Apache module to minimize HTTP DoS and brute force attacks.
Now with the worker MPM and php5-cgi, your Apache should start serving requests more efficiently than before.
For performance reasons some might even want to try out FastCGI with the worker to boost Apache performance, but as I have never tried that I can't say how reliable a mod worker with FastCGI setup would be.

N.B.! If you have some specific PHP configuration in /etc/php5/apache2/php.ini you will have to set it also in /etc/php5/cgi/php.ini before you proceed with the above instructions, otherwise your PHP scripts might not work as expected.

The worker MPM is also capable of working with the standard mod_php5 Apache module, but if you decide to go this route you will have to recompile your PHP manually from source, as in Debian this option is not possible with the default php library.
This installation worked fine on Debian Lenny, but I suppose the same installation should work fine on Debian Squeeze as well as Debian testing/unstable.
Feedback on the afore-described mod worker installation is very welcome!

How to enable output compression (gzipfile content compression) in nginx webserver

Friday, April 8th, 2011

I have recently installed and configured a Debian Linux server with nginx. Since then I’ve been testing different ways to optimize the nginx performance.

In my nginx quest, one of the most crucial settings which dramatically improved the end client performance was enabling the so called output compression, which on Apache based servers is also known as content gzip compression.
In Apache webservers the content gzip compression is provided by a server module called mod_deflate.

The nginx output compression settings save a lot of bandwidth and, though they add a bit more load to the server, the download time of plain text files like html, xml, js and css reduces drastically as they’re streamed to the browser in gzip compressed format.
This little improvement in download speed also impacts the overall end user browser experience and therefore improves the perceived browsing speed of websites.

If you already have experience with nginx you know it is a bit fastidious and you have to be very careful with its configuration; however thankfully enabling the gzip compression was actually easier than I thought.

Here is what I added in my nginx config to enable output compression:

## Compression
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 9;
gzip_http_version 1.1;
gzip_min_length 0;
gzip_vary on;
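One more directive worth considering here (gzip_types is a standard nginx directive, the exact list of types below is just my preference) is gzip_types, because by default nginx compresses only text/html responses:

gzip_types text/plain text/css text/xml application/xml application/x-javascript application/javascript;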

An important note here is that you need to add this code inside the nginx configuration block starting with:

http {
....
## Compression
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 9;
gzip_http_version 1.1;
gzip_min_length 0;
gzip_vary on;

In order to load the gzip output compression, as a next step you need to restart the nginx server, either via its init script if you use one, or by killing the old nginx server instances and starting the nginx server binary again.
I personally use an init script, so restarting nginx for me is done via the cmd:

debian:~# /etc/init.d/nginx restart
Restarting nginx: nginx.

Now to test if the output gzip compression is enabled for nginx, you can simply use telnet:

hipo@linux:~$ telnet your-nginx-webserver-domain.com 80
Escape character is '^]'.

After the Escape character is '^]' line appears on your screen, type in the blank space:

HEAD / HTTP/1.0

and press enter twice.
The output which follows should look like:


HTTP/1.1 200 OK
Server: nginx
Date: Fri, 08 Apr 2011 12:04:43 GMT
Content-Type: text/html
Content-Length: 13
Last-Modified: Tue, 22 Mar 2011 15:04:26 GMT
Connection: close
Vary: Accept-Encoding
Expires: Fri, 15 Apr 2011 12:04:43 GMT
Cache-Control: max-age=604800
Accept-Ranges: bytes

The whole transaction with telnet command issued and the nginx webserver output should look like so:

hipo@linux:~$ telnet your-nginx-webserver-domain.com 80
Trying xxx.xxx.xxx.xxx...
Connected to your-nginx-webserver-domain.com.
Escape character is '^]'.
HEAD / HTTP/1.0

HTTP/1.1 200 OK
Server: nginx
Date: Fri, 08 Apr 2011 12:04:43 GMT
Content-Type: text/html
Content-Length: 13
Last-Modified: Tue, 22 Mar 2011 15:04:26 GMT
Connection: close
Vary: Accept-Encoding
Expires: Fri, 15 Apr 2011 12:04:43 GMT
Cache-Control: max-age=604800
Accept-Ranges: bytes

The important message in the returned output which confirms your nginx output compression is properly configured is:

Vary: Accept-Encoding

If this header is returned by your nginx server, it means your nginx will now distribute its content to clients in compressed format, and apart from the browsing boost, a lot of server and client bandwidth will be saved.
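Another quick way to double check it (assuming you have curl installed; replace the domain with your own) is to request a page with an Accept-Encoding: gzip header and look for Content-Encoding: gzip in the returned headers:

hipo@linux:~$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://your-nginx-webserver-domain.com/ | grep -i 'content-encoding'

If compression really kicks in, the command should print a line like Content-Encoding: gzip.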

mod_rewrite redirect rule 80 to 443 on Apache webserver

Wednesday, April 2nd, 2014

A classic sysadmin scenario is to configure a new Apache webserver with the requirement to have an SSL certificate installed and working on port 443, and all requests coming on port 80 redirected to https://.
On Apache this is done with a simple mod_rewrite rule:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Before applying the rule, don't forget to have Apache mod_rewrite enabled; it is usually not enabled by default on most Linux distributions (see the example right below).
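On Debian / Ubuntu enabling it typically looks like this (assuming the a2enmod helper is present):

debian:~# a2enmod rewrite
debian:~# /etc/init.d/apache2 restart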
On shared hostings, if you don't have access to directly modify the Apache configuration but have .htaccess enabled, you can also add the above rules to .htaccess.

Add this to the respective VirtualHost configuration, restart Apache, and that's it. If after configuring it, for some reason it is not working, debug mod_rewrite issues by enabling mod_rewrite's rewrite.log.

Another useful Apache mod_rewrite redirect rule is to redirect a single landing page from HTTP to HTTPS:

RewriteEngine On
RewriteRule ^apache-redirect-http-to-https.html$ https://www.site-url.com/apache-redirect-http-to-https.html [R=301,L]

!Note! that in cases where performance is a key requirement for a website, it might be better to use the standard way to redirect HTTP to HTTPS in Apache through the plain Redirect directive:

ServerName www.site-url.com
Redirect / https://www.site-url.com/

To learn more on mod_rewrite redirecting, check out the official documentation on Apache's website.

Fix to “413 Request Entity Too Large” error in Nginx webserver and what causes it

Friday, November 14th, 2014

nginx_413_request_entity_too_large-fix

If you administer an NGINX caching server serving static file content and redirecting some requests to Apache, you may end up with errors when uploading big files (using the HTTP PUT method), even though in Apache's PHP configuration upload_max_filesize is set to a relatively high number, e.g. upload_max_filesize = 60M. That is because the request size limit is enforced by Nginx itself before the request ever reaches Apache / PHP.

Here is what happens during the handshake of the web-browser -> server interaction till a status is returned:
 

Web browser or Webcrawler robot goes through the following phases while talking to Web server:

 

1. Obtain an IP address from the DNS name of the site (based on the site URL without the leading 'http://').
This is provided by the domain name servers (DNSs) configured for the PC.
2. Open an IP socket connection to that IP address.
3. Write an HTTP data stream through that socket.
4. Receive an HTTP data stream back from the Web server in response.
This data stream contains status codes whose values are determined by the HTTP protocol
and indicate whether the request was successful.

 

In this case the oversized request body is recognized by Nginx and the 413 status is reported to the client 'web browser', causing the error.

The fix is to also increase the max file upload limit in NGINX; this is done via the:
 
client_max_body_size directive in /usr/local/nginx/nginx.conf (or /etc/nginx/nginx.conf if Nginx is installed from a package).
Here is an extract from nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

 

    server {
        client_max_body_size 60M;
        listen       80;
        server_name  localhost;

        # Main location
        location / {
            proxy_pass         http://127.0.0.1:8000/;
        }
    }
}
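Before restarting it is a good idea to verify the syntax of the changed config (assuming the nginx binary is in your PATH); the output should end with a 'test is successful' message:

nginx -t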


To make the new configuration active, restart Nginx:

/etc/init.d/nginx restart

Protect NGINX webserver with password – Nginx basic HTTP htaccess authentication

Tuesday, December 2nd, 2014

Protect-nginx-webserver-with-password_migrate_apache_password_protect_to_Nginx_basic_HTTP_htaccess_authentication
If you're migrating a website from the Apache webserver to Nginx to boost performance and better utilize your server's hardware, and the website's (VirtualHost's) sections have Apache .htaccess / .htpasswd password authentication implemented, you will have to migrate the Apache directory password protection to Nginx as well.

This is not a hard task, as Nginx's password protection uses the same password file format as Apache, and Nginx password protection files are generated with the standard htpasswd tool, part of the apache2-utils package (on Debian / Ubuntu servers) and httpd-tools on CentOS / Fedora / RHEL. If you're migrating the Apache websites to Nginx on a freshly installed server and the website developers are missing the htpasswd tool, install it depending on the Linux distro:

On Debian / Ubuntu deb based servers, install htpasswd with:

apt-get install --yes apache2-utils


On CentOS / Fedora and other RPM based servers:

 

yum -y install httpd-tools

Once installed, if you need to protect with a password a new site section that is still in development, do it as usual with htpasswd:
 

htpasswd -c /home/site/nginx-websitecom/.htpasswd admin


Note that if the .htpasswd file already exists and has other user records, in order not to overwrite them and to let all users in the file log in to the Nginx HTTP auth with their separate passwords, omit the -c flag:

htpasswd /var/www/nginx-website.com/.htpasswd elijah


Now open the config file of the Nginx Vhost and modify it to include a configuration like this:

 

server {
       listen 80;
       server_name www.nginx-website.com nginx-website.com;
       root /var/www/www.nginx-website.com/www;
[…]
       location /test {
                auth_basic "Restricted";
                auth_basic_user_file /var/www/www.nginx-website.com/.htpasswd;
       }
[…]
}


Do it for as many Vhosts as you have, and to make the new settings take effect restart Nginx:

/etc/init.d/nginx restart
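To quickly verify the protection works (a rough check with curl; the admin user and the /test location are the ones from the example above), an unauthenticated request should return HTTP 401 and an authenticated one the usual 200:

curl -I http://www.nginx-website.com/test/
curl -I -u admin:yourpassword http://www.nginx-website.com/test/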

Enjoy 🙂

Secure Apache webserver against basic Denial of Service attacks with mod_evasive on Debian Linux

Wednesday, September 7th, 2011

Secure Apache against basic Denial of Service attacks with mod evasive, how webserver DDoS works

One good module that helps in mitigating very basic Denial of Service attacks against the Apache 1.3.x, 2.0.x and 2.2.x webserver is mod_evasive.

I’ve noticed however that many Apache administrators out there forget to install it on new Apache installations, or some of them haven't even heard of it.
Therefore I wrote this small article to create some more awareness of the existence of this anti DoS module and hopefully through it help some of my readers to strengthen their server security.

Here is a description on what exactly mod-evasive module does:

debian:~# apt-cache show libapache2-mod-evasive | grep -i description -A 7

Description: evasive module to minimize HTTP DoS or brute force attacks
mod_evasive is an evasive maneuvers module for Apache to provide some
protection in the event of an HTTP DoS or DDoS attack or brute force attack.
.
It is also designed to be a detection tool, and can be easily configured to
talk to ipchains, firewalls, routers, and etcetera.
.
This module only works on Apache 2.x servers

How does mod-evasive anti DoS module works?

Detection is performed by creating an internal dynamic hash table of IP addresses and URIs, and denying any single IP address which matches these criteria:

  • Requesting the same page more than N times per second
  • Making more than N (number) concurrent requests on the same child per second
  • Making requests to Apache while the IP is temporarily blacklisted (on a blocking list – the IP is removed from the blacklist after a time period)

This anti DDoS and DoS attack protection decreases the possibility that Apache gets DoSed by an amateur DoS attack, however it still leaves the door open for attackers who control a large botnet of zombie hosts (let’s say 10000) which will simultaneously request a page from the Apache server. The result in a scenario with an infected botnet running a DoS tool will in most of the cases be a quick exhaustion of the available system resources (bandwidth, server memory and processor consumption).
Thus mod-evasive grants DoS and DDoS security only on a basic level, where someone tries to DoS a webserver while only possessing access to a few hosts.
mod-evasive is however in many cases a sufficient measure to protect against DoS, and does a great job if combined with the Apache mod-security module discussed in one of my previous blog posts – Tightening PHP Security on Debian with Apache 2.2 with ModSecurity2

1. Install mod-evasive

Installing mod-evasive on Debian Lenny, Squeeze and even Wheezy is done in identical way straight using apt-get:

deiban:~# apt-get install libapache2-mod-evasive
...

2. Enable mod-evasive in Apache

debian:~# ln -sf /etc/apache2/mods-available/mod-evasive.load /etc/apache2/mods-enabled/mod-evasive.load

3. Configure the way mod-evasive deals with potential DoS attacks

Open /etc/apache2/apache2.conf, go down to the end of the file and paste inside the below mod-evasive configuration directives:

<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 30
DOSSiteCount 40
DOSPageInterval 2
DOSSiteInterval 1
DOSBlockingPeriod 120
#DOSEmailNotify hipo@mymailserver.com
</IfModule>

If the above configuration criteria are matched, mod-evasive instructs Apache to return a 403 (Forbidden by default) error page, which conserves bandwidth and system resources in case of a DoS attack attempt, especially if the DoS attack targets multiple requests to, let’s say, a large downloadable file or a PHP/Perl/Python script which does a lot of computation and thus consumes a large portion of the server CPU time.

The meaning of the above mod-evasive config vars is as follows:

DOSHashTableSize 3097 – Increasing the DOSHashTableSize will increase the performance of mod-evasive but will consume more server memory; on a busy webserver this value should be increased.
DOSPageCount 30 – Add an IP to the evasive temporary blacklist if it requests the same page more than 30 times within the DOSPageInterval.
DOSSiteCount 40 – Add an IP to the blacklist if it makes more than 40 requests to the site within the DOSSiteInterval of 1 second.
DOSBlockingPeriod 120 – The time in seconds for which an IP stays blacklisted (e.g. gets returned the 403 Forbidden page); this setting instructs mod-evasive to block every intruder matching DOSPageCount 30 or DOSSiteCount 40 for 2 minutes.
DOSPageInterval 2 – The interval of 2 seconds within which DOSPageCount can be reached.
DOSSiteInterval 1 – The interval of 1 second within which, if DOSSiteCount of 40 is matched, the IP will be blacklisted for the configured period of time.

mod-evasive also supports IP whitelisting with its DOSWhitelist option, handy if for example you need to allow access to a single webpage from an office environment consisting of a hundred computers behind a NAT.
Another handy configuration option is the module's capability to notify you if a DoS is originating from a number of IP addresses, using the DOSEmailNotify option.
The DOSSystemCommand option, in combination with iptables, can be configured to filter out any IP addresses found to be matching the configured mod-evasive rules, for example as in the sketch below.
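A rough sketch of such a setup (the iptables path and the sudo permissions are assumptions you must verify for your system; mod-evasive substitutes %s with the offending IP address):

DOSSystemCommand "sudo /sbin/iptables -I INPUT -s %s -j DROP"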
The module also supports custom logging; if you want to keep track of IPs which are found to be attempting a DoS attack against the server, place in the above shown configuration DOSLogDir "/var/log/apache2/evasive" and create the /var/log/apache2/evasive directory with:

debian:~# mkdir /var/log/apache2/evasive

I decided not to log mod-evasive DoS IP matches as this would just add some extra load on the server; however when debugging some mistakenly blacklisted IPs, logging is surely a must.

4. Restart Apache to load up mod-evasive

debian:~# /etc/init.d/apache2 restart
...

Finally, a very good read which sheds more light on how exactly mod-evasive works, along with some extra module configuration options, is the documentation bundled with the deb package; to read it, issue:

debian:~# zless /usr/share/doc/libapache2-mod-evasive/README.gz

How to check if your Linux WebServer is under a DoS attack

Friday, July 22nd, 2011

There are a few commands I usually use to track whether my server is possibly under a Denial of Service attack or under a Distributed Denial of Service attack.

Sys Admins who still have not experienced the terrible times of being under a DoS attack are happy people for sure …

1. How to Detect a TCP/IP Denial of Service Attack

These are the commands I use to find out if a loaded Linux server is under a heavy DoS attack; one of the most essential ones is of course netstat.
To check if a server is under a DoS attack with netstat, it’s common to use:

linux:~# netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n|wc -l

If the output of the above command returns a result like 2000 or 3000 connections, then obviously it’s very likely the server is under a DoS attack.

To check all the IPs currently connected to the Apache webserver and get brief statistics on the number of times each of the IPs connected to my server, I use the cmd:

linux:~# netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
221 80.143.207.107
233 145.53.103.70
540 82.176.164.36

As you can see from the above command output, the IP 80.143.207.107 is either connected 221 times to the server or is in a state of connecting or disconnecting to the node.

Another possible way to check if a Linux or BSD server is under a Distributed DoS is with the list open files command, lsof.
Here is how lsof can be used to list the approximate number of ESTABLISHED connections to port 80.

linux:~# lsof -i TCP:80
litespeed 241931 nobody 17u IPv4 18372655 TCP server.www.pc-freak.net:http (LISTEN)
litespeed 241931 nobody 25u IPv4 18372659 TCP 85.17.159.89:http (LISTEN)
litespeed 241931 nobody 30u IPv4 29149647 TCP server.www.pc-freak.net:http->83.101.6.41:54565 (ESTABLISHED)
litespeed 241931 nobody 33u IPv4 18372647 TCP 85.17.159.93:http (LISTEN)
litespeed 241931 nobody 34u IPv4 29137514 TCP server.www.pc-freak.net:http->83.101.6.41:50885 (ESTABLISHED)
litespeed 241931 nobody 35u IPv4 29137831 TCP server.www.pc-freak.net:http->83.101.6.41:52312 (ESTABLISHED)
litespeed 241931 nobody 37w IPv4 29132085 TCP server.www.pc-freak.net:http->83.101.6.41:50000 (ESTABLISHED)

Another way to get an approximate number of established connections to let’s say Apache or LiteSpeed webserver with lsof can be achieved like so:

linux:~# lsof -i TCP:80 |wc -l
2100

I find it handy to keep track of the above lsof command output every few secs with GNU watch, like so:

linux:~# watch "lsof -i TCP:80"

2. How to Detect if a Linux server is under an ICMP SMURF attack

The ICMP attack is still heavily used, even though it’s already old fashioned and there are plenty of other Denial of Service attack types; one of the quickest ways to find out if a server is under an ICMP attack is through the command:

server:~# while :; do netstat -s| grep -i icmp | egrep 'received|sent' ; sleep 1; done
120026 ICMP messages received
1769507 ICMP messages sent
120026 ICMP messages received
1769507 ICMP messages sent

As you can see, the above one liner in a loop checks for sent and received ICMP packets every second; if there is a big difference between the numbers returned every few secs by the above command, then obviously the server is under an ICMP attack and needs to be hardened.

3. How to detect a SYN flood with netstat

linux:~# netstat -nap | grep SYN | wc -l
1032

1032 SYNs is quite a high number, and unless the server is serving, let’s say, 5000 user requests per second, as the above output reveals it’s very likely the server is under attack; if however I get results like 100/200 SYNs, then obviously there is no SYN flood targeting the machine 😉

Two other netstat command applications which help determine if a server is under a Denial of Service attack are:

server:~# netstat -tuna |wc -l
10012

and

server:~# netstat -tun |wc -l
9606

Of course there are also some other ways to count the IPs that sent SYN packets to the webserver, for example:

server:~# netstat -n | grep :80 | grep SYN |wc -l
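To also see which IPs those SYN packets come from, the same one liner can be extended like so (just a variation of the commands already shown above):

server:~# netstat -n | grep :80 | grep SYN | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n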

In many cases, of course, top or htop can be useful to find out if many processes of a certain type are hanging around.

4. Checking if a UDP Denial of Service attack is targeting the server

server:~# netstat -nap | grep 'udp' | awk '{print $5}' | cut -d: -f1 | sort |uniq -c |sort -n

The above command will list information concerning possible UDP DoS.

The command can easily be adjusted to also check for both possible TCP and UDP denial of service, like so:

server:~# netstat -nap | egrep 'tcp|udp' | awk '{print $5}' | cut -d: -f1 | sort |uniq -c |sort -n
104 109.161.198.86
115 112.197.147.216
129 212.10.160.148
227 201.13.27.137
3148 91.121.85.220

If you get an IP that has too many connections to the server and is almost certainly a DoS host, you will probably want to filter this IP out.

You can use the /sbin/route command to filter it out; using route will probably be a better choice than iptables, as iptables would load the CPU more than simply cutting the route to the server.

Here is how I block hosts from being able to route packets to my server:

route add -host 110.92.0.55 reject

The above command would null route the access of IP 110.92.0.55 to my server.

Later on, to look up a null routed IP on my host, I use:

route -n |grep -i 110.92.0.55

Well, hopefully this should be enough to give a brief overview of how one can dig into his server and find out if it is under a Distributed Denial of Service attack; I hope it’s helpful to somebody out there.
Cheers 😉

Redirect http URL folder to https e.g. redirect (http:// to https://) with mod_rewrite – redirect port 80 to port 443 Rewrite rule

Saturday, July 17th, 2010

There is a quick way to achieve a full URL redirect from a normal unencrypted HTTP request to SSL encrypted HTTPS.

This is achieved through the Redirect / RedirectMatch directives, which are part of Apache's mod_alias module.

For instance let’s say we’d like to redirect http://www.pc-freak.net/blog to https://www.pc-freak.net/blog.
We simply put in our .htaccess file the following rule:

Redirect permanent /blog https://www.pc-freak.net/blog

Of course this rule assumes that the current working directory where the .htaccess file is stored is the main domain directory, e.g. /.
However this kind of redirect is rather inflexible, so for more complex redirects you might want to take a look at the RedirectMatch directive.

For instance if you intend to redirect all URLs like http://www.pc-freak.net/blog/something/asdf/etc. (i.e. everything which includes the string /blog/) to their HTTPS equivalents https://www.pc-freak.net/blog/something/asdf/etc., then you might use an .htaccess RedirectMatch rule like:

RedirectMatch permanent ^/blog/(.*)$ https://www.pc-freak.net/blog/$1

Hopefully your redirect from the HTTP protocol to the HTTPS protocol should now be complete.
Also consider that the plain Redirect directive should be a bit faster at processing requests than RedirectMatch, which has to evaluate a regular expression for every request, so wherever a simple prefix redirect is enough I recommend using Redirect instead of RedirectMatch.

Httpwatch a must have web developer and web hosting sysadmin Firefox / Internet Explorer / IPad / IPhone add-on

Monday, December 16th, 2013


Today a colleague of mine referred me to a wonderful Mozilla Firefox (Windows / Mac) plugin called HttpWatch.

HttpWatch is an HTTP sniffer for IE, Firefox, iPhone & iPad
that provides new insights into how your website loads and performs.
The plugin is quite simple: it shows you all requests from your browser to the remote server with plenty of debug information (on the fly). You can see exactly the commands sent over the HTTP protocol as well as the request status response returned from the web server (i.e. 200, 300, 400). By knowing the status returned by the webserver you can debug odd problems with website authentication as well as oddities caused by proxies you don't know about. Besides showing the response returned on web requests, HttpWatch also shows the hand-shake of session ID variables. This makes the plugin precious for web developers and system administrators working in Web & Middleware (Linux / Windows based web hosting companies) etc.

HttpWatch is also a must have plugin for anyone looking to optimize a website for speed or to fix website response time bottleneck issues. The size of the plugin is quite big, as of time of writing about 18.2 Megabytes. HttpWatch comes with a separate app installer like any other stand alone Windows application. Unfortunately HttpWatch does not have a version for GNU / Linux. Linux users could use HTTPFox, Google Chrome Developer tools or Firebug.

Once you have the plugin installed, to check what's happening with a website accessed in Firefox select Tools -> HttpWatch. You will get a new window at the bottom of the screen with debug info.

httpwatch debugging accessed website information - web browser tool to optimize your website

Here is a list of some of the many things for which the plugin is useful:

  • Records HTTP
  • Decrypts HTTPS Traffic
  • Integrates with Internet Explorer & Firefox
  • Supports the SPDY Protocol in Firefox
  • Standalone Log File Viewer
  • Summary of Recorded Traffic
  • Grouping of Requests by Page
  • Collect Log Files From Your Customers
  • Request Level Time Charts
  • Real-Time Page Level Time Charts
  • Page Events
  • Detects Potential Problems
  • Customizable Data Columns
  • Data Tips
  • Automation Support
  • Advanced Filtering
  • Millisecond Level Timing
  • HTTP Compression
  • Network Level Performance Data
  • Extended Cookie Information
  • Shows Interaction with Browser Cache
  • Raw HTTP Streams
  • Export Data to CSV, HAR and XML
  • Import HAR files
  • Customizable CSV Export
  • Keyboard Accelerators
  • Access to Cached and Downloaded Content
  • Accurately Records Requests and Responses
  • Automatic Recording and Saving

Finally, HttpWatch is a plugin to have next to Yahoo's YSlow, Fasterfox, FireBug and Firefox's Web Developer plugin