Posts Tagged ‘google’

Change default browser to Internet Explorer

Wednesday, September 18th, 2013

Hardly any sane, security-aware person still uses Internet Explorer. Nevertheless, in many huge American companies it is still heavily used. If you install Firefox or Google Chrome and by mistake change the default browser to one of them, it is worth reverting the default browser back to Internet Explorer.

Here is how to do it;
Open Internet Explorer and navigate to:

Tools -> Internet Options -> Programs -> click on (Make Default)

Internet Explorer Internet Options screenshot on Windows 7

Change default browser to Internet Explorer – Make Default button screenshot

Done
 


How to install Google Chrome web browser on Debian 7 Wheezy Linux

Wednesday, September 4th, 2013

How to install Google Chrome web browser on Debian Gnu Linux Chrome and tux logo
Just installed Debian 7 Linux and wondered how to install the Google Chrome Browser on Debian Wheezy. It took me a while until I figured it out, as the direct download from Google (found by searching for Chrome Linux) had library requirements which are missing from the Debian 7 Wheezy repositories.
Here is how;

1. Add  Wheezy Backports and Google's Chrome Repository to /etc/apt/sources.list

echo 'deb http://ftp.debian.org/debian/ wheezy-backports main contrib non-free' >> /etc/apt/sources.list
echo 'deb http://dl.google.com/linux/chrome/deb/ stable main' >> /etc/apt/sources.list
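Before installing, it is also wise to import Google's Linux package signing key (so apt doesn't complain about unauthenticated packages) and refresh the package lists. The key URL below is Google's long-standing signing key location; adjust it if Google has moved it:

wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
apt-get update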

2. Install Google Chrome with apt-get

Here you have a few options: install Google Chrome Beta (if you consider yourself an innovator), install the unstable version – if you want the latest functionality and don't count on stability – or install the stable version.

a) Install Google Chrome Beta

apt-get install --yes google-chrome-beta

b) Install Google Chrome Unstable

apt-get install --yes google-chrome-unstable

c) Install Google Chrome Stable

apt-get install --yes google-chrome-stable

I personally prefer to always stick to stable, so I installed google-chrome-stable.

The only reason I need Google Chrome is for testing how websites look with it. Otherwise I don't recommend this browser to anyone who cares about his security. As Chrome is a product of Google, it almost certainly keeps complete surveillance on what you do on the net.

That's all, happy web development with Chrome on Debian 🙂
 


Linux: Generating Web statistics from Old Apache logs with Webalizer

Thursday, July 25th, 2013

Webalizer generate and visualize in web page statistics of old websites howto webalizer static html google analytics like statistics on linux logo

Often it happens that some old hosted websites were created in a way that no Web Statistics are available. Almost all websites created nowadays are already set up to use Google Analytics. Anyhow, every now and then I stumble upon hosting clients whose website creator didn't think about how to track how many hits or unique visitors the site gets in a month / year etc.
Thankfully this is solvable by the good "uncle" admin with the help of Webalizer (with a custom configuration) and a little bit of shell scripting.

The idea is simple: we take the old website logs located in, let's say,
/var/log/apache2/www.website-access.log*,
move the files to some custom created new directory, let's say /root/www.website-access-logs/, and then configure webalizer to read and generate statistics based on the logs in there.

For the purpose, we have to have webalizer installed on Linux system. In my case this is Debian GNU / Linux.

For those who hear of Webalizer for the first time, here is a short package description:

debian:~# apt-cache show webalizer|grep -i description -A 2

Description-en: web server log analysis program
The Webalizer was designed to scan web server log files in various formats
and produce usage statistics in HTML format for viewing through a browser.

If webalizer is not installed yet, install it with:

debian:~# apt-get install --yes webalizer
...
.....

Then make a backup copy of the original / default webalizer.conf (a very important step, especially if the server is already processing Apache log files with some custom webalizer configuration):

debian:~# cp -rpf /etc/webalizer/webalizer.conf /etc/webalizer/webalizer.conf.orig

The next step is to copy webalizer.conf under a name reminding of the website whose logs will be processed, e.g.:

debian:~# cp -rpf /etc/webalizer/webalizer.conf /etc/webalizer/www.website-webalizer.conf

In the www.website-webalizer.conf config file it is necessary to edit at least 4 variables:

LogFile /var/log/apache2/access.log
OutputDir /var/www
#Incremental no
ReportTitle Usage statistics for

Make sure that after modifying them the 4 vars read something like:
LogFile /root/www.website/access_log_merged_1.log
OutputDir /var/www/www.website
Incremental yes
ReportTitle Usage statistics for Your-Website-Host-Name.com

Next create /root/www.website and /var/www/www.website, then copy all files you need to process from /var/log/apache2/www.website* to /root/www.website:

debian:~# mkdir -p /root/www.website
debian:~# cp -rpf /var/log/apache2/www.website* /root/www.website

On Debian Apache uses logrotate to archive old log files, so all logs except www.website-access.log and www.website-access.log.1 are gzipped:

debian:~#  cd /root/www.website
debian:~# ls 
www.website-access.log.10.gz
www.website-access.log.11.gz
www.website-access.log.12.gz
www.website-access.log.13.gz
www.website-access.log.14.gz
www.website-access.log.15.gz
www.website-access.log.16.gz
www.website-access.log.17.gz
www.website-access.log.18.gz
www.website-access.log.19.gz
www.website-access.log.20.gz
...
 

Then we have to un-gzip the zipped logs and create one merged file from all of them, ready to be read later by Webalizer. To do so I use a tiny shell script like so:

for n in {52..1}; do gzip -d www.dobrudzhatour.net-access.log.$n.gz; done
for n in {52..1}; do cat www.dobrudzhatour.net-access.log.$n >> access_log_merged_1.log;
done

The first loop de-gzips and the second one creates a merged file from all of them with the name access_log_merged_1.log. The range of log files in my case is from www.website-access.log.1 to www.website-access.log.52, thus the loop counts back from 52 to 1.
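An equivalent one-pass approach (a small sketch using the www.website naming from the rest of this article, and assuming only .2 and higher are gzipped as in the logrotate listing above) is to let zcat decompress on the fly, which also keeps the original .gz files intact:

cd /root/www.website
# gzipped rotations, oldest (.52) first, appended in chronological order
for n in $(seq 52 -1 2); do zcat www.website-access.log.$n.gz >> access_log_merged_1.log; done
# the newest rotation is not gzipped
cat www.website-access.log.1 >> access_log_merged_1.log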

Once access_log_merged_1.log is ready we can run webalizer to process the file (incrementally) and generate all-time statistics for www.website:

debian:~# webalizer -c /etc/webalizer/www.website-webalizer.conf

Webalizer V2.01-10 (Linux 2.6.32-27-server) locale: en_US.UTF-8
Using logfile /root/www.website/access_log_merged_1.log (clf)
Using default GeoIP database
Creating output in /var/www/webalizer-www.website
Hostname for reports is 'debian'
Reading history file… webalizer.hist
Reading previous run data.. webalizer.current
333474 records (333474 ignored) in 37.50 seconds, 8892/sec

To check out just generated statistics open in browser:

http://yourserverhost/webalizer-www.website/

or

http://IP_Address/webalizer-www.website

You should see the statistics pop up; below is a screenshot with my currently generated stats:

Webalizer website access statistics screenshot Debian GNU Linux


Checking port security on Linux with Nmap – Just another Nmap examples tutorial

Sunday, June 9th, 2013

Scanning with nmap checking computer network security Linux FreeBSD Windows Nmap logo
Nmap (Network Mapper) is one of the most essential tools for checking server security. As a penetration testing instrument it is used both by SysAdmins / Crackers and by Security Specialists. It is also perfect for making periodic port audits and determining how well a server firewall is configured, or even while building one. Often with time Firewall rules grow bigger and bigger, and as a consequence there is a risk of loopholes in the FW rules; routine nmap host checks (i.e. run as a cronjob, logging port status on the server) are IMHO a good preventive measure.
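A minimal sketch of such a routine check as a cron job could look like the line below (hostname, schedule and log directory are placeholders – the directory must already exist, and % signs have to be escaped in crontab):

# /etc/crontab style entry – nightly connect scan with the output logged per date
0 3 * * * root /usr/bin/nmap -sT -oN /var/log/nmap-scans/yourhost-$(date +\%F).log yourhost.com >/dev/null 2>&1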

I first got introduced to Nmap in the early days of my career as an IT Geek and System Administrator, around the year 2000. Back then Computer Security and hacking culture was a common thing across IT geeks and ppl hanging in IRC 😉 This article will not say much new for those accustomed to Nmap, but I hope it will be of use for people newly introduced to Computer Security.


1. Checking host status with Nmap (Is remote scanned host up).

There are plenty of ways to check whether a remote host is reachable; ping is the classic, but it is not always relevant as many network admins decide to filter ping for security reasons. Of course one can do manual try-outs with telnet on common Services Ports (Apache, Mail, Squid, MySQL etc. / 80, 25, 8080, 3306), or even write an own prog to do so, but that's worthless as Nmap is already there with options for this, and its report in about 90% of cases is relevant:

To check whether host is up with Nmap:

pcfreak:~# nmap -sP google.com

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 11:58 EEST
Nmap scan report for google.com (173.194.39.227)
Host is up (0.013s latency).
Other addresses for google.com (not scanned): 173.194.39.238 173.194.39.231 173.194.39.226 173.194.39.232 173.194.39.230 173.194.39.233 173.194.39.228 173.194.39.225 173.194.39.229 173.194.39.224
rDNS record for 173.194.39.227: sof01s02-in-f3.1e100.net
Nmap done: 1 IP address (1 host up) scanned in 0.74 seconds

2. Port map with Quick remote host (connect) scan

The most classical way of scanning, since the early days of computing, is to attempt connecting to remote host ports by opening a connection via a new TCP or UDP protocol socket with C's connect(); function. Hence this is nmap's "default" way of scanning. Anyways it doesn't scan all 65535 possible ports when run with no extra arguments, but instead scans only the more popular, widely used ones.

noah:~# nmap -sT pc-freak.net

 

Starting Nmap 5.00 ( http://nmap.org ) at 2013-06-08 15:05 EEST
Stats: 0:00:01 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 2.00% done; ETC: 15:07 (0:01:38 remaining)
Stats: 0:00:02 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 24.40% done; ETC: 15:05 (0:00:09 remaining)
Stats: 0:00:03 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 77.25% done; ETC: 15:05 (0:00:01 remaining)
Interesting ports on pc-freak.net (83.228.93.76):
Not shown: 985 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
8022/tcp open   unknown
9001/tcp open   tor-orport

Nmap done: 1 IP address (1 host up) scanned in 4.69 seconds
 

During a scan, pressing Enter prints on screen statistics on what percentage of the scan is completed. In older Nmap releases this was not so; it is a very convenient feature, as some host scans (with specific firewalls) can run into anti port scan rules making the scan time ultra sluggish. If this is the case nmap can be run in a different scan mode; I'm gonna say a few words on that later.

3. Nmap – Scanning only selected ports of interest and  port range

a) Scanning only desired ports
When scanning a complete range of IPs from a C or B class network, it is handy to scan only the ports of interest, for example (Apache, SMTP, POP3, IMAP etc.).
Here is how to scan those 4;

noah:~# nmap -sT pc-freak.net -p 80,25,110,143

 

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 15:49 EEST
Stats: 0:00:00 elapsed; 0 hosts completed (0 up), 1 undergoing Ping Scan
Ping Scan Timing: About 100.00% done; ETC: 15:49 (0:00:00 remaining)
Stats: 0:00:00 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 100.00% done; ETC: 15:49 (0:00:00 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.20s latency).
PORT    STATE SERVICE
25/tcp  open  smtp
80/tcp  open  http
110/tcp open  pop3
143/tcp open  imap

Nmap done: 1 IP address (1 host up) scanned in 1.00 seconds

A list of all common network services with their port numbers is located in /etc/services.
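For instance, to quickly check which service corresponds to a port number (or vice versa), just grep that file:

noah:~# grep -w 143 /etc/services
noah:~# grep -iw imaps /etc/services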

b) Scanning a port range

By default nmap does not scan every single port, not even in the low port range 1-1024. According to the RFC standards this range is reserved for the standard, more often used and higher priority network services. Still, some people prefer to run services on non-standard, obscure port numbers within this range, and it is also common that "hacked" (cracked is the proper word here) servers have secretly installed Connect Shell or Connect-back shell services running in this port range. Thus scanning the whole 1-1024 range on administrated servers is a good idea (especially when there is suspicion of an intrusion).

noah:~# nmap -sT pc-freak.net -p 1-1024

 

 

Starting Nmap 5.00 ( http://nmap.org ) at 2013-06-08 15:47 EEST
Stats: 0:00:04 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 77.44% done; ETC: 15:47 (0:00:01 remaining)
Stats: 0:00:04 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 84.86% done; ETC: 15:47 (0:00:01 remaining)
Interesting ports on pc-freak.net (83.228.93.76):
Not shown: 1011 filtered ports
PORT    STATE  SERVICE
20/tcp  closed ftp-data
21/tcp  open   ftp
22/tcp  open   ssh
25/tcp  open   smtp
53/tcp  open   domain
80/tcp  open   http
110/tcp open   pop3
143/tcp open   imap
443/tcp closed https
465/tcp open   smtps
631/tcp closed ipp
993/tcp open   imaps
995/tcp closed pop3s

4. Scanning all possible ports to make complete node port audit

As I said prior, with no extra port arguments nmap scans only a number of pre-selected, commonly used ports. However it is always nice to run a complete port scan from time to time. Doing a complete port scan on a host can reveal unusual open ports left by cracker backdoors or, on Windows, ports opened by Viruses and Trojans. As the complete number of possible remote ports to attempt to connect to is 65536, such a scan is much slower and sometimes can take literally "ages". Scanning all ports on my home router in a local 100 MBit network with my notebook takes about 23 minutes. On remote hosts it can take from 30 / 40 minutes to many hours – depending on the firewall type of the remote scanned host. Also, by scanning all ports there is a risk the remote host adds you to its FW reject rules, if it is running some kind of automated Intrusion Detection (IDS) software like Snort or AIDE.
To run a complete port scan with nmap;

noah:~# nmap -sT pc-freak.net -p 0-65535
 

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 22:28 EEST
Stats: 0:00:01 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 0.03% done
Stats: 0:00:01 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 0.05% done
Stats: 0:06:35 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 31.23% done; ETC: 22:50 (0:14:28 remaining)
Stats: 0:06:35 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 31.24% done; ETC: 22:50 (0:14:27 remaining)
Stats: 0:08:21 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.41% done; ETC: 22:51 (0:13:57 remaining)
Stats: 0:08:21 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.43% done; ETC: 22:51 (0:13:56 remaining)
Stats: 0:08:21 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.46% done; ETC: 22:51 (0:13:56 remaining)
Stats: 0:08:22 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.50% done; ETC: 22:51 (0:13:55 remaining)
Stats: 0:08:22 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.53% done; ETC: 22:51 (0:13:56 remaining)
Stats: 0:08:28 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 37.96% done; ETC: 22:51 (0:13:50 remaining)
Stats: 0:11:55 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 53.22% done; ETC: 22:51 (0:10:28 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0023s latency).
Not shown: 65518 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
2060/tcp open   unknown
2070/tcp open   ah-esp-encap
2207/tcp closed unknown
8022/tcp open   oa-system
9001/tcp open   tor-orport

Nmap done: 1 IP address (1 host up) scanned in 1367.73 seconds

5. Scanning a network range of IPs with NMAP

It is a common thing to scan a network range in a C class network, especially as we admins usually have to administrate a number of hosts running in a local network:

 

noah:~# nmap -sP '192.168.0.*'

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 22:29 EEST
Stats: 0:00:01 elapsed; 0 hosts completed (0 up), 256 undergoing Ping Scan
Ping Scan Timing: About 0.98% done
Stats: 0:00:09 elapsed; 0 hosts completed (0 up), 256 undergoing Ping Scan
Parallel DNS resolution of 256 hosts. Timing: About 0.00% done
Nmap scan report for 192.168.0.16
Host is up (0.00029s latency).
Nmap done: 256 IP addresses (1 host up) scanned in 9.87 seconds

You can also scan a class C network with:

noah:~# nmap -sP 192.168.1.0/24

6. Obtaining network services version numbers

Nmap is capable of digging out the version numbers of the remote applications bound to ports. The option to try to obtain version numbers is -sV (service/version detection).

noah:~# nmap -sV pc-freak.net

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 22:35 EEST
Stats: 0:00:05 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Service scan Timing: About 90.91% done; ETC: 22:37 (0:00:09 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0083s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE         VERSION
20/tcp   closed ftp-data
21/tcp   open   ftp             ProFTPD 1.3.3a
22/tcp   open   ssh             OpenSSH 5.5p1 Debian 6+squeeze3 (protocol 2.0)
25/tcp   open   smtp            qmail smtpd
53/tcp   open   domain?
80/tcp   open   http            Apache httpd
110/tcp  open   pop3            qmail pop3d
143/tcp  open   imap            Courier Imapd (released 2005)
443/tcp  closed https
465/tcp  open   ssl/smtp        qmail smtpd
631/tcp  closed ipp
993/tcp  open   tcpwrapped
995/tcp  closed pop3s
8022/tcp open   http            ShellInABox httpd
9001/tcp open   ssl/tor-orport?
Service Info: Host: mail.pc-freak.net; OSs: Unix, Linux; CPE: cpe:/o:linux:kernel

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 126.37 seconds

 

7. Checking remote server OS version

 noah:~# nmap -O pc-freak.net

 

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 22:42 EEST
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0017s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
8022/tcp open   oa-system
9001/tcp open   tor-orport
Device type: general purpose|broadband router|WAP|media device
Running (JUST GUESSING): Linux 2.6.X|2.4.X|3.X (94%), Gemtek embedded (89%), Siemens embedded (89%), Netgear embedded (88%), Western Digital embedded (88%), Comtrend embedded (88%)
OS CPE: cpe:/o:linux:kernel:2.6 cpe:/o:linux:kernel:2.4.20 cpe:/o:linux:kernel:3 cpe:/o:linux:kernel:2.4
Aggressive OS guesses: Linux 2.6.32 – 2.6.35 (94%), Vyatta 4.1.4 (Linux 2.6.24) (94%), Linux 2.6.32 (93%), Linux 2.6.17 – 2.6.36 (93%), Linux 2.6.19 – 2.6.35 (93%), Linux 2.6.30 (92%), Linux 2.6.35 (92%), Linux 2.4.20 (Red Hat 7.2) (92%), Linux 2.6.22 (91%), Gemtek P360 WAP or Siemens Gigaset SE515dsl wireless broadband router (89%)
No exact OS matches for host (test conditions non-ideal).

OS detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 8.76 seconds

As you can see from the above output the OS version guess is far from adequate, as my home router is running Debian Squeeze. However with some older Linux releases, where services return the OS version number, it reports properly.

8. Scanning silently with Nmap SYN (Stealth Scan)

As many servers run some kind of IDS, logging attempts to connect to multiple ports on the host and adding the scanning IP to a filtering CHAIN, it is generally a good idea to always scan with a SYN Scan. A SYN scan is no guarantee that the scanning attempt will not be captured by a well configured IDS, or by an admin snorting on the network with tcpdump, trafshow or iptraf, but the stealth scan is useful to lower the chance of the IDS raising red lamps.

noah:~# nmap -sS pc-freak.net

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-08 22:57 EEST
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0075s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
8022/tcp open   oa-system
9001/tcp open   tor-orport

Nmap done: 1 IP address (1 host up) scanned in 7.73 seconds

 

9. Nmap Scan Types (Paranoid | sneaky | polite | normal | insane)

Nmap has 6 scan timing modes. If no type is passed with the (-T) arg., it scans in normal mode. Paranoid and sneaky are the slowest but least aggressive and less likely to be captured by automated firewall filtering software or an IDS.

Insane mode is for people who want to scan as quickly as possible, not caring about consequences. Usually when scanning your own hosts Insane is nice as it saves you time.

A Paranoid scan is ultra slow, so in general such a scan is helpful if you're going to sleep and want to scan your competitor company's servers without being identified. A Paranoid scan takes hours and, depending on where the remote scanned host is located, can sometimes take maybe 12 to 24 hours.
noah:~# nmap -T0 pc-freak.net

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-09 00:23 EEST
Stats: 0:15:00 elapsed; 0 hosts completed (1 up), 1 undergoing SYN Stealth Scan
SYN Stealth Scan Timing: About 0.05% done
Almost always -T3 or -T4 is reasonable.

10. Scanning hosts in verbose mode

pcfreak:~# nmap -vv localhost

Starting Nmap 5.00 ( http://nmap.org ) at 2013-06-09 01:14 EEST
NSE: Loaded 0 scripts for scanning.
Initiating SYN Stealth Scan at 01:14
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 21/tcp on 127.0.0.1
Discovered open port 111/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 53/tcp on 127.0.0.1
Discovered open port 993/tcp on 127.0.0.1
Discovered open port 143/tcp on 127.0.0.1
Discovered open port 110/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 783/tcp on 127.0.0.1
Discovered open port 8022/tcp on 127.0.0.1
Discovered open port 9001/tcp on 127.0.0.1
Discovered open port 465/tcp on 127.0.0.1
Completed SYN Stealth Scan at 01:14, 0.09s elapsed (1000 total ports)
Host localhost (127.0.0.1) is up (0.0000070s latency).
Scanned at 2013-06-09 01:14:27 EEST for 1s
Interesting ports on localhost (127.0.0.1):
Not shown: 986 closed ports
PORT     STATE SERVICE
21/tcp   open  ftp
22/tcp   open  ssh
25/tcp   open  smtp
53/tcp   open  domain
80/tcp   open  http
110/tcp  open  pop3
111/tcp  open  rpcbind
143/tcp  open  imap
465/tcp  open  smtps
783/tcp  open  spamassassin
993/tcp  open  imaps
3306/tcp open  mysql
8022/tcp open  unknown
9001/tcp open  tor-orport

Read data files from: /usr/share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
           Raw packets sent: 1000 (44.000KB) | Rcvd: 2014 (84.616KB)

 

11. Nmap typical scan arguments combinations

noah:~# nmap -sS -P0 -sV pc-freak.net

Stats: 0:01:46 elapsed; 0 hosts completed (1 up), 1 undergoing Service Scan
Service scan Timing: About 90.91% done; ETC: 01:22 (0:00:10 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0063s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE         VERSION
20/tcp   closed ftp-data
21/tcp   open   ftp             ProFTPD 1.3.3a
22/tcp   open   ssh             OpenSSH 5.5p1 Debian 6+squeeze3 (protocol 2.0)
25/tcp   open   smtp            qmail smtpd
53/tcp   open   domain?
80/tcp   open   http            Apache httpd
110/tcp  open   pop3            qmail pop3d
143/tcp  open   imap            Courier Imapd (released 2005)
443/tcp  closed https
465/tcp  open   ssl/smtp        qmail smtpd
631/tcp  closed ipp
993/tcp  open   tcpwrapped
995/tcp  closed pop3s
8022/tcp open   http            ShellInABox httpd
9001/tcp open   ssl/tor-orport?
Service Info: Host: mail.pc-freak.net; OSs: Unix, Linux; CPE: cpe:/o:linux:kernel

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 106.23 seconds
 

12. Logging nmap output

Nmap can output logs in Plain Text (TXT), GNMAP (grepable) and XML. I prefer logging to TXT, as plain text is always better:
noah:~# nmap pc-freak.net -oN nmap-log.txt

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-09 01:32 EEST
Stats: 0:00:01 elapsed; 0 hosts completed (1 up), 1 undergoing Connect Scan
Connect Scan Timing: About 4.60% done; ETC: 01:32 (0:00:21 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.013s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
3306/tcp closed mysql
8022/tcp open   oa-system

Nmap done: 1 IP address (1 host up) scanned in 5.23 seconds

Below is also a paste from the nmap man page (Examples section):

nmap -Pn -p80 -oX logs/pb-port80scan.xml -oG logs/pb-port80scan.gnmap 216.163.128.20/20

This scans 4096 IPs for any web servers (without pinging them) and saves the output in grepable and XML formats.

13. Other good Nmap scanning examples and arguments

One very useful Nmap option is;
-A – Enables OS detection and Version detection, Script scanning and Traceroute

If you have a list of all the IPs administrated by you and would like to scan all of them;

noah:~# nmap -iL /root/scan_ip_addresses.txt

Another useful option is -sA (TCP ACK Scan); it is a useful way to determine whether a remote host is running some kind of stateful firewall. Instead of connecting to ports to check whether they are open, only ACKs are sent.
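An ACK scan against the host used in the earlier examples would look like the line below (no service banner is returned; you interpret the results by which ports come back as filtered versus unfiltered):

noah:~# nmap -sA pc-freak.net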

– Fast port Scan

noah:~# nmap -F pc-freak.net
...

-D argument (Decoy scanning)
Nmap has an option for simulating a port scan from multiple IPs, the so called Decoy scanning. Using Decoys, one can hide the real IP address from which the Nmap scan is initiated:

# nmap -n -D192.168.1.5,10.5.1.2,172.1.2.4,3.4.2.1 192.168.1.5

– Scan firewall for security weaknesses

(TCP Null Scan to fool the firewall into generating a response)
# nmap -sN 10.10.10.1

(TCP Fin scan to check firewall)

 # nmap -sF 10.10.10.1

(TCP Xmas scan to check firewall)

# nmap -sX 10.10.10.1

– Scan UDP ports

# nmap -sU hostname

– Scan remote host without pinging it first (-P0 skips host discovery and treats the host as up)

noah:~# nmap -P0 pc-freak.net

Connect Scan Timing: About 96.20% done; ETC: 23:16 (0:00:00 remaining)
Nmap scan report for pc-freak.net (83.228.93.76)
Host is up (0.0099s latency).
Not shown: 985 filtered ports
PORT     STATE  SERVICE
20/tcp   closed ftp-data
21/tcp   open   ftp
22/tcp   open   ssh
25/tcp   open   smtp
53/tcp   open   domain
80/tcp   open   http
110/tcp  open   pop3
143/tcp  open   imap
443/tcp  closed https
465/tcp  open   smtps
631/tcp  closed ipp
993/tcp  open   imaps
995/tcp  closed pop3s
8022/tcp open   oa-system
9001/tcp open   tor-orport

Nmap done: 1 IP address (1 host up) scanned in 4.97 seconds

 


Linux: Fixing Qmail server qmail-smtpd port 25 slow (lagged) connect problem

Thursday, May 16th, 2013

qmail logo fixing qmail mail SMTP port 25 connect delays

After updating my Debian Squeeze to the latest stable packages from the repository with the standard:
# apt-get update && apt-get upgrade

I routinely checked if afterwards all was fine with Qmail, just to find out that connecting to port 25 was hell delayed – about 40-50 seconds before qmail responded with the standard assigned Mail Greeting.
I Googled a long time to see if I could find a post or forum thread discussing the exact issue, but though I found similar discussions I didn't find anything that exactly matched the problem. Thus I decided to follow the good old experimental try / fail method to figure out what causes it.

Below are pastes from telnet, illustrating the delays in Qmail's SMTP greeting response:

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

I spent about 2 hours checking Qmail for the standard, so common errors usually causing it to not work properly, following my previous article on testing qmail installation problems.

After going through all of the possible causes, the only clue was some slowness with spamassassin. This brought me to the idea that something was wrong with spamassassin. I tried disabling Spamassassin Razor and Pyzor and restarting spamd (in my case done not via the standard start/stop Debian script but through daemontools with svc) and qmailctl, i.e.:

# svc -d /service/spamd
# svc -u /service/spamd
# svc -a /service/spamd

qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.
This didn't help, so I continued trying to figure out what was wrong. One assumption for the slow qmail-smtpd response was of course slow DNS resolving. I checked /etc/resolv.conf to find out the server is configured to use the locally configured DJBDNS server as first line DNS resolver. I use djbdns as it is simple and easy to configure, however it is a bit obsolete so it was a possible bottleneck. After commenting out the localhost 127.0.0.1 line
and setting Google Public DNS 8.8.8.8 as primary DNS, the problem persisted, so host resolving was obviously not the problem.

I pondered for about 30 minutes, checking again all logs and machine processes, just to remember that I had earlier experienced similar issues caused by unresolvable RBL (IP blacklist) hosts. I checked the DNS blacklists configured for rblsmtpd (visible in the process list) and noticed the following 4 hosts;

# ps auxwwf

7190 ?        S      0:00 tcpserver -vR -l /var/qmail/control/me -c 30 -u 89 -g 89 -x /etc/tcp.smtp.cdb 0 25 rblsmtpd -t0 -r zen.spamhaus.org -r dnsbl.njabl.org -r dnsbl.sorbs.net -r bl.spamcop.net qmail-smtpd /var/qmail/control/me /home/vpopmail/bin/vchkpw /bin/true
 

I checked the hosts one by one and found out the first two in line no longer resolve (the blacklists are no longer accessible as before):

 

zen.spamhaus.org, dnsbl.njabl.org
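A quick way to verify whether a configured DNSBL is still alive is to query it directly with host or dig – 2.0.0.127 below is the conventional DNSBL test entry for 127.0.0.2; a timeout or a non-resolving zone is a strong hint the list is dead:

# host dnsbl.njabl.org
# host 2.0.0.127.zen.spamhaus.org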

The DNSBLs (DNS blocklists) are configured on this host via /service/qmail-smtpd/run, hence to remove the two unresolvable hosts causing the weird qmail-smtpd connect delay I had to modify in it:

RBL_BAD="zen.spamhaus.org dnsbl.njabl.org dnsbl.sorbs.net bl.spamcop.net"

to

RBL_BAD="dnsbl.sorbs.net bl.spamcop.net"

After a closer examination of the mail server config /var/qmail/control/spfrules, I found one other unresolvable SPF host configured;
# cat /var/qmail/control/spfrules
include:spf.trusted-forwarder.org

To remove that one I emptied the file:

# cat /dev/null > /var/qmail/control/spfrules

Finally, for all changes to take effect, I restarted Qmail:

# qmailctl restart
Restarting qmail:
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.

To check all was fine afterwards, again used telnet:

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP

The Mail greeting now appears within about 2-3 seconds.

 

 


Linux Currency convert GUI tool KeuroCalc / Convert world currencies Desktop Linux application, Convert USD to EUR

Thursday, April 25th, 2013

keurocalc Linux convert us dollars to euro and to rest of major world currencies

If you happen to run a small business or you're just an adventurer who uses Linux on his notebook Desktop, sooner or later you will end up needing Linux software to convert between world currencies. Some might argue that such software is obsolete, since there are already the Google Currency Converter and plenty of other (online) web currency converter sites. However, for people like me who prefer desktop applications, it is much better to use a separate desktop tool which does currency conversion. If this is the case with you, and you happen to use Debian GNU / Linux, Ubuntu, Fedora or any other mainstream Linux distribution on your Linux powered Laptop or Tablet, you will surely be happy to know about KEuroCalc – a Universal Currency Converter. As with all "K"-named Linux apps, keurocalc unfortunately uses the QT KDE graphic library and thus, whenever used on GNOME, it starts a bunch of KDE services (kdeinit, klauncher, kded); however the load of these few on any modern notebook or PC is negligibly low, so for most users the only disadvantage of keurocalc might be that its interface looks a bit different compared to the rest of the Gnome GTK+ programs.

To install keurocalc on deb based Linuxes, e.g. Debian / Ubuntu:

noah:~# apt-cache show keurocalc|grep -i description -A 3

Description: universal currency converter and calculator – binary package
 KEurocalc is a universal currency converter and calculator.
 It downloads latest exchange rates directly from the
 European Central Bank and the Federal Reserve Bank of New York.
 

noah:~# apt-get install --yes keurocalc

Reading package lists… Done
Building dependency tree      
Reading state information… Done
The following NEW packages will be installed:
  keurocalc
0 upgraded, 1 newly installed, 0 to remove and 23 not upgraded.
Need to get 0 B/87.8 kB of archives.
After this operation, 319 kB of additional disk space will be used.
Selecting previously deselected package keurocalc.
(Reading database … 393466 files and directories currently installed.)
Unpacking keurocalc (from …/keurocalc_1.0.3-2_amd64.deb) …
Processing triggers for hicolor-icon-theme …
Processing triggers for man-db …
Processing triggers for menu …
Processing triggers for gnome-menus …
Processing triggers for desktop-file-utils …
Setting up keurocalc (1.0.3-2) …
Processing triggers for menu …

On Fedora, CentOS and the rest of the RPM based Linux distros keurocalc is also installable from the default package repositories:

[root@fedora ~]# yum -y -q install keurocalc
....

 

Here is a snapshot of keurocalc GUI interface;

Linux Universal Currency Converter Keurocalc

Keurocalc Linux universal currency converter settings screenshot

As you see from the settings screenshot, information about rates can be obtained from 2 sources – the European Central Bank and the New York Federal Reserve Bank. I gave the "Euro, no network access (fixed rates only)" method a try as well, but unfortunately by choosing it you can only convert between Fixed Currencies (currencies which are no longer in use – in EU member states which dropped their local currencies in favor of the EURO).

I've tested the program and it works well; the disadvantage is that some world currencies of countries with non-transparent, planned (Soviet like) economies – for example Belarus – are missing from the app's list of convertible currencies.


Apr 23 Saint George’s day in England – St. George Patron Saint of England

Tuesday, April 23rd, 2013

Earlier, when I wrote an article about the celebration of St. George's day in Bulgaria, I took the time to read a bit more in Wikipedia about which countries venerate St. George's day – St. George by the way is one of the most honored Orthodox Saints – and I curiously found the United Kingdom to be among the countries keeping the saint's memory. Today, while opening Google.co.uk for a search, Google's usual picture logo was replaced by the nice looking fairytale medieval picture below;

United KIngdom patron saint George Google logo medieval picture

The picture made my childish nature curious and I clicked on it, just to find a few articles about Saint George's day in England, which happens to be celebrated today, on the 23rd of April. As I myself bear a name after saint George, it means it is now my nameday in England 🙂 Though saint George is England's patron saint, because English people are not so religious as they were earlier, the feast is not considered an Official Public Feast. In Bulgaria we celebrate St. George's day on the 6th of May and it is a non-working public holiday for the whole country, as well as the Official Feast of the Bulgarian Army.
I like comparing things, so it was quite curious for me to see how Saint George is depicted in England and the Western European countries and to compare it to our Orthodox icon saint tradition;


St. George – Orthodox icon from Novgorod, 15th century


St. George Roman Warrior before his Martyrdom – Orthodox Icon

Saint George depicted on an Anglican church window in South Darley, England


Saint George and the Dragon by Master Raphael – circa 1506


Master Raphael – Saint George killing the Dragon (beast)


Saint Martyr George from Lydda Palestine Carlo Crivelli – Italian Master 14th century

A curious fact related to Saint George's veneration is that the centered cross on England's flag is actually Saint George's cross of victory – a reference to the saint's victory over evil through faith in Christ.
 

Saint George Cross on England's national flag

In England it is typical that flags with the image of St George's cross are flown on some buildings, especially pubs, and a few people wear a red rose on their lapel.
The Saint's day is most venerated in Salisbury, where there's an annual St George's Day pageant, which probably dates back to the 13th century. During the crusades in the 1100s and 1200s, English knights used St George's cross as part of their uniform. St. George's cross has been kept in England's official flag for centuries. Nowadays the flag of the United Kingdom – the so-called Union Flag – is a combination of St George's cross, St. Andrew's (X shaped cross) and St. Patrick's cross. Even to this day English football fans paint a variation of the cross on their faces, most of them without realizing the deep roots of this ancient Great Britain symbol.


How to download books from Books Google with Google Book Download stand alone program and Greasemonkey with Google Books Downloader script

Thursday, February 7th, 2013

If you are a student or just a researcher, you already know most of the good books you can find are on books.google.com. Google Books is nice, but not all browsers support it well. Older mobile phones have big troubles with it, plus it is always nice to have a stored copy of a book on your PC for later review or just to refresh your memory on books previously read.

Thus if you get the task to download books from Google, a quick research reveals a few programs claiming to support downloading Books from Google in PDF;

1. Google Books Download standalone application for Windows and Mac OS X

Google Books Download is said to support saving Google books in PDF, JPEG or PNG format.
This program works well when you need to extract only certain book pages, however with complete books it often hangs. The other problem is that it is proprietary software (freeware), so the book pages it downloads in PDF have a big red color stamp complaining the program is a trial version.
There is a cracked version available on Piratebay.se's website, but as Piratebay is filtered from here, to test it I had to google for it via a piratebay proxy – with "piratebay google books download".


Google Books Download, the standalone app from Piratebay, is currently at version 3.1.308.
As you can see from the screenshot, Google Books Download has two modes of work:
Download Manually – used to manually download pages from a complete book and convert them to PDF.
Download Automatically – purposed to download a complete book from books.google.com and convert it to PDF. Downloading a complete copy of a book using this mode sometimes hangs, plus it is really, really slow. The reason is that each page of the Book is first scanned using OCR (Optical Character Recognition) technology page by page, and after all pages are downloaded as pictures they're converted to 1 PDF file.

Because Download Automatically loops at certain pages, this makes Google Books Download almost useless for people looking to store a full copy of books from Books.Google.com ….

2. Downloading PDFs from books.google.com with Firefox Greasemonkey and the Google Book Downloader javascript

a. Install GreaseMonkey Firefox add-on

If you have never heard of Greasemonkey before: it is a Mozilla Firefox Extension that allows users to install scripts that make on-the-fly changes to web page content after or before the page is loaded in the browser (also known as augmented browsing).

b. Install Google book downloader GreaseMonkey javascript

After a FF restart, you're ready to download any book from Books.Google.com.
To use it, open the book you want to download and in the upper left corner you will see a Download this book button; press it and the book will be scanned via OCR and saved in PNG picture format. Below is a screenshot showing a sample book to download from books.google.com;

how to download book from google in firefox web browser screenshot


After each book page is successfully downloaded, you get a download status in the left pane;

google book download firefox screenshot pictures - Scythian Monks download - how to download books to pictures from Google books on Windows XP, Windows 8

You should keep in mind that the download links to the Google Book pages have a time expiry, so if you don't hurry up and save the pictures for later use, the links will soon become inaccessible and show up as broken from Google – I'm not sure how long exactly Google's max expiry time of the links is, but I guess it should be something like 5-10 mins.

The pages get fetched as pictures one by one, so it takes 20 secs or so to get all the links to pages. Since Google Books Downloader only provides links to the pages, it is necessary to either save each of the pictures manually (quite a lot of effort) or install and use, let's say, the DownThemAll! FF download extension. Using DownThemAll does not completely automate the picture downloads, as you need to manually select all pictures for downloading, but at least selecting the pages saves some time. To download all book pages with DownThemAll, click with the right mouse button on the left pane where the links to pictures appear and choose Download with DownThemAll!. After that tick all links pointing to books.google.com……. so that they have the green tick as shown in the screenshot below;

Once you have all PNGs saved on the PC you then need to convert them to a unified PDF file. One way to do this is using ImageMagick's convert command line tool.
To do so install ImageMagick for Windows, downloading the Win binaries from here.
There are a bunch of binaries; you will need to install the one named like ImageMagick-*-x86-static.exe

Run cmd.exe, change dir (cd) to the folder where the just downloaded book is saved in PNG and issue:


C:\Downloads> convert *.png pdf/my-book-from-pictures.pdf
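If you happen to do the conversion on a Linux box instead, keep in mind that the shell glob does not sort page numbers naturally (page10 sorts before page2); a version-sorted listing helps – a small sketch, the filename pattern depends on what the downloader actually produced:

$ convert $(ls -v *.png) my-book-from-pictures.pdf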


How to install Skype on Nokia n95 8G mobile phone

Saturday, January 5th, 2013

 

How to install skype on nokia n95 8G java client program

I was asked by a relative of a friend if it is possible to install Skype on a Nokia Mobile N95 8G. I remembered I had earlier installed and used Skype on my Nokia 9300i with some kind of Java Skype client, so I guessed installing Skype on a Nokia N95 would be much easier and most likely supported by Skype.com's available mobile versions.

Further on, I tried downloading from Skype.com's mobile client download section, and after selecting the model of the Nokia N95, whose OS is Symbian, I was asked for a mobile number to which Skype would be sent via SMS. I typed in the Mobile number, hoping it would be the usual click-on-link, authorize and install Skype on the mobile, but instead Skype sent me in an SMS just a link to Skype's Mobile download section http://www.skype.com/m/ .

In other words, typing in the phone number and navigating to the URL from the mobile was completely useless, as I ended up in the same place I could browse manually using Nokia's embedded Browser ….

Though I did my best to look through all the appearing links in the N95's browser, I was just redirected to either the same or a similar page with a Skype Download button. After getting pissed off from looking and not finding any usable Symbian Skype app install binary (.SIS or .SISX), I decided to just look in Google to see whether some third party website stores a .SISX Skype binary for the Nokia N95. After a few minutes of searching I found one offering an archive with 3 versions of Skype, 2 of which (.sisx files) I installed using Nokia Mobile Suite for Windows. Those two .sisx files in the archive, Symbian Mobile Skype ver. 1.5 and Symbian Mobile Skype ver 5, did not work on the N95. The binary that launched okay on the Mobile was Skype_S60_3_0_v_1_5_0_12.sisx. Though this binary launches the client and one can choose between the usual Sign In or Register new skype name buttons, it wasn't possible to login with the username and skype password. After a bunch of prolonged waiting trying to Sign In, skype showed the Skype Login Name and Password prompts again.

I spent some more time trying to install Fring – a free mobile chat, voice and video Skype substitute program – after reading on a few Symbian Forums that fring can be used as a mobile substitute for Skype. Just a bit after installing it I read in some other forum posts from 2010 that fring support for Skype is no longer available. I tried also to login to Fring's client with the Skype Login and Pass but the login failed, thus I uninstalled Fring and continued researching online whether Skype can somehow be used on the Nokia N95 mobile.

I found a Skype Java (.JAR) client of the same version which I had also installed on my Nokia 9300i, and downloaded and tried this one as well. Guess what – it works 😉 It does not support outgoing Skype video calls, and it supports incoming Skype Video calls only with pre-purchased Skype call minutes, but Skype messaging works. I've made a mirror of nokiaN95.jar – the Nokia N95 Java skype program – here. I was too lazy to research further whether there is some other software or an old Skype.com Symbian version mirrored somewhere on the net working with the N95, but I guess it is possible. If someone reading this post knows a better Skype binary supporting Skype Video and Voice, please drop a comment. Another way to use skype without losing time installing a Skype client is the http://IMO.IM web Skype service.


Some Apache performance optimizations to do on brand new installed Linux servers – Apache performance tuning tips

Wednesday, December 12th, 2012

good tips to optimize Apache webserver  on Debian CentOS and RHEL Linux for better performance and faster website openings

It is a good idea, on any production server which is supposed to run Apache + PHP on Linux, to do some initial Apache configuration which will guarantee better WebServer performance and improved Apache client throughput. On each and every newly configured Linux server planned to serve as Apache + some database backend, I routinely make these tune-ups without even thinking. The reason is that time and experience have proved these optimizations work like a charm, and in almost 100% of cases they can only improve the situation on the server, decrease the generally expected load and thus save costs for potential hardware. Besides that, the few config options which I'm about to suggest in this article guarantee improved WebPage opening times and, most of the time, better overall Apache response times. The optimizations also have a direct influence on Google / Yahoo PageRanking, as it is not a secret that most (if not all) Search Engines rank webpages which load in lower opening times with a Higher PageRank.

 

1. Change values for KeepAlive, Timeout and KeepAliveTimeout

The first thing to change in the Apache default config is to adjust the default values set for KeepAliveTimeout, KeepAlive and TimeOut.

a. Reducing  KeepAliveTimeout

– In Debian / Ubuntu servers this value has to be changed in /etc/apache2/apache2.conf

– In RHEL, Fedora and other RPM based distros check in /etc/httpd/conf/httpd.conf

By default KeepAliveTimeout is set to 15 – KeepAliveTimeout 15. 15 seconds is a long delay, and on a busy Apache server it is very likely you will have hundreds if not thousands of Apache forks or internal threads kept open for clients which have already navigated away from the website or websites hosted and served by Apache.

Taking this in consideration, most of the times I prefer setting the KeepAliveTimeout value to 7 secs – i.e.;

KeepAliveTimeout 7

Even on some hosts, where you have well tested PHP Code or are just serving static files, it is a good idea to decrease it to 5 secs (this is more risky and likely to create problems; I set it to 5 secs only on very rare occasions, anyhow you might want to experiment).

Bear in mind that in some cases, where page execution (let's say a PHP script) takes longer than 7 seconds, clients might end up with empty pages as Apache will drop the opened TCP / IP connection to the remote client. Thus for some people who run badly written websites with PHP scripts which take a long time to execute, lowering the default KeepAliveTimeout might have negative results. Therefore, as a rule of thumb, if you reduce the KeepAliveTimeout, be sure to monitor closely with the website testers team or via some website feedback form whether the website continues to perform okay for end clients; if not, just tune KeepAliveTimeout up to a value with which the website works fine. Another reason why KeepAliveTimeout is in almost all cases good to reduce is that by simply closing opened network connections quicker, fewer Apache children stay loaded in memory, and therefore more memory is available for eventual new clients connecting.

Here is also KeepAliveTimeout explained as pasted from a Debian apache2.conf:

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
#KeepAliveTimeout 15
KeepAliveTimeout 5

b. Turn on KeepAlive

By default most Linux distros come with the KeepAlive setting turned off; switch it on;

#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
# KeepAlive Off
KeepAlive On

c. Reduce the amount for TimeOut of client inactivity

The default TimeOut setting is 300 seconds!
A good value to reduce it to is 40 or 80. The value of 80 is less likely to create unexpected interrupts in content serving. On most servers I just set it to 40, as so far this value works well for me.
 

#
# Timeout: The number of seconds before receives and sends time out.
#
#Timeout 300
Timeout 40
 

2. Enable Apache mod-expires – WebServer content caching

debian:~# ln -sf /etc/apache2/mods-available/expires.load /etc/apache2/mods-enabled/expires.load
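On Debian / Ubuntu the same thing can be done with the a2enmod helper, which creates that symlink for you; reload Apache afterwards so the module gets picked up:

debian:~# a2enmod expires
debian:~# /etc/init.d/apache2 reload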

Depending on whether it is a Deb or an RPM based Linux distro, add the following mod_expires directives in the Apache config (apache2.conf or httpd.conf);

<IfModule mod_expires.c>
ExpiresActive On
ExpiresDefault A86400
ExpiresByType image/x-icon A2592000
ExpiresByType application/x-javascript A2592000
ExpiresByType text/css A2592000
ExpiresByType image/gif A604800
ExpiresByType image/png A604800
ExpiresByType image/jpeg A604800
</IfModule>

One note to make here: on some websites based on Smarty, the Zend PHP Framework and other PHP frameworks, mod_expires might cause some troubles; however in 70-80% of the cases just enabling it causes no harm to the overall website functionality. Be sure to test it well if you enable it, and don't blame me if it causes you issues.


3. Set ServerRoot and  Raise-up ServerLimit and MaxKeepAliveRequests  directives

By default the value set for ServerLimit is too low for production servers (a maximum of 256 mpm_prefork Apache children), thus for servers which are expected to get a few hundred unique IP clients in parallel I usually set it along with ServerRoot like so;

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE!  If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation (available
# at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
#ServerRoot "/etc/apache2"
ServerRoot "/etc/apache2"
ServerLimit 10600

Another good practice is to set MaxKeepAliveRequests (the maximum number of requests allowed during a single persistent connection) to a high value but not to 0 (0 means unlimited, which keeps connections and their Apache children busy indefinitely and makes memory leaks or Apache bugs more painful). On production servers I set values from 5000 to 50000.

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 50000

4. Enable mod_rewrite Apache support

This step is not optimizing Apache performance but it is useful to enable mod_rewrite, as there is almost no website today which doesn't use mod_rewrite via .htaccess passed directives.

debian:~# ln -sf  /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load

5. Adjusting default values of StartServers, MinSpareServers, MaxSpareServers, MaxClients and MaxRequestsPerChild  for mpm_prefork

The default config values set for mpm_prefork are for a tiny home server; depending on the server's amount of memory and CPU power, StartServers, MinSpareServers, MaxSpareServers, MaxClients and MaxRequestsPerChild should be carefully tailored and tested with the little Apache Benchmark (ab) tool, Siege or any other benchmarking tool before the WebServer is made publicly accessible – see the sample benchmark run after the example configs below.

Default values from apache2.conf are like so:

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

A good configuration for a production server with 24GB of Memory and 8 CPUs x 2.13 GHz (about 17 GHz of computing power) would be for example:

<IfModule mpm_prefork_module>
    StartServers          2000
    MinSpareServers       600
    MaxSpareServers      800
    MaxClients          3600
    MaxRequestsPerChild   10000
</IfModule>
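A minimal Apache Benchmark run to sanity-check such values before the server goes public might look like the one below (hostname, request count and concurrency are placeholders – raise -c gradually while watching memory and load average):

debian:~# ab -n 10000 -c 200 http://www.your-website.com/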

6. Install and Enable Eaccelerator

On almost all servers, immediately after the basic Apache + PHP + MySQL packages, I install Eaccelerator. Eaccelerator helps utilize the server's free memory better and significantly accelerates Apache page serving time.

I've earlier blogged on How to install Eaccelerator to decrease server CPU load and increase page serving performance here

7. Disable Server Side Includes (SSI) support

I've earlier blogged how to disable Apache SSI on Debian Linux – you can read here. The difference disabling SSI makes is not so big, so even leaving it on is not a big deal.

8. Remove and Purge the Suhosin Apache module

suhosin is a useful module that tightens PHP security, however for me it has earlier created a lot of issues, and it is my personal view that life is better without suhosin. I've earlier stumbled on a weird issue causing Apache to mysteriously crash – removing suhosin solved it all. I'm not sure if suhosin is installed by default on Debian, but it is often installed as a package dependency of some php-devel packages, so I find it wise to always check if it is present on the system and remove it if it is.
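To check whether it is present and get rid of it, something along these lines should do (php5-suhosin is the package name on Debian Squeeze; it may differ on other releases):

debian:~# dpkg -l | grep -i suhosin
debian:~# apt-get remove --purge php5-suhosin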
 

9. Enable Apache mod_deflate (gzip) compression to speed up delivery of CSS and Javascripts

Compressing CSS, JS and HTML with gzip is very useful, as it reduces the size of the transferred content. This however might impose a bit higher CPU load, so I only enable it when I target an increase in network throughput; for people concerned about CPU load it is better to keep it off, as it is by default.

For a bit more on how mod_deflate is enabled on Debian check my previous article – Speeding Apache hosted websites with mod_deflate gzip compression

CentOS and RHEL users who need to enable mod_deflate – check here

10. Change the way logrotate handles log rotation (disable log gzip compression) or disable Apache logging completely

On Linux servers with Apache where 30000 to 50000 unique IP visitor requests are served, the access.log becomes enormous. Things become even worse as, by default, Apache logs are configured to be rotated once a week instead of daily. Thus once log rotation takes place, a huge log has to be processed – for instance 20 GB. This puts extra load on the server and often makes normal Apache operation bloated. To get rid of this problem I suggest you check my previous article – Recommended access.log logrotate practices on heavy loaded servers

Alternatively, it is sometimes better to completely disable Apache access.log logging to reduce the Apache load a bit – though from a security and statistical point of view it is bad practice. I've disabled it however on some servers, as logging there is implemented on the PHP script level instead. I've earlier blogged how disabling access.log and error.log is done here

11. Disable Apache version reporting

This is more of a security optimization than a performance one, but it also has a small positive effect, as one line less is reported by Apache on each request 🙂
To disable Apache version reporting check my previous article here
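In short, the two directives involved are the ones below; on Debian they usually live in /etc/apache2/conf.d/security (the exact file can differ per release):

ServerTokens Prod
ServerSignature Off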

12. Switch from mpm_prefork to mpm_worker Apache (threaded) engine

For some new Apache configurations which don't need exec(); or system(); or any other PHP embedded external code execution functions, from a performance point of view it is much better to just switch to the much more sophisticated, performance efficient and less memory hungry Apache2 mpm-worker engine – the downside is you will have to configure PHP to be executed via the php5-cgi Apache module.

13. Tune up (increase) the PHP memory_limit variable

This is not an Apache optimization, but most servers need it as they run Apache and PHP together. The default PHP memory_limit is set to a low 16 MB; it is good to raise it to 64 or 128 MB (but be careful, as this might make Apache easier to DoS or DDoS).
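The variable itself lives in php.ini (with mod_php on Debian that is typically /etc/php5/apache2/php.ini – the path is distro dependent); after changing it restart Apache, e.g.:

memory_limit = 128M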
I've blogged on the topic of memory_limit and timezone issues I experienced earlier here

14. Make sure you have a good quick DNS set in /etc/resolv.conf

A usual /etc/resolv.conf which I use for new servers with Apache looks like so:

debian:~# cat /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 208.67.222.222
nameserver 208.67.220.220
 

The first line is set to use 127.0.0.1, as I find it very useful for improving overall system efficiency and making the setup more fail proof, if the server is configured to run a custom DJBDNS caching server on localhost.

As you see further DNS set in my usual resolv.conf's are Google's Public DNS 8.8.8.8 and 8.8.4.4 and OpenDNS's 208.67.222.222 and 208.67.220.220

I highly recommend you follow my practice and install a DJBDNS local caching DNS server to speed up resolving and hence speed up Apache client interactions (of course this is useful only if Apache or some PHP scripts make DNS requests, but as most do, it is a good practice).

After all the changes, for them to take effect I do the usual Apache restart with;

debian:~# apache2ctl -k restart
.....

That's it! If you know of other optimization tips, please drop a comment 🙂
