Archive for the ‘Monitoring’ Category

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

I really love sysstat and as a console maniac I tend to install it on every server, however by default it needs a little tuning after installation before it starts collecting data. For those unfamiliar with sysstat, I warmly recommend checking it out; here in short is the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


and you try to get some statistics with the sar command, but instead you get an ugly error:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


If you wonder how to resolve it so the server can periodically log the nice sar stats (load averages, %idle, %iowait, %system, %nice, %user) into its text databases, here is how to fix the "Cannot open /var/log/sysstat/sa20: No such file or directory" error.

You need to:

server:~# vim /etc/default/sysstat


By default you will find that sysstat statistics collection is disabled, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart
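
Before moving on, you can verify that collection now works by forcing one collection run by hand and checking that today's sa file shows up (the paths below are the Debian defaults, adjust if your setup differs):

server:~# /usr/lib/sysstat/debian-sa1 1 1
server:~# ls -l /var/log/sysstat/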

However for those who prefer to do things from a menu-driven ncurses interface and are not familiar with Vi Improved, the easiest way is to run dpkg-reconfigure on the sysstat package:

server:~# dpkg-reconfigure sysstat



 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01 CPU %user %nice %system %iowait %steal %idle
00:15:01 all 24,32 0,54 3,10 0,62 0,00 71,42
01:15:01 all 18,69 0,53 2,10 0,48 0,00 78,20
10:05:01 all 22,13 0,54 2,81 0,51 0,00 74,01
10:15:01 all 17,14 0,53 2,44 0,40 0,00 79,49
10:25:01 all 24,03 0,63 2,93 0,45 0,00 71,97
10:35:01 all 18,88 0,54 2,44 1,08 0,00 77,07
10:45:01 all 25,60 0,54 3,33 0,74 0,00 69,79
10:55:01 all 36,78 0,78 4,44 0,89 0,00 57,10
16:05:01 all 27,10 0,54 3,43 1,14 0,00 67,79


Well that's it, the sysstat error is resolved and text stats reporting works again, Hooray! 🙂
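
Once collection works, the same binary data files can also be dumped in a field-separated, parse-friendly format with sadf from the same package (see the package description above), which is handy for feeding scripts; the sa20 file name below is just the one from the error message example:

server:~# sadf -d /var/log/sysstat/sa20 -- -u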


How to find and Delete Duplicate files in directory on Linux server with find and fdupes command

Monday, March 16th, 2015


The Linux / UNIX find command is very helpful for a lot of sysadmin tasks, such as deleting empty directories to free up occupied inodes or finding and printing only the empty files within a root file system and all its sub-directories.
find has countless uses, however one that is probably rarely known by sysadmins is how to search for duplicate files on a Linux server:
 

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate

If you're curious how duplicate file finding works: duplicates are found by comparing file sizes and MD5 signatures, followed by a byte-by-byte comparison.

The most common application of the above command is when you want to find and get rid of old obsolete files which you forgot to delete, such as old /etc/ configurations, old SQL backups and PHP / Java / Python programming code files etc.

If you have to do regular duplicate file searches on multiple Linux servers, perhaps you should install and use the fdupes command.
On Debian Linux to install it:

root@pcfreak:/# apt-cache show fdupes|grep -i descr -A 4
Description: identifies duplicate files within given directories
 FDupes uses md5sums and then a byte by byte comparison to find
 duplicate files within a set of directories. It has several useful
 options including recursion.
Homepage: http://code.google.com/p/fdupes/

 

root@pc-freak.net:/# apt-get install --yes fdupes

To search for duplicate files with fdupes in, let's say, /etc/ just run fdupes with the directory as its only argument:

 

root@pcfreak:/# fdupes /etc/
/etc/magic
/etc/magic.mime

/etc/odbc.ini
/etc/.pwd.lock
/etc/environment
/etc/odbcinst.ini

/etc/shadow-
/etc/shadow


If you want to look for all duplicate files recursively within a directory tree:
 

root@pcfreak:/# fdupes -r /etc/
Building file list /

 

You can also find duplicate files across multiple directories by just passing all the directories as arguments to fdupes:

 

root@pcfreak:/# fdupes -r /etc/ /usr/ /root /disk /nfs_mount /nas


The -r argument makes a recursive subdirectory search for duplicates; if you also want to see the size of the duplicate files found, add the -S option:

 

fdupes -r -S /etc/ /usr/ /root /disk /nfs_mount /nas

 


If you want to delete all duplicate files within, let's say, /etc/:

 

root@pcfreak:/# fdupes -d /etc/
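
Note that fdupes -d asks interactively which copy from each duplicate set to preserve. If you are absolutely sure you want to keep the first file of every set and delete the rest without being prompted, recent fdupes versions let you combine -d with -N (noprompt); use this with great care, preferably on a test directory first (the path below is just an example):

fdupes -r -d -N /tmp/some-test-dir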

fdupes is also available on RPM based Linux distros (Fedora / RHEL / CentOS etc.); install it on CentOS with:
 

[root@centos~ ]# yum -y install fdupes


There is also a port available for those who want to run it on FreeBSD; install it from the ports tree:

 

freebsd# cd /usr/ports/sysutils/fdupes
freebsd# make install clean


If you have a GUI environment installed on the server and you don't want to bother with the command line to search for duplicate files and other lint (junk) files under the main filesystem, take a look at FSlint.


If you're looking for a cross-platform GUI duplicate file finder that runs on all the major Operating Systems (Mac OS X / Windows / Linux), take a look at dupeGuru.

 


How to check Apache Webserver and MySQL server uptime – Check uptime of a running daemon with PS (process) command

Tuesday, March 10th, 2015


Something very useful that most LAMP (Linux Apache MySQL PHP) admins should know is how to check the Apache Webserver uptime and the MySQL server uptime.
Checking Apache / MySQL uptime is primarily useful for scripting purposes (creating automatic Apache / MySQL service restart scripts) or just as a quick console way to check the status and uptime of the Webserver / SQL server.

My experience as a sysadmin shows that the lack of a periodic Apache and MySQL restart every week or every month often creates a lot of headaches, because the Apache / NGINX / SQL server starts eating too much memory or, under some circumstances, causes service or system crashes. Periodically restarting the main system services is especially helpful when a website's backend code is written in a bad, buggy and inefficient way by unprofessional (novice) programmers.
While I was still working as a Senior SysAdmin at Design.BG, I encountered many such crappy web applications developed by dozens of different programmers (the company's programmers changed too frequently and many of the hired web developers were still learning to program). I guess the same is true for other start-up Web / IT companies where crappy programming code is developed, so you will certainly need to keep an eye on Apache / MySQL uptime. If that's the case, the quick one-liners with the ps command below will help you do so.

 

ps -eo "%U %c %t"| grep apache2 | grep -v grep|grep root
root     apache2            02:30:05

Note that the above example is Debian specific; on RPM based distributions you will have to grep for httpd instead of apache2:
 

ps -eo "%U %c %t"| grep http| grep -v grep|grep root

root     apache2            10:30:05

To check MySQL uptime:
 

ps -eo "%U %c %t"| grep mysqld
root     mysqld_safe        20:42:53
mysql    mysqld             20:42:53


Though the examples are for MySQL and Apache, you can easily use the ps command in the same way to check any other Linux service uptime, such as Java / Qmail / PostgreSQL / Postfix etc.:
 

ps -eo "%U %c %t"|grep qmail
qmails   qmail-send      19-01:10:48
qmaill   multilog        19-01:10:48
qmaill   multilog        19-01:10:48
qmaill   multilog        19-01:10:48
root     qmail-lspawn    19-01:10:48
qmailr   qmail-rspawn    19-01:10:48
qmailq   qmail-clean     19-01:10:48
qmails   qmail-todo      19-01:10:48
qmailq   qmail-clean     19-01:10:48
qmaill   multilog        40-18:02:53

 

 ps -eo "%U %c %t"|grep -i nginx|grep -v root|uniq
nobody   nginx           55-01:22:44

 

ps -eo "%U %c %t"|grep -i java|grep -v root |uniq
hipo   java            27-22:02:07
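
For scripting purposes (e.g. the automatic restart scripts mentioned above) it is often handier to have the elapsed time as plain seconds; newer ps versions (procps-ng) support the etimes output field, so assuming your ps is recent enough something like this does the job:

ps -o etimes= -C apache2 | head -n 1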

 


Linux: How to see / change supported network bandwidth of NIC interface and get various eth network statistics with ethtool

Monday, January 19th, 2015

If you're a novice Linux sysadmin who inherited some dedicated servers without any documentation, one of the first things to do when starting new server documentation is to check the supported TCP/IP network speed of the server's Network (Ethernet) Interfaces. On Linux this is a very easy task: to verify the speed the LAN card supports for local / Internet traffic, install ethtool (if not already present on the servers), assuming you're dealing with Debian / Ubuntu Linux servers.

1. Install ethtool on Deb and RPM based distros

dedi-server1:~# apt-cache show ethtool|grep -i desc -A 3
Description: display or change Ethernet device settings
 ethtool can be used to query and change settings such as speed, auto-
 negotiation and checksum offload on many network devices, especially
 Ethernet devices.

dedi-server1:~# apt-get install --yes ethtool
..

ethtool should be installed by default on CentOS / Fedora / RHEL and the syntax is the same as on Debs. If ethtool happens to be missing on any SuSE / RedHat / RPM based distro, install it with yum:

[root@centos:~] # yum -y install ethtool


2. Get ethernet configurations

To check the current eth0 / eth1 / ethX network (Speed / Duplex) and other network related configuration:
 

dedi-server5:~# ethtool eth0

Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full

        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: off
        Supports Wake-on: pumbag
        Wake-on: g
        Current message level: 0x00000001 (1)
        Link detected: yes

Having a NIC configured for Full Duplex is very important, as full-duplex communication enables the LAN card to send and receive packets simultaneously.


Probably the most interesting parameters for most admins are the ones telling whether the NIC uplink is 10 Mbit / 100 Mbit or 1 Gbit, as well as the supported Receive / Send (Transfer) speeds of the LAN card. A common useful ethtool invocation is to just show the current LAN ethernet interface speed:

server-admin1:~# ethtool eth0 |grep -i speed
        Speed: 1000Mb/s

 

To get info about NIC (kernel module / driver) used with ethtool:

dedi-server3:~# ethtool -i eth0
driver: e1000e
version: 1.2.20-k2
firmware-version: 1.8-0
bus-info: 0000:06:00.0

3. Make the LAN Card blink to recognize which ethX is mapped to which Physical LAN port

Besides that, ethtool has many other useful use cases. For example, if you have a server with 5 or more LAN cards but you're not sure which physical port each of the ethX interfaces corresponds to, a very useful trick is to make eth0, eth1, eth2, eth3, etc. blink for 5 seconds in order to identify which static IP is physically bound to which NIC; here is how:

ethtool -p eth0 5


Then you can follow the procedure for any interface on the server and map them with a sticker 🙂

Ethtool is also useful for getting "deep" (thorough) statistics on the server's LAN cards; this can help identify otherwise hard to detect broadcast flood attacks:
 

4. Get network statistics with ethtool for interfaces
 

dedi-server5:~# ethtool -S eth0|grep -vw 0
NIC statistics:
     rx_packets: 6196644448
     tx_packets: 7197385158
     rx_bytes: 2038559235701
     tx_bytes: 8281206569250
     rx_broadcast: 357508947
     tx_broadcast: 172
     rx_multicast: 34731963
     tx_multicast: 20
     rx_errors: 115
     multicast: 34731963
     rx_length_errors: 115
     rx_no_buffer_count: 26391
     rx_missed_errors: 10059
     tx_timeout_count: 3
     tx_restart_queue: 2590
     rx_short_length_errors: 115
     tx_tcp_seg_good: 964136993
     rx_long_byte_count: 2038559235701
     rx_csum_offload_good: 5824813965
     rx_csum_offload_errors: 42186
     rx_smbus: 383640020
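
If you suspect an ongoing broadcast or multicast flood, it can also be handy to watch how fast the relevant counters grow; the -d flag of watch highlights the values that changed between refreshes:

watch -d "ethtool -S eth0 | egrep 'rx_broadcast|rx_multicast|rx_errors'"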

5. Turn on Auto Negotiation and change NIC set speed to 10 / 100 / 1000 Mb/s

Auto-negotiation is an important Ethernet procedure by which two communicating devices (2 network cards) choose common transmission parameters such as speed, duplex mode, and flow control in order to achieve the maximum transmission speed over the network. On 1000BASE-T networks the standard makes it mandatory. There is also backward compatibility with older 10BASE-T networks.

a) To raise the NIC to 1000 Mb/s, in case the uplink bandwidth was raised to 1 Gb/s but the NIC settings were not changed:

dedi-server1:~# ethtool -s eth0 speed 1000 duplex full autoneg off


b) In case the LAN speed has to be reduced, for some weird reason, to 10 / 100 Mb/s:

 

dedi-server1:~# ethtool -s eth0 speed 10 duplex half autoneg off

dedi-server1:~# ethtool -s eth0 speed 100 duplex half autoneg off

c) To enable (or, with autoneg off, disable) NIC auto-negotiation:

dedi-server1:~# ethtool -s eth0 autoneg on


6. Change Speed / Duplex settings to load on boot

a) Set the network speed / duplex on Fedora / CentOS etc.

Quickest way to do it is of course to use /etc/rc.local. If you want to do it following distribution logic on CentOS / RHEL Linux:

Add to /etc/sysconfig/network-scripts/ifcfg-eth0

vim /etc/sysconfig/network-scripts/ifcfg-eth0

ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

To load the new settings restart networking (be careful to have physical access to the server in case something goes wrong 🙂 ):

service network restart

b) Change network speed / duplex setting on Debian / Ubuntu Linux

Add at the end of /etc/network/interfaces

vim /etc/network/interfaces

post-up ethtool -s eth0 speed 100 duplex full autoneg off
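
For clarity, here is how that post-up line might sit inside a complete static eth0 stanza in /etc/network/interfaces (the addresses are purely illustrative):

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    post-up /sbin/ethtool -s eth0 speed 100 duplex full autoneg off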

7. Tune NIC ring buffers

dedi-server1:~# ethtool -g eth0

Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

As you can see, the current RX (receive) buffer size is a low 256; on busy servers with high traffic loads this is often too small (the default varies depending on the hardware NIC vendor).
By increasing the Rx/Tx ring buffer size, you can decrease the probability of the NIC discarding packets during a scheduling delay.
A change of the Rx ring buffer requires a NIC restart, so be careful not to lose the connection to a remote server and be sure to have iLO / console access to it.

Here is how to raise the Rx / Tx ring buffers to their pre-set maximum of 4096:

ethtool -G eth0 rx 4096 tx 4096
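
As with the speed / duplex settings from point 6, this change is not persistent across reboots, so if it proves useful you may want to re-apply it at boot time, for example with a Debian-style post-up line analogous to the one shown earlier:

post-up ethtool -G eth0 rx 4096 tx 4096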


How to check who is flooding your Apache, NGinx Webserver – Real time Monitor statistics about IPs doing most URL requests and Stopping DoS attacks with Fail2Ban

Wednesday, August 20th, 2014

If you're a Linux system administrator in a webhosting company providing WordPress / Joomla / Drupal website hosting and your UNIX servers suffer from periodic denial of service attacks (because some customer's business is the target of a competitor trying to ruin the client's sites through DoS or DDoS attacks), then the best thing you can do is to identify who is hammering the Linux server and how. If you find out the DoS is not on a network level, but Apache keeps crashing because of memory leaks and there are so many connections to Apache that the CPU is being stoned, the best thing to do is to check which IP addresses are causing the excessive GET / POST / HEAD requests in the logs.

There is the Apachetop tool that can show you the most accessed webserver URLs in a refreshing screen similar to the UNIX top command; however, Apachetop does not show which IP does the most URL hits on the Apache / Nginx webserver.


1. Get basic information on which IPs access Apache / Nginx the most using shell cmds

Using standard shell tools, you can get some basic information on which IPs access the webserver the most with:

tail -n 500 /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr

Or if you want to keep it refreshing periodically every few seconds run it through watch command:

watch "tail -n 500 /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr"


Another useful combination of shell commands is to monitor the number of POST / GET / HEAD requests in access.log:

 awk '{print $6}' access.log | sort | uniq -c | sort -n

     1 "alihack<%eval
      1 "CONNECT
      1 "fhxeaxb0xeex97x0fxe2-x19Fx87xd1xa0x9axf5x^xd0x125x0fx88x19"x84xc1xb3^v2xe9xpx98`X'dxcd.7ix8fx8fxd6_xcd\x834x0c"
      1 "x16x03x01"
      1 "xe2
      2 "mgmanager&file=imgmanager&version=1576&cid=20
      6 "4–"
      7 "PUT
     22 "–"
     22 "OPTIONS
     38 "PROPFIND
   1476 "HEAD
   1539 "-"
  65113 "POST
 537122 "GET


However, using such shell command combinations means plenty of typing and is hard to remember; plus the above tools don't show you approximately how frequently each IP hits the webserver.

2. Real-time monitoring IP addresses with highest URL reqests with logtop

Real-time monitoring of the IP addresses with the highest number of URL requests is possible, with no need for "console ninja skills", through logtop.

2.1 Install logtop on Debian / Ubuntu and deb derivatives Linux

a) Installing Logtop the debian way

LogTop is easily installable on newer releases of Debian and Ubuntu; on Debian 7.0 and Ubuntu 13/14 Linux it is part of the default package repositories and can be directly apt-get-ed with:

apt-get install --yes logtop

b) Installing Logtop from source code (install on older deb based Linuxes)

On older Debian (Debian 6) and Ubuntu 7-12 servers, to install logtop compile it from source code; read the README installation instructions or, if lazy, copy / paste the steps below:

cd /usr/local/src
wget https://github.com/JulienPalard/logtop/tarball/master
mv master JulienPalard-logtop.tar.gz
tar -zxf JulienPalard-logtop.tar.gz

cd JulienPalard-logtop-*/
aptitude install libncurses5-dev uthash-dev

aptitude install python-dev swig

make python-module

python setup.py install

make

make install

 

mkdir -p /usr/bin/
cp logtop /usr/bin/

2.2 Install Logtop on CentOS 6.5 / 7.0 / Fedora / RHEL and rest of RPM based Linux-es

b) Install logtop on CentOS 6.5 and CentOS 7 Linux

– For CentOS 6.5 you need to rpm install epel-release-6-8.noarch.rpm
 

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
links http://dl.fedoraproject.org/pub/epel/6/SRPMS/uthash-1.9.9-6.el6.src.rpm
rpmbuild --rebuild uthash-1.9.9-6.el6.src.rpm
cd /root/rpmbuild/RPMS/noarch
rpm -ivh uthash-devel-1.9.9-6.el6.noarch.rpm


– For CentOS 7 you need to rpm install epel-release-7-0.2.noarch.rpm

 

links http://download.fedoraproject.org/pub/epel/beta/7/x86_64/repoview/epel-release.html
 

Click on and download epel-release-7-0.2.noarch.rpm

rpm -ivh epel-release-7-0.2.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
yum -y install git ncurses-devel uthash-devel
git clone https://github.com/JulienPalard/logtop.git
cd logtop
make
make install

 

2.3 Some Logtop use examples and short explanation

logtop shows 4 columns as follows – Line number, Count, Frequency, and Actual line

The quickest way to visualize which IP is stoning your Apache / Nginx webserver on Debian:

tail -f access.log | awk {'print $1; fflush();'} | logtop


On CentOS / RHEL

tail -f /var/log/httpd/access_log | awk {'print $1; fflush();'} | logtop

Using LogTop, even the Squid proxy caching server's access.log can be monitored.
To get Squid's top users listed by IP:

 

tail -f /var/log/squid/access.log | awk {'print $1; fflush();'} | logtop



Or you might visualize in real time the Squid cache's top requested URLs:

tail -f /var/log/squid/access.log | awk {'print $7; fflush();'} | logtop


3. Automatically Filter IP addresses causing Apache / Nginx Webservices Denial of Service with fail2ban
 

Once you identify the problem and confirm the sites hosted on the server are the target of a Distributed DoS, probably the best thing to do is to use fail2ban to automatically filter (ban) the IP addresses making excessive requests to system services. Assuming that you have already installed fail2ban (on Debian / Ubuntu Linux) with:

apt-get install --yes fail2ban

To make fail2ban start filtering DoS attack IP addresses, you will have to set the following configurations:

vim /etc/fail2ban/jail.conf

Paste in file:

[http-get-dos]
 
enabled = true
port = http,https
filter = http-get-dos
logpath = /var/log/apache2/WEB_SERVER-access.log
# maxretry is how many GETs we can have in the findtime period before getting narky
maxretry = 300
# findtime is the time period in seconds in which we're counting "retries" (300 seconds = 5 mins)
findtime = 300
# bantime is how long we should drop incoming GET requests for a given IP for, in this case it's 5 minutes
bantime = 300
action = iptables[name=HTTP, port=http, protocol=tcp]


Before you paste, make sure you put the proper logpath (the webserver log location, by default /var/log/apache2/access.log). If you're using separate logs for each of the hosted websites, you will probably want to write a script that loops through the logs directory, gets the log file names and automatically adds a modified version of the above [http-get-dos] configuration for each. Also tune maxretry (per IP), findtime and bantime; the values in the above example are a bit low, and for heavily loaded websites which have to serve thousands of simultaneous connections originating from office networks behind Network Address Translation (NAT), they might need to be raised to prevent situations where even your own customers can't access their websites 🙂
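
Depending on the fail2ban version you run, logpath also accepts shell-style wildcards, which may save you the scripting altogether; assuming all the per-site logs end in access.log, a single line such as this should cover them:

logpath = /var/log/apache2/*access.log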

To finalize fail2ban configuration, you have to create fail2ban filter file:

vim /etc/fail2ban/filter.d/http-get-dos.conf

Paste:

# Fail2Ban configuration file
#
# Author: http://www.go2linux.org
#
[Definition]
 
# Option: failregex
# Note: This regex will match any GET entry in your logs, so basically all valid and not valid entries are a match.
# You should set up in the jail.conf file, the maxretry and findtime carefully in order to avoid false positives.
 
failregex = ^<HOST> -.*"(GET|POST).*
 
# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
#
ignoreregex =

To make fail2ban load the newly created configs, restart it:
 

/etc/init.d/fail2ban restart
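
You can verify that the new jail is active, and later list the currently banned IP addresses, with the fail2ban-client tool shipped with fail2ban:

fail2ban-client status
fail2ban-client status http-get-dos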

If you want to test whether it is working you can use Apache webserver benchmark tools such as ab or siege.
The quickest way to test whether excessive IP requests get filtered (and get your own IP banned temporarily):

ab -n 1000 -c 20 http://your-web-site-dot-com/

This will make 1000 page loads over 20 concurrent connections and will get your IP temporarily banned for 300 seconds (= 5 minutes). The ban will be logged in /var/log/fail2ban.log, where you will get something like:

2014-08-20 10:40:11,943 fail2ban.actions: WARNING [http-get-dos] Ban 192.168.100.5
2014-08-20 10:44:12,341 fail2ban.actions: WARNING [http-get-dos] Unban 192.168.100.5
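
If you lock yourself (or a legitimate client) out while testing, newer fail2ban versions let you lift the ban by hand without waiting for bantime to expire:

fail2ban-client set http-get-dos unbanip 192.168.100.5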


Monitoring Disk use, CPU Load, Memory use and Network in one console ncurses interface – Glances

Thursday, August 14th, 2014

If you're a Linux / UNIX / BSD system administrator, you already have experience with basic system monitoring of things like:

  •     CPU load
  •     OS Name/Kernel version
  •     System load average and Uptime
  •     Disk and Network Input/Output (I/O) operations by interface
  •     Process statistics / Top loading processes etc.
  •     Memory / SWAP usage and free memory
  •     Mounted partitions


Such info is provided by command line tools such as:

top, df, free, sensors, ifconfig, iotop, hddtemp, mount, nfsstat, nfsiostat, dstat, uptime, nethogs, iptraf

etc.

There are plenty of other advanced tools, as well as web based server monitoring and visualization tools such as Monit, Icinga, PHPSysInfo and Cacti, which provide you with statistics on computer hardware and network utilization.

So far so good; if you're used to the convenience of web based *NIX monitoring but you don't want to put extra load on the servers with such tools, and you're too lazy to write custom scripts that show the most important monitoring information (necessary for daily system administration, downtime prevention and bottleneck tracking), you will be glad to hear about Glances.
 

Glances is a free (LGPL) cross-platform curses-based monitoring tool which aims to present a maximum of information in a minimum of space, ideally fitting a classical 80×24 terminal, or a larger one to show additional information. Glances can dynamically adapt the displayed information depending on the terminal size. It can also work in a client/server mode for remote monitoring.


1. Installing Glances curses-based monitoring tool on Debian 7 / Ubuntu 13+ / Mint  Linux

We have to install python-pip (the Python package installer tool) in order to later install Glances:

apt-get install --yes 'python-dev' 'python-jinja2' 'python-psutil' \
                      'python-setuptools' 'hddtemp' 'python-pip' 'lm-sensors'


Before proceeding to install Glances, to make thermal sensors work (if supported by the hardware) run:

 

 sensors-detect

Glances is written in Python and uses the psutil library to obtain monitoring statistics, thus it is necessary to install a few more Python libraries:

pip install 'batinfo' 'pysensors'

If you're about to use pip (the Python package installer tool) behind a proxy server, use instead:
 

pip install --proxy=http://your-proxy-host.com:8080 'batinfo' 'pysensors'

Then install the Glances script itself, again using pip:
 

pip install 'Glances'

Downloading/unpacking Glances
  Downloading Glances-2.0.1.tar.gz (3.3Mb): 3.3Mb downloaded
  Running setup.py egg_info for package Glances
    
Downloading/unpacking psutil>=2.0.0 (from Glances)
  Downloading psutil-2.1.1.tar.gz (216Kb): 216Kb downloaded
  Running setup.py egg_info for package psutil

Successfully installed Glances psutil

 

Then run glances from the terminal:
 

glances -t 3

The -t 3 option tells glances to refresh the collected statistics every 3 seconds.
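
Since Glances can also work in client / server mode (as mentioned above), a quick way to keep an eye on a remote box is to start it as a server on the monitored machine (it listens on port 61209 by default, so make sure that port is reachable):

glances -s

and then connect to it from your workstation:

glances -c server-ip-or-hostname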


 

2. Installing Glances monitoring console tool on CentOS / RHEL / Fedora / Scientific Linux

Installing glances on CentOS 7 / Fedora and the rest of the RPM based distributions can be done by adding external RPM repositories, because glances is not available in the default yum repositories.

To enable the Extra Packages for Enterprise Linux (EPEL) repository:
 

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm


Then update yum to include the new repository's packages in the package list and install the python-pip and python-devel rpms:
 

yum update
yum install python-pip python-devel
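
With pip in place, Glances itself can then be installed the same way as on Debian above (assuming the same pip package name):

pip install 'Glances'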



There is also a FreeBSD port to install Glances on FreeBSD:
 

cd /usr/ports/sysutils/py-glances
make install


Enjoy 🙂 !

 

 


Monitoring CPU load and memory usage on Mac OS X command line (Terminal)

Thursday, July 3rd, 2014

You might be stunned to find out Mac OS X has a server variant called Mac OS X Server. For the usual admin, having to administer a Mac OS X based server is something rarely done, however it might happen some day; besides that, nowadays Mac OS X has about a 10% share of the desktop PCs and laptops used on the Internet (data collected from w3cschools log files). Thus, because it is among the popular OSes, it is very possible that sooner or later as a sysadmin you will have to troubleshoot issues on at least a Mac OS X notebook. The Mac has plenty of instruments to debug OS issues, as it is UNIX (BSD) based.

Mac OS X already has a GUI tool called Activity Monitor (existing from Mac OS X 10.3 onwards); in earlier versions there were tools called Process Viewer and CPU Monitor.

To start Activity Monitor open Finder and launch it via:

Applications -> Utilities -> Activity Monitor

As a Linux guy I like to use the command line, and there Mac OS X is equipped with a good arsenal of tools to check CPU load and memory. Mac OS X comes with the sar (system activity reporter), top (process monitor) and vm_stat (virtual memory statistics) commands; these are the equivalents of Linux's sar (from the sysstat package), top and vmstat (report virtual memory statistics).

1. Check out Mac OS X HDD Input / Output statistics
 

$ sar -d -f ~/output.sar

20:43:18   device    r+w/s    blks/s
New Disk: [disk0] IODeviceTree:/PCI0@0/RP06@1C,5/SSD0@0/PRT0@0/PMP@0/@0:0
New Disk: [disk1] IOService:/IOResources/IOHDIXController/IOHDIXHDDriveOutKernel@1/IODiskImageBlockStorageDeviceOutKernel/IOBlockStorageDriver/Apple UDIF, read-only, compressed (zlib)
New Disk: [disk2] IOService:/IOResources/IOHDIXController/IOHDIXHDDriveOutKernel@3/IODiskImageBlockStorageDeviceOutKernel/IOBlockStorageDriver/Apple UDIF, read-only, compressed (bzip2)
New Disk: [disk3] IOService:/IOResources/IOHDIXController/IOHDIXHDDriveOutKernel@4/IODiskImageBlockStorageDeviceOutKernel/IOBlockStorageDriver/Apple UDIF, read-only, compressed (bzip2)
New Disk: [disk4] IOService:/IOResources/IOHDIXController/IOHDIXHDDriveOutKernel@6/IODiskImageBlockStorageDeviceOutKernel/IOBlockStorageDriver/Apple UDIF, read-only, compressed (zlib)
20:43:28   disk0        7        312
20:43:28   disk1        0          0
20:43:28   disk2        0          0
20:43:28   disk3        0          0
20:43:28   disk4        0          0
20:43:38   disk0       12        251
20:43:38   disk1        0          

2. Checking Mac OS X CPU Load from terminal

To check Load from Mac OS command line use:
 

$ sar -o ~/output.sar 10 10

That gathers 10 sets of metrics at 10 second intervals. You can then extract useful information from the output file (even while it's still running); this will get you the CPU load on the Mac OS system, spitting out stats every 10 seconds.

 

21:22:33  %usr  %nice   %sys   %idle
21:22:43    7      0      2     90
21:22:53    8      0      3     89
21:23:03   11      0      4     85
21:23:13    9      0      3     88
21:23:23    9      0      3     88
21:23:33    7      0      3     90
21:23:43   10      0      3     87
21:23:53   10      0      4     85
21:24:03   10      0      5     85
21:24:13    8      0      3     88
Average:      8      0      3     87   


3. Checking Free memory on  Mac OS X

Use this obscure one-liner to get output similar to the Linux free -m memory command from the Mac terminal:

$ vm_stat | perl -ne '/page size of (\d+)/ and $size=$1; /Pages\s+([^:]+)[^\d]+(\d+)/ and printf("%-16s % 16.2f Mi\n", "$1:", $2 * $size / 1048576);'
 

free: 43.38 Mi
active: 1762.00 Mi
inactive: 1676.91 Mi
speculative: 3.29 Mi
wired down: 609.38 Mi
copy-on-write: 29431.01 Mi
zero filled: 4687689.80 Mi
reactivated: 30288.86 Mi


To show free + inactive memory in gigabytes every 10 seconds:

$ vm_stat 10 | awk 'NR>2 {gsub("K","000");print ($1+$4)/256000}'

1.70532
1.70455
1.70389
1.6904

 

It is also possible to get memory statistics on a Mac by running top in non-interactive mode and grepping them from the output:

$ top -l 1 | head -n 10 | grep PhysMem | sed 's/, /\n /g'

 

PhysMem: 599M wired, 1735M active, 1712M inactive, 4046M used, 47M free.

 

4. Quick command to get Kernel / how many CPUs, available memory and load average on Mac OS X

From 2003 onwards Mac OS has the hostinfo (host information) command, providing the admin with a quick way to get system info on Mac OS:

$ hostinfo

 

Mach kernel version:
Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64
Kernel configured for up to 4 processors.
2 processors are physically available.
4 processors are logically available.
Processor type: i486 (Intel 80486)
Processors active: 0 1 2 3
Primary memory available: 4.00 gigabytes
Default processor set: 98 tasks, 621 threads, 4 processors
Load average: 1.63, Mach factor: 2.54
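
If you prefer to query individual values, for example from a script, much of the same information is also exposed through sysctl keys such as hw.ncpu and hw.memsize (available on stock Mac OS X, though the exact set of keys can vary between versions):

$ sysctl -n hw.ncpu
$ sysctl -n hw.memsize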

 


If you need more verbose information on system hardware and resources, check out system_profiler. As the manual describes it, system_profiler "reports system hardware and software configuration":

$ system_profiler

The system_profiler output is very long, so it is not reproduced here.

 

 


Linux: basic system CPU, Disk and Network resource monitoring via phpsysinfo lightweight script

Wednesday, June 18th, 2014


There are plenty of GNU / Linux software tools to monitor server performance (hard disk space, network and CPU load) and general hardware health, both text based (for the SSH console) and from the web.

There are quite a few precious console tools for this, and a number of web based Linux / Windows server monitoring tools that are my favourites.

phpsysinfo is yet another web based Linux monitoring tool for small companies or home router use; it is perfect for people who don't want to spend time learning how to configure complicated and robust multi-server monitoring software like Nagios or Icinga.

phpsysinfo is a quick and dirty way to monitor system uptime, network, disk and memory usage, and to get information on the CPU model and attached IDE, SCSI and PCI devices from the web; it is perfect for Linux servers already running Apache and PHP.

1. Installing PHPSysInfo on Debian, Ubuntu and deb derivative Linux-es

PHPSysInfo is very convenient and could be preferred over the above tools because it is available by default in the Debian and Ubuntu package repositories, is installable via apt-get and doesn't require much further configuration; to roll it out you install it, place a config and forget about it.
 

 # apt-cache show phpsysinfo |grep -i desc -A 2

Description: PHP based host information
 phpSysInfo is a PHP script that displays information about the
 host being accessed.

 

Installation is a piece of cake:

# apt-get install --yes phpsysinfo

Add the phpsysinfo directives to /etc/apache2/conf.d/phpsysinfo.conf to make it accessible via the default Apache vhost domain under /phpsysinfo.

Paste in root console:
 

cat > /etc/apache2/conf.d/phpsysinfo.conf <<-EOF
Alias /phpsysinfo /usr/share/phpsysinfo
<Location /phpsysinfo>
 Options None
 Order deny,allow
 Deny from all
 #Allow from localhost
 #Allow from 192.168.56.2
 Allow from all
</Location>
EOF

 

The above config will allow access to /phpsysinfo from any IP on the Internet; this could be a security hole, thus it is always better to either protect it with an .htaccess password login or to allow it only from the certain IPs you will access it from, with something like:

Allow from 192.168.2.100
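
If you prefer HTTP basic authentication instead, a minimal sketch of the same <Location> block would be the following (the htpasswd file path is just an example; create it first with htpasswd -c /etc/apache2/.phpsysinfo_htpasswd youruser):

<Location /phpsysinfo>
 AuthType Basic
 AuthName "phpsysinfo"
 AuthUserFile /etc/apache2/.phpsysinfo_htpasswd
 Require valid-user
</Location>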

Then restart Apache server:

# /etc/init.d/apache2 restart

 

To access the statistics gathered by phpsysinfo, open it in a browser: http://defaultdomain.com/phpsysinfo/


2. Installing PHPSysinfo on CentOS, Fedora and RHEL Linux
 

Download and untar

# cd /var/www/html
# wget https://github.com/phpsysinfo/phpsysinfo/archive/v3.1.13.tar.gz
# tar -zxvf v3.1.13.tar.gz
# ln -sf phpsysinfo-3.1.13 phpsysinfo
# mv phpsysinfo/phpsysinfo.ini.new phpsysinfo/phpsysinfo.ini

 

Install the php, php-xml and php-mbstring RPM packages:
 

yum -y install php php-xml php-mbstring
...

Start Apache web service

[root@ephraim html]# /etc/init.d/httpd restart

[root@ephraim html]# ps ax |grep -i http
 8816 ?        Ss     0:00 /usr/sbin/httpd
 8819 ?        S      0:00 /usr/sbin/httpd


As PhpSysInfo is written in PHP it is also possible to install phpsysinfo on Windows.

phpsysinfo is not the only available tool for simple remote server performance monitoring; if you're looking for a bit more extended information and a better visualization interface as an alternative to phpsysinfo, take a look at linux-dash.

In the context of web monitoring, 2 other PHP script tools useful for remote server monitoring are:

OpenStatus – A simple and effective resource and status monitoring script for multiple servers.
LookingGlass – User-friendly PHP Looking Glass (Web interface to use Host (Nslookup), Ping, Mtr – Matt Traceroute)
