Posts Tagged ‘yum’

yum add proxy on CentOS, RHEL, Fedora Linux howto

Thursday, June 5th, 2014

yum-via-proxy-yum-package-management-mascot
Whether you have to run a CentOS server in a DMZ-ed network with paranoid firewall rules, or you simply want to serve RPM installs and CentOS system updates from your own local RPM repository through a monitored proxy, you will have to configure yum to use a proxy.

There is a standard way to do it by adding a proxy directive to /etc/yum.conf, as explained in the official CentOS documentation.
However, for some reason, adding the proxy variables:

proxy=http://your-proxy-url.com:8080
proxy_username=yum-user
proxy_password=qwerty

to the [main] section of /etc/yum.conf does not work on CentOS 6.5.
Luckily there is a dirty workaround using the standard OS environment variable http_proxy.
To make yum work via the proxy in a gnome-terminal, first run:

export http_proxy=http://your-proxy-server.com:8080

or, if the proxy is protected by username / password, run instead:

export username='yum-user'
export password='qwerty'
export http_proxy="http://$username:$password@your-proxy-server:8080/

Afterwards yum will work via the proxy, i.e.:

yum update && yum upgrade

To make http_proxy exported system-wide check my previous post – Set Proxy System-Wide
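A minimal sketch of doing that, assuming a standard /etc/profile.d setup (the proxy URL is an example value to adjust):

echo 'export http_proxy="http://your-proxy-server.com:8080"' > /etc/profile.d/proxy.sh
echo 'export https_proxy="http://your-proxy-server.com:8080"' >> /etc/profile.d/proxy.sh

New login shells will then pick up the proxy variables automatically.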

Hope this helps someone.

Monitoring Disk use, CPU Load, Memory use and Network in one console ncurses interface – Glance

Thursday, August 14th, 2014

monitoring-disk-use-memory-cpu-load-and-network-in-one-common-interfaces-with-glances-Linux-BSD-UNIX
If you're a Linux / UNIX / BSD system administrator you already have experience with the basic essentials of system monitoring:

  •     CPU load
  •     OS Name/Kernel version
  •     System load average and Uptime
  •     Disk and Network Input/Output I/O operations by interface
  •     Process statistics / Top loading processes etc.
  •     Memory / SWAP usage and free memory
  •     Mounted partitions


Such info is provided by command line tools such as:

top, df, free, sensors, ifconfig, iotop, hddtemp, mount, nfsstat, nfsiostat, dstat, uptime, nethogs, iptraf

etc.

There are also plenty of more advanced tools, including web-based server monitoring and visualization tools such as Monit, Icinga, PHPSysInfo and Cacti, which provide statistics on computer hardware and network utilization.

So far so good, but if you don't want to put extra load on the servers with web-based monitoring, and you're too lazy to write custom scripts that show the most important monitoring information – necessary for daily system administration, downtime prevention and bottleneck tracking – you will be glad to hear about Glances.
 

Glances is a free (LGPL) cross-platform curses-based monitoring tool which aims to present a maximum of information in a minimum of space, ideally fitting in a classical 80×24 terminal, or a larger one to show additional information. Glances can dynamically adapt the displayed information depending on the terminal size. It can also work in a client/server mode for remote monitoring.
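For example, the client/server mode mentioned above works roughly like this (a sketch; the server IP address is hypothetical):

glances -s
glances -c 192.168.0.10

The first command starts Glances as a server on the monitored host; the second connects to it from another machine and displays the remote statistics.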


1. Installing Glances curses-based monitoring tool on Debian 7 / Ubuntu 13+ / Mint  Linux

We first have to install python-pip (the Python package installer tool), which is later used to install Glances:

apt-get install --yes 'python-dev' 'python-jinja2' 'python-psutil' 'python-setuptools' 'hddtemp' 'python-pip' 'lm-sensors'


Before proceeding to install Glances, run the following to make thermal sensors work (if supported by the hardware):

 

 sensors-detect

Glances is written in Python and uses the psutil library to obtain monitoring statistics, thus it is necessary to install a few more Python libraries:

pip install 'batinfo' 'pysensors'

If you're about to use pip – the Python package installer tool – behind a proxy server, use instead:
 

pip install --proxy=http://your-proxy-host.com:8080 'batinfo' 'pysensors'

Then install the Glances script itself, again using pip:
 

pip install 'Glances'

Downloading/unpacking Glances
  Downloading Glances-2.0.1.tar.gz (3.3Mb): 3.3Mb downloaded
  Running setup.py egg_info for package Glances
    
Downloading/unpacking psutil>=2.0.0 (from Glances)
  Downloading psutil-2.1.1.tar.gz (216Kb): 216Kb downloaded
  Running setup.py egg_info for package psutil

Successfully installed Glances psutil

 

Then run glances from terminal
 

glances -t 3

The -t 3 option tells glances to refresh the collected statistics every 3 seconds.

glances-console-monitoring-tool-every-systemad-ministrator-should-know-and-use-show-memory-disk-cpu-mount-point-statistics-in-common-shared-screen-linux-freebsd-unix

 

2. Installing Glances monitoring console tool on CentOS / RHEL / Fedora / Scientific Linux

Installing glances on CentOS / Fedora and the rest of the RPM-based distributions is done by adding an external RPM repository, because glances is not available in the default yum repositories.

To enable the Extra Packages (EPEL) repository (example for CentOS 6):
 

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm


Then update yum to include the new repository's packages in the package list, and install the python-pip and python-devel rpms:
 

yum update
yum install python-pip python-devel
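From here on the procedure is the same as on Debian – install Glances itself via pip (assuming pip from the python-pip rpm is now in the PATH):

pip install 'Glances'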


Glances-console-server-stateScreenhot-on-CentOS-Linux-monitoring-in-ncurses-Linux-BSD

There is also a FreeBSD port, so Glances can be installed on FreeBSD from the ports tree:
 

cd /usr/ports/sysutils/py-glances
make install


Enjoy 🙂 !

 

 

Text monitoring of server network connection traffic (RX / TX) in ASCII graphs with speedometer / Easily monitor network traffic performance

Friday, May 4th, 2012

While reading some posts online related to the MS-Windows TcpView network traffic analyzing tool, I came across a very nice connection speed tracking tool for Linux – Speedometer. If I have to compare it, speedometer is somewhat similar to the nethogs and iftop bandwidth measuring utilities.

What differentiates speedometer from iftop / nethogs / iptraf is that it is more suitable for visualizing network file or data transfers.
The graphs speedometer draws are way easier to understand, than iftop graphs.

Even complete newbies can understand it, with no need for extraordinary networking knowledge. This makes Speedometer a top tool for visually seeing the amount of traffic flowing through a server network interface (eth0, eth1 etc.).

What speedometer shows is similar to the Midnight Commander's (mc) file transfer status bar, except that the statistics are not limited to a certain file transfer but can cover the overall network traffic passing through the server (though according to its manual it can also be used to track individual file transfers).

Its simplicity of basic use makes speedometer a nice tool for tracking network congestion issues on Linux, and therefore a must-have in every server admin's outfit. Below you see a screenshot of my terminal running speedometer on a remote server.

Speedometer ascii traffic track server network business screenshot in byobu screen like virtual terminal emulator

1. Installing speedometer on Debian / Ubuntu and Debian derivatives

For Debian and Ubuntu server administrators speedometer is already packaged as a deb so its installation is as simple as:

debian:~# apt-get --yes install speedometer
....

2. Installing speedometer from source for other Linux distributions CentOS, Fedora, SuSE etc.

Speedometer is written in the Python programming language, so in order to install and use it on other Linux platforms it is necessary to have an up-to-date Python interpreter installed (python ver. 2.6 or higher).
Besides that, the urwid console user interface library for Python is required; it is available for download via excess.org/urwid/

 

Hence to install speedometer on RedHat based Linux distributions one has to follow these steps:

a) Download & Install python urwid library

[root@centos ~]# cd /usr/local/src
[root@centos src]# wget -q http://excess.org/urwid/urwid-1.0.1.tar.gz
[root@centos src]# tar -zxvvf urwid-1.0.1.tar.gz
....
[root@centos src]# cd urwid-1.0.1
[root@centos urwid-1.0.1]# python setup.py install
running install
running build
running build_py
creating build
creating build/lib.linux-i686-2.4
creating build/lib.linux-i686-2.4/urwid
copying urwid/tests.py -> build/lib.linux-i686-2.4/urwid
copying urwid/command_map.py -> build/lib.linux-i686-2.4/urwid
copying urwid/graphics.py -> build/lib.linux-i686-2.4/urwid
copying urwid/vterm_test.py -> build/lib.linux-i686-2.4/urwid
copying urwid/curses_display.py -> build/lib.linux-i686-2.4/urwid
copying urwid/display_common.py -> build/lib.linux-i686-2.4/urwid
....

b) Download and install python-setuptools

python-setuptools is one other requirement of speedometer, happily on CentOS and Fedora the rpm package is already there and installable with yum:

[root@centos ~]# yum -y install python-setuptools
....

c) Download and install Speedometer

[root@centos urwid-1.0.1]# cd /usr/local/src/
[root@centos src]# wget -q http://excess.org/speedometer/speedometer-2.8.tar.gz
[root@centos src]# tar -zxvvf speedometer-2.8.tar.gz
.....
[root@centos src]# cd speedometer-2.8
[root@centos speedometer-2.8]# python setup.py install
Traceback (most recent call last):
File "setup.py", line 26, in ?
import speedometer
File "/usr/local/src/speedometer-2.8/speedometer.py", line 112
n = n * granularity + (granularity if r else 0)
^

While running the CentOS 5.6 installation of speedometer-2.8, I hit the "n = n * granularity + (granularity if r else 0)" error.

After consultation with some people in #python (irc.freenode.net), I figured out this error is caused by the outdated version of the python interpreter installed by default on CentOS Linux 5.6. On CentOS 5.6 the python version is:

[root@centos ~]# python -V
Python 2.4.3

As I said earlier, speedometer 2.8's minimum requirement is Python v. 2.6. Happily there is a quick way to update python 2.4 to python 2.6 on CentOS 5.6, as there is an RPM repository maintained by Chris Lea which contains an RPM binary of python 2.6.

To update python 2.4 to python 2.6:

[root@centos speedometer-2.8]# rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
[root@centos speedometer-2.8]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
[root@centos speedometer-2.8]# yum install python26

Now the newly installed python 2.6 is executable under the binary name python26, hence to install speedometer:

[root@centos speedometer-2.8]# python26 setup.py install
[root@centos speedometer-2.8]# chown root:root /usr/local/bin/speedometer
[root@centos speedometer-2.8]# chmod +x /usr/local/bin/speedometer

[root@centos speedometer-2.8]# python26 speedometer -i 1 -tx eth0

The -i will instruct speedometer to refresh the screen graphs once a second.

3. Using speedometer to keep an eye on send / received traffic network congestion

To observe the amount of sent traffic only on network interface eth0 with speedometer, use:

debian:~# speedometer -tx eth0

To only keep an eye on received traffic through eth0 use:

debian:~# speedometer -rx eth0

To watch over both TX and RX (Transmitted and Received) network traffic:

debian:~# speedometer -tx eth0 -rx eth0

If you want to watch TX and RX traffic in separate windows while running speedometer, run speedometer -tx eth0 and speedometer -rx eth0 in two separate xterm windows, like in the screenshot below:

Monitor Received and Transmitted server Network traffic in two separate xterm windows with speedometer ascii graphs

4. Using speedometer to test network maximum possible transfer speed between server (host A) and server (host B)

The speedometer manual suggests a few examples, one of which is:

How fast is this LAN?

host-a$ cat /dev/zero | nc -l -p 12345
host-b$ nc host-a 12345 > /dev/null
host-b$ speedometer -rx eth0

When I read this example in speedometer's manual it wasn't completely clear to me what the author really meant, but after thinking the example over a bit I got his point.

The idea behind this example is that a constant stream of zeros taken from /dev/zero is passed via a pipe (|) to nc, which binds port number 12345. Anyone connecting from another host machine – let's say host-b – to port 12345 on machine host-a will start receiving the streamed /dev/zero content.

Then, to finally measure the streamed traffic between the host-a and host-b machines, speedometer is started to visualize the received traffic on network interface eth0, thus measuring the amount of traffic flowing from host-a to host-b.

I gave the examples a try, using as my two test nodes my home desktop PC (running an arcane version of Ubuntu) and my Debian Linux notebook.

First, on the Ubuntu PC I issued:
 

hipo@hip0-desktop:~$ cat /dev/zero | nc -l -p 12345
 

Note that I had previously installed netcat, as nc is not installed by default on Ubuntu and Debian. If you don't have nc installed yet, install it with:

apt-get --yes install netcat

"cat /dev/zero | nc -l -p 12345" will not produce any output, but will display just a blank line.

Then on my notebook I ran the second command example, given in the speedometer manual:
 

hipo@noah:~$ nc 192.168.0.2 12345 > /dev/null

Here 192.168.0.2 is actually the local network IP address of my desktop PC. My desktop PC is connected via a normal 100Mbit switch to my routing machine and receives its internet via NAT. The second test machine (my laptop) gets its internet through a Wi-Fi connection to a wireless router, which is connected via a UTP cable to the same switch as my desktop PC.

Finally, to measure my maximum network throughput I used:

hipo@noah:~$ speedometer -rx wlan0

Here I monitor my wlan0 interface, as this is the laptop's wireless card interface over which I have connectivity to my local network and, through the Wi-Fi router, to the internet.

Below is a captured snapshot showing approximately the maximum network throughput from:

Desktop PC -> to my Thinkpad R61 laptop

Using Speedometer to test network thorougput between two network server hosts screenshot Debian Squeeze Linux

As you can see in the shot, the maximum network throughput is roughly between 2.55MB/s and 2.59MB/s. The speed is quite low for a 100Mbit local network, but this is normal, as most laptop wireless adapters hardly transfer more than 10 to 20 Mbits per second.

If the same network throughput test is conducted between two machines both connected to the same 100Mbit switch, the traffic should be at least 8 MB/sec.

There is something else to take into consideration that probably makes this network throughput measurement a bit inaccurate: piping ( | ) the /dev/zero content into nc slows down the stream of zeros sent over the network.
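One hedged way to reduce the pipe overhead – assuming a traditional netcat that accepts the -l -p syntax used above – is to feed nc via an input redirection instead of a pipe:

host-a$ nc -l -p 12345 < /dev/zero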

5. Using speedometer to visualize maximum writing speed to a local hard drive on Linux

In the speedometer manual, I've noticed another interesting application of this nifty tool.

speedometer can be used to track and visualize the maximum writing speed a hard disk drive or hard drive partition can support on Linux OS:

A copy/paste from the manual text is as follows:

How fast can I write data to my filesystem? (with at least 1GB free)
dd bs=1000000 count=1000 if=/dev/zero of=bigfile &
speedometer bigfile

However, when I tried copy/pasting the example in a terminal to test the maximum writing speed to an external USB hard drive, only the dd command was started, and speedometer failed to initialize and display graphs of the file creation speed.

I found a little "hack" that makes the man page example work, by adding a 3 second sleep like so:

debian:/media/Expansion Drive# dd bs=1000000 count=1000 if=/dev/zero of=bigfile & sleep 3; speedometer bigfile

Here is a screenshot of the bigfile created by dd and tracked "in real time" by speedometer:

How fast is writting data to local USB expandable hard disk Debian Linux speedometer screenshot

Actually the returned results from this external USB drive are quite high; the possible reason is that it is connected to my laptop over USB protocol version 3.

6. Using Speedometer to keep an eye on file download in progress

This application of speedometer is mostly useless on a desktop Linux machine.

However, on some occasions, when files are transferred over ssh or in non-interactive FTP / Samba file transfers between Linux servers, it can come in handy.

To visualize the download and writing speed of, let's say, an FTP-transferred .avi movie (during the actual file transfer), issue on the downloading host:

# speedometer Download-Folder/What-goes-around-comes-around.avi

7. Estimating approximate time for file transfer

There is another section in the speedometer manual showing how the program can be used to estimate the time remaining for a file transfer.

The (man speedometer) provided example text is:

How long it will take for my 38MB transfer to finish?
speedometer favorite_episode.rm $((38*1024*1024))

At first glimpse it's hard to understand (like the other manual example). A bit of reasoning and I comprehended what the manual's author meant by the calculation:

$((38*1024*1024))

In this formula, 38 has to be substituted with the size in megabytes of the transferred file; multiplying by 1024*1024 converts it to bytes, the unit speedometer expects. The manual's author used a 38MB file, which is why he put $((38* … in the formula.
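For instance, for the 2500MB file from my test below, the invocation would look like this (the file name is made up):

speedometer copied-file.avi $((2500*1024*1024))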

I gave it a try (just for the sake of seeing how it works) with a file of 2500MB in size. In the two screenshots below I show my preparation to copy the file and the actual copying / "real time" transfer tracking with speedometer's percentage completion bar.

xterm terminal copy file and estimate file copying operation speed on linux with speedometer preparation

Two xterm terminals one is copying a file the other one uses speedometer to estimate the time remaining to complete the file transfer from expansion USB hard drive to my laptop harddrive

 

How to count how many files are in a directory with find on Linux

Tuesday, February 21st, 2012

how to count how many directories are on your linux server

Did you ever need to count how many files there are in a directory?
Getting the concrete number of files in a directory is a common task, and still very useful, especially in scripts or simply for the sake of learning.

The quickest and maybe the easiest way to count all files in a directory in Linux is with a combination of find and wc commands:

Here is how:

linux:~# cd ascii
linux:~/ascii# find . -type f -iname '*' -print |wc -l
407

This will find all matched files in the directory and its subdirectories, print them out and count them with the wc command.
The -type f argument instructs find to look only for files.
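Such a count is also easy to reuse in a script; here is a hedged sketch (the path and threshold are made-up values for illustration):

#!/bin/sh
# warn when a directory tree holds more than MAX files
DIR=/home/user/ascii
MAX=500
count=$(find "$DIR" -type f | wc -l)
if [ "$count" -gt "$MAX" ]; then
    echo "Warning: $DIR contains $count files (limit $MAX)"
fi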

Another helpful variation is to list and count all files with a certain file extension under a directory. For example, let's count all text files (.txt) contained in a directory and all levels of its sub-directories:

linux:~/ascii# find . -type f -iname '*.txt' -print |wc -l
401

If you need to check the number of files for multiple directories on a server and you're aiming at doing it efficiently, issuing the above find … | wc code is definitely not a good choice. It generates heavy load on the system and takes ages to complete when issued on directories containing a large number of files.

Thankfully, if efficiency is targeted, there is a command written in C called tree, which is more efficient than find.
To count the number of files in a dir using tree:

linux:~# cd ascii
linux:/ascii# tree | tail -n 1
32 directories, 407 files

By default tree prints info for both the number of found files and directories.
To print out only the number of files matched, awk comes in handy, e.g.:

linux:/ascii# tree | tail -n 1 | awk '{ print $3 }'
407

To count only the files in a directory, without descending into its sub-directories, ls + wc can also be used:

linux:~/ascii# ls -l | grep ^- | wc -l
68

The result the above command produces can be +1 more than the real number of files, as it may count the directory entry ".." as one file (in UNIX / Linux everything is a file).

A short one-liner that corrects for this by subtracting 1, and hence presents the correct number of files, is like so:

linux:~/ascii# var=$(ls -l | grep ^- | wc -l); var=$(($var - 1)); echo $var

ls can also be used to count the number of 1st-level sub-directories under a certain directory, for instance:

linux:~/ascii# ls -l |grep ^d|wc -l
25

You see the ascii directory has 25 subdirectories in its 1st level.

To check symlinks under a directory with ls the command would be:

linux:~/ascii# ls -l | grep ^l | wc -l
0

Note that the above 3 ls | grep … examples will not work properly if the directory contains files with SUID or some other special permission bits set.
Hence, to get the same 3 results for files, directories and symbolic links, a one-liner similar to the one below can be used instead:

linux:~/ascii# for t in files links directories; do echo `find . -type ${t:0:1} | wc -l` $t; done 2> /dev/null
407 files
0 links
33 directories

This will show statistics about all files, links and directories for all directory sub-levels.
Just in case if there is need to only count files, links and directories without directory recursion enabled, use:

linux:~/ascii# for t in files links directories; do echo `find . -maxdepth 1 -type ${t:0:1} | wc -l` $t; done 2> /dev/null
68 files
0 links
26 directories

Anyway, the above bash loop will be slow for directories containing thousands of files. For better performance, the equivalent of the above bash loop rewritten in perl would be:

linux:~/ascii# ls -l |perl -e 'while(<>){$h{substr($_,0,1)}+=1;} END {foreach(keys %h) {print "$_ $h{$_}\n";}}'
- 68
d 25
t 1
linux:~/ascii#
In any case the most preferable and efficient way to count files and directories is by using the tree command.
In my view, always using the tree command instead of code "hacks" is a smart idea.

In Slackware the tree command is part of the base install; on Debian and CentOS Linux tree is not part of the base system and requires installation via apt / yum, e.g.:

debian:~# apt-get --yes install tree
...

[root@centos:~ ]# yum -y install tree

Happy counting 😉

Tracking I/O hard disk server bottlenecks with iostat on GNU / Linux and FreeBSD

Tuesday, March 27th, 2012

Hard disk overhead tracking on Linux and FreeBSD with iostat

I earlier wrote an article, How to find which processes are causing hard disk i/o overhead on Linux, where I roughly explained a few tools which can be used to benchmark hard disk read / write operations. My prior article's accent was on iotop and dstat, and it only briefly mentioned iostat. Therefore I wrote this short article in an attempt to explain more thoroughly how iostat can be used to track problems with excessive server I/O reads/writes.

Here is the command's man page description:

iostat – Report Central Processing Unit (CPU) statistics and input/output statistics for devices, partitions and network filesystems

I will proceed with a few words on how iostat can be installed on various Linux distros, then point at a few of the most common usage scenarios, with a short explanation of the meaning of the command outputs.

1. Installing iostat on Linux

iostat is a swiss army knife for finding server hard disk bottlenecks. Though it is a must-have tool in the admin outfit, most Linux distributions do not have iostat installed by default.
To have it on your server, you will need to install sysstat package:

a) On Debian / Ubuntu and other Debian GNU / Linux derivatives to install sysstat:

debian:~# apt-get --yes install sysstat

b) On Fedora, CentOS, RHEL etc. install is with yum:

[root@centos ~]# yum -y install sysstat

c) On Slackware Linux the sysstat package, which contains iostat, is installed by default.

d) On FreeBSD there is no need to install any external package, as iostat is part of the BSD base system.
I should mention that the BSD and Linux iostat commands are not the same, and hence their use to track down hard disk bottlenecks differs a bit; however, the general logic of use is very similar, as with most tools on BSD and Linux.

2. Checking a server hard disk for i/o disk bottlenecks on G* / Linux

Once sysstat is installed on a GNU / Linux system, the iostat command appears as /usr/bin/iostat.
a) To check the hard disk reads and writes per second (in megabytes), use:

debian:~# /usr/bin/iostat -m
Linux 2.6.32-5-amd64 (debian) 03/27/2012 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
15.34 0.36 2.76 2.66 0.00 78.88
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 63.89 0.48 8.20 6730223 115541235
sdb 64.12 0.44 8.23 6244683 116039483
md0 2118.70 0.22 8.19 3041643 115528074

In the above output, the server where I issue the command is using sda and sdb configured in a software RAID 1 array, visible in the output as md0.

The output of iostat should already be easy to read; for anyone who hasn't used the tool before, here is a short explanation of the columns:

%user – the 15.34 value means that 15.34% of CPU utilization occurred at the user level, i.e. was generated by running programs
%nice – shows the percentage of CPU utilization that occurred while executing at the user level with nice priority
%iowait – shows the percentage of time the CPU was idle while the system had an outstanding disk I/O request
%steal – shows the percentage of time spent in involuntary wait by the virtual CPU(s) while the hypervisor was servicing another virtual processor (high numbers on a virtualized system are a sure sign of resource contention)
%idle – the percentage of time the CPU was idle and the system did not have an outstanding disk I/O request
tps – HDD transactions per second
MB_read/s – shows the disk reads in MBytes per second at the time of issuing iostat
MB_wrtn/s – displays the disk writes in MBytes per second at the time of the iostat invocation
MB_read – shows the hard disk reads in megabytes since the server boot till the moment of invocation of iostat
MB_wrtn – gives the number of megabytes written to the HDD since the last server boot / filesystem mount

The reason why the Read / Write values for sda and sdb are similar in this example output is because my disks are configured in software RAID1 (mirror)

The above iostat output reveals that in my specific case the server is experiencing mostly disk writes (observable in the high MB_wrtn/s value of 8.19 for md0 in the sample output).

It also reveals that the load on that server is mostly generated at the user level – see %user 15.34 and the md0 tps of 2118.70.

For all those not familiar with user level load, this is any kind of load generated by programs running on the server, i.e. any load not generated by the Linux kernel or loaded kernel modules.

b) To periodically keep an eye on HDD i/o operations with iostat, there are two ways:

– Use watch in conjunction with iostat:

[root@centos ~]# watch "/usr/bin/iostat -m"
Every 2.0s: iostat -m Tue Mar 27 11:00:30 2012
Linux 2.6.32-5-amd64 (centos) 03/27/2012 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
15.34 0.36 2.76 2.66 0.00 78.88
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 63.89 0.48 8.20 6730255 115574152
sdb 64.12 0.44 8.23 6244718 116072400
md0 2118.94 0.22 8.20 3041710 115560990
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 55.00 0.01 25.75 0 51
sdb 52.50 0.00 24.75 0 49
md0 34661.00 0.01 135.38 0 270

– Pass a refresh interval to iostat itself, e.g. iostat -m -d 2.

Even though the watch approach and the -d 2 interval might appear identical, they're not. watch refreshes the screen, executing something similar to the clear command every 2 seconds, so the output behaves like the top command's refresh; passing -d 2 instead makes iostat print a new report every 2 seconds in a row, so all the collected data stays visible on the screen. Hence -d 2 is better in cases where a more thorough debug is necessary; however, for a quick routine view watch + iostat is great too.
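For instance, to print device statistics in megabytes every 2 seconds, 10 reports in total (a small sketch of the second approach):

debian:~# iostat -m -d 2 10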

c) Outputting extra information for HDD input/output operations;

root@debian:~# iostat -x
Linux 2.6.32-5-amd64 (debian) 03/27/2012 _x86_64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
15.34 0.36 2.76 2.66 0.00 78.88
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 4.22 2047.33 12.01 51.88 977.44 16785.96 278.03 0.28 4.35 3.87 24.72
sdb 3.80 2047.61 11.97 52.15 906.93 16858.32 277.05 0.03 5.25 3.87 24.84
md0 0.00 0.00 20.72 2098.28 441.75 16784.05 8.13 0.00 0.00 0.00 0.00

This command will output extended useful hard disk info like:
r/s – number of read requests issued per second
w/s – number of write requests issued per second
rsec/s – number of sector reads per second
wsec/s – number of sectors written per second
etc.

Most people will never need to use this, but it is good to know it exists.

3. Tracking read / write (i/o) hard disk bottlenecks on FreeBSD

BSD's iostat is a bit different in terms of output and arguments.

a) Here is most basic use:

freebsd# /usr/sbin/iostat
tty ad0 cpu
tin tout KB/t tps MB/s us ni sy in id
1 561 45.18 44 1.95 14 0 5 0 82

b) Periodic watch of hdd i/o operations;

freebsd# iostat -c 10
tty ad0 cpu
tin tout KB/t tps MB/s us ni sy in id
1 562 45.19 44 1.95 14 0 5 0 82
0 307 51.96 113 5.73 44 0 24 0 32
0 234 58.12 98 5.56 16 0 7 0 77
0 43 0.00 0 0.00 1 0 0 0 99
0 485 0.00 0 0.00 2 0 0 0 98
0 43 0.00 0 0.00 0 0 1 0 99
0 43 0.00 0 0.00 0 0 0 0 100
...

As you see, the output includes columns like tty, tin and tout, which are a bit hard to comprehend.
Thankfully the tool has an option to print only the more essential I/O information:

freebsd# iostat -d -c 10
ad0
KB/t tps MB/s
45.19 44 1.95
58.12 97 5.52
54.81 108 5.78
0.00 0 0.00
0.00 0 0.00
0.00 0 0.00
20.48 25 0.50

The output info is quite self-explanatory.

Displaying a given number of iostat device reports can also be achieved by omitting the -c option and passing the interval and count directly:

freebsd# iostat -d 1 10
...

Tracking a specific hard disk partition with iostat is done with:

freebsd# iostat -n /dev/ad0s1a
tty cpu
tin tout us ni sy in id
1 577 14 0 5 0 81
c) Getting Hard disk read/write information with gstat

gstat is a FreeBSD tool to print statistics for GEOM disks. Its default behaviour is to refresh the screen in a similar fashion to the top command, so it's great for people who would like to periodically check all attached system hard disks and storage devices:

freebsd# gstat
dT: 1.002s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 10 0 0 0.0 10 260 2.6 15.6| ad0
0 10 0 0 0.0 10 260 2.6 11.4| ad0s1
0 10 0 0 0.0 10 260 2.8 12.5| ad0s1a
0 0 0 0 0.0 0 0 0.0 20.0| ad0s1b
0 0 0 0 0.0 0 0 0.0 0.0| ad0s1c
0 0 0 0 0.0 0 0 0.0 0.0| ad0s1d
0 0 0 0 0.0 0 0 0.0 0.0| ad0s1e
0 0 0 0 0.0 0 0 0.0 0.0| acd0

It even has colors if your tty supports colors 🙂

Another useful tool for debugging the culprit of excessive HDD I/O operations is the procstat command.

Here is a sample procstat run to track one of my processes (httpd) imposing HDD I/O load:

freebsd# procstat -f 50404
PID COMM FD T V FLAGS REF OFFSET PRO NAME
50404 httpd cwd v d -------- - - - /
50404 httpd root v d -------- - - - /
50404 httpd 0 v c r------- 56 0 - -
50404 httpd 1 v c -w------ 56 0 - -
50404 httpd 2 v r -wa----- 56 75581 - /var/log/httpd-error.log
50404 httpd 3 s - rw------ 105 0 TCP ::.80 ::.0
50404 httpd 4 p - rw---n-- 56 0 - -
50404 httpd 5 p - rw------ 56 0 - -
50404 httpd 6 v r -wa----- 56 25161132 - /var/log/httpd-access.log
50404 httpd 7 v r rw------ 56 0 - /tmp/apr8QUOUW
50404 httpd 8 v r -w------ 56 0 - /var/run/accept.lock.49588
50404 httpd 9 v r -w------ 1 0 - /var/run/accept.lock.49588
50404 httpd 10 v r -w------ 1 0 - /tmp/apr8QUOUW
50404 httpd 11 ? - -------- 2 0 - -

By the way, fstat is also sometimes helpful in identifying the open files on the system and estimating which ones are generating the HDD load.
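A hedged example, reusing the httpd PID from the procstat run above – fstat -p lists the files currently opened by a given process:

freebsd# fstat -p 50404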
Hope this info helps someone. If you know better ways to track excessive HDD loads on Linux / BSD, please share them.
 

How to install and configure NTP Server (ntpd) to synchronize Linux server clock over the Internet on CentOS, RHEL, Fedora

Thursday, February 9th, 2012

Every now and then I have to work on servers running CentOS or Fedora Linux. A very typical problem I observe on many servers I inherit is that the previous administrator did not know about the existence of NTP (Network Time Protocol) or forgot to install the ntpd server. As a consequence the installed server services did not have a correct clock, and in some specific cases this caused issues for web applications or CMSes running on the server.

The NTP daemon has existed in GNU / Linux since the early days of Linux and has served quite well so far. The NTP protocol has been in use since the early days of the internet and has long been a standard protocol on BSD UNIX.

ntp is available in, I believe, all Linux distributions as a precompiled binary and can be installed on Fedora / CentOS with:

[root@centos ~]# yum install ntp

ntpd synchronizes the server clock with one of the NTP servers defined in /etc/ntp.conf, by default the Red Hat pool list:

server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org

To manually synchronize the server system clock, the ntp CentOS rpm package contains a tool called ntpdate.
It's good practice to use ntpdate to synchronize the local server time with an internet time server; the way I prefer to do this is via the government-owned NTP server time.nist.gov, e.g.:

[root@centos ~]# ntpdate time.nist.gov
8 Feb 14:21:03 ntpdate[9855]: adjust time server 192.43.244.18 offset -0.003770 sec

Alternatively, if you prefer to use one of the Red Hat pool servers, use:

[root@centos ~]# ntpdate 0.rhel.pool.ntp.org
8 Feb 14:20:41 ntpdate[9841]: adjust time server 72.26.198.240 offset 0.005671 sec

Now that the system time is set correctly via the ntp server, the ntp daemon can be launched:

[root@centos ~]# /etc/init.d/ntpd start
...

To permanently enable the ntpd service to start at boot time, also issue:

[root@centos ~]# chkconfig ntpd on

Using the chkconfig and /etc/init.d/ntpd commands makes the ntp server run permanently as the ntpd daemon:

[root@centos ~]# ps ax |grep -i ntp
29861 ? SLs 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g

If you prefer to periodically synchronize the system clock instead of permanently running a listening network server (for increased security), omit the above chkconfig ntpd on and /etc/init.d/ntpd start commands and instead set up a cron job to synchronize, let's say, every 30 minutes, like so:

[root@centos ~]# echo '30 * * * * root /sbin/ntpd -q -u ntp:ntp' > /etc/cron.d/ntpd

The time synchronization via crontab can also be done using the ntpdate command. For example, to synchronize the server system clock with a network time server every 5 minutes:

[root@centos ~]# crontab -u root -e

And paste inside:

*/5 * * * * /sbin/ntpdate time.nist.gov > /dev/null 2>&1

The ntp package also ships with ntpq, the standard NTP query program. To get very basic stats for the running ntpd daemon, use:

[root@centos ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
======================================================
B1-66ER.matrix. 192.43.244.18 2 u 47 64 17 149.280 41.455 11.297
*ponderosa.piney 209.51.161.238 2 u 27 64 37 126.933 32.149 8.382
www2.bitvector. 132.163.4.103 2 u 1 64 37 202.433 12.994 13.999
LOCAL(0) .LOCL. 10 l 24 64 37 0.000 0.000 0.001

The remote field shows the servers to which the ntpd service is currently connected; these are the servers ntp uses to synchronize the local system clock. The when field shows when the system was last synchronized by the remote time server, and the rest is statistical info about connection quality etc.

If the ntp server is to be run in daemon mode (ntpd running in the background), it's a good idea to allow ntp connections only from the local network and filter incoming connections to port 123 in /etc/sysconfig/iptables:

-A INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 123 -j ACCEPT
-A INPUT -s 127.0.0.1 -m state --state NEW -p udp --dport 123 -j ACCEPT
-A INPUT -s 0.0.0.0 -m state --state NEW -p udp --dport 123 -j DROP

Restrictions on which IPs can connect to the ntp server can also be applied on the ntpd level through /etc/ntp.conf. For example, if you would like to allow the local network range 192.168.0.0/24 to access ntpd, the following policy should be added in ntp.conf:

# Hosts on local network are less restricted.
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap

To deny all access to any machine to the ntpd server add in /etc/ntp.conf:

restrict default ignore

After making any changes to ntp.conf, a service restart is required to load the new config settings, e.g.:

[root@centos ~]# /sbin/service ntpd restart

In most cases I think it is better to apply restrictions on the iptables (firewall) level instead of bothering to change the default ntp.conf.

Once ntpd is running as a daemon, the server listens for connections on UDP port 123; to see it, use:

[root@centos ~]# netstat -tulpn|grep -i ntp
udp 0 0 10.10.10.123:123 0.0.0.0:* 29861/ntpd
udp 0 0 80.95.28.179:123 0.0.0.0:* 29861/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 29861/ntpd
udp 0 0 0.0.0.0:123 0.0.0.0:* 29861/ntpd

 

Scanning shared hosting servers to catch abusers, unwanted files, phishers, spammers and script kiddies with clamav

Friday, August 12th, 2011

Clamav scanning shared hosting servers to catch abusers, phishers, spammers, script kiddies etc. logo

I'm responsible for some GNU/Linux shared hosting servers, which therefore contain plenty of user accounts.
Every now and then our company servers get suspended because of phishing websites, spammer script kiddies and every kind of abuser one can think of.

To mitigate the impact of the unwanted users' activities on the servers, I decided to use the ClamAV antivirus – an open source virus scanner – to look for potentially dangerous files: stored viruses, spammer mailer scripts, kernel exploits etc.

The hosting servers are running the latest CentOS 5.5 Linux, and fortunately CentOS is equipped with a pre-packaged RPM of the latest ClamAV release, which at the time of writing is ver. 0.97.2.

Installing ClamAV on CentOS is a piece of cake and comes down to issuing:

[root@centos:/root]# yum -y install clamav
...

After the install completed, I used freshclam to update the clamav virus definitions:

[root@centos:/root]# freshclam
ClamAV update process started at Fri Aug 12 13:19:32 2011
main.cvd is up to date (version: 53, sigs: 846214, f-level: 53, builder: sven)
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 81.91.100.173)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 163.1.3.8)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 193.1.193.64)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: Incremental update failed, trying to download daily.cvd
Downloading daily.cvd [100%]
daily.cvd updated (version: 13431, sigs: 173670, f-level: 60, builder: arnaud)
Downloading bytecode.cvd [100%]
bytecode.cvd updated (version: 144, sigs: 41, f-level: 60, builder: edwin)
Database updated (1019925 signatures) from db.gb.clamav.net (IP: 217.135.32.99)

In my case the shared hosting websites and FTP user files are stored in the /home directory, thus I further used clamscan in the following way to scan, report and log to a file the results for our company's hosted user content.

[root@centos:/root]# screen clamscan -r -i --heuristic-scan-precedence=yes --phishing-scan-urls=yes --phishing-cloak=yes --phishing-ssl=yes --scan-archive=no /home/ -l /var/log/clamscan.log
/home/user1/mail/new/1313103706.H805502P12513.hosting,S=14295: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1313111001.H714629P29084.hosting,S=14260: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1305115464.H192447P14802.hosting,S=22663: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1311076363.H967421P17372.hosting,S=13114: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/domain.com/infos/cur/859.hosting,S=8283:2,S: Heuristics.Phishing.Email.SSL-Spoof FOUND
/home/user1/mail/domain.com/infos/cur/131.hosting,S=6935:2,S: Heuristics.Phishing.Email.SSL-Spoof FOUND

I prefer running clamscan in a screen session because it's handier: if, for example, my ssh connection dies, the screen session will preserve the running clamscan command and I can re-attach later on to see how the scan went.

clamscan is of course slower, as it does not use the ClamAV antivirus daemon clamd; however, I prefer running it without the daemon, as a permanently running clamd on the servers sometimes creates problems or hangs, and it's not really worth having it running since I intend to do a clamscan no more than once per month to catch potential users who might need to be suspended.

Also, after it finishes, all detected problems are logged to /var/log/clamscan.log, so I can read the report file any time.

A good idea might also be to run the above clamscan once per month via a cron job, though I'm still in doubt whether it's better to run it manually once per month to search for malicious user content, or to run it on a cron schedule.

One possible pitfall with automating the above clamscan /home virus check-up is the increased load it puts on the system. In some cases the web server and SQL server might be under heavy load at exactly the same time the clamscan cron job is running; this might create severe issues for users' websites if it's not monitored.
Thus I would probably go with running the above clamscan manually each month and monitoring the server performance.
However, for people who have "iron" system hardware, where a clamscan file scan is less likely to cause any issues, a cron job would probably be fine. Here is a sample cron job to run clamscan:

10 05 01 * * clamscan -r -i --heuristic-scan-precedence=yes --phishing-scan-urls=yes --phishing-cloak=yes --phishing-ssl=yes --scan-archive=no /home/ -l /var/log/clamscan.log >/dev/null 2>&1

I'm interested to hear if somebody is already running clamscan on cron without issues; once I'm sure that running it on cron would not lead to server downtime, I'll implement it via a cron job.

Anyone having experience with running clamscan directory scan through crond? 🙂

How to auto restart CentOS Linux server with software watchdog (softdog) to reduce server downtime

Wednesday, August 10th, 2011

How to auto restart centos with software watchdog daemon to mitigate server downtimes, watchdog linux artistic logo

I'm in charge of a dozen Linux servers these days and therefore often have to restart servers via a support ticket (because many of the data centers where the servers are co-located do not have a web interface or an IP-KVM connected to the server for that purpose). Therefore server restart requests in case of a crash sometimes get processed in a few hours, or in the best case in at least half an hour.

I'm aware of the existence of hardware watchdog devices, which are capable of detecting if a server has hanged and auto-restarting it; however, the servers I administer do not have hardware support for a watchdog timer.

Thankfully there is a free software project called watchdog, which is easily configured and mitigates the terrible downtimes caused every now and then by a server crash and the respective delays by tech support in the data centers.

I recently blogged on the topic of Debian Linux auto-restart in case of kernel panic; now, however, I had to configure watchdog on a dozen or so CentOS Linux servers.

It turned out installation & configuration of watchdog on CentOS is a piece of cake and comes down to following a few easy steps, which I'll explain quickly in this post:

1. Install watchdog on CentOS with yum

[root@centos:/etc/init.d ]# yum install watchdog
...

2. Add to the configuration a log file for watchdog activities and the location of the watchdog device

The quickest way to add these two is to use echo to append them to /etc/watchdog.conf:

[root@centos:/etc/init.d ]# echo 'file = /var/log/messages' >> /etc/watchdog.conf
echo 'watchdog-device = /dev/watchdog' >> /etc/watchdog.conf

3. Load the softdog kernel module to initialize the software watchdog via /dev/watchdog

[root@centos:/etc/init.d ]# /sbin/modprobe softdog

Initialization of softdog should be indicated by a line in the dmesg kernel log like the one below:

[root@centos:/etc/init.d ]# dmesg |grep -i watchdog
Software Watchdog Timer: 0.07 initialized. soft_noboot=0 soft_margin=60 sec (nowayout= 0)

4. Include the softdog kernel module to load on CentOS boot up

This is necessary because otherwise after a reboot softdog would not be auto-initialized, and without it being initialized the watchdog daemon service cannot function, as it automatically reboots the server if /dev/watchdog disappears.

It's better that the softdog module is not loaded via /etc/rc.local, but with the default CentOS methodology of loading modules from /etc/rc.modules:

[root@centos:/etc/init.d ]# echo modprobe softdog >> /etc/rc.modules
[root@centos:/etc/init.d ]# chmod +x /etc/rc.modules

5. Start the watchdog daemon service

The successful initialization of softdog in step 3 should have provided the system with /dev/watchdog. Before proceeding to start the watchdog daemon it's wise to first check that /dev/watchdog exists on the system. Here is how:

[root@centos:/etc/init.d ]# ls -al /dev/watchdog
crw------- 1 root root 10, 130 Aug 10 14:03 /dev/watchdog

Being sure that /dev/watchdog is there, I start the watchdog service:

[root@centos:/etc/init.d ]# service watchdog restart
...

A very important note to make here is that you should never, ever configure the watchdog service to start at boot time with chkconfig. In other words, the chkconfig status for watchdog should be off on all runlevels, like so:

[root@centos:/etc/init.d ]# chkconfig --list |grep -i watchdog
watchdog 0:off 1:off 2:off 3:off 4:off 5:off 6:off

Enabling watchdog via chkconfig will cause it to automatically restart the system, as the watchdog daemon will probably be started before the softdog module is initialized. Since watchdog will be unable to read /dev/watchdog, it will think the system has hanged even though the system might still be booting. Therefore it will end up in an endless loop of reboots, which can only be fixed in Linux single user mode!!! Once again, BEWARE: never ever activate watchdog via chkconfig!

As a next step, to be absolutely sure the watchdog daemon is running, check it with the normal ps command:

[root@centos:/etc/init.d ]# ps axu|grep -i watchdog|grep -v grep
root 18692 0.0 0.0 1816 1812 ? SNLs 14:03 0:00 /usr/sbin/watchdog
root 25225 0.0 0.0 0 0 ? ZN 17:25 0:00 [watchdog] <defunct>

You have probably noticed the defunct state of watchdog; consider that absolutely normal. The above output indicates that watchdog is now properly running on the host and waiting to auto-reboot in case /dev/watchdog suddenly disappears.

As a last step, after being sure it's initialized properly, it's necessary to make watchdog start at boot time via the /etc/rc.local post-init script, like so:

[root@centos:/etc/init.d ]# echo '/sbin/service watchdog start' >> /etc/rc.local

Now enjoy, watchdog is up and running and will automatically restart the CentOS host 😉

How to install php mbstring support (add php mbstring module) on CentOS 5.5

Tuesday, August 2nd, 2011

I needed to install support for mbstring, as it was required by a client hosted on one of the servers running on CentOS 5.5.

Installation is quite straightforward, as a php-mbstring rpm package is available; here is how to install mbstring:

[root@centos ~]# yum install php-mbstring
...

Further on, a restart of Apache or LiteSpeed and the mbstring support is loaded in php.
On some OpenVZ CentOS virtual servers, enabling php-mbstring might also require a complete php recompile, if php was not built with the --enable-mbstring option.
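To quickly verify mbstring is present after the restart (this checks the CLI php binary; a phpinfo() page can confirm the same for the web server):

server: ~# php -m | grep -i mbstring
mbstring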

Thus, if mbstring has to be enabled on an OpenVZ server with a precompiled php, this can easily be done with cPanel's easyapache, like so:

server: ~# cd /home/cpeasyapache/src/php-5.2.9
server: php-5.2.9# cat config.nice |head -n $(($(cat config.nice |wc -l) - 1)) >> config.nice.new;
server: php-5.2.9# echo "'--enable-mbstring' \" >> config.nice.new; echo '"$@"' >> config.nice.new
server: php-5.2.9# mv config.nice config.nice.orig; mv config.nice.new config.nice

After that follow the normal procedure with make, make install and make install modules, e.g.:

server: php-5.2.9# make && make install && make install modules

Now php mbstring is enabled, enjoy 😉

Installing HTOP on CentOS 5.5 OpenVZ Linux server from source

Friday, July 22nd, 2011

Htop Cool picture logo / htop on CentOS OpenVZ

Lately I'm basically using htop – a nice, colourful, advanced frontend to the Linux top command – on almost every server I manage; I've therefore almost abandoned plain top these days, and for that reason I wanted to have htop installed on a few of the OpenVZ CentOS 5.5 Linux servers at work.

I looked online, but unfortunately I couldn't find any pre-built binary rpm packages. The source rpm package I tried to build from the Dag Wieers repository failed as well, so I finally decided to install htop from source.

Here is how I did it:

1. Install the gcc and glibc-devel prerequisite rpm packages

[root@centos ~]# yum install gcc glibc-devel

2. Download htop and compile from source

[root@centos src]# cd /usr/local/src
[root@centos src]# wget "http://sourceforge.net/projects/htop/files/htop/0.9/htop-0.9.tar.gz/download"
Connecting to heanet.dl.sourceforge.net|193.1.193.66|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 418767 (409K) [application/x-gzip]
Saving to: "download"

100%[======================================>] 418,767 417K/s in 1.0s
2011-07-22 13:30:28 (417 KB/s) – “download” saved [418767/418767]

[root@centos src]# mv download htop.tar.gz
[root@centos src]# tar -zxf htop.tar.gz
[root@centos src]# cd htop-0.9
[root@centos htop-0.9]# ./configure && make && make install

make install should install htop to /usr/local/bin/htop

That's all folks! Now my OpenVZ CentOS server is equipped with the nifty htop tool 😉