Posts Tagged ‘tutorial’

What is an inode and how to find out which directory is eating up all your filesystem inodes on Linux; increase the inode count on ext3, ext4 and UFS filesystems

Tuesday, August 20th, 2019


If you're a system administrator of multiple Linux servers used for web serving / mail delivery, a database admin, someone running backup infrastructure with a high amount of drive storage, a data-repository administrator (Linux-hosted Samba / CIFS shares, etc.), or you simply use a Linux hosting provider for your website or any other UNIX-like infrastructure that demands storing a high number of files under a directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined and limited, depending on the filesystem's configured size).

If the files stored in a directory end up exceeding the number of addressable inodes, no further data can be assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on it even though there may be plenty of free space available. The first time it happened to me, a very long time ago, I was completely puzzled how this was possible, as I was not aware inodes even existed …
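The symptom is easy to reproduce once you know what to look for. Below is a minimal sketch of what it looks like, assuming a hypothetical filesystem mounted at /mnt/data whose inodes are exhausted:

root@linux:~# touch /mnt/data/newfile
touch: cannot touch '/mnt/data/newfile': No space left on device
root@linux:~# df -h /mnt/data     # still shows plenty of free blocks …
root@linux:~# df -i /mnt/data     # … but IUse% is at 100%

The confusing part is exactly that "No space left on device" error while df -h reports gigabytes of free space; only df -i reveals the real culprit.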

Reaching the maximum inode number (inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Network nodes (CDN – website image caching servers) which contain many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.


Hence, running out of inodes can cause various oddities in how stored data behaves or is communicated to other connected microservices, and can lead to random application disruptions and odd results, costing you many hours of debugging before finding that the root cause is depleted inodes (index nodes).

In the article below I will try to give an overall explanation of what an i-node is on a filesystem, how the inodes of an FS can be inspected, how to diagnose a possible inode problem – e.g. see the maximum amount of inodes available per filesystem – and how to prepare (format) a new filesystem with an increased maximum inode count.


What are filesystem i-nodes?


An inode is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The data kept in the inode may vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the attributes and the disk block location(s) of the object's data.
– Yes, for those who are not aware how a filesystem is structured on *nix: all stored data is allocated in logically separated structures called data blocks. Each file opened on a local filesystem gets a file descriptor, there are virtual unit structures called file tables, and each inode, which is essentially a reference number, has its own entry in a structure of its own (the inode table).

Inodes / "Index" are slightly unusual on file system structure that stored the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from this as explained by Unix creator and pioneer- Dennis Ritchie (passed away few years ago).


Simplified explanation of file descriptors, the file table and the inode table on a common Linux filesystem

Here is another description of what an i-node is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


What is the typical content of an inode and how do i-nodes interact with the rest of the filesystem units?

The inode is just a reference index to a data block (unit) that contains the file-system object's attributes. It includes metadata such as the times of last change, access and modification, as well as owner and permission data.


On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.



Structure of the inode table on a Linux filesystem (diagram)

  • Information about files (data) is sometimes called metadata, so you could also put it this way: "An inode is the metadata of the data."
  •  Inode: a data structure that contains all the information necessary to describe a file. It includes the on-disk layout of the file, file permissions, access times, the number of links to the file, etc.
  •  Global file table: contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read/write will start and the access rights granted to the opening process.
  • Process file descriptor table: maintained by the kernel; it in turn indexes into a system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file – thus allowing access to the file.

  •     Inodes do not contain the names of their hard links, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory for a particular filename and then convert that filename to the corresponding inode number (a short demonstration follows below).
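A quick hands-on way to see this name-to-inode mapping is to create a hard link and compare inode numbers – a minimal sketch (the inode number shown is of course just an illustrative value):

root@linux:/tmp# touch file_a
root@linux:/tmp# ln file_a file_b       # hard link: a second directory entry for the same inode
root@linux:/tmp# ls -i file_a file_b
 262155 file_a  262155 file_b

Both names resolve to the same inode, which is why deleting one of them does not free the data blocks until the link count drops to zero.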

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.

But enough technical specifics, let's get into some practical experience with managing filesystem inodes.

Listing inodes on a Filesystem

Let's say we want to list the inode number reference IDs of the Linux kernel images (files):


root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64

To list the inode of the kernel-specific boot directory /boot itself (the -d flag makes ls show the directory entry rather than its contents):


root@linux: # ls -id /boot/
26091521 /boot/

Listing the inodes of all files stored in a directory is also done by adding the -i flag to the ls command:

Note that the '-1' flag was added to show the files in one column, without ownership and permission info:


root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64


To get more information about a Linux directory or file – such as the blocks used by the file unit, the IO block size, the inode number, the link count, and the last access / modify / change times – use the stat command:

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –


Within a POSIX system (Linux and the *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call (a short stat format-string example follows the list):

   – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – File serial number (the inode number).
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
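Most of these attributes can be pulled out individually with GNU stat's format strings (-c); a small sketch below – note this is the GNU coreutils stat, while the BSD stat uses -f with different format sequences:

root@linux:~# stat -c 'inode: %i  hard links: %h  size: %s bytes  owner: %U  modified: %y' /etc/hostname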


Getting more extensive information on a mounted filesystem

Most Linux distributions have tune2fs installed by default (on Debian it comes with the e2fsprogs package). With it one can get very good in-depth information about a mounted filesystem, let's say about the root ( / ) FS.

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks

An important note to make here is that a file's inode number stays the same when the file is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which don't have a way of storing this invariance when both a file's directory entry and its data are moved around. Also, more than one name can point to the same inode – hard-linked files share a single inode, as the example below shows:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2

Good to know is that inode numbers are unique within a filesystem, so you can't have the same inode number duplicated on one FS. If a directory is damaged, only the names of the things are lost and the inodes become so-called "orphans", e.g. inodes without names, but luckily this is recoverable. As the theory behind inodes is quite complicated and hard to explain here, I warmly recommend you read Ian D. Allen's Unix / Linux / Filesystems – directories, inodes, hardlinks tutorial, which is among the best academic tutorials explaining the various specifics of inodes online.


How to Get inodes per mounted filesystem


root@linux:/home/hipo# df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on


dev             2041439     481   2040958   1% /dev
tmpfs            2046359     976   2045383   1% /run
tmpfs            2046359       4   2046355   1% /dev/shm
tmpfs            2046359       6   2046353   1% /run/lock
tmpfs            2046359      17   2046342   1% /sys/fs/cgroup
/dev/sdb5        1221600    2562   1219038   1% /usr/var/lib/mysql
/dev/sdb6        6111232  747460   5363772  13% /var/www/htdocs
/dev/sdc1      122093568 3083005 119010563   3% /mnt/backups
tmpfs            2046359      13   2046346   1% /run/user/1000

As you see in the above output, the inodes reported for each mounted filesystem are a fixed number. In this output, IFree on every locally mounted FS of the physically installed Linux OS looks good.
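If you look after many boxes, a tiny one-liner that flags filesystems above a chosen inode-usage threshold can save some eyeball work – a sketch assuming a GNU coreutils df that supports --output (the 90% threshold is an arbitrary example):

root@linux:~# df -i --output=ipcent,target | awk 'NR>1 && $1+0 >= 90 { print "high inode usage: "$2" ("$1")" }'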

Here is an example of how to recognize depleted inodes on an OpenXen virtual machine with attached virtual hard disks.

linux:~# df -i
Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on
/dev/xvda         2080768    2080768     0      100%    /
tmpfs             92187      3          92184   1%     /lib/init/rw
varrun            92187      38          92149   1%    /var/run
varlock            92187      4          92183   1%    /var/lock
udev              92187     4404        87783   5%    /dev
tmpfs             92187       1         92186   1%    /dev/shm


Finding files with a certain inode

In some cases, if you want to find all the hard-linked copies of a certain file that share the same i-node pointer, it is useful to locate them by that shared inode. This is possible with a simple find (the example below is for the /usr/bin/perl binary, which shares its inode with perl5.28.1):


ls -i /usr/bin/perl
23798851 /usr/bin/perl*


 find /usr/bin -inum 23798851 -print
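To avoid copying the inode number by hand between the two commands, they can be combined; a sketch assuming GNU stat and GNU find (the -samefile variant is a findutils shortcut doing the same thing):

root@linux:~# find /usr/bin -xdev -inum $(stat -c %i /usr/bin/perl)
root@linux:~# find /usr/bin -xdev -samefile /usr/bin/perl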


Find directory that has a large number of files in it?

To get an overall count of the inodes allocated under certain directories, let's say /usr and /var:


root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var/

To get a per-subdirectory inode usage list for a directory, with its main contained sub-directories sorted from lowest to highest count, use:

du -s --inodes * 2>/dev/null | sort -g


Usually running out of inodes means there is a directory or mounted FS holding too many (small) files, which depletes the maximum count of possible inodes.

The simplest way to list the directories under the server root along with the number of files in each of them is a small bash shell loop like so:

for i in /*; do echo $i; find $i |wc -l; done

Another way to identify the exact directory that is most likely the bottleneck for the inode depletion, sorted by file count in a human-readable form:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

This will dump a list of every directory on the root (/) filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.


The -xdev switch instructs find to narrow its search to the device where the search was initiated (any sub-mounted NAS / NFS filesystems from a different device will be omitted).


Print top 10 subdirectories with Highest Inode Usage


Once the directory with the largest number of files – the likely culprit – has been identified, to further get a list of the top sub-directories in it with the highest amount of inodes used, run the command below:


for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -10


To list more than the top 10 directories by inode usage, change head -10 to whatever number is needed.

N.B.! Be very cautious when running the above two find commands on a very large filesystem, as they are I/O intensive, and on a filesystem with failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is also possible to use du plus a rather complex sed expression:

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'

Results are returned sorted from the largest inode consumers at the top downwards.


How to increase the inode count on a newly created EXT4 filesystem volume

Some filesystems, such as XFS and JFS, auto-increase their inode count as long as there is physical space, while others such as ReiserFS do not have fixed inodes at all (though they still report an inode field when queried). But the classical Linux ext3 / ext4 has no way to increase the inode count on a live filesystem. Instead, the way to do it is to prepare a brand new filesystem on a disk / NAS / attached storage.

The number of inodes set at format time of the block storage can be as high as about 4 billion. Before you create the new FS, you have to partition the block storage with, let's say, the parted command (or nullify the contents of the volume with dd to clean up any previously existing data, if there was any):

parted /dev/sda

dd if=/dev/zero of=/dev/path/to/volume

  then format it with this additional parameter:


mkfs.ext4 -N 3000000000 /dev/path/to/volume


In the example above, the newly created EXT4 filesystem will be created with 3 billion inodes! For setting a higher-than-default maximum inode count on the older ext3 filesystem, mkfs.ext3 can be used instead.

Bear in mind that 3 billion is an enormously high number; if you plan to have a large number of files / directories / links, just raise it according to your capacity-planning requirements for the FS. In most cases hardly anyone will want this number higher than 1 or 2 billion inodes.
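Rather than guessing an absolute count with -N, mke2fs also accepts -i <bytes-per-inode>, which expresses inode density relative to the filesystem size; a sketch (the 4096 value, one inode per 4 KB of space, is just an aggressive example for filesystems full of tiny files):

mkfs.ext4 -i 4096 /dev/path/to/volume

The smaller the bytes-per-inode ratio, the more inodes the filesystem gets, at the cost of some extra metadata space.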

On FreeBSD / NetBSD / OpenBSD, setting the inode density (and thereby the maximum inode count) for a UFS / UFS2 filesystem (UFS2 being the current FreeBSD default) is done via the -i (bytes per inode) option of the newfs filesystem creation command, after the disk has been labeled with disklabel:


freebsd# newfs -i 1024 /dev/ada0s1d


Increase the Max Count of Inodes for a /tmp filesystem


Sometimes on some machines it is necessary to be able to store a very high number of small files (e.g. a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by Apache + PHP or Nginx + Perl-FastCGI are written in a bad manner and keep tons of temporary files in /tmp, leading to issues with the inode count being exceeded.
If that's the case, as a temporary workaround you can increase the inode count for /tmp to a very high number, like 2 billion, using:


mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent across reboots, don't forget to put nr_inodes=whatever_bignum as a mount option for the temporary filesystem in /etc/fstab (see the sketch below).
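A sketch of what such an /etc/fstab line could look like (the size= value is just an illustrative assumption; adjust both numbers to your RAM and needs):

tmpfs   /tmp   tmpfs   defaults,size=2G,nr_inodes=2000000   0   0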

Eventually, if you face this issue, it is best to immediately track down which application produced the mess and ask the developer to fix the program's messed-up architecture.




This article explained the very common issue of depleting the maximum amount of inodes on a filesystem and the unpleasant consequence – the inability to create new files on a live FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, what the typical content of an inode is, and how inode addressing is handled on an FS. Further, it was explained how to get basic information about available inodes on a filesystem, how to get a filename based on an inode number (with find), the well-known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information about an FS with tune2fs.
Also explained was how to identify directories containing multitudes of files in order to determine the sub-directories consuming most of the inodes on a filesystem. Finally, it was shown, very roughly, how to prepare an ext4 filesystem from scratch with a predefined inode count much higher than the usual defaults of mkfs.ext3 / mkfs.ext4 and the *BSDs' newfs, as well as how to raise the inode count of the /tmp tmpfs temporary RAM filesystem.

How to install Samsung ML-2010 (ML-2010P) Mono Laser Printer on Xubuntu GNU/Linux

Wednesday, January 18th, 2012

I had to make an old Samsung ML-2010P laser printer work on Xubuntu Linux. I had some issues installing it and couldn't find any step-by-step tutorial online on how the printer can be made to work fine on Linux. Therefore I took the time to experiment and see if I could make it work. Since the printer is old, not many people are interested any more in making it operational on Linux, hence I couldn't find many relevant posts and sites on the net. Anyways, thanks God, after a bit of pondering I finally succeeded in making the Samsung ML-2010P print on Linux. These are the exact steps one has to follow to make this old bunch of hardware play nice on Linux:

1. use lsusb to list the printer model

root@linux:~# lsusb |grep -i samsung
Bus 001 Device 003: ID 04e8:326c Samsung Electronics Co., Ltd ML-2010P Mono Laser Printer

You see the printer reports as Samsung Electronics Co., Ltd ML-2010P Mono Laser Printer

2. Install cups printing service required packages

root@linux:~# apt-get install cups cups-bsd cups-client cups-common
root@linux:~# apt-get install cups-driver-gutenprint ghostscript-cups
root@linux:~# apt-get install python-cups python-cupshelpers

3. Install foomatic packages

root@linux:~# apt-get install foomatic-db foomatic-db-engine foomatic-db-gutenprint
root@linux:~# apt-get install foomatic-filters python-foomatic

4. Install hpijs, hplip, printconf and other packages necessary for proper printer operation

root@linux:~# apt-get install hpijs hplip hplip-data ijsgutenprint
root@linux:~# apt-get install min12xxw openprinting-ppds printconf foo2zjs

P.S. Some of the packages I list might already have been installed as dependencies of other packages. As I'm writing this article a few days after I succeeded in installing the printer, I don't remember the exact install order.

5. Install splix (SPL Driver for Unix)

Here is a quote taken from SpliX's project website:

"SpliX is a set of CUPS printer drivers for SPL (Samsung Printer Language) printers.
If you have a such printer, you need to download and use SpliX. Moreover you will find documentation about this proprietary language.

root@linux:~# apt-get install splix

For more information on splix, check on Splix SPL driver for UNIX website

You can check on the project's website that the Samsung ML-2010 printer is marked as working.
The next step is to configure the printer.

6. Go to the CUPS interface on localhost in a browser and add the Samsung printer.

Use Firefox, SeaMonkey or any browser of choice to configure CUPS:

Type the following in the browser's address bar (631 is the default CUPS web interface port):

http://localhost:631

Next a password prompt will appear asking for a user/pass. The user/pass to use are the same as those of the user account you're logged on with.

UNIX Linux Administration CUPS Printer adding Samsung ML 2010 ML-2010P Xubuntu

Click on the Add Printer button and choose to add the Samsung ML-2010.

Then restart the CUPS service (cupsd) to make it load the new settings:

root@linux:~# /etc/init.d/cups restart

Now give the printer a try by printing a page from SeaMonkey, Chrome or Firefox (the quickest way is by pressing CTRL + P).
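The printer can also be tested straight from the command line using the CUPS client tools installed in step 2 – a sketch, assuming the queue was named Samsung-ML-2010 when it was added (check the real queue name with lpstat first):

root@linux:~# lpstat -p -d                      # list configured printers and the default queue
root@linux:~# echo "CUPS test" | lpr -P Samsung-ML-2010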

Following these steps, I've managed to run the printer on Xubuntu Linux, and the same steps, if followed, should most probably make the Samsung ML-2010 play nice with other Linux distributions with little or no adjustment.
I'll be glad to hear if someone succeeded in making the printer work on other distributions; if so, please drop me a comment.
That's all folks! Enjoy printing 😉

Install jwchat web chat jabber interface to work with Debian ejabberd jabber server

Wednesday, January 4th, 2012

JWChat ejabber jabber Ajax / HTML based client logo

I have recently blogged how I installed & configured ejabberd (a jabber server) on Debian.
Today I decided to further extend my previous jabberd installation by installing JWChat, a web chat interface frontend to ejabberd (a good substitute for a desktop app like Pidgin, which allows you to access a jabber server from anywhere).

Anyways, as a base for installing JWChat, I used the previously installed Debian deb version of ejabberd from the repositories.

I had a lot of trouble until I actually made it work, because of some very minor mistakes in following the officially described jwchat install tutorial on the ejabberd website.

The only way I could make jwchat work was by using the "install jwchat with ejabberd's HTTP-Bind and file server" method.

Actually, for quite a long time I did not realize there are two ways to install JWChat, so by mistake I was trying to mix up install instructions from both the jwchat HTTP-Bind file server method and the JWChat Apache install method.

I've seen many people complaining on the page of the "Install JWChat using Apache" method, who seemed to be experiencing a lot of strange troubles just like mine when I mixed up the jwchat php scripts install using instructions from both install methods. Therefore my guess is that people who had troubles installing via the Apache method and got blank-page issues while accessing it, as well as various "XML Parsing Error: no element found" errors, were most probably caught in the same install-instructions trap I was.

The steps to install JWChat using the HTTP-Bind and file server method must be followed absolutely precisely, or otherwise THEY WILL NOT WORK!!!

These are the exact steps I followed to make ejabberd work using the HTTP-Bind file server method:

1. Create a directory to store the jwchat Ajax / HTML files

debian:~# mkdir /var/lib/ejabberd/www
debian:~# chmod +x /var/lib/ejabberd
debian:~# chmod +x /var/lib/ejabberd/www

2. Modify /etc/ejabberd/ejabberd.cfg and include the following configs

While editing the conf, find the listeners section:


Scrolling down you will find some commented code marked with %%, below which is a block that reads:

{5269, ejabberd_s2s_in, [
{shaper, s2s_shaper},
{max_stanza_size, 131072}
]},

Right after it leave one new line and place the code:

{5280, ejabberd_http, [
{request_handlers, [
{["web"], mod_http_fileserver},
%% the http-bind handler below was not shown in the original snippet;
%% it is the standard way to expose mod_http_bind and is assumed here
{["http-bind"], mod_http_bind}
]}
]},

Scrolling a bit down the file, there is a section which says:

%%% =======

%% Modules enabled in all ejabberd virtual hosts.

The section below the comments will look like so:

{modules, [ {mod_adhoc, []},
{mod_announce, [{access, announce}]}, % requires mod_adhoc
{mod_caps, []},
{mod_configure,[]}, % requires mod_adhoc
{mod_ctlextra, []},
{mod_disco, []},
%%{mod_echo, [{host, "echo.localhost"}]},
{mod_irc, []},
{mod_last, []},

After the {mod_last, …} line, the following lines should be added:

{mod_http_bind, []},
{mod_http_fileserver, [
{docroot, "/var/lib/ejabberd/www"},
{accesslog, "/var/log/ejabberd/webaccess.log"}
]},

3. Download and extract latest version of jwchat

At the time of writing, the latest version of jwchat is jwchat-1.0. I have mirrored it on pc-freak for convenience:

debian:~# wget

debian:~# cd /var/lib/ejabberd/www
debian:/var/lib/ejabberd/www# tar -xzvf jwchat-1.0.tar.gz
debian:/var/lib/ejabberd/www# mv jwchat-1.0 jwchat
debian:/var/lib/ejabberd/www# cd jwchat

4. Choose the language in which you prefer the jwchat web interface to appear

I prefer English, as most people would, I suppose:

debian:/var/lib/ejabberd/www/jwchat# for a in $(ls *.en); do b=${a%.en}; cp $a $b; done

For other languages, change 'en' in the b=${a%.en} part of the small one-liner shell script to whatever language you prefer to make primary. After selecting the correct language, an rm command should be issued to get rid of the .js.* and .html.* files in the other languages, which are no longer needed:

debian:/var/lib/ejabberd/www/jwchat# rm *.html.* *.js.*

5. Configure JWChat config.js

Edit /var/lib/ejabberd/www/jwchat/config.js – it is necessary to have code definitions inside it like:

/* If your Jabber server is, set this: */
var SITENAME = "";

/* If HTTP-Bind works correctly, you may want to remove HTTP-Poll here */
name:"Native Binding",
description:"Ejabberd's native HTTP Binding backend",

6. Restart EJabberd server to load the new config settings

debian:~# /etc/init.d/ejabberd restart
Restarting jabber server: ejabberd..

7. Test JWChat HTTP-Bind and file server backend

I used elinks and my beloved Epiphany (the default GNOME browser, which by the way is the browser I use daily) to test that JWChat works fine with ejabberd.
To test the newly installed HTTP-Bind ejabberd server backend on port 5280 I used the URL of the file-served jwchat directory (of the form http://your-server:5280/web/jwchat/, following the request handler configured above). I had quite a struggle with 404 Not Found errors, which I couldn't explain for half an hour. After a thorough examination, I figured out the reason for the 404 errors was my stupidity …
The URL was incorrect because I had forgotten to move jwchat-1.0 to jwchat, e.g. (mv jwchat-1.0 jwchat) – a step explained earlier in this article which I had missed. Hence, to access the web interface of ejabberd without the 404 error, I had to use the URL matching the actual directory name.
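A quick way to check the HTTP side without a browser is to poke the listener with curl – a sketch, jabber.example.com being a placeholder for your own server; any answer other than a 404 suggests the request handlers from step 2 are wired in:

debian:~# curl -i http://jabber.example.com:5280/web/jwchat/index.html
debian:~# curl -i http://jabber.example.com:5280/http-bind/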

JWChat Ejabber webchat Epiphany Linux screenshot

Finally, it is handy to add a small index.php that redirects to the jwchat URL.

The PHP should look like so:

<?php
// redirect to the jwchat URL (the target URL was omitted in the original post)
header( 'Location: ' );
exit;
?>

How to make GRE tunnel iptables port redirect on Linux

Saturday, August 20th, 2011

I've recently had to build a Linux server with some other servers behind the router with NAT.
One of the hosts behind the Linux router was running a Windows GRE-encrypted tunnel (PPTP) service, which had to be accessed via the Internet IP address of the server.
In order to make the GRE tunnel accessible, a bit more than just adding a normal POSTROUTING DNAT rule and an iptables FORWARD rule is necessary.

As far as I've read online, there is quite a bit of confusion on the topic of how to properly configure GRE tunnel accessibility on Linux, thus in this very quick tiny tutorial I'll explain how I did it.

1. Load the ip_nat_pptp and ip_conntrack_pptp kernel modules

linux-router:~# modprobe ip_nat_pptp
linux-router:~# modprobe ip_conntrack_pptp

These two modules absolutely need to be loaded before the remote GRE tunnel can be properly accessed. I've seen many people complaining online that they can't make the GRE tunnel work, and I suppose in many of the cases the reason for the failure is omitting to load these two kernel modules.

2. Make the ip_nat_pptp and ip_conntrack_pptp modules load at system boot time

linux-router:~# echo 'ip_nat_pptp' >> /etc/modules
linux-router:~# echo 'ip_conntrack_pptp' >> /etc/modules

3. Insert the necessary iptables PREROUTING rules to make the GRE tunnel traffic flow

linux-router:~# /sbin/iptables -t nat -A PREROUTING -d <router_external_IP> -p tcp -m tcp --dport 1723 -j DNAT --to-destination <GRE_host_internal_IP>
linux-router:~# /sbin/iptables -t nat -A PREROUTING -p gre -j DNAT --to-destination <GRE_host_internal_IP>

In the above example rules, the <router_external_IP> placeholder must be substituted with the external Internet (real) IP address of the router; the -t nat switch is needed because the DNAT target lives in the nat table.

Also, <GRE_host_internal_IP> is the internal IP address of the host where the GRE tunnel endpoint is located.

Next it's necessary to:

4. Add an iptables rule to allow forwarding of the GRE traffic to the tunnel host

linux-router:~# /sbin/iptables -A FORWARD -p gre -j ACCEPT

Finally, it's necessary to make the above iptables rules permanent, either by saving the current firewall with iptables-save or by adding them to the script which loads the host's iptables firewall rules (a sketch follows below).
Another possible way is to add them to /etc/rc.local, though this is not recommended, as the rules would then only be added after a successful boot, after all the other init scripts, and only if everything in /etc/rc.local loads without errors.
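A minimal sketch of the iptables-save approach (the file name /etc/iptables.up.rules is just a common convention, not something mandated):

linux-router:~# iptables-save > /etc/iptables.up.rules

and then a restore line such as the following, placed in the relevant iface stanza of /etc/network/interfaces:

pre-up iptables-restore < /etc/iptables.up.rules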

Afterwards, access to the GRE tunnel through port 1723 of the router's external IP (forwarded to the internal host) is possible.
Hope this is helpful. Cheers 😉

How to Split files on Linux FreeBSD, NetBSD and OpenBSD

Sunday, July 31st, 2011

Split large files in pieces Scissors

Have you ever had the need to split an extra-large SQL file into a few pieces in order to be able to later upload it via phpMyAdmin?
Have you needed an extra-large video or data file cut into a few pieces in order to transfer it over a USB stick?
Or, to give you another scenario, I sometimes need to have an enormous file, let's say 3G, split into a few pieces in order to later read it in vim or mcedit.
I sometimes need to achieve this on FreeBSD and Linux hosts, thus I thought it would be helpful to somebody to give a very quick tutorial on the way large files can be cut into pieces on Linux and BSD hosts.

GNU/Linux and FreeBSD are equipped with the split command. The purpose of this command is exactly the cutting of a file to a number of pieces.

On Linux the split command comes installed by default with the coreutils package on most Debian-based (deb) and Red Hat-based (rpm) distributions; therefore Linux's version of split is GNU split, since it's part of the GNU coreutils package. An interesting fact about Linux split is that one of the two programmers who coded it is Richard Stallman 😉

On BSD Unix, split is the classic AT&T UNIX (BSD) split.

In the past, splitting files into pieces was much more necessary than today, as people used floppy drives to transfer data. Today, with the bloom of the Internet and the improvement of data carriers, transferring even an extra-large file from one place to another is a far more trivial task, yet on many occasions splitting it into pieces is still needed.

Even though today splitting a file is rarely required, there are still times when being able to split a file into X number of parts is very much needed.
Maybe the most common use of splitting a file today is with large SQL dumps, when, let's say, a 200 MB database dump needs to be moved from one hosting provider to another.
Many hosting providers disallow direct access to the database with standard MySQL client programs and only allow a user to connect via phpMyAdmin or some other web interface like cPanel to import data into the SQL or PostgreSQL server.

In such times, having knowledge on the Unix split command is a priceless asset.

Even though the code for the split command on Linux and BSD is not identical and GNU split and BSD split have some basic differences, the use of split on both of these Unices is identical.
The way to split a file into a few pieces on both Linux and BSD OSes is done with one and the same command; here is how:

1. Splitting a file into 40 MB pieces on Linux

linux:~# split -b 40m SQL-Backup-Data.sql SQL-Backup-Data_split

2. Splitting a file into 40 MB pieces on BSD (FreeBSD, OpenBSD, NetBSD)

freebsd# split -b 40m SQL-Backup-Data.sql SQL-Backup-Data_split

The second argument that the split command takes is actually called a prefix; the prefix is used as the base name for the newly generated pieces cut from SQL-Backup-Data.sql.

As I said, the identical command will split the SQL-Backup-Data.sql file into a number of parts, each of which will be sized 40 megabytes.

The command will generate several output files, like:

freebsd# ls -1 SQL-Backup-Data*
SQL-Backup-Data.sql
SQL-Backup-Data_splitaa
SQL-Backup-Data_splitab
SQL-Backup-Data_splitac
SQL-Backup-Data_splitad
SQL-Backup-Data_splitae

As you see, the 200MB SQL-Backup-Data.sql is split into 40-Mbyte pieces (five of them for a 200MB input).

After the files are transferred to another Linux or BSD host, they can easily be reunited into the original file with the command:

linux:~# for i in $(ls -1 SQL-Backup-Data_split*); do cat $i >> SQL-Backup-Data.sql; done

Alternatively, on most Unices, simply using cat is enough to collect the pieces back into the original file, like so:

freebsd# cat SQL-Backup-Data_split* >> SQL-Backup-Data.sql
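It is a good habit to verify that the re-assembled file is identical to the original by comparing checksums on both ends – md5sum on Linux, md5 on the BSDs:

linux:~# md5sum SQL-Backup-Data.sql
freebsd# md5 SQL-Backup-Data.sql

If the two sums differ, one of the pieces was corrupted or missed during the transfer.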

Enjoy splitting

Sys Admin VIM Quick Cheat Sheet ! ;)

Wednesday, July 20th, 2011

Have you ever thought of refreshing your VIM knowledge, obtained back in the days of reading the vim tutorial available straight in vim via the vimtutor command?

I asked a few vim-related questions today in #vim on irc.freenode.net and one mate referred me to the following picture:

Vi VIM Tutorial Quick Cheat Sheet

VIM QUICK Tutorial Sheet Picture ! 😉 Nice ! Ain't it? 🙂

xorg on Toshiba Satellite L40 14B with Intel GM965 video hangs up after boot and the worst fix ever / How to reinstall Ubuntu by keeping the old personal data and programs

Wednesday, April 27th, 2011

black screen ubuntu troubles

I have updated Ubuntu from version 9.04 (Jaunty) to 9.10, following my previous post on updating Ubuntu from 9.04 to the latest Ubuntu.

I expected that a step-by-step upgrade from release to release would work like a charm, and though it does on many notebooks, it doesn't on the Toshiba Satellite L40.

The update itself went fine; I used update-manager -d and followed the above-mentioned tutorial. However, after a system restart the PC failed to boot the X server properly – a completely blank screen with a blinking cursor appeared, and that was all.

I restarted the system into the 2.6.35-28-generic recovery-mode (rescue) kernel in order to be able to get to a physical console.

Logically, the first thing I did was check /var/log/messages and /var/log/Xorg.0.log, but I couldn't find anything unusual or wrong there.

I suspected something might be wrong with /etc/X11/xorg.conf so I deleted it:

ubuntu:~# rm -f /etc/X11/xorg.conf

and attempted to re-create the xorg.conf X configuration with command:

ubuntu:~# dpkg-reconfigure xserver-xorg

This command was reported to be the usual way to reconfigure the X server settings from console, but in my case (for unknown reasons) it did nothing.

Next the command which was able to re-generate the xorg.conf file was:

ubuntu:~# X -configure

The command generates a sample xorg.conf file in /root/, so I put that generated conf in /etc/X11/xorg.conf, X's default location, and restarted in the hope that this would fix the non-booting issue.
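For reference, in my experience X -configure names the generated file xorg.conf.new; assuming that name, putting it in place comes down to:

ubuntu:~# cp /root/xorg.conf.new /etc/X11/xorg.conf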

Very sadly, the black screen of death again appeared on the Toshiba notebook screen.
I further thought of completely wiping out the xorg.conf, in the hope that it might at least boot without a conf file, but this didn't work either.

I attempted to run the X server with an xorg.conf configured to use the vesa driver, as it's well known that the vesa X driver is supposed to work on 99% of video cards, since almost all of them nowadays are compatible with the VESA standard – but guess what, in my case vesa did not work either!

The only version of X I could boot into was the failsafe X screen mode, which is available through GRUB's boot menu recovery mode.

Further on, I decided to try a few xorg.conf files which I found online and which were reported to work fine with the Intel GM965 internal video – and yes, this was also unsuccessful.

Some of my other futile attempts were to re-install the xorg server with apt-get and to reinstall the xserver-xorg-video-intel driver, e.g.:

ubuntu:~# apt-get install --reinstall xserver-xorg xserver-xorg-video-intel

As nothing worked out, I was completely pissed off and decided to take an alternative approach which would take a lot of time but at least would probably be successful: completely re-install Ubuntu from a CD after backing up the /home directory and making a list of the packages installed on the system, so that afterwards I could easily run a tiny bash one-liner to install all the packages which existed on the laptop before the re-install:

Here is how I did it:

First I archived the /home directory:

ubuntu:/# tar -czvf home.tar.gz home/

For 12GB of data with a few thousand files, archiving took about 40 minutes.

The resulting tar.gz archive came out at about 9GB, and I then used sftp to upload it to a remote FTP server, as I was missing a flash drive or an external HDD where I could place the just-archived data.

Uploading with sftp can be achieved with a session similar to the following (user@remote-host being a placeholder for the actual account and server):

ubuntu:~# sftp user@remote-host
Connected to remote-host.
sftp> put home.tar.gz

As a next step, to back up the list of all currently installed packages to a file before booting from the Ubuntu Maverick 10.10 CD and proceeding with the fresh install, I used the command:

for i in $(dpkg -l | awk '/^ii/ { print $2 }'); do
echo $i; done >> my_current_ubuntu_packages.txt
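An alternative worth mentioning is dpkg's own selections mechanism, which avoids the loop entirely – a sketch with a hypothetical file name:

ubuntu:~# dpkg --get-selections > my_current_ubuntu_selections.txt

and after the re-install:

ubuntu:~# dpkg --set-selections < my_current_ubuntu_selections.txt
ubuntu:~# apt-get dselect-upgrade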

Once again I used sftp, as in the above example, to upload the my_current_ubuntu_packages.txt file to my FTP host.

After backing up all the necessary stuff, I restarted the system and booted from the CD-ROM with Ubuntu.
The Ubuntu installation as usual is more than a piece of cake, and even if you don't have a brain you can succeed with it, so I won't comment on it 😉

Right after the installation I used the sftp client once again to fetch the home.tar.gz and my_current_ubuntu_packages.txt

I placed the home.tar.gz in /home/ and untarred it inside the fresh /home dir:

ubuntu:/home# tar -zxvf home.tar.gz

Eventually the old home directory was located in /home/home so thereon I used Midnight Commander ( the good old mc text file explorer and manager ) to restore the important user files to their respective places.

As a last step, I used my_current_ubuntu_packages.txt in combination with a tiny shell loop to install all the packages listed inside the file with the command:

ubuntu:~# for i in $(cat my_current_ubuntu_packages.txt); do
apt-get install --yes $i; sleep 1;
done

You will have to stay in front of the computer and manually answer ncurses interface questions concerning some packages' configuration, and to be honest this is really annoying and time consuming.

Summing up, the overall time I spent with this stupid Toshiba Satellite L40 with the shitty Intel GM965 was 4 days, where each day I tried numerous ways to fix up the X server and did my best to get through the blank-screen, non-booting X issue without a complete re-install of the old Ubuntu system.
This is a lesson for me that if I stumble on such shitty issues again I will proceed straight to the re-install option and not lose my time with non-sense fixes which would never work.

Hope the article might be helpful to somebody else who experience some problems with Linux similar to mine.

After all, at least Ubuntu Maverick 10.10 is really good looking in general from a design perspective.
What really struck me was the placement of the close, minimize and maximize window buttons; it seems in newer Ubuntus the Ubuntu guys decided to place the buttons on the left – here is a screenshot:

Left button positioning of navigation Buttons in Ubuntu 10.10

I believe the solution I explained, though very radical and slow, is a solution that will always work and is hence worthy 😉
Let me hear from you if the article was helpful.

How to automatically reboot (restart) Debian GNU Lenny / Squeeze Linux on kernel panic, some general CPU overload or system crash

Monday, June 21st, 2010

If you are a system administrator, you have probably wondered at least once how to configure your Linux server to automatically reboot itself if it crashes or goes through a massive CPU overload, e.g. when the server load average "hits the sky".
I just learned from a nice article found here that there is a kernel variable which, when enabled, takes care of automatically restarting a server that has crashed with the terrible kernel panic message we all know.

The variable I'm talking about is kernel.panic; for instance, kernel.panic = 20 would instruct your GNU/Linux kernel to automatically reboot 20 seconds after a kernel panic system crash occurs.

To start using the auto-reboot Linux capability on kernel panic occurrences, just set the variable in /etc/sysctl.conf:

debian-server:~# echo 'kernel.panic = 20' >> /etc/sysctl.conf
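A companion setting I usually add alongside it (an addition of mine, not from the original tip) is kernel.panic_on_oops, which turns kernel oopses into full panics, so that the reboot timer above also covers them:

debian-server:~# echo 'kernel.panic_on_oops = 1' >> /etc/sysctl.conf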

Now we also have to make the new settings take effect on the running system, so execute:

debian-server:~# sysctl -p

There you go – automatic system reboots on kernel panic are now enabled.
Now, to further assure yourself that the Linux server you're responsible for will automatically restart itself in an emergency situation like a system overload, I suggest you check out watchdog.

You might consider checking out this auto-reboot tutorial, which explains in simple words how watchdog is installed and configured.
On Debian, installing and maintaining watchdog is really simple and comes down to installing and enabling the watchdog system service, right after making two changes in its configuration file /etc/watchdog.conf.

To do so execute:

debian-server:~# apt-get install watchdog
debian-server:~# echo "file = /var/log/messages" >> /etc/watchdog.conf
debian-server:~# echo "watchdog-device = /dev/watchdog" >> /etc/watchdog.conf

Well, that should be it; you might also need to load a kernel module to drive your watchdog hardware.
On my system the kernel modules related to watchdog are located in /lib/modules/$(uname -r)/kernel/drivers/watchdog/.

If there is no hardware watchdog module for your machine, you should certainly try the generic software watchdog Linux kernel module called softdog; to do so, issue:
debian-server:~# /sbin/modprobe softdog

It's best if you load the module while the watchdog daemon is stopped.
If you want the softdog software watchdog kernel driver to load automatically at boot, you should execute:

debian-server:~# echo 'softdog' >> /etc/modules

Finally, starting the watchdog daemon is necessary:


debian-server:~# /etc/init.d/watchdog start
Stopping watchdog keepalive daemon....
Starting watchdog daemon....

That should be all – your automatic system reboots should now be on! 🙂