Posts Tagged ‘directory’

What is an inode and how to find out which directory is eating up all your filesystem inodes on Linux; increase the inode count on ext3 / ext4 and UFS filesystems

Tuesday, August 20th, 2019

what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram

If you're a system administrator of multiple Linux servers used for web serving, a mail server sysadmin, a database admin, someone managing large data storage for backup infrastructure, a data repository administrator (e.g. Linux-hosted Samba / CIFS shares), or you use some Linux hosting provider to host your website or any other UNIX-like infrastructure server that demands storing a high number of files under a directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined and limited, depending on the configured filesystem size).

If the files stored in a directory end up exceeding the number of addressable inodes, no further data can be assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on it, even though there may be plenty of free space available. The first time it happened to me, a very long time ago, I was completely puzzled how this was possible, as I was not aware of the existence of inodes …

Reaching the maximum inode number (i.e. inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Network servers (CDN – website image caching servers) which contain many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.

 

Hence, running out of inodes can cause various oddities in how stored data behaves or is communicated to other connected microservices, and can lead to random application disruptions and odd results, costing you many hours of debugging to find the root cause.

In the article below, I will try to give an overall explanation of what an i-node is on a filesystem, how the inodes of an FS unit can be seen, how to diagnose a possible inode problem – e.g. see the maximum number of inodes available per filesystem – and how to prepare (format) a new filesystem with an increased maximum inode count.

 

What are filesystem i-nodes?

 

An inode is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The data described in the inode may vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the object's attributes and the disk block location(s) of the object's data.
– Yes, for those who are not aware of how a filesystem is structured on *nix: all stored data is allocated in logically separated structures called data blocks. Each file opened on a local filesystem gets a file descriptor, there are virtual unit structures called file tables, and each inode, which is a reference number, has its own data structure (the inode table).
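By the way, on a live Linux system you can peek at a process's open file descriptor table through procfs; for instance, to see the descriptors held by the listing command itself:

hipo@linux:~$ ls -l /proc/self/fd    # 0, 1 and 2 are stdin / stdout / stderr of the ls process itself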

Inodes / "Index" are slightly unusual on file system structure that stored the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from this as explained by Unix creator and pioneer- Dennis Ritchie (passed away few years ago).

what-is-inode-very-simplified-explanation-diagram-data

Simplified explanation of file descriptors, the file table and the inode table on a common Linux filesystem

Here is another description of what an i-node is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


 

What is the typical content of an inode and how do i-nodes interact with the rest of the filesystem units?


The inode is just a reference index to a data block (unit) that contains the file-system object's attributes. It includes metadata such as the times of last change, access and modification, as well as owner and permission data.

 

On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.
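A quick way to observe this in practice is ls -id: the '.' entry inside a directory resolves to the same inode as the directory itself, and '..' to the inode of its parent (inode 2 for the root directory of an ext3 / ext4 filesystem):

root@linux:/# ls -id /etc /etc/.     # both names resolve to one and the same inode
root@linux:/# ls -id / /etc/..       # '..' gives the inode of the parent directory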

Structure-of-inode-table-on-Linux-Filesystem-diagram

 

Structure of the inode table on a Linux filesystem diagram (picture source: GeeksForGeeks.org)

  • Information about files (data) is sometimes called metadata. So you can even put it another way: "An inode is the metadata of the data."
  • Inode: a complex data structure that contains all the information necessary to specify a file. It includes the layout of the file on disk, file permissions, access times, the number of links to the file, etc.
  • Global file table: contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read/write will start and the access rights granted to the opening process.
  • Process file descriptor table: maintained by the kernel per process; it in turn indexes into the system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file – thus allowing access to the file.

  •     Inodes do not store their hard link names, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory for a particular filename and then convert the filename to the corresponding inode number.

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.


But enough technical specifics, let's get some practical experience managing filesystem inodes.
 

Listing inodes on a Filesystem


Let's say we want to list the inode reference numbers of the Linux kernel image files:

 

root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64


To list the inode of the kernel boot directory /boot itself (note the -d flag, which lists the directory rather than its contents):

 

root@linux: # ls -id /boot/
26091521 /boot/


Listing inodes for all files stored in a directory is also done by adding the -i ls command flag:

Note that the '-1' flag was added to show files in one column, without ownership / permission info:

 

root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
26091756 System.map-3.2.0-4-amd64
 3055703 System.map-4.19.0-5-amd64
26091899 System.map-4.9.0-7-amd64
26095805 System.map-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64

 

To get more information about a Linux directory or file, such as the blocks used by the file unit, the last Access / Modify / Change times, and the hard link count of the filesystem object, use stat:
 

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –

 

On a POSIX system (Linuxes, and the *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call:

   – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – The file serial number (the inode number itself).
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
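Each of these attributes can also be printed individually via GNU stat's format sequences; a minimal sketch over an example file:

root@linux:/# stat --printf 'dev: %d  inode: %i  mode: %A  links: %h  uid: %u  gid: %g\nsize: %s bytes  IO block: %o  blocks: %b\natime: %x\nmtime: %y\nctime: %z\n' /etc/passwd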

 

Getting more extensive information on a mounted filesystem


Most Linux distributions have tune2fs installed by default (on Debian it comes with the e2fsprogs package); with it one can get very good in-depth information on a mounted filesystem, let's say about the root ( / ) FS.
 

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks


An important note to make here is that a file's inode number stays the same when it is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which have no way of preserving this invariance when both a file's directory entry and its data are moved around. Also, multiple names (hard links) can point to one and the same inode, as in the example below:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2
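You can reproduce this behavior yourself with a throwaway file (the paths below are arbitrary examples):

root@linux:/# touch /tmp/file_a
root@linux:/# ln /tmp/file_a /tmp/file_b        # a hard link – a second name for the same inode
root@linux:/# ls -i /tmp/file_a /tmp/file_b     # both names print an identical inode number
root@linux:/# ln -s /tmp/file_a /tmp/file_c     # a symlink, in contrast, gets an inode of its own
root@linux:/# ls -li /tmp/file_c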

Good to know is that inode numbers are unique within a filesystem, so you can't have the same inode number duplicated. If a directory is damaged, only the names of the things in it are lost; the inodes become so-called "orphans", i.e. inodes without names, but luckily this is recoverable. As the theory behind inodes is quite complicated and hard to explain here, I warmly recommend you read Ian D. Allen's Unix / Linux / Filesystems – directories, inodes, hardlinks tutorial – which is among the best academic tutorials explaining the various specifics of inodes online.

 

How to Get inodes per mounted filesystem

 

root@linux:/home/hipo# df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
dev             2041439     481   2040958   1% /dev
tmpfs            2046359     976   2045383   1% /run
tmpfs            2046359       4   2046355   1% /dev/shm
tmpfs            2046359       6   2046353   1% /run/lock
tmpfs            2046359      17   2046342   1% /sys/fs/cgroup
/dev/sdb5        1221600    2562   1219038   1% /usr/var/lib/mysql
/dev/sdb6        6111232  747460   5363772  13% /var/www/htdocs
/dev/sdc1      122093568 3083005 119010563   3% /mnt/backups
tmpfs            2046359      13   2046346   1% /run/user/1000


As you can see in the above output, each mounted filesystem reports its own inode numbers. Here, the IFree count on every locally mounted FS of the physically installed Linux OS looks healthy.


Here is an example of how to recognize depleted inodes on an OpenXen virtual machine with attached virtual hard disks:

linux:~# df -i
Filesystem        Inodes    IUsed   IFree IUse% Mounted on
/dev/xvda        2080768  2080768       0  100% /
tmpfs              92187        3   92184    1% /lib/init/rw
varrun             92187       38   92149    1% /var/run
varlock            92187        4   92183    1% /var/lock
udev               92187     4404   87783    5% /dev
tmpfs              92187        1   92186    1% /dev/shm
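The tell-tale symptom on such a depleted filesystem is that creating a file fails even though df -h still reports free disk space; a hypothetical session:

linux:~# touch /some_new_file
touch: cannot touch '/some_new_file': No space left on device
linux:~# df -h / | tail -1      # yet block usage may still be far below 100%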

 

Finding files with a certain inode


In some cases, if you want to find all names (hard-linked copies) of a certain file that share the same i-node, this is possible with a simple find; the example below is for the /usr/bin/perl binary, which shares its inode with perl5.28.1:

 

ls -i /usr/bin/perl
23798851 /usr/bin/perl*

 

 find /usr/bin -inum 23798851 -print
/usr/bin/perl5.28.1
/usr/bin/perl

 

Find the directory that has a large number of files in it

To get the overall number of inodes allocated under certain directories, let's say /usr and /var:

 

root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var/

To get the per-subdirectory inode usage of a directory and its main contained sub-directories, sorted from the lowest to the highest count, use:
 

du -s --inodes * 2>/dev/null | sort -g

 

Usually running out of inodes means there is a directory or mounted FS holding too many small files, which deplete the maximum count of possible inodes.

The simplest way to list directories and the number of files in them under the server root directory is with a small bash shell loop like so:
 

for i in /*; do echo "$i"; find "$i" | wc -l; done


Another way to identify the exact directory that is most likely the bottleneck of the inode depletion, sorted by file count in a human-readable form:
 

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n


This will dump a list of every directory on the root (/) filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.

 

The -xdev switch instructs find to narrow its search to the device where the search was initiated (any sub-mounted NAS / NFS filesystems from a different device will be omitted).

 

Print top 10 subdirectories with Highest Inode Usage

 

Once you have identified the directory with the largest number of files, which is perhaps the issue, you can further get a list of its top subdirectories with the highest amount of inodes used with the command below:

 

for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -10

 

To list more than 10 of the top inode-consuming directories, change head -10 to whatever number is needed.

N.B.! Be very cautious when running the above two find commands on very large filesystems, as they are I/O-intensive, and on filesystems with failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is also possible to use du plus a rather complex sed expression:
 

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'


Results are returned with the heaviest inode consumers at the top (only the first 50 lines, with overly long paths shortened).

 

How to increase the inode count on a newly created EXT4 filesystem volume

Some filesystems such as XFS and JFS have an auto-increasing inode feature as long as there is physical space, while others such as ReiserFS have no static inode table at all (though an inode field is still reported when queried). But the classic Linux ext3 / ext4 has no way to increase the inode number on a live filesystem. Instead, the way to do it is to prepare a brand new filesystem on a disk / NAS / attached storage.

The number of inodes set at format time of the block storage can be as high as 4 billion. Before you create the new FS, you have to partition the block storage with, let's say, the parted command (or nullify its content with dd, to clean up any previously existing data on the volume if there was any):
 

parted /dev/sda


dd if=/dev/zero of=/dev/path/to/volume


  then format it with this additional parameter:

 

mkfs.ext4 -N 3000000000 /dev/path/to/volume

 

In the above example the newly created EXT4 filesystem will be created with 3 billion inodes! For setting a higher maximum inode count on the older ext3 filesystem, mkfs.ext3 can be used instead.
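Once the new filesystem is created, it is a good idea to verify the result before putting data on it, e.g. with the already shown tune2fs and df -i (device path as in the example above; the mount point is a placeholder):

root@linux:/# tune2fs -l /dev/path/to/volume | grep -i 'inode count'
root@linux:/# mount /dev/path/to/volume /mnt/new_volume && df -i /mnt/new_volume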

Bear in mind that 3 billion is a very high number; if you plan to store a large number of files / directories / link structures, raise it according to your pre-planned requirements for the FS. In most cases hardly anyone will want this number higher than 1 or 2 billion inodes.

On FreeBSD / NetBSD / OpenBSD, setting the maximum inode number for a UFS / UFS2 filesystem (the current FreeBSD default FS) is done via the newfs filesystem creation command, after the disk has been labeled with disklabel; the -i flag gives the density in bytes per inode, so a lower value yields more inodes:

 

freebsd# newfs -i 1024 /dev/ada0s1d

 

Increase the Max Count of Inodes for a /tmp filesystem

 

Sometimes on some machines it is necessary to be able to store a very high number of small files (i.e. use a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by Apache + PHP or Nginx + Perl-FastCGI are written badly and keep tons of temporary files in /tmp, leading to issues with the inode limit being exceeded.
If that's the case, as a temporary workaround you can increase the inode count for /tmp to a very high number, like 2 billion, using:

 

mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent across reboots, don't forget to put nr_inodes=whatever_bignum in the mount options of the tmpfs filesystem in /etc/fstab.
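A sketch of what such an /etc/fstab line could look like – the size and nr_inodes values below are placeholders to adjust to your own pre-planned needs:

# /etc/fstab – tmpfs mounted on /tmp with a raised inode ceiling
tmpfs   /tmp   tmpfs   defaults,size=2G,nr_inodes=2000000000   0   0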

Eventually, if you face this issue, it is best to immediately track down which application produced the mess and ask the developer to fix the program's messed-up architecture.

 

Conclusion

 

This article explained the very common issue of depleting the maximum number of inodes on a filesystem and the unpleasant consequence of being unable to create new files on a live FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, what the typical content of an inode is, and how inode addressing is handled on an FS. Further, it was explained how to get basic information about the available inodes on a filesystem, how to get filename(s) based on an inode number (with find), the well-known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information on an FS with tune2fs.
Also explained was how to identify directories containing multitudes of files, in order to determine the subdirectories consuming most of the inodes on a filesystem. Finally, it was roughly explained how to prepare an ext4 filesystem from scratch with a predefined number of inodes much higher than the usual defaults of mkfs.ext3 / mkfs.ext4 and *BSD's newfs, as well as how to raise the inode count of the /tmp tmpfs temporary RAM filesystem.

How to use zip command to archive directory and files in GNU / Linux

Monday, November 6th, 2017

how-to-use-zip-command-to-archive-directory-and-files-in-gnu-linux-and-freebsd

How to zip a directory or files with the ZIP command on Linux or any other Unix-like OS?

Why would you want to ZIP files in Linux if you already have the gzip and bzip2 archive algorithms? Well, for historical reasons .ZIP is widely supported across virtually all major operating systems: Unix, Linux, VMS, MS-DOS, OS/2, Windows NT, Minix, Atari, Macintosh, FreeBSD, OpenBSD, NetBSD, Amiga, Acorn RISC and many others.

Given that the zip command line tool is available on most GNU / Linux systems and WinZIP is available on almost all Windowses, the reason you might need to create a .zip archive might be simply to transfer files from your Linux / FreeBSD desktop system to a friend with M$ Windows.

So below is how to recursively archive files inside a directory using the zip command:
 

 $ zip -r myvacationpics.zip /home/your-directory/your-files-pictures-text/

 


or you can write it shorter, omitting .zip, as by default the zip command creates .zip files:

 

$ zip -r whatever-zip-file-name /home/your-directory/your-files-pictures-text/

 


The -r tells zip to recurse into directories (i.e. archive all files and directories inside your-files-pictures-text/).

If you need to archive recursively only files with a certain extension, such as .txt, inside the current directory:

 

$ zip -R my-zip-archive.zip '*.txt'


The above command will archive any .txt found inside your current directory; if the zip command is, for example, issued from /home/hipo, all .txt files found in /home/hipo/directory1, /home/hipo/directory2, /home/hipo/directory2/directory3/directory4 and all other contained subdirs will be added to the archive.
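Before shipping the archive, you may want to verify what actually ended up inside it; the unzip companion tool can list contents without extracting (using the archive name from the earlier example):

$ unzip -l myvacationpics.zip
$ unzip myvacationpics.zip     # extract into the current directory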

For Linux desktop users who are lazy and want to zip files without much typing, take a look at the PeaZip for Linux 7Z / ZIP GUI interface tool.

 

What is this directory /run/user/1000 on Debian and Fedora GNU / Linux?

Monday, October 23rd, 2017

what-is-this-folder-directory-run-user-1000-in-debian-debianfedoraubuntu-linux

So what are these /run/user/1000, /run/user/0, /run/user/109 directories showing up in my df -h output? Am I hacked or what?

root@noah:~# df -h|grep -i tmpfs
tmpfs 201M 22M 179M 11% /run
tmpfs 1001M 0 1001M 0% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 1001M 0 1001M 0% /sys/fs/cgroup
tmpfs 201M 0 201M 0% /run/user/0
tmpfs 201M 36K 201M 1% /run/user/1000
tmpfs 201M 16K 201M 1% /run/user/109

 

/run/user/$uid is created by pam_systemd and used for storing files used by the running processes of that user. These might be things such as your keyring daemon, pulseaudio, etc.

Prior to systemd, these applications typically stored their files in /tmp. They couldn't use a location in /home/$user, as home directories are often mounted over network filesystems, and these files should not be shared among hosts. /tmp was the only location specified by the FHS which is local and writable by all users.

However, storing all these files in /tmp is problematic, as /tmp is writable by everyone, and while you can change the ownership & mode on the files being created, it's more difficult to work with.

So systemd came along and created /run/user/$uid. This directory is local to the system and only accessible by the target user. So applications looking to store their files locally no longer have to worry about access control.

It also keeps things nice and organized. When a user logs out and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory. With various files scattered around /tmp, you couldn't do this.

I should mention that it is called $XDG_RUNTIME_DIR, documented at standards.freedesktop.org/basedir-spec/basedir-spec-latest.html.
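You can check your own session's runtime directory like this (1000 here is simply my user's UID):

hipo@noah:~$ echo $XDG_RUNTIME_DIR
/run/user/1000
hipo@noah:~$ ls $XDG_RUNTIME_DIR     # sockets and lock files of your per-user services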

What if: I have started a "background" computation process with nohup, and it saves its intermediate results/data in a temp file. Can I count on it not being wiped while the process is running, or it will be wiped, and the process started with nohup will loose its data? 

It's unlikely to be wiped while the session lasts, but /run/user is a tmpfs filesystem on Debian, Ubuntu and Fedora, so its size will be limited.

What if the pidfile belongs to a service running as root? Should its PID go under /var/run or /var/run/user/0? And since there are no active sessions, will it be removed?

The /run directory contains system information data describing the system since it was booted. Files under this directory must be cleared (removed or truncated as appropriate) at the beginning of the boot process.

The purposes of this directory were once served by /var/run. In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards compatibility.

Programs which have migrated to use /run should cease their usage of /var/run, except as noted in the section on /var/run.

Programs may have a subdirectory of /run; this is encouraged for programs that use more than one run-time file.

Users may also have a subdirectory of /run, although care must be taken to appropriately limit access rights to prevent unauthorized use of /run itself and other subdirectories.

The /run/user directory itself is used by the different user services, like dconf, pulseaudio, systemd, etc., that need a place for their lock files and sockets. There are as many subdirectories as there are different user UIDs logged in on the system.

How to synchronize with / from Remote FTP server using LFTP like with rsync

Sunday, October 15th, 2017

how-to-synchronize-from-remote-ftp-server-easily-like-rsync.jpg

Have you ever been in need of easily synchronizing with a remote host which only runs an FTP server?

Or are you in a local network and you need to mirror a directory or a couple of directories in a fast and easy to remember way?

If so, then you'll be happy to use the below LFTP command, which does pretty much the same as rsync, with the only difference that it can mirror files over FTP (the old but gold File Transfer Protocol).
 

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror REMOTE_DIRECTORY LOCAL_DIRECTORY' FTP_SERVER_HOSTNAME
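The mirror command also has a reverse flag for pushing in the opposite direction (uploading a local directory to the remote FTP); credentials and paths below are the same placeholders as above:

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror -R LOCAL_DIRECTORY REMOTE_DIRECTORY; quit' FTP_SERVER_HOSTNAME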


Enjoy and thanks to my dear friend Amridikon for the tip ! 🙂

Some standard software programs to install on Windows to make your Windows feel more like a Linux / Unix Desktop host

Friday, March 17th, 2017

linux-freebsd-unix-migration-to-windows-some-useful-customizations-and-program-softwares-to-install-to-make-your-windows-feel-like-more-linux-and-bsd-unix

If you're a Windows user like me – a dedicated Linux / FreeBSD / OpenBSD / NetBSD Unix user who ended up working for financial reasons in some TOP 100 Fortune company (CSC, SAP, IBM, Hewlett Packard Enterprise, Oracle, etc.), forced for business purposes to use a notebook pre-installed with Windows 7 / 8 or 10 (because some programs such as Skype for Business Desktop Sharing don't run fine on Unix-like OSes) – and you're so accustomed to customizations from UNIX environments, you would probably like to make Windows resemble Linux and customize much of how Windows behaves by default.

Here is what I personally did on my work Windows 7 Enterprise HP EliteBook notebook to give myself the extra things I'm used to from my Debian Linux desktop.


1. Downloaded and installed a standard gnome-terminal / xterm-like terminal emulator immediately (e.g. check MobaXterm, a great alternative to PuTTY).
2. Customized the Windows 7 appearance to be more like classic Windows XP, and changed the Windows 8 / 10 start menu appearance to be more like in classic Windows 2000.
3. Installed the following bunch of software:

  • VIM Text Editor for Windows
  • Thunderbird Mail Client
  • OpenVPN client
  • Oracle VM Virtualbox
  • Opera
  • Mozilla Firefox
  • Password Safe
  • Ext2FS / Ext3FS (support programs)
  • F.lux (to auto adjust screen brightness day and night for better sleep)
  • install ActivePerl for Windows
  • Install GNUWin Tools (and perhaps most importantly)
  • CygWin,  (to provide Windows with most needed console Linux tools), Clink.
  • WinSCP
  • Swish (to be able to remotely mount your Linux partitions and see them as local Windows drives)
  • dosbox (to play some of the good old Dos games :))
  • Windirstat (to easily check the size of complete directory and subdirectories)
  • SpaceSniffer (to be able to see which directory or files are taking the most space on the system)


Along with all the above goodies, this is also some good software I find essential for every web developer / system administrator / network administrator or Java, C, PHP programmer out there using Windows as his desktop platform.

Another thing I prefer on Windows 7, when used as a workstation, is to change the default Windows 7 LogonUI screen background; check out how here.

Perhaps there are plenty of other good programs to install on Windows to make it feel even more like a Linux / Unix desktop host; if you happened to somehow get stuck on this article and you've migrated from a Linux / BSD desktop to Windows for work purposes, please share with me any other goodies you happen to use that come from *nix.

Must have software on freshly installed Windows – Essential Software after a fresh Windows install

Friday, March 18th, 2016

Install-update-multiple-programs-applications-at-once-using-ninite

If you're in the IT industry, even if you don't like installing Windows frequently or you're a complete Linux / BSD user, you will certainly have a lot of friends who will want your help to re-install or fix their Windows 7 / 8 / 10 OS. At least this is the case with me every year; I'm kind of obliged to install fresh Windowses on friends' or relatives' newly bought notebooks / desktop PCs.

Of course, the preferred set of necessary software varies according to whom the new Windows OS is installed for; however, there is more or less a standard list of Windows software used daily by the average computer user which I tend to install on new Windows setups, and thus I have more or less systematized the process.

I usually try to stick to free software where possible for each of the needed categories, being a Free Software enthusiast, and luckily nowadays there is a lot of non-proprietary or at least free-as-in-beer software available out there.

For Windows sysadmins, or college and other public institution networks with multiple Windows computers that are not inside a domain, and also for people in computer repair shops where dozens of Windows pre-installs or sets of automatic software updates are necessary daily, make sure to take a look at Ninite.

ninite-automate-windows-program-deploy-and-update-on-new-windows-os-openoffice-screenshot

As the official website introduces Ninite:

Ninite – Install and Update All Your Programs at Once

Of course, as Ninite is used by organizations such as NASA, Harvard Medical School, etc., it is likely the tool reports your installed list of Windows software and various other PC statistics to the Ninite developers and most likely the NSA, but this probably doesn't matter much, as that was settled by the moment you chose to have Windows installed on your PC.

ninite-choises-to-build-an-install-package-with-useful-essential-windows-software-screenshot
 

For Windows system administrators managing small and middle-sized networks of PCs that are not inside a Domain Controller, Ninite could definitely save hours and in some cases even days of boring install and maintenance work. HP Enterprise or HP Inc. employees or ex-employees would definitely love Ninite, because what Ninite does is pretty much like the well-known HP internal tool PC COE.

Ninite can also prepare a single installer containing multiple applications based on your choices on Ninite's website, which is also a great thing, especially if you need to deploy different types of user PCs (scientific / gamers / office work, etc.).

Perhaps there are also other useful things to install on a fresh new Windows installation; if you're using something I'm missing, let me know in the comments.

chmod all directories permissions only and omit files (recursively) on Linux howto

Friday, March 11th, 2016

execute-write-read-of-user-group-and-others-on-linux-unix-bsd-explanationary-picture

If you have mistakenly chmod-ed all files within a directory full of multiple other subdirectories and files, and you want to revert back and set certain permission (read, write, execute) privileges only on the directories:
 

find /path/to/base/dir -type d -exec chmod 755 {} +


If there are too many files or directories whose mode you need to change, use:
 

chmod 755 $(find /path/to/base/dir -type d)
chmod 644 $(find /path/to/base/dir -type f)

The above will evaluate the $() command substitution, listing all found files / directories and passing them to chmod at once, so if you have many files / directories to change it can drastically reduce execution time. Note, however, that this approach can fail with an "Argument list too long" error on huge trees and breaks on filenames containing spaces.

An alternative, and perhaps a better way to do it for those who don't remember the numeric chmod permissions by heart, is something like:
 

chmod -R u+rwX,go+rX,go-w /path

Below is arguments meaning:

    -R = recursively;
    u+rwX = Users can read, write and execute;
    go+rX = group and others can read and execute;
    go-w = group and others can't write

If you like piping, a less efficient but still working way to change all directory permissions only is with:
 

find /path/to/base/dir -type d -print0 | xargs -0 chmod 755
find /path/to/base/dir -type f -print0 | xargs -0 chmod 644


For those who wish to automate and often change the permissions of only files or only directories, it might also be nice to look at the (chmod_dir_files-recursive.sh) shell script.

Tadadam 🙂

 

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

sysstast-no-such-file-or-directory-fix-Debian-Ubuntu-Linux-howto
I really love sysstat, and as a console maniac I tend to install it on every server. However, by default there is some sysstat tuning needed after installation to make it work. For those unfamiliar with sysstat, I warmly recommend checking it out; here in short is the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


, and you try to get some statistics with the sar command but get an ugly error output:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


and you wonder how to resolve it so that the server can periodically log the nice sar stats – load averages, %idle, %iowait, %system, %nice, %user – into its text databases, then to FIX that "Cannot open /var/log/sysstat/sa20: No such file or directory":

You need to:

server:~# vim /etc/default/sysstat


By default you will find that sysstat data collection is disabled, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart
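While waiting for cron to collect the first data points into /var/log/sysstat, you can also take live samples directly, e.g. three CPU utilization samples one second apart:

server:~# sar -u 1 3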

However, for those who prefer to do things from menu-driven ncurses interfaces and are not familiar with Vi Improved, the easiest way is to run a dpkg reconfigure of sysstat:

server:~# dpkg-reconfigure sysstat


sysstat-reconfigure-on-gnu-linux

 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01     CPU   %user   %nice  %system  %iowait  %steal   %idle
00:15:01     all   24,32    0,54     3,10     0,62    0,00   71,42
01:15:01     all   18,69    0,53     2,10     0,48    0,00   78,20
10:05:01     all   22,13    0,54     2,81     0,51    0,00   74,01
10:15:01     all   17,14    0,53     2,44     0,40    0,00   79,49
10:25:01     all   24,03    0,63     2,93     0,45    0,00   71,97
10:35:01     all   18,88    0,54     2,44     1,08    0,00   77,07
10:45:01     all   25,60    0,54     3,33     0,74    0,00   69,79
10:55:01     all   36,78    0,78     4,44     0,89    0,00   57,10
16:05:01     all   27,10    0,54     3,43     1,14    0,00   67,79


Well, that's it; the sysstat error is resolved and text stats reporting works again. Hooray! 🙂

How to find and Delete Duplicate files in directory on Linux server with find and fdupes command

Monday, March 16th, 2015

search-duplcate-files-linux-command-and-graphical-tools-how-to-find-duplicate-files-on-linux-mac-and-windows-os

The Linux / UNIX find command is very helpful for a lot of sysadmin tasks, such as deleting empty directories to free up occupied inodes, or finding and printing only the empty files within a root file system and all its sub-directories.
There are a multitude of uses for find; however, one use probably rarely known among sysadmins is searching for duplicate files on a Linux server:
 

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate

If you're curious how duplicate file finding works: duplicates are found by comparing file sizes and MD5 signatures, followed by a byte-by-byte comparison.

The most common application of the command below is when you want to find and get rid of some old obsolete files which you forgot to delete, such as old /etc/ configurations, old SQL backups, PHP / Java / Python programming code files, etc.

If you have to do regular duplicate file searches on multiple Linux servers, perhaps you should install and use the fdupes command.
To install it on Debian Linux:

root@pcfreak:/# apt-cache show fdupes|grep -i descr -A 4
Description: identifies duplicate files within given directories
 FDupes uses md5sums and then a byte by byte comparison to find
 duplicate files within a set of directories. It has several useful
 options including recursion.
Homepage: http://code.google.com/p/fdupes/

 

root@pc-freak.net:/# apt-get install --yes fdupes

To search for duplicate files with fdupes in, let's say, /etc/, just run fdupes with the directory as its only argument:

 

root@pcfreak:/# fdupes /etc/
/etc/magic
/etc/magic.mime

/etc/odbc.ini
/etc/.pwd.lock
/etc/environment
/etc/odbcinst.ini

/etc/shadow-
/etc/shadow


If you want to look for all duplicate files recursively within a directory tree:
 

root@pcfreak:/# fdupes -r /etc/
Building file list /

 

You can also find duplicate files across multiple directories by just passing all the directories as arguments to fdupes:

 

root@pcfreak:/# fdupes -r /etc/ /usr/ /root /disk /nfs_mount /nas


The -r argument makes a recursive subdirectory search for duplicates; if you also want to see the size of the duplicate files found, add the -S option:

 

fdupes -r -S /etc/ /usr/ /root /disk /nfs_mount /nas

 


If you want to delete all duplicate files within, let's say, /etc/:

 

root@pcfreak:/# fdupes -d /etc/
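Note that -d interactively prompts which copy to preserve for every duplicate set; on big trees it can be combined with -N (noprompt), which keeps the first file of each set and deletes the rest without asking – use with care (the /backup path below is just an example):

root@pcfreak:/# fdupes -r -d -N /backup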

fdupes is also available and installable on RPM-based Linux distros (Fedora / RHEL / CentOS etc.); install it on CentOS with:
 

[root@centos~ ]# yum -y install fdupes


There is also a port available for those who want to run it on FreeBSD; install it from ports:

 

freebsd# cd /usr/ports/sysutils/fdupes
freebsd# make install clean


If you have a GUI environment installed on the server and you don't want to bother with the command line to search for all duplicate files under the main filesystem, as well as other lint (junk) files, take a look at FSlint.

FSlint-2.02-search-for-duplicate-and-lint-files-linux-gui-tool

If you're looking for a GUI cross-platform duplicate file finder tool that runs on all the major operating systems (Mac OS X / Windows / Linux), take a look at dupeGuru.

 

Windows: how to check which process locks a file – an M$ Windows equivalent of the lsof command

Monday, February 23rd, 2015

windows-how-to-check-which-process-locks-file-command-a-ms-windows-equivalent-of-lsof-command

I had a task today to deploy a new WAR (Web Application Archive) file on an Apache Tomcat server running on a Windows Server 2008 R2 UAT environment.
The client Tomcat application within the WAR provides a frontend to a proprietary Risk Analysis application called Risiko Management (developed by a German vendor called Schleupen).
The update of the WAR file was part of a version upgrade of the application, so both the "Risk Analysis" standalone desktop server RiskKit and the web frontend developed by Schleupen had to be updated.
In order to update, I followed the usual Tomcat .WAR Java file update process.

1. Stopped the running Tomcat service instance via services.msc, e.g.:
 

Start (menu) -> Run
 

services.msc

 

stopping-tomcat-application-howto-stop-service-ms-windows-screenshot
 


2. Moved (by renaming) the old risk-analysis.war to risk-analysis_backup_2015.war,

and also renamed the folder Tomcat automatically extracted (a directory named the same as the WAR archive file) – D:\web\Apache-Tomcat-7.0.33\webapps\Risiko-Analysis\ to D:\web\Apache-Tomcat-7.0.33\webapps\Risiko-Analysis_backup_2015\ – i.e. ran:
 

C:\Users\risk-analysis> D:
D:\>
D:\> CD \Web\Apache-Tomcat-7.0.33\webapps\

D:\Web\Apache-Tomcat-7.0.33\webapps> move risk-analysis.war risk-analysis_2015.war
D:\Web\Apache-Tomcat-7.0.33\webapps> move Risiko-Analysis\ Risiko-Analysis_backup_2015\


But unfortunately I couldn't rename it and I got below error:

move-windows-command-access-is-denied-tiny-screenshot

I also tried copying it using Windows Explorer Copy / Paste, but this didn't work either and I got the below error:

cant-move-risk-analysis-tomcat-java-application-error-ms-windows-screenshot

3. Finding what Locks a directory or File on M$ Windows


Obviously, the reason I was unable to copy the directory was that something was locking it. Actually there are plenty of locked files on a running Windows system, as many running applications like Explorer keep files locked. A good example of an always-locked file is the Windows swap file pagefile.sys – the Windows equivalent of a Linux swap filesystem (enabled / disabled with the swapon / swapoff commands).

Having the directory locked was a strange problem, because the Tomcat process was not running, as I checked closely both in the Windows taskmgr GUI and manually by grepping for the process with the tasklist command like so:

 

d:\>tasklist /m|find /i "tomcat"


tomcat7.exe                   4396 ntdll.dll, kernel32.dll, KERNELBASE.dll,

For people like me who primarily use Linux, the above command shows very precious debugging information: which Windows libraries (DLLs) are loaded in memory and used by the process.

 

(Note that when Tomcat is running, it is visible with the command)
 

D:\> wmic.exe process list brief | find /i "tomcat"
526          tomcat7.exe          8         4396       49           156569600


Just for those wondering, the 156569600 number is the amount of memory in bytes (the working set) used by Tomcat.

After Tomcat was stopped, the above command returned an empty string, obviously meaning that Tomcat was stopped.

BTW, the wmic command is very useful for getting a list of process names (to list all running processes):

 D:> wmic.exe process list brief

get-all-process-names-in-command-line-with-windows-wmic-command-screenshot

Well, obviously something was locking this directory (some of its subdirectories or a filename within the directory / folder), so I couldn't rename it just like that.
On Linux, finding which daemon (service) is locking a file is pretty easy with the lsof command (for those new to lsof, check my previous article on how to check which process listens on a network port in Linux); however, it was unknown to me how to check which running service is locking a file on Windows, so I did a quick Google search, which pointed me to the famous Handle program, part of the SysInternals tools.
The command line tool Handle.exe was exactly what I was looking for.

handle-sysinternals-tool-to-windows-see-all-locked-files-and-what-is-locking-them-ms-windows-screenshot

To get a list of all opened (locked) files and see which application has opened them, just execute the command without arguments; you will get plenty of useful info which will help you better understand what the Windows OS is doing invisibly in the background and which app uses what.

handle-command-part-of-sysinternals-witout-any-arguments-display-opened-locked-files-in-windows

handle is pretty much the Windows equivalent of the Linux lsof command.

To find which file was locked by Tomcat, I used handle in conjunction with the find /i command, which is pretty much the equivalent of Linux's grep:

 

C:\TEMP> Handle.exe | FIND /I "Tomcat"
   1C: File  (RW-)   D:\Web\Apache-Tomcat-7.0.33\webapps\Risk-Analysis\images\app


Alternatively, if you have SysInternals and prefer a GUI environment, you can use SysInternals Process Explorer (press CTRL + F) and search for the string:

process-explorer-toolbar-find-what-is-locking-a-file-or-directory-windows

Next to handle, I also found another GUI program (a Windows Explorer extension) called WhoLockMe, which can be used to show all running programs and the files locked by these programs.
WhoLockMe is pretty straightforward to use; though it shows GUI output, you have to register it from the cmd line. Below is a sample output screenshot of WhoLockMe.


who-lock-me-windows-screenshot-see-which-files-running-programs-are-locking-on-ms-windows

 

To install WhoLockMe:


Unzip "WhoLockMe.zip" in a directory (for exemple : "C:\Program Files\WhoLockMe")
Launch "Install.bat" or execute this Windows registry modification command :
 

regsvr32 "C:\Program Files\WhoLockMe\WhoLockMe.dll"


To uninstall WhoLockMe later, if you need to:

 

Execute command :
 

regsvr32 /u "C:\Program Files\WhoLockMe\WhoLockMe.dll"


Reboot (or kill Explorer.exe).

Then remove the "C:\Program Files\WhoLockMe" directory and its contents.

There are probably other ways to find out what is locking a file or directory using PowerShell or .bat (batch) scripting. If you know another way using default Windows built-in commands, please share in the comments.