Posts Tagged ‘directory’

Must-have software on a freshly installed Windows – Essential Software after a fresh Windows install

Friday, March 18th, 2016

Install-update-multiple-programs-applications-at-once-using-ninite

If you're in the IT industry, even if you don't install Windows frequently or you're a complete Linux / BSD user, you will certainly have a lot of friends who will want your help to re-install or fix their Windows 7 / 8 / 10 OS. At least this is the case with me every year; I'm kind of obliged to install fresh Windows on friends' or relatives' newly bought notebooks / desktop PCs.

Of course, depending on whom the new Windows OS is installed for, the preferences for necessary software vary; however, there is more or less a standard list of Windows software used daily by the average computer user. Good candidates to install on a fresh Windows installation are:

  • WinRAR
  • PeaZip
  • WinZip
  • GreenShot (to easily take screenshots and save pictures locally and to the cloud)
  • AnyDesk (a non-free but very functional alternative to TeamViewer, for remote access to PCs)
  • TightVNC
  • iTunes / Spotify (for people who also have an iPhone)
  • DropBox or pCloud (to have some extra free cloud space)
  • FBReader (for those reading a lot of books in different formats)
  • Rufus – an efficient and lightweight tool to create bootable USB drives (BIOS or UEFI bootable devices, Windows To Go drives), with support for various disk formats and partition schemes
  • Recuva – data recovery software for Windows 10 (non-free)
  • EaseUS (for specific backup / restore purposes, but unfortunately non-free)
  • For designers: Adobe Photoshop and Adobe Illustrator
  • f.lux – to control screen brightness and potentially save your eyes
  • ImDisk Virtual Disk Driver
  • KeePass / PasswordSafe – to securely store your passwords
  • PuTTY / MobaXterm / SecureCRT / mPutty (for system administrators and programmers who have to deal with Linux / UNIX)

These are the programs I tend to install on new Windows setups, and thus I have more or less systematized the process.

As a Free Software enthusiast I try to stick to free software where possible for each of the above categories, and luckily nowadays there is a lot of non-proprietary, or at least free-as-in-beer, software available out there.

For Windows sysadmins, for college and other public institution networks with multiple Windows computers that are not inside a domain, and also for people in computer repair shops where dozens of Windows pre-installs or automated software updates are necessary daily, make sure to take a look at Ninite.

ninite-automate-windows-program-deploy-and-update-on-new-windows-os-openoffice-screenshot

As the official website introduces Ninite:

Ninite – Install and Update All Your Programs at Once

Of course, as Ninite is used by organizations such as NASA and Harvard Medical School, it is likely the tool reports your list of installed Windows software and various other PC statistics to the Ninite developers and most likely to the NSA, but that probably doesn't matter much, as you accepted that the moment you chose to run a Windows OS on your PC.

ninite-choises-to-build-an-install-package-with-useful-essential-windows-software-screenshot
 

For Windows system administrators managing small and middle-sized networks of PCs that are not joined to a domain, Ninite could definitely save hours and in some cases even days of boring install and maintenance work. HP Enterprise or HP Inc. employees or ex-employees would definitely love Ninite, because what Ninite does is pretty much like the well-known HP internal tool PC COE.

Ninite can also prepare a single installer containing multiple applications based on your choices on Ninite's website, which is also a great thing, especially if you need to deploy different types of user PCs (scientific / gaming / office work etc.).

Perhaps there are other useful things to install on a fresh Windows installation; if you're using something I'm missing, let me know in the comments.

Analyze disk space usage in Linux / BSD with du / find and the filelight / qdirstat / baobab GUI disk usage analyzers to check what takes up your disk space on Unix-like OSes

Friday, April 21st, 2023

linux-how-to-find-out-what-files-and-directories-has-occupied-all-your-disk-space-partition-from-console-and-GUI_du-find-filelight-baobab-qdirstat-duff-linux-450x450

If you're a desktop Linux or BSD UNIX user and the space on your hard disk / external SSD / flash drive etc. starts to mysteriously disappear for whatever reason (a crashing application rapidly producing log error / warning messages that quickly fill up the disk, disk space suddenly lost without knowing what kind of data filled it, some big bittorrent files forgotten in your bittorrent client, or a complete mirror of a large website that suddenly leaves the root directory ( / ) full or nearly full), then you'll definitely want to check what has eaten up your disk space and is leading to OS and application slow responsiveness.

For the regular *nix user, finding out what is filling the disk is a trivial task with find / du -hsc *, but as people have different habits for using find and du, I'll show you, for the sake of comparison, the most common ways I use these two command-line tools to identify low disk space issues.
Others who have better or easier ways to do it are very welcome to share them with me in the comments.
 

1. Finding large files on hard disk with find Linux command tool
 

host:~# find /home -type f -printf "%s\t%p\n" | sort -n | tail -10
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-6.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-7.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-8.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-9.bin
2815424080    /home/hipo/.thunderbird/h3dasfii.default\
/ImapMail/imap.gmail.com/INBOX
2925584895    /home/hipo/Documents/.git/\
objects/pack/pack-8590b069cad26ac0af7560fb42b51fa9bfe41050.pack
4336918067    /home/hipo/Games/Mames_4GB-compilation-best-arcade-games-of-your-14_04_2021.tar.gz
6109003776    /home/hipo/VirtualBox VMs/CentOS/CentOS.vdi
23599251456    /home/hipo/VirtualBox VMs/Windows 7/Windows 7.vdi
33913044992    /home/hipo/VirtualBox VMs/Windows 10/Windows 10.vdi

I use find less often on desktops and more when I have to do some kind of data usage analysis on servers; for my Linux home computer and any other Linux desktop machine, or just for a small, non-exhaustive analysis, the du command is much more appropriate.


2. Finding large space-occupying files, sorted in Megabytes and Gigabytes, with du
 

  • Check the top 10 megabyte-sized files hanging around in a directory

pcfkreak:~# du -hsc /home/hipo/*|grep 'M\s'|sort -rn|head -n 10
956M    /home/hipo/last_dump1.sql
711M    /home/hipo/hipod
571M    /home/hipo/from-thinkpad_r61
453M    /home/hipo/ultimate-edition-themes
432M    /home/hipo/metasploit-framework
355M    /home/hipo/output-upgrade.txt
333M    /home/hipo/Плот
209M    /home/hipo/Work-New.tar.gz
98M    /home/hipo/DOOM64
90M    /home/hipo/mp3

  • Get the top 10 largest files in gigabytes that are space-hungry and eating up your space

pcfkreak:~# du -hsc /home/hipo/*|grep 'G\s'|sort -rn|head -n 10
156G    total
60G    /home/hipo/VirtualBox VMs
37G    /home/hipo/Downloads
18G    /home/hipo/Desktop
11G    /home/hipo/Games
7.4G    /home/hipo/ownCloud
7.1G    /home/hipo/Документи
4.6G    /home/hipo/music
2.9G    /home/hipo/root
2.8G    /home/hipo/Documents


If you still want to work in the console terminal but don't want to type too much, you can use the ncdu (ncurses) text tool; install it with:

# apt install --yes ncdu


ncdu-gnu-linux-debian-screenshot
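
Then run it against the filesystem or directory you suspect; the -x flag keeps ncdu from crossing into other mounted filesystems, and inside the ncurses interface you can browse with the arrow keys and (carefully) delete selected files with the 'd' key:

# ncdu -x /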

For the laziest ones, or complete Linux newbies who don't want to spend time typing / learning text commands and console software, you can also check what has eaten up your full disk space with GUI tools as well.

There are at least 3 tools I'm aware of for checking what has occupied your disk space on Linux / BSD via a graphical interface:

3. Filelight GUI disk usage analysis Linux tool

For those using KDE or preferring a shiny GUI interface that will capture the eye, filelight is perhaps the tool of choice to get a summarized analysis of your directory structures and file usage on a laptop or desktop *nix OS.

unix-desktop:~# apt-cache show filelight|grep -i description-en -A 7
Description-en: show where your diskspace is being used
 Filelight allows you to understand your disk usage by graphically
 representing your filesystem as a set of concentric, segmented rings.
 .
 It is like a pie-chart, but the segments nest, allowing you to see both
 which directories take up all your space, and which directories
 and files inside those directories are the real culprits.
Description-md5: 397ff9a469e07a772f22460c66b66875


To use it, simply go ahead and install it with apt or yum / dnf or whatever package manager your distro uses:

unix-desktop:~# apt-get install --yes filelight

filelight-show-where-disk-space-is-being-used-graphically-tool-linux

4. GNOME DIsk Usage Analyzer Baobab GUI tool

For GNOME / MATE / Budgie / Cinnamon graphical interface users, baobab should be the program of choice, as it is a native GNOME (GTK-based) application.

unix-desktop:~# apt-cache show baobab|grep -i description-en -A10
Description-en: GNOME disk usage analyzer
 Disk Usage Analyzer is a graphical, menu-driven application to analyse
 disk usage in a GNOME environment. It can easily scan either the whole
 filesystem tree, or a specific user-requested directory branch (local or
 remote).
 .
 It also auto-detects in real-time any changes made to your home
 directory as far as any mounted/unmounted device. Disk Usage Analyzer
 also provides a full graphical treemap window for each selected folder.
Description-md5: 5f6072b89ebb1dc83433fa7658814dc6
Homepage: https://wiki.gnome.org/Apps/Baobab

 

gnome-disk-analyzer-baobab-tool-screenshot-of-hard-disk-directory-locations-sorted-by-size

5. Qdirstat graphical application to show where your disk space has gone on Linux

QDirStat is perhaps the best-known tool for tracking down disk space issues on Linux desktop hosts, favored by hardcore KDE / LXDE / LXQt / DDE GUI environment lovers, and it is built on the Qt library. I personally don't like it and don't put it on machines I use, because I never use KDE and don't want to waste my disk space on additional libraries such as Qt, which historically was not totally free in terms of licensing and even now is dual-licensed under both free (GPL / LGPL) and commercial Qt licensing.

unix-desktop:~# apt-cache show qdirstat|grep -i description-en -A10
Description-en: Qt-based directory statistics
 QDirStat is a graphical application to show where your disk space has gone and
 to help you to clean it up.
 .
 QDirStat has a number of new features compared to KDirStat. To name a few:
  * Multi-selection in both the tree and the treemap.
  * Unlimited number of user-defined cleanup actions.
  * Properly show errors of cleanup actions (and their output, if desired).
  * File categories (MIME types) and their treemap color are now configurable.
  * Exclude rules for directories are easily configurable.
  * Desktop-agnostic; no longer relies on KDE or any other specific desktop.


qdirstat-linux-screenshot-show-what-directory-uses-most-hard-disk-space

That shiny fused graphic is actually a treemap representation of all directories, the bigger the rectangle the bigger the directory, and if you hover over the colorful areas, the corresponding directory or file name and size will appear. Though the graphical representation is really c00l to me, it is a bit unreadable, thus I prefer and recommend the other two GUI tools, filelight or baobab, instead.

6. Finding duplicate files on Linux system with duff command tool

Talking about big unknown left-over files on your hard drives, it is appropriate to mention here one tool that is console-based but very useful to anyone willing to get rid of old duplicate files hanging around on the disk. Sometimes such copies are produced while moving large numbers of files from place to place, or simply by mistake while copying photo / video files from your smartphone to the Linux desktop, etc.

This is where the duff command line utility might be super beneficial for you.

unix-desktop:~# apt-cache show duff|grep -i description-en -A3
Description-en: Duplicate file finder
 Duff is a command-line utility for identifying duplicates in a given set of
 files.  It attempts to be usably fast and uses the SHA family of message
 digests as a part of the comparisons.

Using the duff tool is very straightforward; to see all the duplicate files hanging in a directory, let's say your home folder:

unix-desktop:~#  duff -rP /home/hipo

/home/hipo/music/var/Quake II Soundtrack – Kill Ratio.mp3
/home/hipo/mp3/Quake II Soundtrack – Kill Ratio.mp3
2 files in cluster 44 (7913472 bytes, digest 98f38be49e2ffcbf90927f9357b3e24a81d5a649)
/home/hipo/music/var/HYPODIL_01-Scakauec.mp3
/home/hipo/mp3/HYPODIL_01-Scakauec.mp3
2 files in cluster 45 (2807808 bytes, digest ce9067ce1f132fc096a5044845c7fac73e99c0ed)
/home/hipo/music/var/Quake II Suondtrack – March Of The Stoggs.mp3
/home/hipo/mp3/Quake II Suondtrack – March Of The Stoggs.mp3
2 files in cluster 46 (3506176 bytes, digest efcc401b4ebda9b0b2367aceb8e334c8ba1a357d)
/home/hipo/music/var/Quake II Suondtrack – Quad Machine.mp3
/home/hipo/mp3/Quake II Suondtrack – Quad Machine.mp3
2 files in cluster 47 (7917568 bytes, digest 0905c1d790654016c2ecf2949f78d47a870c3822)
/home/hipo/music/var/Cyberpunk Group – Futureshock!.mp3
/home/hipo/mp3/Cyberpunk Group – Futureshock!.mp3

-r (Recursively search into all specified directories.)

-P (Don't follow any symbolic links. This overrides any previous -H or -L option and is the default. Note that this only applies to directories, as symbolic links to files are never followed.)

7. Deleting duplicate files with duff

If you're absolutely sure you know what you're doing, and you have a backup in case something goes wrong during the duplicate deletions, then to get rid of, let's say, any duplicate picture files found by duff, run something like:

# duff -e0 -r /home/hipo/Pictures/ | xargs -0 rm
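
If you're not that sure yet, a safer workflow is to first review what the excess mode (-e) would select, and only then run the destructive one-liner above:

# duff -e -r /home/hipo/Pictures/ | less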

!!! Please note that duff is for those who absolutely know what they're doing and have a recent backup of their data. If you delete the wrong data by mistake with the tool, you'll have only yourself to blame 🙂 !!!

Wrap it Up

Filling up the disk with unknown large files is a problem that needs solving quite often. For the non-lazy on Linux / BSD / Mac OS and other UNIX-like OSes, the easiest way is to use find or du with some one-liner command. For the lazy, Windows-addicted graphical users, the filelight, qdirstat or baobab GUI disk usage analysis tools are there.
If you have a lot of files and many of them are duplicates, you can use duff to check them out and remove all unneeded duplicates to save space.
Hope this article was helpful to someone.
That's all folks, enjoy your disk housekeeping; if you know any other good and easy command-line or GUI tools or hints for keeping drive disk space in check, please share.

Check when a Windows Active Directory user password expires and set user password expiry to Never

Thursday, January 9th, 2020

micorosoft-windows-10-logo-net-user-command-check-expiry-dates

If you're working for a company that follows high security / PCI security standards, and you're using an m$ Windows OS that belongs to a domain, it is useful to know when your user account password is set to expire, i.e. how many days are left until you'll be forced to change your Windows AD password.
In this short article I'll explain how to check the Windows AD last password set date and password expiry date, how to list expiry dates for other users, and finally how to set your expiry date to Never, to get rid of the annoying password change every 90 days.

 

1. Query a domain username for password set / password expiry dates

To find out the password expiration date for an Active Directory user account, just open the command line prompt, cmd.exe,

And run command:
 

 

NET USER Your-User-Name /domain


net-user-domain-command-check-AD-user-expiry

Note that many companies, for security reasons, only connect you to AD over a VPN connection with something like Cisco AnyConnect Secure Mobility Client, or whatever VPN tool is used to encrypt the traffic between you and the corporate DMZ-ed network.

Below is basic NET USER command usage args:

Net User Command Options
 

Item          Explanation

net user    Execute the net user command alone to show a very simple list of every user account, active or not, on the computer you're currently using.

username    The name of the user account, up to 20 characters long, that you want to make changes to, add, or remove. Using username with no other option shows detailed information about the user in the Command Prompt window.

password    Use the password option to modify an existing password or assign one when creating a new username. The minimum number of characters required can be viewed using the net accounts command. A maximum of 127 characters is allowed.

*    You also have the option of using * in place of a password, to force entering the password in the Command Prompt window after executing the net user command.

/add    Use the /add option to add a new username on the system.

options    See Additional Net User Command Options below for a complete list of available options to be used at this point when executing net user.

/domain    This switch forces net user to execute on the current domain controller instead of the local computer.

/delete    The /delete switch removes the specified username from the system.

/help    Use this switch to display detailed information about the net user command. Using this option is the same as using the net help command with net user: net help user.

/?    The standard help switch also works with the net user command, but only displays the basic command syntax. Executing net user without options is equal to using the /? switch.

 

 

2. Listing all Active Directory users' password last set dates, never-expires flags and expiration dates


If you have the respective Active Directory rights and the Remote Server Administration Tools for Windows (RSAT Tools) installed, you can also do other interesting stuff, such as using PowerShell to list all users' password last set dates. To do so, open PowerShell and issue:

get-aduser -filter * -properties passwordlastset, passwordneverexpires |ft Name, passwordlastset, Passwordneverexpires


get-aduser-properties-passwordlastset-passwordneverexpires1

This should show you info as password last set date and whether password expiration is set for account.

– Using PS to get only the password expiration dates for all existing AD users:

 

Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False} -Properties "DisplayName", "msDS-UserPasswordExpiryTimeComputed" |
Select-Object -Property "Displayname",@{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}


If you need the output data stored in CSV-delimited format, you can append the following to the above PS commands:
 

| export-csv YOUR-OUTPUT-FILE.CSV

 

3. Setting a user password to never expire

 

If the user was created with the NET USER command, by default it will have been created with password expiration enabled.
However, if you need to create new users for yourself (assuming you have the rights) with passwords that never expire, on let's say Windows Server 2016 (if you don't care so much about security), use:
 

NET USER "Username" /Add /Active:Yes

WMIC USERACCOUNT WHERE "Name='Username'" SET PasswordExpires=False

 

NET-USER-ADD_Active-yes-Microsoft-Windows-screenshot

NET-USER-set-password-policy-to-Never-expiry-MS-Windows

To view the general password policies, type the following:
 

NET ACCOUNTS


NET-ACCOUNTS-view-default-Microsoft-Windows-password-policy
 

 

What is an inode, how to find out which directory is eating up all your filesystem inodes on Linux, and how to increase the inode count on ext3 / ext4 and UFS filesystems

Tuesday, August 20th, 2019

what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram

If you're a system administrator of multiple Linux servers used for web serving / mail delivery, a database admin, an admin of storage with a high number of drives used for backup infrastructure, a data repository administrator (e.g. Linux-hosted Samba / CIFS shares, etc.), someone using a Linux hosting provider for your website, or you run any other UNIX-like infrastructure servers that demand storing a high number of files under a single directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined, limited, and dependent on the configured filesystem size).

In case the files stored in a directory end up exceeding the number of addressable inodes, no further data can be assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on it, even though there may be plenty of free space available; the first time it happened to me, a very long time ago, I was completely puzzled how this is possible, as I was not aware of the existence of inodes…

Reaching the maximum inode number (i.e. inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Network servers (CDN – website image caching servers) which contain many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.

 

Hence, ending up out of inodes can cause various oddities in how stored data behaves or is communicated to other connected microservices, and can lead to random application disruptions and odd results, costing you many hours of debugging to find the root cause: inodes (index nodes) being out of order.

In the article below, I will try to give an overall explanation of what an inode is on a filesystem, how the inodes of an FS unit can be seen, how to diagnose a possible inode problem (e.g. see the maximum number of inodes available per filesystem), and how to prepare (format) a new filesystem with an increased maximum number of inodes.

 

What are filesystem i-nodes?

 

An inode is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The data structure kept in an inode might vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the index to the block that contains the attributes and disk block location(s) of the object's data.
– Yes, for those who are not aware how a filesystem is structured on *nix: it allocates all stored data in logically separated structures called data blocks. Each file stored on a local filesystem has a file descriptor, there are virtual unit structures called file tables, and each inode, being a reference number, has its own data structure (the inode table).

Inodes / "Index" are slightly unusual on file system structure that stored the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from this as explained by Unix creator and pioneer- Dennis Ritchie (passed away few years ago).

what-is-inode-very-simplified-explanation-diagram-data

Simplified explanation of file descriptors, file tables and the inode table on a common Linux filesystem

Here is another description of what an i-node is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


 

What is the typical content of an inode and how do i-nodes play with the rest of the filesystem units?


The inode is just a reference index to a data block (unit) that contains the file-system object's attributes. It may include metadata information such as times of last change, access and modification, as well as owner and permission data.

 

On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.
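
You can see that name-to-inode mapping directly with the -i flag of ls; the example below (output from my system) prints the inode numbers of the . (self) and .. (parent) entries of /etc/ (on ext filesystems the root directory traditionally has inode number 2):

root@linux:/# ls -1id /etc/. /etc/..
6365185 /etc/.
      2 /etc/..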

Structure-of-inode-table-on-Linux-Filesystem-diagram

 

Structure of inode table-on Linux Filesystem diagram (picture source GeeksForGeeks.org)

  • Information about files (data) is sometimes called metadata. So you can even say it another way: "An inode is metadata of the data."
  • Inode: a complex data structure that contains all the necessary information to specify a file. It includes the memory layout of the file on disk, file permissions, access time, number of different links to the file, etc.
  • Global file table: contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read/write will start, and the access rights allowed to the opening process.
  • Process file descriptor table: maintained by the kernel; it in turn indexes into a system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file, thus allowing access to the file.

  •     Inodes do not contain hardlink names, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory for a particular filename and then convert the filename to the corresponding inode number.

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.


But enough technical specifics, let's get into some practical experience with managing filesystem inodes.
 

Listing inodes on a filesystem


Let's say we want to list the inode number reference IDs for the Linux kernel files:

 

root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64


To list the inode of the kernel-specific boot directory /boot itself (the -d flag shows the directory entry rather than its contents):

 

root@linux: # ls -id /boot/
26091521 /boot/


Listing the inodes of all files stored in a directory is also done by adding the -i flag to ls.

Note that the '-1' flag was added to show files in one column, without info on ownership and permissions:

 

root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
26091756 System.map-3.2.0-4-amd64
 3055703 System.map-4.19.0-5-amd64
26091899 System.map-4.9.0-7-amd64
26095805 System.map-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64

 

To get more information about a Linux directory or file, such as the blocks used by the file unit, the last access / modify / change times, and the inode and hard link count of the filesystem object, use stat:
 

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –

 

Within a POSIX system (Linuxes and *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call (see also the format-string example after the list):

    – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – File serial numbers.
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
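
With GNU stat, most of these attributes can also be pulled out selectively via a format string, which is handy in scripts; for example the inode number (%i), hard link count (%h), size in bytes (%s) and file name (%n):

root@linux:/ # stat -c '%i %h %s %n' /etc/
6365185 231 16384 /etc/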

 

Getting more extensive information on a mounted filesystem


Most Linuxes have tune2fs installed by default (on Debian Linux it comes with the e2fsprogs package); with it one can get very good, in-depth information on a mounted filesystem, let's say about the root ( / ) FS.
 

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks


An important note to make here is that a file's inode number stays the same when the file is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which don't have a way of storing this invariance when both a file's directory entry and its data are moved around. Also, one inode can be shared by a file and a copy of it made as a hard link, i.e. two names pointing to the same inode; below is an example:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2

Good to know is that inode numbers are always unique within a filesystem, so you can't have the same inode number duplicated on one FS. If a directory is damaged, only the names of the things are lost, and the inodes become so-called "orphans", e.g. inodes without names, but luckily this is recoverable. As the theory behind inodes is quite complicated and hard to fully explain here, I warmly recommend you read Ian D. Allen's Unix / Linux / Filesystems tutorial on directories, inodes and hardlinks, which is among the best academic tutorials explaining the various specifics of inodes online.

 

How to Get inodes per mounted filesystem

 

root@linux:/home/hipo# df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on

 

dev             2041439     481   2040958   1% /dev
tmpfs            2046359     976   2045383   1% /run
tmpfs            2046359       4   2046355   1% /dev/shm
tmpfs            2046359       6   2046353   1% /run/lock
tmpfs            2046359      17   2046342   1% /sys/fs/cgroup
/dev/sdb5        1221600    2562   1219038   1% /usr/var/lib/mysql
/dev/sdb6        6111232  747460   5363772  13% /var/www/htdocs
/dev/sdc1      122093568 3083005 119010563   3% /mnt/backups
tmpfs            2046359      13   2046346   1% /run/user/1000


As you can see in the above output, the Inodes column reported for each mounted filesystem is a fixed number. In the above output, the IFree count on every FS mounted locally on this physically installed Linux OS looks good.


Here is an example of how to recognize depleted inodes on an OpenXen virtual machine with attached virtual hard disks (note the root FS at 100% IUse%):

linux:~# df -i
Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on
/dev/xvda         2080768    2080768     0      100%    /
tmpfs             92187      3          92184   1%     /lib/init/rw
varrun            92187      38          92149   1%    /var/run
varlock            92187      4          92183   1%    /var/lock
udev              92187     4404        87783   5%    /dev
tmpfs             92187       1         92186   1%    /dev/shm

 

Finding files with a certain inode


In some cases, if you want to find all the copies (hard links) of a certain file that share the same i-node pointer, it is useful to look them all up by their shared inode. This is possible with a simple find command (the example below is for the /usr/bin/perl binary sharing the same inode as perl5.28.1):

 

ls -i /usr/bin/perl
23798851 /usr/bin/perl*

 

 find /usr/bin -inum 23798851 -print
/usr/bin/perl5.28.1
/usr/bin/perl

 

Finding the directory that has a large number of files in it

To get the overall number of inodes allocated under certain directories, let's say /usr and /var:

 

root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var/

To get a list of the sub-directories of the current directory, sorted by inode usage from lowest to highest, use:
 

du -s --inodes * 2>/dev/null | sort -g

 

Usually, running out of inodes means there is a directory / FS mount that holds too many small files, depleting the maximum count of possible inodes.

The simplest way to list directories and the number of files in them, starting from the server root directory, is with a small bash shell loop like so:
 

for i in /*; do echo "$i"; find "$i" | wc -l; done


Another way to identify the exact directory that is most likely the bottleneck for the inode depletion, sorted by file count in a human-readable form:
 

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n


This will dump a list of every directory on the root (/) filesystem, prefixed with the number of files (and subdirectories) in that directory. The directory with the largest number of files will thus be at the bottom.

 

The -xdev switch instructs find to narrow its search to the device where the search was initiated (any sub-mounted NAS / NFS filesystems from a different device will be omitted).

 

Print top 10 subdirectories with Highest Inode Usage

 

Once you've identified the directory with the largest number of files that is perhaps the issue, you can further get a list of its top sub-directories with the highest inode usage using the command below:

 

for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -10

 

To list more than the top 10 inode-consuming dirs, change head -10 to whatever number is needed.

N.B.! Be very cautious when running the above 2 find commands on a very large filesystem, as they are I/O-intensive, and on filesystems that have some failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is also possible to use du plus a rather complex sed expression:
 

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'


Results are returned sorted from the largest inode consumer at the top to the smallest at the bottom, with overly long paths truncated to fit the screen.

 

How to increase the inode count on a newly created EXT4 filesystem volume

Some FSes, such as XFS and JFS, have an auto-increase inode feature as long as there is physical space, while others, such as ReiserFS, do not have classic inodes at all (but still report an inode field when queried). The classical Linux ext3 / ext4, however, has no way to increase the inode number on a live filesystem. Instead, the way to do it is to prepare a brand new filesystem on a disk / NAS / attached storage.

The number of inodes set at format time of the block storage can be as high as 4 billion. Before you create the new FS, you have to partition the new block storage as ext4 with, let's say, the parted command (or nullify the content of an existing volume with dd to clean up any previously existing data):
 

parted /dev/sda


dd if=/dev/zero of=/dev/path/to/volume


  then format it with this additional parameter:

 

mkfs.ext4 -N 3000000000 /dev/path/to/volume

 

In the above example, the newly created EXT4 filesystem will be created with 3 billion inodes! For setting a higher max inode count on the older ext3 filesystem, mkfs.ext3 can be used instead.

Bear in mind that 3 billion is an overly high number; if you plan to have a large number of files / directories / link structures, just raise it according to your capacity pre-planning requirements for the FS. In most cases it will rarely make sense for anyone to set this higher than 1 or 2 billion inodes.
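
Once the new filesystem is created, you can verify it really got a number of inodes close to the requested one (mkfs rounds the count to fit the block group layout) by querying it with tune2fs, e.g.:

root@linux:~# tune2fs -l /dev/path/to/volume | grep -i 'inode count'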

On FreeBSD / NetBSD / OpenBSD, setting the maximum inode number for a UFS / UFS2 filesystem (UFS2 being the current FreeBSD default) is done via the newfs filesystem creation command, after the disk has been labeled with disklabel:

 

freebsd# newfs -i 1024 /dev/ada0s1d

 

Increase the Max Count of Inodes for a /tmp filesystem

 

Sometimes, on some machines, it is necessary to be able to store a very high number of small files (i.e. have a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by the Apache web server + PHP or Nginx + Perl-FastCGI are written in such a bad manner that they keep tons of temporary files in /tmp, leading to issues with the inode count being exceeded.
If that's the case, as a temporary workaround you can increase the inode count for /tmp to a very high number, like 2 billion, using:

 

mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent on the next boot, don't forget to put nr_inodes=whatever_bignum as a mount option for the temporary filesystem in /etc/fstab.
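
As a rough sketch, such an /etc/fstab line might look like the below (the size and nr_inodes values here are just examples, adjust them to your planning):

tmpfs   /tmp   tmpfs   defaults,size=2G,nr_inodes=2000000000   0 0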

Ultimately, if you face this issue, it is best to immediately track down which application produced the mess and ask the developer to fix the messed-up architecture of their program.

 

Conclusion

 

This article explained the very common issue of having the maximum number of inodes on a filesystem depleted, and the unpleasant consequence: the inability to create new files on a live FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, the typical content of an inode, and how inode addressing is handled on an FS. Further, it was explained how to get basic information about the available inodes on a filesystem, how to get a filename based on an inode number (with find), the well-known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information on an FS with tune2fs.
Also explained was how to identify directories containing multitudes of files, in order to determine the sub-directories consuming most of the inodes on a filesystem. Finally, it was explained very roughly how to prepare an ext4 filesystem from scratch with a predefined number of inodes much higher than the usual defaults of mkfs.ext3 / mkfs.ext4 and the BSDs' newfs, as well as how to raise the number of inodes of the /tmp tmpfs temporary RAM filesystem.

How to use zip command to archive directory and files in GNU / Linux

Monday, November 6th, 2017

how-to-use-zip-command-to-archive-directory-and-files-in-gnu-linux-and-freebsd

How do you zip a directory or files with the ZIP command on Linux or any other Unix-like OS?

Why would you want to ZIP files in Linux if you already have the gzip and bzip2 archive algorithms? Well, for historical reasons .ZIP is widely supported across virtually all major operating systems like Unix, Linux, VMS, MSDOS, OS/2, Windows NT, Minix, Atari and Macintosh, FreeBSD, OpenBSD, NetBSD, Amiga and Acorn RISC, and many other operating systems.

Given that the zip command-line tool is available on most GNU / Linux systems and WinZIP is available on almost all Windowses, the reason you might need to create a .zip archive might simply be to transfer files from your Linux / FreeBSD desktop system to a friend with M$ Windows.

So below is how to recursively archive the files inside a directory using the zip command:
 

 $ zip -r myvacationpics.zip /home/your-directory/your-files-pictures-text/

 


or you can write it shorter, omitting the .zip extension, as by default the zip command creates .zip files:

 

$ zip -r whatever-zip-file-name /home/your-directory/your-files-pictures-text/

 


The -r tells zip to recurse into directories (i.e. archive all files and directories inside your-files-pictures-text/).

If you need to recursively archive only files with a certain extension, such as .txt, inside the current directory:

 

$ zip -R my-zip-archive.zip '*.txt'


The above command will archive any .txt found inside your current directory; if the zip command is issued, for example, from /home/hipo, then all found files under /home/hipo/directory1, /home/hipo/directory2, /home/hipo/directory2/directory3/directory4 and all other contained subdirs that have any .txt extension files will be added to the archive.
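
To later check the contents of the resulting archive, or to extract it on the receiving side, the companion unzip tool can be used; -l lists the archive contents without extracting, and -d extracts into a given directory:

$ unzip -l myvacationpics.zip
$ unzip myvacationpics.zip -d /home/your-directory/extracted/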

For Linux desktop users who are lazy and want to zip files without much typing, take a look at the PeaZip for Linux 7Z / ZIP GUI tool.

 

What is this directory /run/user/1000 on Debian and Fedora GNU / Linux?

Monday, October 23rd, 2017

what-is-this-folder-directory-run-user-1000-in-debian-debianfedoraubuntu-linux

So what are these /run/user/1000, /run/user/0, /run/user/109 entries showing up in my df -h output; am I hacked or what?

root@noah:~# df -h|grep -i tmpfs
tmpfs 201M 22M 179M 11% /run
tmpfs 1001M 0 1001M 0% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 1001M 0 1001M 0% /sys/fs/cgroup
tmpfs 201M 0 201M 0% /run/user/0
tmpfs 201M 36K 201M 1% /run/user/1000
tmpfs 201M 16K 201M 1% /run/user/109

 

/run/user/$uid is created by pam_systemd and used for storing files used by running processes for that
user. These might be things such as your keyring daemon, pulseaudio, etc.

Prior to systemd, these applications typically stored their files in /tmp. They couldn't use a location in
/home/$user as home directories are often mounted over network filesystems, and these files should not be
shared among hosts. /tmp was the only location specified by the FHS which is local, and writable by all
users.

However storing all these files in /tmp is problematic as /tmp is writable by everyone, and while you can
change the ownership & mode on the files being created, it's more difficult to work with.

So systemd came along and created /run/user/$uid. This directory is local to the system and only accessible by the target user, so applications looking to store their files locally no longer have to worry about access control.

It also keeps things nice and organized. When a user logs out and no active sessions remain, pam_systemd will wipe the /run/user/$uid directory. With various files scattered around /tmp, you couldn't do this.

It should also be mentioned that this directory is called $XDG_RUNTIME_DIR and is documented at standards.freedesktop.org/basedir-spec/basedir-spec-latest.html.
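
You can check the value for your own session straight from the environment (the exact contents of the directory vary depending on the desktop environment and services running):

hipo@linux:~$ echo $XDG_RUNTIME_DIR
/run/user/1000
hipo@linux:~$ ls $XDG_RUNTIME_DIR
bus  dconf  gnupg  pulse  systemd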

What if I have started a "background" computation process with nohup, and it saves its intermediate results / data in a temp file there? Can I count on it not being wiped while the process is running, or will it be wiped, and the process started with nohup will lose its data?

It's unlikely to be wiped while the process is running, but /run/user is a tmpfs filesystem in Debian, Ubuntu and Fedora, so it'll be limited in size.

What if the pidfile belongs to a service running as root?
Should its PID file go under /var/run or /run/user/0?

And since there are no active sessions for it, will it be removed?

The /run directory contains system information data describing the system since it was booted.
Files under this directory must be cleared (removed or truncated as appropriate) at the beginning of the boot process.

The purposes of this directory were once served by /var/run. In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards
compatibility.

Programs which have migrated to use /run should cease their usage of /var/run, except as noted in the section on /var/run.

Programs may have a subdirectory of /run; this is encouraged for programs that use more than one run-time file.

Users may also have a subdirectory of /run, although care must be taken to appropriately limit access rights to prevent unauthorized use of /run itself and other subdirectories.

The /run/user directory in particular is used by various user services, like dconf, pulse, systemd, etc., that need a place for their lock files and sockets. There are as many directories as there are different users' UIDs logged in on the system.

How to synchronize with / from Remote FTP server using LFTP like with rsync

Sunday, October 15th, 2017

how-to-synchronize-from-remote-ftp-server-easily-like-rsync.jpg

Have you ever needed to easily synchronize with a remote host that only runs an FTP server?

Or are you on a local network and need to mirror a directory or a couple of directories in a fast and easy-to-remember way?

If so, then you'll be happy to use the LFTP command below, which does pretty much the same as rsync, with the only difference being that it can mirror files over FTP (the old but gold File Transfer Protocol).
 

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror REMOTE_DIRECTORY LOCAL_DIRECTORY' FTP_SERVER_HOSTNAME
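
To push local changes up to the FTP server instead (like rsync in the opposite direction), the lftp mirror command accepts a -R (reverse) flag; appending quit also makes lftp exit when done instead of dropping you into its interactive prompt:

lftp -u FTP_USERNAME,FTP_PASSWORD -e 'mirror -R LOCAL_DIRECTORY REMOTE_DIRECTORY; quit' FTP_SERVER_HOSTNAME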


Enjoy and thanks to my dear friend Amridikon for the tip ! 🙂

Some standard software programs to install on Windows to make your Windows feel more like a Linux / Unix Desktop host

Friday, March 17th, 2017

linux-freebsd-unix-migration-to-windows-some-useful-customizations-and-program-softwares-to-install-to-make-your-windows-feel-like-more-linux-and-bsd-unix

If you're a Windows user like me, that is, a dedicated Linux / FreeBSD / OpenBSD / NetBSD Unix user who for financial reasons ended up working in some TOP 100 Fortune company (CSC, SAP, IBM, Hewlett Packard Enterprise, Oracle etc.) and is forced for business purposes to work on a notebook pre-installed with Windows 7 / 8 or 10 (because some programs, such as Skype for Business desktop sharing, do not run fine on Unix-likes), then, being accustomed to all your customizations from UNIX environments, you would probably like to make Windows resemble Linux and customize much of how Windows behaves by default.

Here is what I personally did on my work Windows 7 Enterprise HP EliteBook notebook to give myself the extra things I'm used to from my Debian Linux desktop.


1. Downloaded and installed a standard gnome-terminal / xterm-like terminal emulator immediately (e.g. check MobaXterm, a great alternative to PuTTY).
2. Customized the Windows 7 appearance to be more like classical Windows XP, and changed the Windows 8 / 10 start menu appearance to be more like in classic Windows 2000.
3. Installed the following bunch of software:

  • VIM Text Editor for Windows
  • Thunderbird Mail Client
  • OpenVPN client
  • Oracle VM Virtualbox
  • Opera
  • Mozilla Firefox
  • Password Safe
  • Ext2FS / Ext3FS (support programs)
  • F.lux (to auto adjust screen brightness day and night for better sleep)
  • ActivePerl for Windows
  • GNUWin tools (perhaps most importantly)
  • CygWin (to provide Windows with the most needed console Linux tools), Clink
  • WinSCP
  • Swish (to be able to remotely mount your Linux partitions and see them as local Windows drives)
  • dosbox (to play some of the good old Dos games :))
  • Windirstat (to easily check the size of complete directory and subdirectories)
  • SpaceSniffer (to be able to see which directory or files are taking the most space on the system)


Along with all the above goodies, this is also some good software I find essential for every web developer / system administrator / network administrator or Java, C, PHP programmer out there who's using Windows as his desktop platform.

Another thing I prefer on Windows 7 used as a workstation is to change the default Windows 7 LogonUI screen background; check out how here.

Perhaps there are plenty of other good programs to install on Windows to make it feel even more like a Linux / Unix desktop host; if you happened to somehow land on this article and you've migrated from a Linux / BSD desktop to Windows for work purposes, please share with me any other goodies you happen to use that come from *Unix.

chmod permissions of all directories only, omitting files (recursively), on Linux howto

Friday, March 11th, 2016

execute-write-read-of-user-group-and-others-on-linux-unix-bsd-explanationary-picture

If you have mistakenly chmod-ed all files within a directory full of multiple other subdirectories and files, and you want to revert back and set certain permissions (read, write, execute) only on all the directories:
 

find /path/to/base/dir -type d -exec chmod 755 {} +


If there are too many files or directories whose mode you need to change, use:
 

chmod 755 $(find /path/to/base/dir -type d)
chmod 644 $(find /path/to/base/dir -type f)

The above will evaluate the $( ) command substitution, expanding all found files / directories and passing them to a single chmod invocation, so if you have many files / directories to change it will drastically reduce execution time compared to invoking chmod once per file. (Note that with an extremely large number of files the expanded argument list can exceed the shell's limit; in that case the find -exec or xargs variants are safer.)

An alternative, and perhaps a better way to do it for those who don't remember the chmod permission numbers by heart, is to use something like:
 

chmod -R u+rwX,go+rX,go-w /path

Below is the meaning of the arguments:

    -R = recursive;
    u+rwX = user can read, write and execute;
    go+rX = group and others can read and execute;
    go-w = group and others can't write

The capital X sets execute only on directories and on files that already have some execute bit set, which is exactly what keeps regular files non-executable.
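
Whichever variant you choose, a quick way to verify the result afterwards is to ask find for anything that did not end up with the expected mode (using the numeric modes from the examples above); both commands should print nothing if all went well:

find /path/to/base/dir -type d ! -perm 755
find /path/to/base/dir -type f ! -perm 644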

If you like piping, a less efficient but still working way to change permissions of directories only is with:
 

find /path/to/base/dir -type d -print0 | xargs -0 chmod 755
find /path/to/base/dir -type f -print0 | xargs -0 chmod 644


For those who wish to automate and often change permissions of only files or only directories, it might also be nice to look at the (chmod_dir_files-recursive.sh) shell script.

Tadadam 🙂