Posts Tagged ‘Linux Unix’

Zabbix script to track arp address cache loss (arp incomplete) from Linux server to gateway IP

Tuesday, January 30th, 2024

Zabbix_arp-network-incomplete-check-logo.svg

Some of the Linux servers I'm responsible for recently had a very annoying issue: the ARP entry for the configured default gateway gets lost every now and then, and it takes quite some time for the remote Cisco router to realize the problem and resolve it. We debugged it together with a network expert colleague, him checking the Cisco router while we were checking the ARP table on the Linux server with the arp command. We came to the conclusion that this behavior is due to some network mess caused by too many NAT address configurations on the network, or due to a Cisco bug. The colleagues asked Cisco, but Cisco does not have any solution to the issue, and the only close workaround for the gateway losing the MAC is to set a rule on the Cisco router to flush its ARP record for the server that keeps losing the MAC address.
This does not completely solve the problem, but at least once we run into the issue, it gets resolved within about 5 minutes.

As we run a cluster environment, it is useful to monitor and know immediately once we hit the disappearing gateway MAC issue and, if the issue persists, exclude the Linux node from the cluster so we don't lose connection traffic.
To monitor the MAC state from the Linux haproxy machine towards the network gateway, I have developed a small UserParameter script that periodically checks the state of the MAC address of the remote gateway IP and logs to an external file any problems with an incomplete MAC address of the configured default router.

In case you happen to need the same MAC address state monitoring for your servers, I thought it might be of help to someone out there.
To monitor the MAC address incomplete state with Zabbix, do the following:
 

1. Create the userparameter_arp_gw_check.conf Zabbix agent config
 

# cat userparameter_arp_gw_check.conf 
UserParameter=arp.check,/usr/local/bin/check_gw_arp.sh
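Once the .conf file is placed in the Zabbix agent's include directory (commonly /etc/zabbix/zabbix_agentd.d/ – the exact path and service name may differ on your setup), restart the agent and verify the new key resolves locally; a quick check could look roughly like this (the output value is illustrative):

# systemctl restart zabbix-agent
# zabbix_agentd -t arp.check
arp.check                         [t|192.168.0.1 OK 1]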

 

2. Create the following shell script /usr/local/bin/check_gw_arp.sh

 

#!/bin/bash
# Simple script to run periodically via cron or as a Zabbix UserParameter
# to track ARP loss issues towards the gateway IP
#gw_ip='192.168.0.55';
# auto-detect the default gateway IP from the routing table
gw_ip=$(ip route show | grep -i default | awk '{ print $3 }');
log_f='/var/log/arp_incomplete.log';
grep_word='incomplete';
inactive_status=$(arp -n "$gw_ip" | grep -i "$grep_word");
# if the gateway has no incomplete ARP record, all is OK
if [[ $inactive_status == '' ]]; then
echo "$gw_ip OK 1";
else
# log the incomplete MAC towards gw_ip
echo "$(date '+%Y-%m-%d %H:%M:%S') ARP_ERROR $inactive_status 0" | tee -a "$log_f" >/dev/null;
# print out the failure for Zabbix
echo "1 ARP FAILED: $inactive_status";
fi

You can download the check_gw_arp.sh here.
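If you also want to run the check outside Zabbix, as the header comment of the script suggests, a crontab entry along the lines below (the 5-minute interval is just an example) will keep appending any failures to the log file:

# crontab -e
*/5 * * * * /usr/local/bin/check_gw_arp.sh >/dev/null 2>&1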

The script automatically greps the default gateway router IP from the routing table; before setting it up, run it once and make sure the detected IP really corresponds to the default gateway whose MAC you would like to monitor.
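A quick manual run should print the detected gateway IP followed by an OK status when the ARP entry is healthy (the IP shown is only illustrative):

# /usr/local/bin/check_gw_arp.sh
192.168.0.1 OK 1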
 

3. Create New Zabbix Template for ARP incomplete monitoring
 

arp-machine-to-default-gateway-failure-monitoring-template-screenshot

Create an Application:

Name: Default Gateway ARP state

4. Create Item and Dependent Item 
 

Create Zabbix Item and Dependent Item like this

arp-machine-to-default-gateway-failure-monitoring-item-screenshot

 

arp-machine-to-default-gateway-failure-monitoring-item1-screenshot

arp-machine-to-default-gateway-failure-monitoring-item2-screenshot


5. Create Trigger to trigger WARNING or whatever you like
 

arp-machine-to-default-gateway-failure-monitoring-trigger-screenshot


arp-machine-to-default-gateway-failure-monitoring-trigger1-screenshot

arp-machine-to-default-gateway-failure-monitoring-trigger2-screenshot


6. Create Zabbix Action to notify via Email etc.
 

arp-machine-to-default-gateway-failure-monitoring-action1-screenshot

 

arp-machine-to-default-gateway-failure-monitoring-action2-screenshot

That's all. Once you set up these few little things, you can enjoy having monitoring alerts for incomplete ARP state on your Linux / Unix servers.
Enjoy !

What is inode and how to find out which directory is eating up all your filesystem inodes on Linux, Increase inode count on a ext3 ext4 and ufs filesystems

Tuesday, August 20th, 2019

what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram

If you're a system administrator of multiple Linux servers used for web serving or mail delivery, a database admin, someone managing large data storage for backup infrastructure, a data repository administrator of Linux-hosted Samba / CIFS shares, or you use a Linux hosting provider for your website or any other UNIX-like infrastructure that demands storing a high number of files under a directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined and limited, depending on the configured filesystem size).

If the files stored in a directory end up exceeding the number of addressable inodes, no further data can be assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on it, even though there may be plenty of free space available. The first time this happened to me, a very long time ago, I was completely puzzled how this was possible, as I was not aware of the existence of inodes …

Reaching the maximum inode number (i.e. inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Network nodes (CDN – website image caching servers) which hold many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.

 

Hence, running out of inodes can cause various oddities in how stored data behaves or is communicated to other connected microservices, and can lead to random application disruptions and odd results, costing you many hours of debugging to find that the root cause is the inodes (index nodes) being exhausted.

In the article below, I will try to give an overall explanation of what an inode is on a filesystem, how inodes can be inspected, how to diagnose a possible inode problem – e.g. see the maximum number of inodes available per filesystem – and how to prepare (format) a new filesystem with an increased maximum inode count.

 

What are filesystem i-nodes?

 

An inode is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The exact data kept in an inode may vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the attributes and the disk block location(s) of the object's data.
Yes, for those who are not aware of how a filesystem is structured on *nix: it allocates all stored data in logically separated structures called data blocks. Each open file on a local filesystem has a file descriptor, there are kernel structures (file tables), and each inode, referenced by its number, has its own entry in a structure called the inode table.

Inodes ("index nodes") reflect a slightly unusual file system design that stores the access information of files as a flat array on disk, with all the hierarchical directory information living apart from it, as explained by Unix creator and pioneer Dennis Ritchie (who passed away a few years ago).

what-is-inode-very-simplified-explanation-diagram-data

Simplified explanation of file descriptors, the file table and the inode table on a common Linux filesystem

Here is another description of what an i-node is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


 

What is the typical content of an inode and how do i-nodes interact with the rest of the filesystem units?


The inode is simply an indexed record that holds a file-system object's attributes. It includes metadata such as the times of last change, access and modification, as well as owner and permission data.

 

On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.

Structure-of-inode-table-on-Linux-Filesystem-diagram

 

Structure of inode table-on Linux Filesystem diagram (picture source GeeksForGeeks.org)

  • Information about files (data) is sometimes called metadata. So you could also put it this way: "An inode is the metadata of the data."
  •  Inode: a data structure that contains all the necessary information to specify a file. It includes the layout of the file on disk, file permissions, access time, the number of links to the file, etc.
  •  Global file table: contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read/write will start and the access rights allowed to the opening process.
  • Process file descriptor table: maintained by the kernel; it in turn indexes into a system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file – thus allowing access to the file.

  •     Inodes do not contain their hard link names, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory looking for a particular filename and then convert the filename to the correct corresponding inode number.

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.


But enough technical specifics, let's get into some practical experience of managing filesystem inodes.
 

Listing inodes on a Filesystem


Let's say we want to list the inode numbers of the Linux kernel image files:

 

root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64


To list the inode of the kernel boot directory /boot itself:

 

root@linux: # ls -id /boot/
26091521 /boot/


Listing the inodes of all files stored in a directory is also done with the -i ls flag:

Note that the '-1' flag was added to show the files in one column, without ownership and permission info.

 

root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
26091756 System.map-3.2.0-4-amd64
 3055703 System.map-4.19.0-5-amd64
26091899 System.map-4.9.0-7-amd64
26095805 System.map-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64

 

To get more information about a Linux directory or file, such as the blocks used, the last Access / Modify / Change times and the number of links to the filesystem object, use stat:
 

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –

 

Within a POSIX system (Linuxes, and the *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call:

   – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – File serial numbers.
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
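If you only need a couple of these fields rather than the full stat output, stat's format string can print them selectively; a small illustration (the file and the printed values are arbitrary examples):

root@linux:/ # stat -c 'inode=%i links=%h size=%s name=%n' /etc/hosts
inode=6365190 links=1 size=187 name=/etc/hosts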

 

Getting more extensive information on a mounted filesystem


Most Linuxes have tune2fs installed by default (in Debian Linux it comes from the e2fsprogs package); with it one can get very good in-depth information on a mounted filesystem, let's say about the root ( / ) FS.
 

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks


An important note to make here is that a file's inode number stays the same when the file is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which have no way of preserving this invariance when both a file's directory entry and its data are moved around. Also, more than one name can point to the same inode – e.g. a file and a hard link to it share one inode, as in the example below:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2

Good to know is that inode numbers are unique within a filesystem, so you can't have the same inode number duplicated on the same device. If a directory is damaged, only the names of the things are lost and the inodes become so-called "orphans", i.e. inodes without names, but luckily this is recoverable. As the theory behind inodes is too extensive to fully explain here, I warmly recommend you read Ian Dallen's Unix / Linux / Filesystems – directories, inodes, hardlinks tutorial, which is among the best academic tutorials explaining the various specifics of inodes online.

 

How to Get inodes per mounted filesystem

 

root@linux:/home/hipo# df -i
Filesystem       Inodes   IUsed     IFree IUse% Mounted on
dev             2041439     481   2040958    1% /dev
tmpfs           2046359     976   2045383    1% /run
tmpfs           2046359       4   2046355    1% /dev/shm
tmpfs           2046359       6   2046353    1% /run/lock
tmpfs           2046359      17   2046342    1% /sys/fs/cgroup
/dev/sdb5       1221600    2562   1219038    1% /usr/var/lib/mysql
/dev/sdb6       6111232  747460   5363772   13% /var/www/htdocs
/dev/sdc1     122093568 3083005 119010563    3% /mnt/backups
tmpfs           2046359      13   2046346    1% /run/user/1000


As you see in the above output, each mounted filesystem reports its own inode numbers. In this output, IFree on every locally mounted FS of the physically installed Linux OS looks good.


Here is an example of how to recognize depleted inodes on an OpenXen virtual machine with attached virtual hard disks.

linux:~# df -i
Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on
/dev/xvda         2080768    2080768     0      100%    /
tmpfs             92187      3          92184   1%     /lib/init/rw
varrun            92187      38          92149   1%    /var/run
varlock            92187      4          92183   1%    /var/lock
udev              92187     4404        87783   5%    /dev
tmpfs             92187       1         92186   1%    /dev/shm

 

Finding files with a certain inode


In some cases, if you want to check all the hard-linked copies of a certain file that share the same i-node, it is useful to find them all by that shared inode; this is possible with a simple find (the example below is for the /usr/bin/perl binary sharing the same inode as perl5.28.1):

 

ls -i /usr/bin/perl
23798851 /usr/bin/perl*

 

 find /usr/bin -inum 23798851 -print
/usr/bin/perl5.28.1
/usr/bin/perl

 

Finding the directory that has a large number of files in it

To get the overall number of inodes allocated under certain directories, let's say /usr and /var:

 

root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var/

To get a list of the sub-directories of the current directory sorted by inode usage, from lowest to highest, use:
 

du -s --inodes * 2>/dev/null | sort -g

 

Usually, running out of inodes means there is a directory or FS mount holding too many (small) files, depleting the maximum possible inode count.

The simplest way to list the directories under the server root directory and the number of files in each of them is with a small bash shell loop like so:
 

for i in /*; do echo "$i"; find "$i" | wc -l; done


Another way to identify the exact directory most likely responsible for the inode depletion, sorted by file count in a human-readable form:
 

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n


This will dump a list of every directory on the root (/) filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.

 

The -xdev switch is used to instruct find to narrow its search to only the device where you're initiating the search (any other sub-mounted NAS / NFS filesystems from a different device will be omitted).

 

Print top 10 subdirectories with Highest Inode Usage

 

Once you have identified the directory with the largest number of files that is perhaps the issue, to further get a list of its top sub-directories with the highest inode usage, cd into it and use the command below:

 

for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -10

 

To list more than 10 of the top inode-consuming directories, change head -10 to whatever number you need.

N.B.! Be very cautious when running the above two find commands on very large filesystems, as they are I/O intensive, and on filesystems that have some failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is also possible to use du plus a rather complex sed expression:
 

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'


The results are returned with the biggest inode consumers at the top (the sed part only trims overly long lines so the output stays readable).

 

How to increase the inode count on a newly created EXT4 filesystem volume

Some filesystems, such as XFS and JFS, have an auto-increasing inode feature as long as there is physical space, whereas others such as ReiserFS do not use a fixed inode table at all (though they still report an inode field when queried). But the classical Linux ext3 / ext4 does not have a way to increase the inode number on a live filesystem. Instead, the way to do it is to prepare a brand new filesystem on a disk / NAS / attached storage.

The number of inodes set at format time of the block storage can be as high as about 4 billion. Before you create the new FS, you have to partition the new block storage with, let's say, the parted command (or wipe it with dd to clean up any previously existing data on the volume, if there was any):
 

parted /dev/sda


dd if=/dev/zero of=/dev/path/to/volume


Then format it with the additional -N (number of inodes) parameter:

 

mkfs.ext4 -N 3000000000 /dev/path/to/volume

 

In the above example the newly created EXT4 filesystem will be created with 3 billion inodes! For setting a higher maximum inode count on an older ext3 filesystem, mkfs.ext3 can be used the same way.

Bear in mind that 3 billion is a very high number; if you plan to have a large number of files / directories / link structures, just raise it to your pre-planned requirements for the FS. In most cases hardly anyone will want this number to be higher than 1 or 2 billion inodes.
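Once the new filesystem is created and mounted, it is a good idea to double-check that the inode count really matches what was requested (the device and mount point below are just examples; mkfs may round the requested -N value slightly):

# tune2fs -l /dev/sdb1 | grep -i 'inode count'
Inode count:              3000000000
# df -i /mnt/newvolume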

On FreeBSD / NetBSD / OpenBSD, setting the inode density for UFS / UFS2 (the current default FreeBSD FS) can be done via the newfs filesystem creation command, after the disk has been labeled with disklabel:

 

freebsd# newfs -i 1024 /dev/ada0s1d

 

Increase the Max Count of Inodes for a /tmp filesystem

 

Sometimes on some machines it is necessary to be able to store a very high number of small files (i.e. use a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by Apache + PHP or Nginx + Perl-FastCGI are written in a bad manner, so they keep tons of temporary files in /tmp, leading to issues with the inode count being exceeded.
If that's the case, as a temporary workaround you can increase the inode count for /tmp to a very high number like 2 billion using:

 

mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent on the next boot, don't forget to add nr_inodes=whatever_bignum as a mount option for the temporary FS in /etc/fstab.
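A minimal /etc/fstab line for such a tmpfs-backed /tmp could look roughly like the one below (the size and nr_inodes values are only an example – adjust them to your RAM and needs):

tmpfs   /tmp   tmpfs   defaults,size=2G,nr_inodes=2000000000   0   0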

Eventually, if you face this issue, it is best to immediately track down which application produced the mess and ask the developer to fix the program's architecture.

 

Conclusion

 

This article explained the very common issue of having the maximum number of inodes on a filesystem depleted and the unpleasant consequence of being unable to create new files on a live FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, what the typical content of an inode is and how inode addressing is handled on a FS. It was further explained how to get basic information about available inodes on a filesystem, how to find filenames based on an inode number (with find), the well-known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information on a FS with tune2fs.
It was also explained how to identify directories containing multitudes of files in order to determine the sub-directories that consume most of the inodes on a filesystem. Finally, it was roughly explained how to prepare an ext4 filesystem from scratch with a predefined number of inodes much higher than the usual defaults of mkfs.ext3 / mkfs.ext4, how to do the same with the *BSDs' newfs, as well as how to raise the number of inodes of the /tmp tmpfs temporary RAM filesystem.

Share SCREEN terminal session in Linux / Screen share between two or more users howto

Wednesday, October 11th, 2017

share-screen-terminal-session-in-linux-share-linux-unix-shell-between-two-or-more-users

 

1. Short Intro to Screen command and what is Shared Screen Session

Do you have friends who want to learn some GNU / Linux or BSD basics remotely? Do you have people willing to share a terminal session together for educational purposes within a different network? Do you just want to have some fun and show off yourself between two or more users?

If the answer to the questions is yes, then continue on reading, otherwise if you're already aware how this is being done, just ignore this article and do something more joyful.

So let me start.

A long time ago, when I was starting out as a Free Software user and dedicated enthusiast, a friend gave me access to an interesting freeshell host, and I stumbled upon an interesting phenomenon: multiple users, like 5 or 10, were connected simultaneously to the same shell, sharing their command line.

I can't remember what kind of shell I happened to be sharing with the other users logged in with the same account – whether it was bash / csh / zsh or another one – but it doesn't matter; it was really cool to find out that multiple users could sit together on GNU / Linux and *BSD with the same account and use the regular shell for chatting or teaching each other new Linux / Unix commands, i.e. being able to type in the shell simultaneously.

The shared multi-user shell session was possible thanks to the screen command.

For those who hear about screen for the first time, here is the package description:

 

# apt-cache show screen | grep -i desc -A 2
Description-en: terminal multiplexer with VT100/ANSI terminal emulation
 GNU Screen is a terminal multiplexer that runs several separate "screens" on
 a single physical character-based terminal. Each virtual terminal emulates a
Description-md5: 2d86b86ed6058a04c540802e49312f40
Homepage: https://savannah.gnu.org/projects/screen
Tag: hardware::input:keyboard, implemented-in::c, interface::text-mode,


There are plenty of things to use screen for, as it provides a way to open multiple virtual terminals within a single SSH or physical console TTY login session, and I've been in love with the screen command since day one of finding out about it.

To start using screen, just invoke it from a shell and enter the screen key combinations that do various things for you.
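For example, a very minimal day-to-day workflow (the session name is arbitrary) is to start a named session, detach from it with Ctrl-a d and re-attach to it later:

hipo@jericho:~$ screen -S mywork
(work inside the session, then press Ctrl-a d to detach)
hipo@jericho:~$ screen -list
hipo@jericho:~$ screen -r mywork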

 

2. Some of the most useful Daily Screen Key Combinations for the Sys Admin


To use the various screen options, use the escape sequence (Ctrl-a), followed by the command. For a full list of all the available commands, run man screen; however,
for the sake of interest, the short listing below shows some of the most useful screen key combinations:

 

 

Ctrl-a a Passes a Ctrl-a through to the terminal session running within screen.
Ctrl-a c Create a new Virtual shell screen session within screen
Ctrl-a d Detaches from a screen session.
Ctrl-a f Toggle flow control mode (enable/disable Ctrl-Q and Ctrl-S pass through).
Ctrl-a k Detaches from and kills (terminates) the screen session.
Ctrl-a q Passes a Ctrl-q through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a s Passes a Ctrl-s through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a :kill Also detaches from and kills (terminates) the screen session.
Ctrl-a :multiuser on Make the screen session a multi-user session (so other users can attach).
Ctrl-a :acladd USER Allow the user specified (USER) to connect to a multi-user screen session.
Ctrl-a p Move to the previous of the multiple opened virtual terminals in screen
Ctrl-a n Move to the next of the multiple opened virtual terminals in screen


I have to underline that, personally, the key combinations I use most are:

 

CTRL + A + D (to detach the session),

CTRL + A + C to open a new window within screen (I tend to open multiple windows for multiple SSH connections with this),

CTRL + A + P, CTRL + A + N – I use these two to move around all my open screen virtual windows.
 

3. HOW TO ACTUALLY SHARE TERMINAL SESSION BETWEEN MULTIPLE USERS?


3.1 Configuring Shared Sessions so other users can connect

You need to have a single shared user account on the Linux or Unix-like server – let's say a system account named screen (present in /etc/passwd, /etc/shadow, /etc/group) – and you have to give its password to all users who will participate in the shared screen shell session.

E.g. create the new system account screen:

root@jericho:~# adduser screen
Adding user `screen' …
Adding new group `screen' (1001) …
Adding new user `screen' (1001) with group `screen' …
The home directory `/home/screen' already exists.  Not copying from `/etc/skel'.
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for screen
Enter the new value, or press ENTER for the default
    Full Name []: Screen user to give users shared access to /bin/bash
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
Is the information correct? [Y/n] Y

Now distribute the user / password pair to all users who are to share the same virtual bash session via screen, and have one of you (the session owner) create the shared session:

hipo@jericho:~$  screen -d -m -S shared-session
hipo@jericho:~$

hipo@jericho:~$ screen -list

There is a screen on:
    4095.shared-session    (10.10.2017 20:22:22)    (Detached)
1 Socket in /run/screen/S-hipo.


3.2. Attaching to just created session
 

Simply log in to the remote server over SSH with as many users as needed (all using the shared account) and instruct them to run the following command to attach to the session you just created:

hipo@jericho:~$ screen -x

That's all folks, now everyone can type simultaneously and enjoy the joys of the shared screen session.

If for some reason more than one session has been created by the simultaneously logged-in users, either as an exercise or by mistake, i.e.:

hipo@jericho:~$ screen -list

There are screens on:
    4880.screen-session    (10.10.2017 20:30:09)    (Detached)
    4865.another-session    (10.10.2017 20:29:58)    (Detached)
    4847.hey-man    (10.10.2017 20:29:49)    (Detached)
    4831.another-session1    (10.10.2017 20:29:45)    (Detached)
4 Sockets in /run/screen/S-hipo.

You have to instruct everyone to connect to the exact session needed, as a bare screen -x will ask them which session they would like to connect to.

In that case to connect to screen-session, each user has to run with their account:

hipo@jericho:~$ screen -x shared-session


If under some circumstances it happens that there is more than one opened shared screen session with the same name, for example screen -list returns:

 

hipo@jericho:~$ screen -list
There are screens on:
    5065.screen-session    (10.10.2017 20:33:20)    (Detached)
    4095.screen-session    (10.10.2017 20:30:08)    (Detached)

All users have to connect using the exact screen session ID and name, like so:

hipo@jericho:~$ screen -x 4095.screen-session
 


Here is the meaning of the options used:

 

-d -m together start screen in detached mode (the session is created but not attached to),
-S gives the screen session a name,
-x attaches to a session without detaching the others (shared / multi-display mode),
-list lists the sessions and gives the screen session IDs

Once you're done with the screen session (e.g. all the users you're teaching have seen the material, tested by themselves and completed the scheduled tasks), to kill it just press CTRL + A + K.
 

4. Share screen /bin/bash shell session with another user

Sharing a screen session between different users is even more useful than the single shared account approach, as you might have a *nix server with many users who can attach to your opened session directly, instead of being instructed beforehand to connect with a single shared account.

That's also perfect for educational purposes if you want to teach some Linux to a class of people, as you can use their ordinary accounts and show them stuff on a Linux / BSD machine.

Assuming that you have followed along and already created the screen session with the screen command:

hipo@jericho:~$ screen -list
There is a screen on:
        5560.screen-session      (10.10.2017 20:41:06)   (Multi, attached)
1 Socket in /run/screen/S-hipo.

hipo@jericho:~$

Next, the session owner attaches to the session and, from inside it, turns on multi-user mode and grants access to the second account:

hipo@jericho:~$ screen -r shared-session
(inside the session) Ctrl-a :multiuser on
(inside the session) Ctrl-a :acladd bunny

Then the second user attaches to the owner's session:

bunny@jericho:~$ screen -x hipo/shared-session
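If you share sessions with the same people regularly, the same two screen commands can also be placed in the session owner's ~/.screenrc, so every new session starts in multi-user mode right away (the user name below is just an example):

# ~/.screenrc of the session owner
multiuser on
acladd bunny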
 

Here are 2 screenshots of what should happen if you have done the above command combinations correctly:

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal2

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal3

In order to be able to share screen virtual terminal (VTY) sessions between separate (different) logged-in users, the screen binary has to be setuid root (the SUID bit on screen is disabled in most Linux distributions for security reasons).

Without making the screen binary SUID, you will get the error:

hipo@jericho:/home/hipo$ screen -x hipo/shared

Must run suid root for multiuser support.

If you are absolutely sure you know what you're doing, here is how to set the SUID bit on the screen command:

 

root@jericho:/home/hipo# which screen
/usr/bin/screen
root@jericho:/home/hipo# chmod u+s $(which screen)
root@jericho:/home/hipo# chmod 755 /var/run/screen
root@jericho:/home/hipo# rm -fr /var/run/screen/*
root@jericho:/home/hipo# exit

How to Copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes

Monday, April 27th, 2015

how-to-copy-large-data-directories-between-2-linux-unix-servers-without-direct-ssh-ftp-access-btween-each-other

In a web application data migration project, I've come across a situation where I have to copy / transfer 500 gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, and hence I cannot use sshfs to remotely mount the host directory via SSH and copy files as if they were local.

As this is a data migration project, it is nevertheless necessary to find a way to migrate the data … The normal way companies do it is to copy the data to an external hard disk and send it via a national postal service, or to send an employee to the data center to attach the SAN to the new server where the data is being migrated. However, in my case this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully, there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host from which you can reach via SSH both Server1 (from where the data is migrated) and Server2 (to where the data will be migrated).

If the hopping / jump server from which you're allowed to access the Linux servers Server1 and Server2 is not Linux, it is missing an SSH client, and you don't have access on the Windows host to install anything on it, just use portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, then this is passed via the UNIX pipe mechanism to another SSH session opened to server2, where tar is used a second time to unpack the previously archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%
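If you roughly know the total size in advance (as with this 500GB migration), you can pass it to pv with the -s option so it can also show a percentage and an ETA; a sketch assuming about 500G of data:

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv -s 500G | ssh server2 "cd /somedir/; tar xzf -"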


P.S. If you don't have pv installed, install it with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or on CentOS / Fedora / RHEL etc.

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of the pv manual to give you a better idea of what it does:

NAME
       pv – monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that with very big file transfers, using pv will slow the transfer down a bit because everything has to pass through extra pipes; however, for transfers up to a few gigabytes it's really nice to include it.

If you only need to transfer a huge .tar.gz archive and you don't care about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, don't want to put the overhead of encrypting the data on both systems, and have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and push the .tar.gz content through netcat's port like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345
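The same netcat trick can be combined with tar to push a whole directory instead of a single file – a rough sketch, with the listener started first on the receiving side (the port and paths are just examples):

linux2:~$ nc -l -p 12345 | tar xzf -
linux1:~$ tar czf - /path/sourcedir | nc desti.nation.ip.address 12345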


Another way to transfer large data when server1 and server2 cannot reach each other, but a third (jump) host can reach both, is to use rsync and good old SSH tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"

The greatest tracker (demoscene) composers / Purple Motion, Necros, Skaven, Pro-XeX

Thursday, July 29th, 2010

For all of us who still remember the demoscene, Purple Motion, Necros and Skaven are absolutely legendary names.
Their contribution to tracked electronic music, video game music and the general development of IT culture is truly invaluable.

Many younger computer users (I'm 26 now) and IT beginners have probably never heard of either the demoscene or Purple Motion and the other patriarchs of tracked electronic music.

These musicians have a special value for people who have ever composed music with Impulse Tracker or the many other programs for composing music from samples.

Purple Motion has had his own home page for quite some time now; however, I just noticed that he has recently turned his home page into a phpBB forum where there is plenty of information about the composer, as well as open discussion and many questions and answers from people interested in the great electronic composer.

The “foster-father” of tracked electronic music is Necros.
Necros is the artistic pseudonym of Andrew Sega; probably his most notable piece of music is called mech8 (Mechanism 8).

The third most significant electronic musician, who is probably known to many old-school computer users and musicians, is Skaven.
Skaven is part of the Future Crew.

One of Skaven's most notable music works is actually the soundtrack produced for the Second Reality demo.
It's worth saying a few words about Future Crew as well (link to Wikipedia):
– “Future Crew is a now-defunct group of Finnish computer coders and artists who created PC demos and software, active mostly between 1992 and 1994.”
You might also consider checking Necros's profile on modarchive.
There you can find plenty, if not all, of his works for download and listening.
A descendant of the three above-mentioned wonderful tracker musicians is Pro-XeX, also known under the artistic pseudonym Necroleak.
Though his works came much later, first originating from the distant 1995, they are comparable in quality to the works of Purple Motion, Necros and Skaven.
If you really want to bring back some memories of the good old times, when we used to use the DOS environment and listen to the great old MOD, S3M, XM etc. songs with the good old Cubic Player, it is already available as a free port called Open Cubic Player.

A port is even available for most UNIX platforms. You can download and install the Linux / Unix port of Cubic Player here, plus on the link below you will find some brief instructions on how to make it work on Debian, Ubuntu, Redhat, Gentoo and FreeBSD.

Under Debian Lenny and Squeeze/Sid, installing opencubicplayer is pretty easy and comes down to a simple installation via apt-get as follows:

debian-desktop:~# apt-get install opencubicplayer

The first time I noticed Cubic Player, I should admit it was a real joy to learn there was already a Unix port, since Linux and BSD have been my OS choice for almost 10 years already.
Another possible way to play the old-school songs on Linux is through the well-known console player mikmod.
I've prepared a downloadable song archive for each of the three great electronic composers (Purple Motion, Necros, Skaven) on my personal webserver.

Below I present you with links to their music.

Download all tracked music by Purple Motion

Download Necros composed music works

Download the songs composed by Skaven

Download a collection of all composed songs by Pro-XeX / Necroleak

There are a few other composers I like very much; their music works can be obtained through my tiny tracked music collection available here.

Many of the demos and works created by Jonne Valtonen, known under the artistic pseudonym Purple Motion, are currently uploaded and available for watching on YouTube – (search) Purple Motion.

I'll close this post with the award-winning demos (Second Reality and Panic), which are among the most notable computer demos of all time, created by the colossal Future Crew group.


Second Reality by Future Crew [ Winner Demo of Assembly ’93 competition ]


Panic by Future Crew


Toasted by Cubic Team & $een
Also don’t forget to check The ultimate source for MOD Music – modarchive.org