Posts Tagged ‘etc passwd’

Linux extending life time for a damaged hard drive server tricks on a live server. Force fsck on next reboot. Read-only file system error solutions

Friday, February 17th, 2023

linux-extending-life-time-for-a-damaged-hard-drive-server-tricks-can-not-read-superblock-linux-force-fsck-on-next-reboot

In our daily work as system administrators we have some very old legacy systems running Clustered High Availability proxies managed by CRM (Cluster Resource Manager), and some legacy systems still using Heartbeat to manage the cluster instead of the newer and more modern Corosync variant.

The HA cluster is just 2 Linux machines running the obscure, long unsupported Red Hat Enterprise Linux 5.11 (Tikanga), a release that officially became stable back in distant 2007 (yeah, the years were good) and whose EOL (End of Life) was reached long ago, so the OS is no longer supported. Nevertheless, for about 14 years the machines had been running perfectly fine, until one of the cluster nodes, managed via the ocf::heartbeat:IPAddr2 resource, that is the /etc/ha.d/resource.d/IPAddr2 shell script, started misbehaving. Yeah, for the newbies: a Heartbeat Application Cluster in Linux does work like that, it uses a number of extendable pairs of shell scripts written for different kinds of Network / Web / Mail / SQL or whatever services HA management.

The first configured node, however, started failing with errors like:
 

EXT3-fs error (device dm-1): ext3_journal_start_sb: Detected aborted journal
sd 0:2:0:0: rejecting I/O to offline device
Aborting journal on device sda1.
sd 0:2:0:0: rejecting I/O to offline device
printk: 159 messages suppressed.
Buffer I/O error on device sda1, logical block 526
lost page write due to I/O error on sda1
sd 0:2:0:0: rejecting I/O to offline device
sd 0:2:0:0: rejecting I/O to offline device
ext3_abort called.
EXT3-fs error (device sda1): ext3_journal_start_sb: Detected aborted journal
Remounting filesystem read-only
sd 0:2:0:0: rejecting I/O to offline device
sd 0:2:0:0: rejecting I/O to offline device
sd 0:2:0:0: rejecting I/O to offline device
sd 0:2:0:0: rejecting I/O to offline device
sd 0:2:0:0: rejecting I/O to offline device
megaraid_sas: FW was restarted successfully, initiating next stage…
megaraid_sas: HBA recovery state machine, state 2 starting…
megasas: Waiting for FW to come to ready state
megasas: FW in FAULT state!!
FW state [-268435456] hasn't changed in 180 secs
megaraid_sas: out: controller is not in ready state
megasas: waiting_for_outstanding: after issue OCR. 
megasas: waiting_for_outstanding: before issue OCR. FW state = f0000000
megaraid_sas: pending commands remain even after reset handling. megasas[0]: Dumping Frame Phys Address of all pending cmds in FW
megasas[0]: Total OS Pending cmds : 0 megasas[0]: 64 bit SGLs were sent to FW
megasas[0]: Pending OS cmds in FW :

The result of that was that the filesystem of the machine frequently got re-mounted Read-Only, and of course that is
quite bad if you have running haproxy processes that should be living there and taking up some Web traffic
for high availability, and you end up running all the traffic only on the 2nd machine of the pair.

This of course was a clear sign of a failing disk, some hit bad block regions or, as the messages indicate, some
problem with the system hardware or the SAS RAID array.

The physical RAID controller on the system, just like the rest of the hardware, is very old stuff as well.

[root@haproxy_lb_node1 ~]# lspci |grep -i RAI
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)

The produced errors not only made the machine auto-mount its root / filesystem in Read-Only mode, but most
likely also made the machine automatically reboot every few days, or a few times a day in a row.

The second Load Balancer node2 operated perfectly, and we thought we might just keep the broken machine in that half-running
and inconsistent state for a few weeks until we had built the new machines with a pre-installed new haproxy cluster on a modern
Red Hat Linux 8.6 distribution, but since we have to follow SLAs (Service Level Agreements) with Customers, the end services behind the
High Availability (HA) haproxy cluster were in danger …

We as sysadmins had the task to do our best to stabilize the unstable node with disk errors, so the system could survive
and normally serve traffic (in case node2, which is in a separate Data center, fails due to hardware or electricity issues etc.).

Here are a few steps we took that have hopefully improved the situation.

1. Make backups of the most important files

Always before doing anything with a broken system, prepare a backup of the most important files. If that is a cluster, that should be a backup of the cluster configurations (if you don't already have one), a backup of /etc/hosts, backups of any important service configs such as /etc/haproxy/haproxy.cfg or /etc/postfix/main.cf (as it was in my case), preferably a backup of the whole /etc/, any important files from the /root/ or /home/users* directories, and a backup of at least the latest logs from /var/log, etc.
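For example, a quick one-shot backup could look something like this (paths here are illustrative, adjust to what your system actually runs):

# mkdir -p /root/rescue_backup
# tar -czpf /root/rescue_backup/etc-$(date +%F).tar.gz /etc/
# cp -rp /var/spool/cron /root/rescue_backup/
# tar -czpf /root/rescue_backup/varlog-$(date +%F).tar.gz /var/log/

If the disk is already failing badly, it is even safer to copy the archives off the machine right away, e.g. with scp or rsync to a healthy host.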
 

2. Clear up all unnecessary service scripts from the server

Any additional software / services and integrity checking tools (daemons) / scripts and cron jobs were immediately stopped and, where unused, removed.

E.g. we went through /etc/cron* to check what's there,

# ls -ld /etc/cron.*
drwx------ 2 root root 4096 Feb  7 18:13 /etc/cron.d
drwxr-xr-x 2 root root 4096 Feb  7 17:59 /etc/cron.daily
-rw-r--r-- 1 root root    0 Jul 20  2010 /etc/cron.deny
drwxr-xr-x 2 root root 4096 Jan  9  2013 /etc/cron.hourly
drwxr-xr-x 2 root root 4096 Jan  9  2013 /etc/cron.monthly
drwxr-xr-x 2 root root 4096 Aug 26  2015 /etc/cron.weekly

 

And like well-trained professional butchers we removed everything unnecessary that could trigger any extra disk reads / writes to the HDD.

E.g. just create:

# mkdir -p /root/etc_old/etc/{cron.d,cron.daily,cron.hourly,cron.monthly,cron.weekly}

 

And moved out all the unnecessary cron job scripts, like the following (an example of how to move them out is sketched right after this list):

1. nmon (an old-school console tool for monitoring and tuning server network / memory / hard disk parameters)
2. clamscan / freshclam crons
3. mlocate (the script that takes care of the periodic run of the updatedb command, which keeps the locate command's DB fresh so you can easily search for files while putting fewer read operations on disk, e.g. saving you from running a cmd like find / -iname '*whatever_you_look_for*' every time)
4. cups cron jobs
5. logwatch cron
6. rkhunter stuff
7. logrotate (yes, we stopped even the log rotation trigger job, as we found the server was sometimes crashing at exactly the time when the logrotate job rotating the logs inside /var/log/* was running, perhaps hitting an I/O read error on the bad blocks).
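For example, moving some of them out into the prepared /root/etc_old structure might look like this (the exact script names vary per distro, these are just the usual suspects):

# mv /etc/cron.daily/mlocate /root/etc_old/etc/cron.daily/
# mv /etc/cron.daily/logrotate /root/etc_old/etc/cron.daily/
# mv /etc/cron.d/clamav-freshclam /root/etc_old/etc/cron.d/
# mv /etc/cron.weekly/rkhunter /root/etc_old/etc/cron.weekly/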


We also inspected the Administrator user root's cron jobs for any unwanted scripts and stopped two report bash scripts that were part of the PCI tightened security procedures.
Therein we found a script responsible for periodically reporting the list of installed packages and whether they have changed, as well as a script to periodically report via email the list of
users created in /etc/passwd and /etc/shadow, historically used to keep an eye on the user list and easily see if someone
has created new users on the machine. Those were enabled via /var/spool/cron/root cron jobs. In other cases, on other machines,
it is a good idea to check out all the existing user cron jobs and stop anything that might be putting extra Read / Write pressure on the machine's attached hard drives.

# ls -al /var/spool/cron/
total 20
drwx------  2 root root 4096 Nov 13  2015 .
drwxr-xr-x 12 root root 4096 May 11  2011 ..
-rw-------  1 root root  133 Nov 13  2015 root


3. Clear up old log files and any unnecessary files

Under /var/log and /home, /var/tmp, /var/spool/tmp immediately try to clear up the old log files.
From my past experience this has many times freed up FS file inodes that are stored on an unbroken part (good blocks) of the hard drive,
making them ready to be reused by the files newly written by the rsyslog / syslogd services.
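A minimal sketch of such a cleanup, deleting only already rotated (compressed) logs older than two weeks; run the -print variant first to review what would be removed:

# find /var/log -type f -name '*.gz' -mtime +14 -print
# find /var/log -type f -name '*.gz' -mtime +14 -delete
# find /var/tmp /var/spool/tmp -type f -mtime +14 -print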

!!! Note that during the removal of some files you might hit files stored on bad blocks, which might lead to an unexpected system reboot.

But that's okay, don't worry; most likely after a hard reset by a technician in the Datacenter the machine will boot again and you can enjoy
removing the remaining old files, sending them off to file heaven.

 

4. Trigger an automatic filesystem check with fsck on next boot

The standard way to force Linux to automatically recheck its root filesystem is to simply create the /forcefsck file on the root partition, or on any other secondary disk partition you would like checked.

# touch /forcefsck

# reboot


However on some occasions you might be unable to do that because the / (root fs) has been remounted in Read-Only mode, yikes …

Luckily old Linux distributions like this RHEL 5.11 have another way to force a filesystem check after reboot, identify any
unknown bad blocks and hopefully succeed in isolating them, so you don't run into the same auto-reboots (provided the hard drive or Software / Hardware RAID
is not in a terrible state): you can use the '-F' option built into the /sbin/shutdown command:

   -F     Force fsck on reboot.


Hence, to make the machine reboot and immediately trigger fsck:

# shutdown -rF now


Just in case you wonder why reboot before checking the filesystem: well, simply because filesystems need to be unmounted before you check them.

In that specific case this produced a good result so far: the machine booted just fine, and we crossed our fingers and prayed that it would work flawlessly over the coming few weeks, before we finalize the configuration of the substitute machines, where this old infrastructure will be migrated to a newly built cluster with new Haproxy and Corosync / Pacemaker on a brand new RHEL.

NB! On newer machines this won't work, however, as the shutdown command has been stripped of this option: the '-F' switch existed in the SystemV (SysV init) / Upstart era tooling and is gone on the newer systemd services architecture.
 

5. Hints on checking the hard drives with fsck

If you happen to have physical access to the remote hardware machine via a TTY[1-9] console, that's even better and is the standard way to do it, but in this specific case we had no easy way to get access to the physical server console.

It is even better to go there and, via either a connected Monitor (Display) or a KVM switch (for those who hear of a KVM switch for the first time: this is a great device in server rooms letting you drive multiple servers from the same keyboard / monitor / mouse), use one of the multitude of USB Linux recovery distro options to choose from, or a CDROM / DVD on older machines like this one, with Red Hat's recovery mode rolled on.
After booting into the rescue environment, simply check each of the disks,
e.g.:

# fsck -y /dev/sdb
# fsck -y /dev/sdc

Or, if you don't want to waste time looking for each hard drive separately, directly check all the ones that are attached and known to the Linux distro via the /etc/fstab definitions by running:

# fsck -AR

If necessary, and you have a mixture of filesystems, for example EXT3, EXT4, REISERFS, you can tell it to omit some filesystem, for example ext3, like that:

# fsck -AR -t noext3 -y


To have fsck skip mounted filesystems:

# fsck -M /dev/sdb


One remark to make here on fsck: to complete its job on the various filesystems it usually uses external helper binaries, stored as /sbin/fsck*:

ls -al /sbin/fsck*
-rwxr-xr-x 1 root root  55576 Jan 20  2022 /sbin/fsck*
-rwxr-xr-x 1 root root  43272 Jan 20  2022 /sbin/fsck.cramfs*
lrwxrwxrwx 1 root root      9 Jul  4  2020 /sbin/fsck.exfat -> exfatfsck*
lrwxrwxrwx 1 root root      6 Jun  7  2021 /sbin/fsck.ext2 -> e2fsck*
lrwxrwxrwx 1 root root      6 Jun  7  2021 /sbin/fsck.ext3 -> e2fsck*
lrwxrwxrwx 1 root root      6 Jun  7  2021 /sbin/fsck.ext4 -> e2fsck*
-rwxr-xr-x 1 root root  84208 Feb  8  2021 /sbin/fsck.fat*
-rwxr-xr-x 2 root root 393040 Nov 30  2009 /sbin/fsck.jfs*
-rwxr-xr-x 1 root root 125184 Jan 20  2022 /sbin/fsck.minix*
lrwxrwxrwx 1 root root      8 Feb  8  2021 /sbin/fsck.msdos -> fsck.fat*
-rwxr-xr-x 1 root root    333 Dec 16  2021 /sbin/fsck.nfs*
lrwxrwxrwx 1 root root      8 Feb  8  2021 /sbin/fsck.vfat -> fsck.fat*


6. Using tune2fs to adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems (a few examples)

a) To check whether the filesystem really was checked at boot time, or to check any filesystem on the server for its last fsck check-up date:

#  tune2fs -l /dev/sda1 | grep checked
Last checked:             Wed Apr 17 11:04:44 2019

On some distributions like old Debian and Ubuntu, it is even possible to enable fsck to log its operations during check on reboot via changing the verbosity from NO to YES:

# sed -i "s/#VERBOSE=no/VERBOSE=yes/" /etc/default/rcS


If you're having the issues on old Debian Linuxes (and not on RHEL), it is also possible to:

b) Enable all fsck repairs to run automatically on boot

by running:
 

# sed -i "s/FSCKFIX=no/FSCKFIX=yes/" /etc/default/rcS


c) Forcing an fsck check on server-attached hard drive partitions with tune2fs

# tune2fs -c 1 /dev/sdXY

Note that tune2fs can force an fsck on each reboot for EXT4, EXT3 and EXT2 filesystems only.

tune2fs can trigger a forced fsck on every reboot using the -c (max-mount-counts) option.
This option sets the number of mounts after which the filesystem will be checked, so setting it to 1 will run fsck each time the computer boots.
Setting it to -1 or 0 resets this (the number of times the filesystem is mounted will be disregarded by e2fsck and the kernel).


 For example you could:

d) Set fsck to run a filesystem check every 30 boots, by using -c 30 
 

# tune2fs -c 30 /dev/sdXY
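And once the drive has been replaced, or you no longer want the forced checks, the same mechanism can be switched off again by resetting the mount-count check (and the time-based check interval via -i):

# tune2fs -c -1 -i 0 /dev/sdXY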



e) Check when was the last time the file system /dev/sdX was checked:
 

# tune2fs -l /dev/sdX | grep Last\ c
Last checked:             Thu Jan 12 20:28:34 2017


f) Check how many times our /dev/sdX filesystem was mounted

# tune2fs -l /dev/sdX | grep Mount
Mount count:              157

g) Check how many mounts are allowed to pass before filesystem check is forced
 

# tune2fs -l /dev/sdX | grep Max
Maximum mount count:      -1


7. Repairing disk / partitions via the GRUB fsck.mode and fsck.repair kernel boot options

It is also possible to force an fsck repair on boot via GRUB, but that usually is not an option someone would like to keep, as the machine might fail to boot if the repair goes badly; however, in difficult situations with failing disks, temporarily enabling it is a good idea.

This can be done by including in the initial GRUB config (/etc/default/grub):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash fsck.mode=force fsck.repair=yes"

fsck.mode=force – will force an fsck each time the system boots; keeping that value enabled for a long time inside GRUB is a bad idea for servers, as booting could sometimes be severely prolonged because of the checks, especially on servers with many or slow old hard drives.

fsck.repair=yes – will make fsck try to repair whatever errors it finds when checking (be absolutely sure you know what you're doing when passing this option).
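After editing /etc/default/grub the GRUB config has to be regenerated for the change to take effect; depending on the distribution that is roughly:

# update-grub               # Debian / Ubuntu
# grub2-mkconfig -o /boot/grub2/grub.cfg    # RHEL / CentOS / Fedora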

The options can also be set by editing the kernel line at the GRUB boot screen, if you have physical access to the server and don't want to rewrite the GRUB loader config and possibly make the machine unbootable on next boot.
 

8. A few more details on how the /etc/fstab fsck check pass values work on systemd Linux machines

The "proper" way on systemd (if we can talk about a proper way on Linux) to run fsck for each filesystem that has one is to set a number greater than 0 in the
last column of /etc/fstab, so make sure you edit your /etc/fstab if that's not the case.

The root partition should be set to 1 (first to be checked), while other partitions you want to be checked should be set to 2.

Example /etc/fstab:
 

# /etc/fstab: static file system information.

/dev/sda1  /      ext4  errors=remount-ro  0  1
/dev/sda5  /home  ext4  defaults           0  2

The meaning of the values you can put as this last number is as follows (a quick way to review the current values is sketched below):
0 – disabled, that is, do not check the filesystem
1 – a partition with this PASS value has the highest priority and is checked first; this value is usually set for the root / partition
2 – partitions with this PASS value are checked after the root filesystem
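A quick way to review which PASS value every filesystem currently has is to print the device and the last column of /etc/fstab (comment lines filtered out); with the example /etc/fstab above this would print:

# awk '!/^#/ && NF {print $1, $6}' /etc/fstab
/dev/sda1 1
/dev/sda5 2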

a) Check the log produced by fsck

Unfortunately, on the older versions of Linux distros with SystemV, fsck log output might not be generated anywhere except on the physical console, so if you have some kind of console duplicator device on the display port of the server, you might capture some bad block reports or fixed-error messages; if you don't, you can just cross your fingers and hope that any FS irregularities found were recovered.

On systemd Linux machines the fsck log should be produced in /run/initramfs/fsck.log or some other location depending on the Linux distro, and you should be able to see something from fsck inside the /var/log/* logs:

# grep -rli fsck /var/log/*
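On systemd machines with a persistent journal you can also ask journalctl directly about the fsck units, e.g.:

# journalctl -b | grep -i fsck
# journalctl -u systemd-fsck-root.service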


Closing it up

Having a system with a failing disk is really one of the worst sysadmin nightmares to get. The good news is that in most cases we're prepared with some working backup, or some workaround stuff like the few steps explained above, to mitigate the amount of Reads / Writes to the failing machine's HDDs. If the failing disk holds the primary Linux filesystem it all becomes even worse, as on every next reboot you have no guarantee whether the kernel / initrd or some of the other system components required to run the core Linux system won't break the normal boot.

Thus on the one side doing changes on the hard drives is a risky business; on the other side, if you're in a situation where you have a mirror system, rather than a Linux server installed without a cluster pair, then this is not such a big deal, as you can count on at least one of the nodes staying up, running and serving. Still, doing too many operations on the HDD is always a danger, so even though the steps described in most cases lead to an improvement in how the system behaves, the system should be considered totally unreliable and closely monitored, not only by monitoring stuff like Zabbix / Prometheus or whatever, but also by regularly checking the system's state via normal SSH logins.

If you have some important data or logs on the system that are not synchronized to another node, it is important to copy them off before doing any of the described operations. After at least the minimum is backed up, proceed to clear up everything that can be cleared while still letting the machine provide most of its functionality, trigger the automatic fsck HDD check on next reboot, reboot, check what is going on and monitor the machine from there on.

Hopefully the few described steps have helped some sysadmin. There are plenty of things I've described that might go wrong; even following the described steps might not help if the machine's Storage Drives / SAS / SSDs have too much damage. But as said, in most cases following these few steps will improve the machine's state.

Wish you the best of luck!

 

How to add local user to admin access via /etc/sudoers with sudo su - root / Create a sudo admin group to enable users belonging to the group to become superuser

Friday, January 15th, 2021

sudo_logo-how-to-add-user-to-sysadmin-group

Have you had local users on a server and needed an Admins group for all system administrators, so any local user on the system that belongs to the group can become root with a command, let's say sudo su - root / su -l root / su - root?
If so, below is an example /etc/sudoers file that will allow users belonging to a local group sysadmins, with some assigned group number, to do just that.

Here is how to create the sysadmins group as a starter

linux:~# groupadd -g 800 sysadmins

Let's create a new local user georgi and append the user as a member of the sysadmins group, which will be our local system Administrator (superuser) access user group.

To create a user with a specific desired userid, let's first check in /etc/passwd that it is free, and create it:

linux:~# grep :811: /etc/passwd || useradd -u 811 -g 800 -c 'Georgi hip0' -d /home/georgi -m georgi

Next let's create /etc/sudoers (if you need to copy-paste the content of the file, check here) and paste the below configuration:

linux:~# mcedit /etc/sudoers

## Updating the locate database
# Cmnd_Alias LOCATE = /usr/bin/updatedb

 

## Storage
# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount

## Delegating permissions
# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp

## Processes
# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall

## Drivers
# Cmnd_Alias DRIVERS = /sbin/modprobe

Cmnd_Alias PASSWD = /usr/bin/passwd [a-zA-Z][a-zA-Z0-9_-]*, \
!/usr/bin/passwd root

Cmnd_Alias SU_ROOT = /bin/su root, \
                     /bin/su - root, \
                     /bin/su -l root, \
                     /bin/su -p root


# Defaults specification

#
# Refuse to run if unable to disable echo on the tty.
#
Defaults   !visiblepw

#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults    always_set_home
Defaults    match_group_by_gid

Defaults    env_reset
Defaults    env_keep =  "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults    env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults    env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults    env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults    env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"

#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults   env_keep += "HOME"
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL

## Allows members of the 'sys' group to run networking, software,
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

## Allows members of the users group to mount and unmount the
## cdrom as root
# %users  ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
## Allows members of the users group to shutdown this system
# %users  localhost=/sbin/shutdown -h now

%sysadmins            ALL            = SU_ROOT, \
                                   NOPASSWD: PASSWD

## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d

zabbix  ALL=(ALL) NOPASSWD:/usr/bin/grep


Save the config and now give it a try to become root with the sudo su - root command:

linux:~$ id
uid=811(georgi) gid=800(sysadmins) groups=800(sysadmins)
linux:~$ sudo su - root
linux~#

w00t! Voila, your user now has super rights! Enjoy 🙂
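One more tip: before logging out of your root session, it is wise to syntax-check the edited file, as a broken /etc/sudoers can lock everyone out of sudo. visudo has a check-only mode exactly for that; expected output is something like:

linux:~# visudo -c
/etc/sudoers: parsed OK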

 

Share SCREEN terminal session in Linux / Screen share between two or more users howto

Wednesday, October 11th, 2017

share-screen-terminal-session-in-linux-share-linux-unix-shell-between-two-or-more-users

 

1. Short Intro to the Screen command and what a Shared Screen Session is

Do you have friends who want to learn some GNU / Linux or BSD basics remotely? Do you have people willing to share a terminal session together for educational purposes across different networks? Or do you just want to have some fun and show off between two or more users?

If the answer to these questions is yes, then continue reading; otherwise, if you're already aware of how this is done, just ignore this article and do something more joyful.

So let me start.

Some long time ago, when I was just starting out as a Free Software user and dedicated enthusiast, I was given by a friend an interesting freeshell hosting access, and I stumbled upon an interesting phenomenon: multiple users, like 5 or 10, were connected simultaneously to the same shell, sharing their command line.

I can't remember what kind of shell I happened to be sharing with the other users logged in with the same account, whether it was bash / csh / zsh or another one, but it doesn't matter; it was really cool to find out multiple users could be sitting together on GNU / Linux and *BSD with the same account and use the regular shell for chatting or teaching each other new Linux / Unix commands, e.g. being able to type in the shell simultaneously.

The multiple shared shell session was possible thanks to the screen command

For those who hear about screen for the first time, here is the package description:

 

# apt-cache show screen|grep -i desc -A 2
Description-en: terminal multiplexer with VT100/ANSI terminal emulation
 GNU Screen is a terminal multiplexer that runs several separate "screens" on
 a single physical character-based terminal. Each virtual terminal emulates a
Description-md5: 2d86b86ed6058a04c540802e49312f40
Homepage: https://savannah.gnu.org/projects/screen


There are plenty of things to use screen for, as it provides you a way to open multiple Virtual Terminals inside a single SSH or physical console TTY login session, and I've been in love with the screen command since day 1 of finding out about it.

To start using screen, just invoke it from a shell and enter the screen key combinations that make it do various stuff for you.

 

2. Some of the most useful Daily Screen Key Combinations for the Sys Admin


To use the various screen options, press the escape sequence (Ctrl-a), followed by the command key. For a full list of all available commands, run man screen; however,
for the sake of interest, the short listing below shows some of the most useful screen key-combination commands:

 

 

Ctrl-a a Passes a Ctrl-a through to the terminal session running within screen.
Ctrl-a c Create a new Virtual shell screen session within screen
Ctrl-a d Detaches from a screen session.
Ctrl-a f Toggle flow control mode (enable/disable Ctrl-Q and Ctrl-S pass through).
Ctrl-a k Detaches from and kills (terminates) the screen session.
Ctrl-a q Passes a Ctrl-q through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a s Passes a Ctrl-s through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a :kill Also detaches from and kills (terminates) the screen session.
Ctrl-a :multiuser on Make the screen session a multi-user session (so other users can attach).
Ctrl-a :acladd USER Allow the user specified (USER) to connect to a multi-user screen session.
Ctrl-a p Move to the previous of the multiple opened Virtual terminals in screen
Ctrl-a n Move to the next of the multiple opened Virtual terminals under a single shell connection


I have to underline that the ones I personally use the most are:

 

CTRL + A + D (to detach session),

CTRL + A + C to open new session within screen (I tend to open multiple sessions for multiple ssh connections with this),

CTRL + A + P, CTRL + A + N – I use these two to move around all my open screen Virtual sessions.
 

3. HOW TO ACTUALLY SHARE TERMINAL SESSION BETWEEN MULTIPLE USERS?


3.1 Configuring Shared Sessions so other users can connect

You need to have a single shared user account on the Linux or Unix-like server, let's say an /etc/passwd, /etc/shadow, /etc/group account named screen, and you have to give its password to all the users that are to participate in the shared screen shell session.

E.g. create new system account screen

root@jericho:~# adduser screen
Adding user `screen' ...
Adding new group `screen' (1001) ...
Adding new user `screen' (1001) with group `screen' ...
The home directory `/home/screen' already exists.  Not copying from `/etc/skel'.
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for screen
Enter the new value, or press ENTER for the default
    Full Name []: Screen user to give users shared access to /bin/bash
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
Is the information correct? [Y/n] Y

Now distribute the user / pass pair around all users who are to be sharing the same virtual bash session via screen and instruct each of them to run:

hipo@jericho:~$  screen -d -m -S shared-session
hipo@jericho:~$

hipo@jericho:~$ screen -list

There is a screen on:
    4095.shared-session    (10.10.2017 20:22:22)    (Detached)
1 Socket in /run/screen/S-hipo.


3.2. Attaching to just created session
 

Simply log in with as many users as you need over SSH to the remote server and instruct them to run the following command to attach to the new session just created by you:

hipo@jericho:~$ screen -x

That's all folks; now everyone can type simultaneously and enjoy the joys of the shared screen session.

If for some reason more than one session has been created by the simultaneously logged-in users, either as an exercise or by mistake, i.e.:

hipo@jericho:~$ screen -list

There are screens on:
    4880.screen-session    (10.10.2017 20:30:09)    (Detached)
    4865.another-session    (10.10.2017 20:29:58)    (Detached)
    4847.hey-man    (10.10.2017 20:29:49)    (Detached)
    4831.another-session1    (10.10.2017 20:29:45)    (Detached)
4 Sockets in /run/screen/S-hipo.

You have to instruct everyone to connect to the exact session needed, as a bare screen -x will ask them which session they'd like to connect to.

In that case, to connect to screen-session, each user has to run with their account:

hipo@jericho:~$ screen -x screen-session


If under some circumstances it happens that more than one shared screen virtual session with the same name is opened, for example screen -list returns:

 

hipo@jericho:~$ screen -list
There are screens on:
    5065.screen-session    (10.10.2017 20:33:20)    (Detached)
    4095.screen-session    (10.10.2017 20:30:08)    (Detached)

All users then have to connect using the exact screen session name and ID, like so:

hipo@jericho:~$ screen -x 4095.screen-session
 


Here is the meaning of the used options:

-d -m instructs screen to start the session detached (created, but not attached to)
-S argument is just to give the screen session a name
-x attaches your terminal to a session that may already be attached elsewhere, which is what makes the sharing possible
-list shows the sessions and gives the screen-session IDs

Once you're done with the screen session (e.g. all the users you're teaching have tested things by themselves and completed the scheduled tasks), to kill it just press CTRL + A + K.
 

4. Share screen /bin/bash shell session with another user

Sharing a screen session between different users is even more useful than the shared session of a single user, as you might have a *nix server with many users who can attach to your opened session directly with their own accounts, instead of being instructed beforehand to connect with a single shared user.

That's perfect also for educational purposes if you want to teach some Linux to a class of people, as you can use their ordinary accounts and show them stuff on a Linux / BSD machine.

Assuming that you followed along and already created screen-session with the screen cmd:

hipo@jericho:~$ screen -list
There is a screen on:
        5560.screen-session      (10.10.2017 20:41:06)   (Multi, attached)
1 Socket in /run/screen/S-hipo.

hipo@jericho:~$

Next the session owner attaches to it, switches multi-user mode on and grants access to the second user; note that the Ctrl-a commands are key sequences typed inside the attached screen session, not shell commands:

hipo@jericho:~$ screen -r screen-session
Ctrl-a :multiuser on
Ctrl-a :acladd bunny

Then the second user can attach to the owner's session:

bunny@jericho:~$ screen -x hipo/screen-session
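By the way, the same multiuser setup can also be done non-interactively from the owner's shell, sending the commands into the running session with screen's -X option (session and user names here follow the example above):

hipo@jericho:~$ screen -S screen-session -X multiuser on
hipo@jericho:~$ screen -S screen-session -X acladd bunny
bunny@jericho:~$ screen -x hipo/screen-session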
 

Here are 2 screenshots of what should happen if you have done the above command combinations correctly:

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal2

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal3

In order to be able to share screen Virtual terminal (VTY) sessions between separate (different) logged-in users, the screen binary has to be suid root (the SUID bit for screen is disabled in most Linux distributions for security reasons).

Without making the screen binary SUID, you will get the error:

hipo@jericho:/home/hipo$ screen -x hipo/shared

Must run suid root for multiuser support.

If you are absolutely sure you know what you're doing, here is how to set the SUID bit on the screen command:

 

root@jericho:/home/hipo# which screen
/usr/bin/screen
root@jericho:/home/hipo# chmod u+s $(which screen)
root@jericho:/home/hipo# chmod 755 /var/run/screen
root@jericho:/home/hipo# rm -fr /var/run/screen/*
root@jericho:/home/hipo# exit

How to install and configure djbdns from source as a Caching Localhost Proxy resolver to increase resolving efficiency on Debian 6 Squeeze

Monday, August 1st, 2011

djbdns-logo-install-configure-djbdns-from-source-on-gnu-linux-to-accelerate-server-dns-resolving
It seems DjbDNS has not been included as a Debian package on Debian Squeeze. There is still the possibility to install djbdns from an older deb package, or to install it from source. I decided, however, to install it from source, as finding the old Debian package for Lenny and Etch takes time; plus I'm running an amd64 version of Debian, and this might complicate the situation even more.
Installing it from source is not really the Debian way, but at least it works.

In this article I assume that daemontools and ucspi-tcp are already installed; if not, one needs to install them with:

debian:~# apt-get install ucspi-tcp daemontools daemontools-run
...

The above two are required, as DJBDNS was originally made to run through DJB's daemontools.

Here are the exact steps I took to have it installed as a local caching DNS server on a Debian Squeeze server:

1. Download and untar DjbDNS

debian:~# wget -q http://cr.yp.to/djbdns/djbdns-1.05.tar.gz
debian:~# tar -zxvvf djbdns-1.05.tar.gz
...

2. Add DjbDNS users to /etc/passwd

Creating the below two users is not arbitrary; the dnscache-conf setup later expects the dnscache and dnslog accounts to exist.

echo 'dnscache:*:54321:54321:dnscache:/dev/null:/dev/null' >> /etc/passwd
echo 'dnslog:*:54322:54322:dnslog:/dev/null:/dev/null' >> /etc/passwd

3. Compile DJBDNS nameserver

First it's necessary to use the below echo command to work around a common Linux compile bug:

debian:~# cd djbdns-1.05
debian:/root/djbdns-1.05# echo gcc -O2 -include /usr/include/errno.h > conf-cc

Next let's make it:

debian:/root/djbdns-1.05# make

4. Install the compiled djbdns binaries

debian:/root/djbdns-1.05# make setup check
# here comes some long install related output

If no errors are produced by make setup check, this means that djbdns has installed itself fine.

As the installation is completed, it's a good idea to report the newly installed DjbDNS server if you're running a mail server. This info is used by Dan Bernstein to gather statistical data about the number of djbdns server installations throughout the world.

5. Do some general configurations to the newly installed DJBDNS

Now let's copy the list of the IP addresses of the global DNS root servers in /etc/.

debian:/root/djbdns-1.05# cp -rpf dnsroots.global /etc/
debian:/root/djbdns-1.05# ./dnscache-conf dnscache dnslog /etc/dnscache 0.0.0.0

dnscache-conf will generate some default configuration files for djbdns in /etc/dnscache

Next allow the networks which should be able to use the just installed djbdns server as a caching server:

debian:/root/djbdns-1.05# cd /etc/dnscache/root/ip
debian:/etc/dnscache/root/ip# touch 192.168.1
debian:/etc/dnscache/root/ip# touch 123.123

The first command will allow all IPs in the range 192.168.1.* to access the DNS server, and the second command will allow all IPs under 123.123.*.* to query the server.

Some further fine tuning can be done via the files:

/etc/dnscache/env/CACHESIZE and /etc/dnscache/env/DATALIMIT
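For example, to give the cache 100 MB of memory (the values below are just a commonly used sketch; DATALIMIT must always be set somewhat higher than CACHESIZE), write the numbers into the env files and, once the service is linked and running under daemontools as shown below, restart it:

debian:~# echo 100000000 > /etc/dnscache/env/CACHESIZE
debian:~# echo 104857600 > /etc/dnscache/env/DATALIMIT
debian:~# svc -t /etc/service/dnscache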

As a last step before it runs, we have to link /etc/dnscache into daemontools like so:

debian:/root/djbdns-1.05# ln -sf /etc/dnscache /etc/service/dnscache

If daemontools does not scan /etc/service but the classic /service directory, it's also good to link it there:

debian:~# ln -sf /etc/service /

Now DJBDNS should be running fine; to test that it's running without errors through daemontools I used:

debian:~# ps ax|grep -i readproc
5358 pts/18 R+ 0:00 grep -i readproc
11824 ? S 0:00 readproctitle service errors: ...........

If no errors are displayed, it's configured and running. To also test if it's capable of resolving, I used the host command:

debian:~# host www.pc-freak.net localhost
Using domain server:
Name: localhost
Address: 127.0.0.1#53
Aliases:

www.pc-freak.net has address 83.228.93.76
www.pc-freak.net mail is handled by 0 mail.www.pc-freak.net.

Now DJBDNS is properly installed, and if you test it for a while with time host somehost.com localhost, you will see how quick it is at resolving.

The advantage of running DJBDNS is that it requires almost no maintenance; it's rock solid and great, just like all the other software written by Dan Bernstein.
Enjoy 😉

Linux: how to show all users' crontabs – List all cronjobs

Thursday, May 22nd, 2014

linux-unix-list-all-crontab-users-and-scripts
I'm doing another decommissioning of server services, and part of the decommissioning plan is: removing the application and all related scripts from the related machines (FTP, RSYNC, …). In the project documentation I found a list with cron-enabled shell scripts:

#Cron tab excerpt:
1,11,21,31,41,51 * * * * /webservices/tools/scripts/rsync_portal_sync.sh

that had to be deleted; however, it was nowhere mentioned under what kind of credentials (what kind of user) the cron scripts were running. Hence I had to look up all users that have cronjobs and check inside each user's cronjobs whether the respective script is set to run. Herein I will shortly explain how I did that.

Cronjobs by default have a few locations from where they are set up, depending on their run-time schedule. The first place I checked for the scripts is:

/etc/crontab

# cat /etc/crontab
SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * * root test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1
59 * * * * root rm -f /var/spool/cron/lastrun/cron.hourly
14 4 * * * root rm -f /var/spool/cron/lastrun/cron.daily
29 4 * * 6 root rm -f /var/spool/cron/lastrun/cron.weekly
44 4 1 * * root rm -f /var/spool/cron/lastrun/cron.monthly

I was not really sure what user the shell script ran as; therefore I first looked whether someone hadn't set the script to run via crontab's standard locations for Daily, Hourly, Weekly and Monthly cronjobs:
 

a) Daily set cron jobs are in:

/etc/cron.daily/

b) Hourly set cron jobs:

/etc/cron.hourly

c) Weekly cron jobs are in:

/etc/cron.weekly/

d) Monthly cron jobs:

/etc/cron.monthly

There is also a location read by cron for all software (package distribution) specific cronjobs – all run under root user privileges:

e) Software specific script cron jobs are in:

/etc/cron.d/  
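Since the actual goal here was to find out which user's crontab references the script, a quick recursive grep over all of the above locations plus the cron spool (the spool path differs per distro) can save a lot of manual checking:

# grep -r 'rsync_portal_sync.sh' /etc/crontab /etc/cron.* /var/spool/cron/ 2>/dev/null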
As the system has about 327 users in /etc/passwd, checking each user's cronjobs manually with:

# crontab -u UserName -l

was too much time consuming, thus it is a good practice to list the

/var/spool/cron/*

directory and see which users have cron jobs defined

 

# ls -al /var/spool/cron/*
-rw------- 1 root root 11 2007-07-09 17:08 /var/spool/cron/deny

/var/spool/cron/lastrun:
total 0
drwxr-xr-x 2 root root 80 2014-05-22 11:15 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw-r--r-- 1 root root 0 2014-05-22 04:15 cron.daily

/var/spool/cron/tabs:
total 8
drwx------ 2 root root 72 2014-04-03 03:43 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw------- 1 root root 4901 2014-04-03 03:43 root
 


/var/spool/cron – is the spool directory of crond (/usr/sbin/cron).

# ls -al /var/spool/cron/tabs/
total 8
drwx------ 2 root root 72 2014-04-03 03:43 .
drwx------ 4 root root 120 2008-02-25 15:45 ..
-rw------- 1 root root 4901 2014-04-03 03:43 root

The above output shows that only the root superuser has crontabs defined.

An alternative way to check all user crontabs is this quick Linux one-liner shell script to show all users' cron jobs:

for i in $(cut -d: -f1 /etc/passwd); do
echo "user $i --- crontab ---";
crontab -u "$i" -l 2>/dev/null;
echo '----------';
done | less

Note that the above short script has to be run as the root user. Enjoy 🙂