Archive for the ‘Linux and FreeBSD Desktop’ Category

Defining multiple short server hostname aliases and per-host SSH options via SSH config files, and using passwordless authentication via public keys

Thursday, September 16th, 2021

using-ssh-host-acronym-aliases-ssh-client-explained-openssh-logo

If you have to access multiple servers from your terminal client, such as gnome-terminal or kterminal (if on Linux) or something like MobaXterm + Cygwin (if on Windows), with an OpenSSH client (the ssh command), there is a nifty trick to save time and keyboard typing: create shortcut aliases by adding a few definitions inside your $HOME/.ssh/config ( ~/.ssh/config ) for your local non-root user, or even make the configuration system wide (for all existing local /etc/passwd users) via /etc/ssh/ssh_config.
Adding a pseudonym alias for each server makes sysadmin life much easier, as you don't have to type in each time the FQDN (Fully Qualified Domain Name) of the remotely accessed Linux / Unix / BSD / Mac OS or even Windows sshd-ready hosts accessible via the remote TCP/IP port 22.


1. Adding local user remote server pointer aliases via ~/.ssh/config


The file ~/.ssh/config is read by the ssh client, part of the openssh-client (Linux OS package), on each invocation of the client. Besides defining a pseudonym for the hosts you access frequently, which saves you time and hence increases your productivity, you can also define various other useful options in it that set the specifics of the remote ssh session for each desired host, such as the remote host's default SSH port (for example if your OpenSSH daemon is configured to run on a non-standard SSH port such as 2022 instead of the default TCP port 22 for some reason, e.g. security through obscurity etc.).

 

The general syntax of the .ssh/config file is simple; it goes like this:
 

Host MACHINE_HOSTNAME

SSH_OPTION1 value1
SSH_OPTION2 value1 value2
SSH_OPTION3 value1 value2

 

Host MACHINE_HOSTNAME

SSH_OPTION value
SSH_OPTION1 value1 value2

  • Another accepted syntax, if you prefer not to have whitespace separators, is to use an equals sign ( = )
    between the parameter name and its values.

Host MACHINE_HOSTNAME
SSH_config=value
SSH_config1=value1 value2

  • All empty lines and lines starting with the hash sign ( # ) are ignored.
  • All values are case-sensitive, but parameter names are not.

If you have never used $HOME/.ssh/config so far, you have to create the file and set the proper permissions on the directory and the file like so:

mkdir -p $HOME/.ssh
chmod 0700 $HOME/.ssh
touch $HOME/.ssh/config
chmod 0600 $HOME/.ssh/config


Below are examples taken from my .ssh/config configuration for all subdomains of my pcfreak.org domain:

 

# Ask for password for every subdomain under pc-freak.net for security
Host *.pcfreak.org
user hipopo
passwordauthentication yes
StrictHostKeyChecking no

# ssh public Key authentication automatic login
Host www1.pc-freak.net
user hipopo
Port 22
passwordauthentication no
StrictHostKeyChecking no

UserKnownHostsFile /dev/null

Host haproxy2
    Hostname 213.91.190.233
    User root
    Port 2218
    PubkeyAuthentication yes
    IdentityFile ~/.ssh/haproxy2.pub    
    StrictHostKeyChecking no
    LogLevel INFO     

Host pcfrxenweb
    Hostname 83.228.93.76
    User root
    Port 2218

    PubkeyAuthentication yes
    IdentityFile ~/.ssh/pcfrxenweb.key    
    StrictHostKeyChecking no

Host pcfreak-sf
    Hostname 91.92.15.51
    User root
    Port 2209
    PreferredAuthentications password
    StrictHostKeyChecking no

    Compression yes


As you can see from the above configuration, the Hostname directive can point either to an IP address or to a hostname.

Now, to connect to the defined IP 91.92.15.51, you can simply refer to its alias:

$ ssh pcfreak-sf -v

and you end up on the machine's SSH on port 2209, where you will be prompted for a password.

$ ssh pcfrxenweb -v


would lead to IP 83.228.93.76, SSH on port 2218, and will use the defined key for a passwordless login, saving you the password typing each time.

The above ssh command is a short alias you can use instead of typing every time:

$ ssh -i ~/.ssh/pcfrxenweb.key -p 2218 root@83.228.93.76

There is another nifty trick worth mentioning: if you have a host defined such as haproxy2 in the above config with certain options, but you would like to override one of them, for example you don't want to connect by default with User root but with some other local account, let's say ssh as devuser@haproxy2, you can type:

$ ssh -o "User=devuser" haproxy2

StrictHostKeyChecking no

– this variable instructs ssh not to check whether the fingerprint of the remote host has changed. Usually this fingerprint checksum changes if, for example, the OpenSSH daemon gets updated or the default /etc/ssh/ssh_host_* key files have changed for some reason.
Of course you should use this option only if you access your remote host via a secured VPN or local network; otherwise a Host Key change could be an indicator that someone is trying to intercept your ssh session.
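If you keep host key checking on and you do hit a changed-key warning that you know is legitimate (e.g. after a planned server reinstall), the stale entry can be removed from your known_hosts file; a minimal example, using one of the hostnames from the config above:

$ ssh-keygen -R www1.pc-freak.net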

 

Compression yes


– this variable enables compression of the connection; it saves a few bits, which was most useful back in the days of modem telephone lines, but it can still spare you some bandwidth.
It is also possible to define a whole range of IP addresses to be accessed with one single public rsa / dsa key.

Below is a sample .ssh/config snippet:
 

Host 192.168.2.*
     User admin
     IdentityFile ~/.ssh/id_ed25519


would instruct ssh, for each host in the IP range 192.168.2.1-254 that you attempt to reach, to automatically connect with the admin user and the respective ed25519 key by default.
 

$ ssh 192.168.2.[1-254] -v

 

2. Adding ssh client options system wide for all existing local or remote LDAP login users


The way to add any Host block is absolutely the same as for a single user, except you need to add the configuration to /etc/ssh/ssh_config. Here is a configuration from my latest Debian Linux:

$ cat /etc/ssh/ssh_config

# This is the ssh client system-wide configuration file.  See
# ssh_config(5) for more information.  This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options.  For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

Host *
#   ForwardAgent no
#   ForwardX11 no
#   ForwardX11Trusted yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no
#   GSSAPIKeyExchange no
#   GSSAPITrustDNS no
#   BatchMode no
#   CheckHostIP yes
#   AddressFamily any
#   ConnectTimeout 0
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   IdentityFile ~/.ssh/id_ecdsa
#   IdentityFile ~/.ssh/id_ed25519
#   Port 22
#   Protocol 2
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
#   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
#   EscapeChar ~
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes

As you can see, pretty much anything can be enabled by default, such as forwarding of the authentication agent (the -A option), which is necessary in some company server environments. So if you have to connect to a remote host with Agent Forwarding enabled, instead of typing each time:

ssh -A user@remotehostname


simply uncomment the ForwardAgent line in the above config and set it to yes:

ForwardAgent yes


Similarly, to enable X11 Forwarding by default instead of typing each time:

ssh -X user@remotehostname


simply uncomment and set to yes:

ForwardX11 yes
ForwardX11Trusted yes

As you can see, ssh can do pretty much anything: you can configure and enable SSH tunneling or run via a proxy with the ProxyCommand option (if this is the first time you hear about ProxyCommand, I warmly recommend you check my previous article – How to pass SSH traffic through a secured Corporate Proxy Server with corkscrew).

Sometimes, for a defined hostname, due to changes in the remote server's ssh configuration, SSH encryption type or a host key removal, you might end up with issues connecting. In that case you can override all the previously defined options inside .ssh/config by ignoring the configuration altogether with -F /dev/null:

$ ssh -F /dev/null user@freak -v
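Alternatively, if only one or two settings are the problem, you can override just those for a single connection instead of ignoring the whole file; a quick sketch using standard ssh_config keywords:

$ ssh -o StrictHostKeyChecking=ask -o UserKnownHostsFile=~/.ssh/known_hosts user@freak -v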


What did we learn?

To sum it up, in this article we have learned how to make the stressed sysadmin's life easier by adding aliases with certain port numbers and configurations for different remotely SSH-administered Linux / Unix hosts via the local ~/.ssh/config or the system-wide /etc/ssh/ssh_config configuration options, as well as how configuration already applied from ~/.ssh/config to each user's ssh command execution can be overridden.

Stop haproxy log requests to /var/log/messages / Disable haproxy double logging

Friday, June 25th, 2021

haproxy-logo

On a CentOS Linux release 7.9.2009 (Core) I have haproxies running on two KVM virtual machines configured in a High Availability cluster with Corosync and Pacemaker. The machines were inherited from another admin (I did not install the server hardware or the OS), but I have received the systems for support.
The old sysadmins seem not to have cared much about the system, so they left haproxy with double logging: once to a separately configured log in /var/log/haproxy/haproxyprod.log, while each haproxy TCP mode request was also logged to /var/log/messages. As you can guess this shouldn't be so, because we're wasting hard drive space, so to fix that I had to stop haproxy double logging to /var/log/messages.

The logging is done to the separate syslog facility local6; the relevant part of /etc/haproxy/haproxyprod.cfg goes as follows:
 

[root@haproxy01 ~]# cat /etc/haproxy/haproxyprod.cfg

global
    # log <address> [len ] [max level [min level]]
    log 127.0.0.1 local6 debug

 

The logging is handled by rsyslog via the local6 facility, so obviously that is where we have to keep the messages out of /var/log/messages.
The rsyslog configuration that sends the facility to the separate log file is as follows:

local6.*                                                /var/log/haproxy/haproxyprod.log

It turned out to be really easy to prevent haproxy from getting its requests logged to /var/log/messages; all I had to change was in /etc/rsyslog.conf:

a local6.none selector has to be added to the /var/log/messages rule; the full line in /etc/rsyslog.conf that stopped the double logging is:

# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local5.none;local6.none                /var/log/messages
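After editing the file, rsyslog has to re-read its configuration for the change to take effect; you can then tail both logs for a minute to confirm that new haproxy requests land only in the dedicated file (paths as configured above):

# systemctl restart rsyslog
# tail -f /var/log/haproxy/haproxyprod.log /var/log/messages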

 

How to automate open xen Hypervisor Virtual Machines backups shell script

Tuesday, June 22nd, 2021

openxen-backup-logo

As a sysadmin who has his own Open Xen Debian Hypervisor running on a Lenovo ThinkServer, a few months ago, due to a human error, I managed to mess up one of my virtual machines and had to rebuild the Operating System from scratch and restore files, services and MySQL data from backup. That really pissed me off and brought the need for a decent OpenXen Virtual Machine backup solution I could implement on the Debian (Buster) 10.10 running the free community Open Xen version 4.11.4+107-gef32c7afa2-1. The Hypervisor is a relatively small one, holding just 7 VMs:

HypervisorHost:~#  xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 11102    24     r-----  214176.4
pcfrxenweb                                  11 12288     4     -b----  247425.5
pcfrxen                                     12 16384    10     -b----  1371621.4
windows7                                    20  4096     2     -b----   97887.2
haproxy2                                    21  4096     2     -b----   11806.9
jitsi-meet                                  22  2048     2     -b----   12843.9
zabbix                                      23  2048     2     -b----   20275.1
centos7                                     24  2040     2     -b----   10898.2

HypervisorHost:~# xl list|grep -v 'Name ' |grep  -v 'Domain-0'  |wc -l
7


The backup strategy of the script is very simple: shut down each running VM, copy its image with rsync to a backup location in a bash shell loop over every virtual machine shown in the output of the xl command, and store the result under a preset local directory, in my case /backups/. The backup of each virtual machine is produced in a separate backup directory with a respective timestamp. In my case the backup VM .img files end up on 2x externally attached hard drives, each of which is a 4 Terabyte Seagate Backup Plus (Storage). The original version of the script, which I used as a template to come up with my Xen VM backup solution, was written in a slightly different form by Zhiqiang Ma. To limit the load on the Hypervisor, the script archives with a lowered priority (nice -n 10); this might not be required, or you might want to modify it to better suit your needs. Below is the script itself; you can fetch a copy of it as /usr/sbin/xen_vm_backups.sh :

#!/bin/bash

# Author: Zhiqiang Ma (http://www.ericzma.com/)
# Modified to work with xl and OpenXen by Georgi Georgiev – https://pc-freak.net
# Original creation date: Dec. 27, 2010
# Script takes all defined vms under xen_name_list and prepares backup of each
# after shutting down the machine prepares archive and copies archive in externally attached mounted /backup/disk1 HDD
# Latest update: 08.06.2021 G. Georgiev – hipo@pc-freak.net

mark_file=/backups/disk1/tmp/xen-bak-marker
log_file=/var/log/xen/backups/bak-$(date +%Y_%m_%d).log
err_log_file=/var/log/xen/backups/bak_err-$(date +%H_%M_%Y_%m_%d).log
xen_dir=/xen/domains
xen_vmconfig_dir=/etc/xen/
local_bak_dir=/backups/disk1/tmp
#bak_dir=xenbak@backup_host1:/lhome/xenbak
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains
#xen_name_list="haproxy2 pcfrxenweb jitsi-meet zabbix windows7 centos7 pcfrxenweb pcfrxen"
xen_name_list="windows7 haproxy2 jitsi-meet zabbix centos7"

if [ ! -d /var/log/xen/backups ]; then
echo mkdir -p /var/log/xen/backups
 mkdir -p /var/log/xen/backups
fi

if [ ! -d $bak_dir ]; then
echo mkdir -p $bak_dir
 mkdir -p $bak_dir

fi


# check whether bak runned last week
if [ -e $mark_file ] ; then
        echo  rm -f $mark_file
 rm -f $mark_file
else
        echo  touch $mark_file
 touch $mark_file
  # exit 0
fi

# set std and stderr to log file
        echo mv $log_file $log_file.old
       mv $log_file $log_file.old
        echo mv $err_log_file $err_log_file.old
       mv $err_log_file $err_log_file.old
        echo "exec 2> $err_log_file"
       exec 2> $err_log_file
        echo "exec > $log_file"
       exec > $log_file


# check whether the VM is running
# We only backup running VMs

echo "*** Check alive VMs"

xen_name_list_tmp=""

for i in $xen_name_list
do
        echo "/usr/sbin/xl list > /tmp/tmp-xen-list"
        /usr/sbin/xl list > /tmp/tmp-xen-list
  grepinlist=`grep $i" " /tmp/tmp-xen-list`;
  if [[ "$grepinlist" == "" ]]
  then
    echo $i is not alive.
  else
    echo $i is alive.
    xen_name_list_tmp=$xen_name_list_tmp" "$i
  fi
done

xen_name_list=$xen_name_list_tmp

echo "Alive VM list:"

for i in $xen_name_list
do
   echo $i
done

echo "End alive VM list."

###############################
date
echo "*** Backup starts"

###############################
date
echo "*** Copy VMs to local disk"

for i in $xen_name_list
do
  date
  echo "Shutdown $i"
        echo  /usr/sbin/xl shutdown $i
        /usr/sbin/xl shutdown $i
        if [ $? != '0' ]; then
                echo 'Clean shutdown failed, destroying Xen VM ...';
                /usr/sbin/xl destroy $i
        fi
  sleep 30

  echo "Copy $i"
  echo "Copy to local_bak_dir: $local_bak_dir"
      echo /usr/bin/rsync -avhW --no-compress --progress $xen_dir/$i/{disk.img,swap.img} $local_bak_dir/$i/
     time /usr/bin/rsync -avhW --no-compress --progress $xen_dir/$i/{disk.img,swap.img} $local_bak_dir/$i/
      echo /usr/bin/rsync -avhW --no-compress --progress $xen_vmconfig_dir/$i.cfg $local_bak_dir/$i.cfg
     time /usr/bin/rsync -avhW --no-compress --progress $xen_vmconfig_dir/$i.cfg $local_bak_dir/$i.cfg
  date
  echo "Create $i"
  # with vmmem=1024"
  # /usr/sbin/xm create $xen_dir/vm.run vmid=$i vmmem=1024
          echo /usr/sbin/xl create $xen_vmconfig_dir/$i.cfg
          /usr/sbin/xl create $xen_vmconfig_dir/$i.cfg
## Uncomment if you need to copy with scp somewhere
###       echo scp $log_file $bak_dir/xen-bak-111.log
###      echo  /usr/bin/rsync -avhW --no-compress --progress $log_file $local_bak_dir/xen-bak-111.log
done

####################
date
echo "*** Compress local bak vmdisks"

for i in $xen_name_list
do
  date
  echo "Compress $i"
      echo tar -z -cvf $bak_dir/$i-$(date +%Y_%m_%d).tar.gz $local_bak_dir/$i/ $local_bak_dir/$i.cfg
     time nice -n 10 tar -z -cvf $bak_dir/$i-$(date +%Y_%m_%d).tar.gz $local_bak_dir/$i/ $local_bak_dir/$i.cfg
    echo rm -vrf $local_bak_dir/$i/{disk.img,swap.img} $local_bak_dir/$i.cfg
    rm -vrf $local_bak_dir/$i/{disk.img,swap.img}  $local_bak_dir/$i.cfg
done

####################
date
echo "*** Copy local bak vmdisks to remote machines"

copy_remote () {
for i in $xen_name_list
do
  date
  echo "Copy to remote: vm$i"
        echo  scp $local_bak_dir/vmdisk0-$i.tar.gz $bak_dir/vmdisk0-$i.tar.gz
done

#####################
date
echo "Backup finishes"
        echo scp $log_file $bak_dir/bak-111.log

}

date
echo "Backup finished"

 

The things to configure before you start using the script to prepare backups for you are the following variables:

#  directory skeleton where the already prepared backups are stored
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains

# The configurations of the running Xen Virtual Machines
xen_vmconfig_dir=/etc/xen/
# a local directory that will be used for backup creation ( I prefer this directory to be on the backup storage location )
local_bak_dir=/backups/disk1/tmp
#bak_dir=xenbak@backup_host1:/lhome/xenbak
# the structure on the backup location where the daily .img backups will be produced with rsync and archived with tar / gzip
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains

# list here all the Virtual Machines you want the script to create backups of
xen_name_list="windows7 haproxy2 jitsi-meet zabbix centos7"

If you need the script to copy the backups of the Virtual Machine images to an external backup server over the local LAN, or over an encrypted connection to a remote Internet-located host with passwordless ssh authentication (once you have prepared the machines to automatically log in over ssh without a password with a specific user), you can uncomment the commented section of the script and adapt it to copy to the remote host, as sketched below.
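For instance, a minimal passwordless-key setup plus the copy step could look like the lines below; the backup user, host name and destination path are only assumptions (taken from the commented bak_dir line in the script) that you would replace with your own:

# one-time: generate a key on the hypervisor and push it to the backup host
ssh-keygen -t ed25519 -f ~/.ssh/xenbak_key -N ''
ssh-copy-id -i ~/.ssh/xenbak_key.pub xenbak@backup_host1

# inside the script's commented section, copy the produced archives to the remote host
rsync -avhW -e 'ssh -i ~/.ssh/xenbak_key' /backups/disk1/xen-backups/ xenbak@backup_host1:/lhome/xenbak/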

Once you have put the script in place as /usr/sbin/xen_vm_backups.sh, use a cronjob to prepare the backups on a regular basis; for example I use the following cron to produce a working copy of the Virtual Machine backups:
 

# crontab -u root -l 

# create windows7 haproxy2 jitsi-meet centos7 zabbix VMs backup once a month
00 06 1,2,3,4,5,6,7,8,9,10,11,12 * * /usr/sbin/xen_vm_backups.sh 2>&1 >/dev/null


I do clean up virtual machines Images that are older than 95 days with another cron job

# crontab -u root -l

# Delete xen image files older than 95 days to clear up space from backup HDD
45 06 17 * * find /backups/disk1/xen-backups/xen_images* -type f -mtime +95 -exec rm {} \; 2>&1 >/dev/null

#### Delete xen config backups older than 1 year+3 days (368 days)
45 06 17 * * find /backups/disk1/xen-backups/xen_config* -type f -mtime +368 -exec rm {} \; 2>&1 >/dev/null

 


Howto install Google Chrome web browser on CentOS Linux 7

Friday, December 11th, 2020

After installing a CentOS 7 Linux testing Virtual Machine in Oracle VirtualBox 6.1 to conduct some testing with php / html / javascript web script pages, and to use the VM for other work stuff that I later plan to deploy on production CentOS systems, I came to the requirement of having a working Google Chrome browser.

In that regard, next to Firefox, I needed to test the web applications in the commercial Google Chrome to see what its users can expect. For those who don't know it, Google Chrome is based on the Chromium open source browser (https://chromium.org), which is available by default via the default CentOS EPEL repositories.

One remark to make here is that before installing Google Chrome, I also tested my web scripts first with Chromium. To install the free Chromium browser on CentOS:

[root@localhost mozilla_test0]# yum install chromium
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.wwfx.net
 * epel: mirror.t-home.mk
 * extras: mirror.wwfx.net
 * updates: mirror.wwfx.net
Resolving Dependencies
--> Running transaction check
---> Package chromium.x86_64 0:85.0.4183.121-1.el7 will be installed
--> Processing Dependency: chromium-common(x86-64) = 85.0.4183.121-1.el7 for package: chromium-85.0.4183.121-1.el7.x86_64
--> Processing Dependency: nss-mdns(x86-64) for package: chromium-85.0.4183.121-1.el7.x86_64
--> Processing Dependency: libminizip.so.1()(64bit) for package: chromium-85.0.4183.121-1.el7.x86_64
--> Running transaction check
---> Package chromium-common.x86_64 0:85.0.4183.121-1.el7 will be installed
---> Package minizip.x86_64 0:1.2.7-18.el7 will be installed
---> Package nss-mdns.x86_64 0:0.14.1-9.el7 will be installed
--> Finished Dependency Resolution

 

Dependencies Resolved

============================================================================================================================================
 Package                              Arch                        Version                                   Repository                 Size
============================================================================================================================================
Installing:
 chromium                             x86_64                      85.0.4183.121-1.el7                       epel                       97 M
Installing for dependencies:
 chromium-common                      x86_64                      85.0.4183.121-1.el7                       epel                       16 M
 minizip                              x86_64                      1.2.7-18.el7                              base                       34 k
 nss-mdns                             x86_64                      0.14.1-9.el7                              epel                       43 k

Transaction Summary
============================================================================================================================================
Install  1 Package (+3 Dependent packages)

Total download size: 113 M
Installed size: 400 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): minizip-1.2.7-18.el7.x86_64.rpm                                                                               |  34 kB  00:00:00     
(2/4): chromium-common-85.0.4183.121-1.el7.x86_64.rpm                                                                |  16 MB  00:00:08     
(3/4): chromium-85.0.4183.121-1.el7.x86_64.rpm                                                                       |  97 MB  00:00:11     
(4/4): nss-mdns-0.14.1-9.el7.x86_64.rpm                                                                              |  43 kB  00:00:00     
——————————————————————————————————————————————–
Total                                                                                                       9.4 MB/s | 113 MB  00:00:12     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : minizip-1.2.7-18.el7.x86_64                                                                                              1/4
  Installing : chromium-common-85.0.4183.121-1.el7.x86_64                                                                               2/4
  Installing : nss-mdns-0.14.1-9.el7.x86_64                                                                                             3/4
  Installing : chromium-85.0.4183.121-1.el7.x86_64                                                                                      4/4
  Verifying  : chromium-common-85.0.4183.121-1.el7.x86_64                                                                               1/4
  Verifying  : minizip-1.2.7-18.el7.x86_64                                                                                              2/4
  Verifying  : chromium-85.0.4183.121-1.el7.x86_64                                                                                      3/4
  Verifying  : nss-mdns-0.14.1-9.el7.x86_64                                                                                             4/4

Installed:
  chromium.x86_64 0:85.0.4183.121-1.el7                                                                                                     

Dependency Installed:
  chromium-common.x86_64 0:85.0.4183.121-1.el7            minizip.x86_64 0:1.2.7-18.el7            nss-mdns.x86_64 0:0.14.1-9.el7           

Complete!

The Chromium browser worked, however it is much more buggy than Google Chrome, and the load it puts on the machine as well as the resources it consumes are terrible compared to the proprietary Google Chrome.

Usually I don't like Google Chrome, as it is a proprietary product; I don't even install it on my Linux desktops, nor use it, since using it goes against any security-wise practice, but I needed it this time..

Thus, to save myself some pains, I proceeded and installed Google Chrome.
Installation of Google Chrome is a straightforward process: you download the latest rpm and run the below command to resolve all library dependencies, and you're in:

chromium-open-source-browser-on-centos-7-screenshot

 

[root@localhost mozilla_test0]# rpm -ivh google-chrome-stable_current_x86_64.rpm
warning: google-chrome-stable_current_x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 7fac5991: NOKEY
error: Failed dependencies:
    liberation-fonts is needed by google-chrome-stable-87.0.4280.88-1.x86_64
    libvulkan.so.1()(64bit) is needed by google-chrome-stable-87.0.4280.88-1.x86_64
[root@localhost mozilla_test0]# wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
--2020-12-11 07:03:02--  https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
Resolving dl.google.com (dl.google.com)… 172.217.17.238, 2a00:1450:4017:802::200e
Connecting to dl.google.com (dl.google.com)|172.217.17.238|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 72280700 (69M) [application/x-rpm]
Saving to: ‘google-chrome-stable_current_x86_64.rpm’

 

100%[==================================================================================================>] 72,280,700  11.0MB/s   in 6.6s   

2020-12-11 07:03:09 (10.4 MB/s) – ‘google-chrome-stable_current_x86_64.rpm’ saved [72280700/72280700]

[root@localhost mozilla_test0]# yum localinstall google-chrome-stable_current_x86_64.rpm
Loaded plugins: fastestmirror, langpacks
Examining google-chrome-stable_current_x86_64.rpm: google-chrome-stable-87.0.4280.88-1.x86_64
Marking google-chrome-stable_current_x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package google-chrome-stable.x86_64 0:87.0.4280.88-1 will be installed
--> Processing Dependency: liberation-fonts for package: google-chrome-stable-87.0.4280.88-1.x86_64
Loading mirror speeds from cached hostfile
 * base: mirror.wwfx.net
 * epel: mirrors.uni-ruse.bg
 * extras: mirror.wwfx.net
 * updates: mirror.wwfx.net
--> Processing Dependency: libvulkan.so.1()(64bit) for package: google-chrome-stable-87.0.4280.88-1.x86_64
--> Running transaction check
---> Package liberation-fonts.noarch 1:1.07.2-16.el7 will be installed
--> Processing Dependency: liberation-narrow-fonts = 1:1.07.2-16.el7 for package: 1:liberation-fonts-1.07.2-16.el7.noarch
---> Package vulkan.x86_64 0:1.1.97.0-1.el7 will be installed
--> Processing Dependency: vulkan-filesystem = 1.1.97.0-1.el7 for package: vulkan-1.1.97.0-1.el7.x86_64
--> Running transaction check
---> Package liberation-narrow-fonts.noarch 1:1.07.2-16.el7 will be installed
---> Package vulkan-filesystem.noarch 0:1.1.97.0-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================
 Package                             Arch               Version                      Repository                                        Size
============================================================================================================================================
Installing:
 google-chrome-stable                x86_64             87.0.4280.88-1               /google-chrome-stable_current_x86_64             227 M
Installing for dependencies:
 liberation-fonts                    noarch             1:1.07.2-16.el7              base                                              13 k
 liberation-narrow-fonts             noarch             1:1.07.2-16.el7              base                                             202 k
 vulkan                              x86_64             1.1.97.0-1.el7               base                                             3.6 M
 vulkan-filesystem                   noarch             1.1.97.0-1.el7               base                                             6.3 k

Transaction Summary
============================================================================================================================================
Install  1 Package (+4 Dependent packages)

Total size: 231 M
Total download size: 3.8 M
Installed size: 249 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): liberation-fonts-1.07.2-16.el7.noarch.rpm                                                                     |  13 kB  00:00:00     
(2/4): liberation-narrow-fonts-1.07.2-16.el7.noarch.rpm                                                              | 202 kB  00:00:00     
(3/4): vulkan-filesystem-1.1.97.0-1.el7.noarch.rpm                                                                   | 6.3 kB  00:00:00     
(4/4): vulkan-1.1.97.0-1.el7.x86_64.rpm                                                                              | 3.6 MB  00:00:01     
——————————————————————————————————————————————–
Total                                                                                                       1.9 MB/s | 3.8 MB  00:00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : vulkan-filesystem-1.1.97.0-1.el7.noarch                                                                                  1/5
  Installing : vulkan-1.1.97.0-1.el7.x86_64                                                                                             2/5
  Installing : 1:liberation-narrow-fonts-1.07.2-16.el7.noarch                                                                           3/5
  Installing : 1:liberation-fonts-1.07.2-16.el7.noarch                                                                                  4/5
  Installing : google-chrome-stable-87.0.4280.88-1.x86_64                                                                               5/5
Redirecting to /bin/systemctl start atd.service
  Verifying  : vulkan-1.1.97.0-1.el7.x86_64                                                                                             1/5
  Verifying  : 1:liberation-narrow-fonts-1.07.2-16.el7.noarch                                                                           2/5
  Verifying  : 1:liberation-fonts-1.07.2-16.el7.noarch                                                                                  3/5
  Verifying  : google-chrome-stable-87.0.4280.88-1.x86_64                                                                               4/5
  Verifying  : vulkan-filesystem-1.1.97.0-1.el7.noarch                                                                                  5/5

Installed:
  google-chrome-stable.x86_64 0:87.0.4280.88-1                                                                                              

Dependency Installed:
  liberation-fonts.noarch 1:1.07.2-16.el7         liberation-narrow-fonts.noarch 1:1.07.2-16.el7       vulkan.x86_64 0:1.1.97.0-1.el7      
  vulkan-filesystem.noarch 0:1.1.97.0-1.el7      

Complete!
 

Once Chrome is installed, you can either run it from a terminal such as gnome-terminal:
 

[test@localhost ~]$ google-chrome-stable &


Google-chrome-screenshot-on-centos-linux

Or find it in the list of CentOS programs:

Applications → Internet → Google Chrome

google-chrome-programs-list-internet-cetnos

The last step is to make Google Chrome easily updatable, to keep the VM at a high security level and let Chrome get updated every time you apply security updates with yum check-update && yum upgrade.
For that it is necessary to create a new custom repo file
/etc/yum.repos.d/google-chrome.repo

[root@localhost mozilla_test0]# vim /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub

Now let's import the Google GPG signing key:

[root@localhost mozilla_test0]# rpmkeys --import https://dl.google.com/linux/linux_signing_key.pub
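With the repo and key in place, a quick way to confirm that future Chrome updates will flow in through yum is to query the new repository for the package installed above:

[root@localhost mozilla_test0]# yum check-update google-chrome-stable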

That's all folks, google-chrome is at your disposal.

Howto Upgrade IBM Spectrum Protect Backup Client TSM 7.X to 8.1.8, Update Tivoli 8.1.8 to 8.1.11 on CentOS and Redhat Linux

Thursday, December 3rd, 2020

 

IBM-spectrum-protect-backup-logo-tivoli-tsm-logo

On another day of system administrator boredom, we had the task of upgrading the Tivoli TSM Backup clients running on 20+ machines powered by CentOS and RHEL Linux, to bring the systems to the latest patched IBM Spectrum Protect Backup client version available from IBM. For the patching I've used a central server where I initially downloaded the provided TSM client binary archives. From this machine we copied TivSM*.tar to each and every system that needed to be patched, and then patched it. The task is not too complex, as the TSM clients on the machines are all at the same version and all run a recently patched version of Linux. Hence, to make sure all works as expected, we first tested that TSM upgrades cleanly from 7.X.X to 8.X.X on one machine, and then tested the 8.1.8 to 8.1.11 upgrade on another one. Once we had confirmed that backups work as expected after the upgrade, we proceeded to do it massively on each of the remaining 20+ hosts.
The goal of the article below is to help the lazy sysadmin prepare a TSM Backup upgrade procedure and standardize the TSM upgrade, which, as with many of IBM's software products, is very specific; its upgrade requires a bit of manual work and extra caution, as there seems to be no easy way (or at least I don't know of one) to do the upgrade by simply adding an RPM repository and doing something like yum install tivsm*.


0. Check if there is at least 2G free of space

According to the documentation, the minimum you need for a functional install, without ending up with it half installed or filling up your filesystem, is 2 Gigabytes of free space on the filesystem where the .tar and rpms will live.

Thus check the situation on the filesystem where you will store the .tar archive, extract the .RPM files and install them:

# df -h

1. Download the correct tarball with 8.1 Client

On one central machine you need to download the Tivoli archive; you can do that via wget / curl / lynx, whatever is at hand on the Linux server.

As of the time of writing this article, TSM 8.1.11 is located at the URL:

http://public.dhe.ibm.com/storage/tivoli-storage-management/maintenance/client/v8r1/Linux/LinuxX86/BA/v8111/

I've made a local download mirror of Tivoli TSM 8.1.11 here.
In case you need to install the IBM Spectrum Backup client in a PCI-secured environment on a DMZ-ed LAN network, you can first download it from your work PC and then, via a Citrix client upload program or WinSCP, upload it to a central replication host, from where you will later copy it to each of the other server nodes that need to be upgraded.

Let's copy the archive to all server hosts where we want it installed later, using a small hack.

Assuming you already have an Excel document or a plain text document with all the IPs of the affected hosts where TSM needs to be upgraded, extract this data and from it create a plain text file /home/user/hosts.txt containing all the machine IPs, one per line (separated by newlines, \n), so you can loop over each one and use scp to send the files.

– Replicate Tivoli tar to all machine hosts where you want to get IBM Spectrum installed or upgraded.
Do it with a loop like this:

# for i in $(cat hosts.txt); do scp 8.1.11.0-TIV-TSMBAC-LinuxX86.tar user@$i:/home/user/; done

Temporarily copy your server password into the clipboard (assuming the password is identical on each machine) and paste it at each host's login prompt to initiate the transfer.
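If pasting the password 20+ times is too tedious and public key authentication is allowed on the hosts, a hedged alternative is to push your key once and let the scp loop above run unattended (user and hosts.txt as in the loop above):

# for i in $(cat hosts.txt); do ssh-copy-id user@$i; done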
 

2. SSH to each of the Machine hosts IPs

Once you log in to the host you want to upgrade,
go to your user $HOME /home/user and create a directory where we'll temporarily store the Tivoli archive files and extract the RPMs:

[root@linux-server user]# mkdir -p ~/tsm/TSM_BCK/
[root@linux-server user]# mv 8.1.11.0-TIV-TSMBAC-LinuxX86.tar ~/tsm
[root@linux-server user]# cd tsm
[root@linux-server user]# tar -xvvf 8.1.11.0-TIV-TSMBAC-LinuxX86.tar
gskcrypt64-8.0.55.17.linux.x86_64.rpm
GSKit.pub.pgp
gskssl64-8.0.55.17.linux.x86_64.rpm
README_api.htm
README.htm
RPM-GPG-KEY-ibmpkg
TIVsm-API64.x86_64.rpm
TIVsm-APIcit.x86_64.rpm
TIVsm-BAcit.x86_64.rpm
TIVsm-BAhdw.x86_64.rpm
TIVsm-BA.x86_64.rpm
TIVsm-filepath-source.tar.gz
TIVsm-JBB.x86_64.rpm
TIVsm-WEBGUI.x86_64.rpm
update.txt

3. Create backup of the old client config files

It is always a good idea to keep the old configuration files around:

[root@linux-server tsm]# cp -av /opt/tivoli/tsm/client/ba/bin/dsm.opt ~/tsm/TSM_BCK/dsm.opt_bak_$(date +'%Y_%m_%d')
[root@linux-server tsm]# cp -av /opt/tivoli/tsm/client/ba/bin/dsm.sys ~/tsm/TSM_BCK/dsm.sys_bak_$(date +'%Y_%m_%d')

[root@linux-server tsm]# [[ -f /etc/adsm/TSM.PWD ]] && cp -av /etc/adsm/TSM.PWD ~/tsm/TSM_BCK/ || echo 'file doesnt exist'

/etc/adsm/TSM.PWD is only there as a legacy from TSM version 7, where it contained the encrypted passwords used for updates. In TSM v8 the file is no longer there, as a new mechanism for sensitive data was introduced, so on a Tivoli 8.X host the command above will simply print 'file doesnt exist'.

!! Note – if dsm.opt , dsm.sys files are on different locations – please use correct full path locations !!

4. Stop  dsmcad – TSM Service daemon

[root@linux-server tsm]# systemctl stop dsmcad

5. Locate and deinstall all old Clients

Depending on the version you upgrade from: if you're upgrading from TSM version 7 to 8, you will get output like:

[root@linux-server tsm]# rpm -qa | grep 'TIVsm-'
TIVsm-BA-7.1.6-2.x86_64
TIVsm-API64-7.1.6-2.x86_64

If you're one of those paranoid admins, you can remove the TIVsm packages one by one:

[root@linux-server tsm]# rpm -e TIVsm-BA-7.1.6-2.x86_64
[root@linux-server tsm]# rpm -e TIVsm-API64-7.1.6-2.x86_64

If instead you're upgrading from version 8.1.8 to 8.1.11, for example due to the security CVE advisories recently published by IBM (an IBM Runtime vulnerability affecting the IBM Spectrum Protect Backup Archive Client, and a vulnerability in Apache Commons / Log4j affecting the IBM Spectrum Protect Backup Archive Client), the output will look like:

[root@linux-server tsm]# rpm -qa | grep 'TIVsm-'
TIVsm-API64-8.1.8-0.x86_64
TIVsm-BA-8.1.8-0.x86_64

Assuming you're not scared of a bit of automation, you can do it straight away with the below one-liner too 🙂

# rpm -e $(rpm -qa | grep TIVsm)

[root@linux-server tsm]# rpm -qa | grep gsk
[root@linux-server tsm]# rpm -e gskcrypt64 gskssl64

6. Check uninstallation success:

[root@linux-server tsm]# rpm -qa | grep TIVsm
[root@linux-server tsm]# rpm -qa | grep gsk

Here you should get empty output if the packages are no longer on the system; e.g. empty output is good output! 🙂

7. Install new client IBM Spectrum Client (Tivoli Storage Manager) and lib dependencies

[root@linux-server tsm]# rpm -ivh gskcrypt64-8.0.55.17.linux.x86_64.rpm
[root@linux-server tsm]# rpm -ivh gskssl64-8.0.55.17.linux.x86_64.rpm

 If you're lazy to type you can do as well

[root@linux-server tsm]# rpm -Uvh gsk*

The next step is to install the main Tivoli SM components: the API files and BA (the Backup Archive Client).

[root@linux-server tsm]# rpm -ivh TIVsm-API64.x86_64.rpm
[root@linux-server tsm]# rpm -ivh TIVsm-BA.x86_64.rpm

If you have to do it on multiple servers and you do it manually following a guide like this, you might instead want to install them with one liner.

[root@linux-server tsm]# rpm -ivh TIVsm-API64.x86_64.rpm TIVsm-BA.x86_64.rpm

There are also some non-mandatory "Common Inventory Technology" components (in our case, using just the API, we did not need them); in case you need them on your servers due to your backup architecture, install the below rpm files as well:

## rpm -ivh TIVsm-APIcit.x86_64.rpm

## rpm -ivh TIVsm-BAcit.x86_64.rpm

The following packages are only needed if you use the WEBGUI TSM GUI management, JBB (Journal Based Backup) or BAhdw (the ONTAP library):


— TIVsm-WEBGUI.x86_64.rpm
— TIVsm-JBB.x86_64.rpm
— TIVsm-BAhdw.x86_64.rpm

8. Start and enable dsmcad service

[root@linux-server tsm]# systemctl stop dsmcad

You will get

##Warning: dsmcad.service changed on disk. Run 'systemctl daemon-reload' to reload units.

[root@linux-server tsm]# systemctl daemon-reload

[root@linux-server tsm]# systemctl start dsmcad


## enable dsmcad – it is disabled by default after install

[root@linux-server ~]# systemctl enable dsmcad

[root@linux-server tsm]# systemctl status dsmcad

9. Check the dsmcad service is really running

Once enabled, IBM TSM spawns a dsmcad process in the background; if it started properly you should see the process running:

[root@linux-server tsm]# ps -ef|grep -i dsm|grep -v grep
root      2881     1  0 18:05 ?        00:00:01 /usr/bin/dsmcad

If the process is not there, there might be some library or something else out of place preventing the process from starting…

10. Check DSMCAD /var/tsm logs for errors

With the dsmcad process enabled and running in the background:

[root@linux-server tsm]# grep -i Version /var/tsm/sched.log|tail -1
12/03/2020 18:06:29   Server Version 8, Release 1, Level 10.000

 

[root@linux-server tsm]# cat /var/tsm/dsmerror.log

To see the current TSM configuration we can grep out the comment lines (lines starting with * ):

[root@linux-server tsm]# grep -v '*' /opt/tivoli/tsm/client/ba/bin/dsm.sys

Example Configuration of the agent:
—————————————————-
   *TSM SERVER NODE Location
   Servername           tsm_server
   COMMmethod           TCPip
   TCPPort              1400
   TCPServeraddress     tsmserver2.backuphost.com
   NodeName             NODE.SERVER-TO-BACKUP-HOSTNAME.COM
   Passwordaccess       generate
   SCHEDLOGNAME         /var/tsm/sched.log
   SCHEDLOGRETENTION    21 D
   SCHEDMODE            POLLING
   MANAGEDServices      schedule
   ERRORLOGNAME         /var/tsm/dsmerror.log
   ERRORLOGRETENTION    30 D
   INCLEXCL             /opt/tivoli/tsm/client/ba/bin/inclexcl.tsm

11. Remove tsm install directory tar ball and rpms to save space on system

The tarball of the current version of the Tivoli client is 586 Megabytes:

[root@linux-server tsm]# du -hsc 8.1.11.0-TIV-TSMBAC-LinuxX86.tar
586M    8.1.11.0-TIV-TSMBAC-LinuxX86.tar

Some systems are on purpose configured to have little space under their /home directory,
hence it is a good idea to clear up the unnecessary files after completion.

Lets get rid of all the IBM Spectrum archive source files and the rest of RPMs used for installation.

[root@linux-server tsm]# rm -rf ~/tsm/{*.tar,*.rpm,*.gpg,*.htm,*.txt}

12. Check backups are really created on the configured remote Central backup server

To make sure that after the upgrade the backups are still continuously created and properly stored on the remote central IBM Tivoli backup server, either manually initiate a backup or wait for, let's say, a day and run the dsmc client to show all backups created since the previous day. To make sure you'll not get empty output, you can modify some file on purpose, simply by opening it and writing it out without changing anything, e.g. modify your ~/.bashrc or ~/.bash_profile.
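A quick sketch of forcing such a change and, optionally, backing the file up immediately by hand rather than waiting for the scheduler (the filespec here is just an example):

[root@linux-server tsm]# echo "# tsm upgrade test" >> /root/.bash_profile
[root@linux-server tsm]# dsmc incremental /root/.bash_profile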

## List all backups for '/' root directory from -fromdate='DD/MM/YY'

[root@linux-server tsm]# dsmc
Protect>
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 11.0
  Client date/time: 12/03/2020 18:14:03
(c) Copyright by IBM Corporation and other(s) 1990, 2020. All Rights Reserved.

Node Name: NODE.SERVER-TO-BACKUP-HOSTNAME.COM
Session established with server TSM2_SERVER: AIX
  Server Version 8, Release 1, Level 10.000
  Server date/time: 12/03/2020 18:14:04  Last access: 12/03/2020 18:06:29
 
Protect> query backup -subdir=yes "/" -fromdate=12/3/2020
           Size        Backup Date                Mgmt Class           A/I File
           ----        -----------                ----------           --- ----
         6,776  B  12/03/2020 01:26:53             DEFAULT              A  /etc/freshclam.conf
         6,685  B  12/03/2020 01:26:53             DEFAULT              A  /etc/freshclam.conf-2020-12-02
         5,602  B  12/03/2020 01:26:53             DEFAULT              A  /etc/hosts
         5,506  B  12/03/2020 01:26:53             DEFAULT              A  /etc/hosts-2020-12-02
           398  B  12/03/2020 01:26:53             DEFAULT              A  /opt/tivoli/tsm/client/ba/bin/tsmstats.ini
       114,328  B  12/03/2020 01:26:53             DEFAULT              A  /root/.bash_history
           403  B  12/03/2020 01:26:53             DEFAULT              A  /root/.lesshst

VIM Project (VI Improved) IDE editor extension to facilitate web development with the vi enhanced editor

Wednesday, August 25th, 2010

I have used VIM as my editor of choice for many years already.
Yet it is only recently that I started using it for PHP ZF (Zend Framework) web development.

A few days ago I blogged about how to configure vimrc for PHP syntax highlighting (a nicely pre-configured vimrc to improve the daily text editing experience).

These enhancements significantly improve the overall PHP code editing with VIM. However, I felt something was still missing, because I didn't have the power and functionality of a complete IDE such as, for instance, the Eclipse IDE.

I was pretty sure that VIM had to have a way to be used in a similar fashion to a fully functional IDE, and I looked around the net for any VIM plugins that would add an IDE-like coding interface to vim.

I then came across a vim plugin called VIM Project: Organize/Navigate projects of files (like IDE/buffer explorer).

The latest VIM Project as of the time of writing is 1.4.1, and I've mirrored it here.

The installation of the VIM Project extension is pretty straightforward; to install it and start using it on your PC, issue the commands:

1. Install the project VIM add-on

debian:~$ wget https://www.pc-freak.net/files/project-1.4.1.tar.gz
debian:~$ mv project-1.4.1.tar.gz ~/.vim/
debian:~$ cd ~/.vim/
debian:~$ tar -zxvvf project-1.4.1.tar.gz

2. Load the plugin

Launch your vim editor and type :Project (without a space between : and P).
You will further see a screen like:

vim project entry screen

3. You will have to press C within the Project window to load a new project

Then you will have to type a directory from which to load the project's source files:

vim project enter file source directory screen

You will be prompted to type a project name, like in the screenshot below:

vim project load test project

4. Next you will have to type a CD (Current Dir) parameter
To see more about the CD parameter consult vim project documentation by typing in main vim pane :help project

The appearing screen will be something like:

vim project extension cd parameter screen

5. Thereafter you will have to type a file filter

The file filter is necessary; it instructs the vim project plugin to load all files with the specified extension into the vim project pane window.

You will experience a screen like:


vim project plugin file filter screen

There will follow a short interval in which all files matching the filter get loaded into the VIM project pane, and your Zend Framework, PHP or any other source files will be listed in a directory tree structure like in the picture shown below:

vim project successful loaded project screen

6. Saving loaded project hierarchy state

In order to save a state of a loaded project within the VIM project window pane you will have to type in vim, let's say:

:saveas .projects/someproject

Later on to load back the saved project state you will have to type in vim :r .projects/someproject

You will now have almost fully functional development IDE on top of your simple vim text editor.

You can easily navigate within the project files loaded in the Project extension pane and select a file you would like to open. Whenever a source file is opened and you work on it, to switch between the Project file listing pane and the opened source code file you have to press CTRL+w twice, or in vim language C-w C-w.
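If pressing CTRL+w twice feels clumsy, a small optional sketch for your ~/.vimrc maps single key chords to jump left and right between the project pane and the edited file (these mappings are my own suggestion, not part of the plugin):

" jump to the window on the left (the Project pane)
nnoremap <C-Left> <C-w>h
" jump to the window on the right (the opened source file)
nnoremap <C-Right> <C-w>l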

To even further sophisticate your web development in PHP with vim you can add within your ~/.vimrc file the following two lines:

" run file with PHP CLI (CTRL-M)
:autocmd FileType php noremap <C-M> :w!<CR>:!/usr/bin/php %<CR>
" PHP parser check (CTRL-L)
:autocmd FileType php noremap <C-L> :!/usr/bin/php -l %<CR>

In the above vim configuration directives the " character starts a comment line, and the :autocmd lines are the actual vim declarations.
The first :autocmd … declaration will instruct vim to execute your current opened php source file with the php cli interpreter whenever a key press of CTRL+M (C-m) occurs.

The second :autocmd … will add to your vim a shortcut, so whenever a CTRL+L (C-l) key combination is pressed VIM editor will check your current edited source file for syntax errors.
Therefore this will enable you to very easily periodically check if your file syntax is correct.

Well this things were really helpful to me, so I hope they will be profitable for you as well.
Cheers 🙂

Set all logs to log to to physical console /dev/tty12 (tty12) on Linux

Wednesday, August 12th, 2020

tty linux-logo how to log everything to last console terminal tty12

Those who have administered servers since the days of the birth of Linux, and who have actively used GNU / Linux or any other UNIX over the years, know how practical it can be to have all running services / kernel messages / errors and warnings logged to a physical console.

Traditionally, from the days I was learning the Linux basics, I was shown how to do this on an old Debian Sarge 3.0 Linux without systemd, and on all the Linux distributions I used either as home systems or for servers (Red Hat 9.0, Caldera, Mandrake) I have always configured the output of all messages to go to the last, easy-to-access console /dev/tty12 (for those who have never used it: console switching under Linux in plain text console mode is done with the key combination CTRL + ALT + F1 .. F12).

In recent times, however, with the introduction of systemd, things changed quite a bit, as console messages are no longer handled via /etc/inittab, which used to be the place to add and respawn physical consoles tty1, tty2 … tty7 (the default number of consoles added on Linux was usually 7), where I had to manually include more respawn lines for each extra console.
Nowadays, as of year 2020, in Linux distros /etc/inittab is no longer there, being obsoleted, and the console print-out of input / output messages is handled by systemd.
 

1. Enable Physical TTYs from TTY8 till TTY12 etc.


The number of default consoles existing in most Linux distributions I've seen is still from tty1 to tty7. Hence, to add more tty consoles and be able to switch not only up to tty7 but up to tty12, once you're connected to the server via a remote ILO (Integrated Lights Out) / iDRAC (Dell Remote Access Controller) / IPMI / IMM (Integrated Management Module), you have to tell systemd to create them by issuing the below systemctl commands:
 

 

# systemctl enable getty@tty8.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty8.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty9.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty9.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty10.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty10.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty11.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty11.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty12.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty12.service -> /lib/systemd/system/getty@.service.


Once the TTYs tty8 to tty12 are enabled, you will be able to switch to these consoles, either if you have a physical LCD / CRT monitor or KVM switch connected to the machine mounted on the rack shelf when you're in the Data Center, or once connected remotely via the Management IP interface (ILO) remote console.
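As a side note, from a root shell on the machine you can also jump to a given virtual terminal programmatically with the chvt utility from the kbd package, e.g.:

# chvt 12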
 

2. Taking screenshot of the physical console TTY with fbcat


For example below is a screenshot of the 10th enabled tty10:

tty10-linux-screenshot-fbcat-how-to-screenshot-console

As you can see in the screenshot, I've used the nice tool fbcat, which can take a screenshot of the console framebuffer. This is very useful especially if remote access via an SSH client such as PuTTY / MobaXterm is not available and you only have physically attached monitor access, for example in DCs that are behind a heavy firewall preventing anyone from reaching the system remotely. A typical use case is screenshotting the physical console when a major hardware failure occurs and you need to dump the hardware error message to a flash drive that will later be handed over to technicians to analyze, so the broken server hardware part can be exchanged.

Screenshots of the CLI with fbcat are possible across most Linux distributions.

In Debian you first have to install the tool via:
 

# apt install --yes fbcat


and on RedHats / CentOS / Fedoras

# yum install -y fbcat


Taking screenshot once tool is on the server of whatever you have printed on console is as easy as

# fbcat > tty_name.ppm


Note that you might want to convert the created .ppm picture to .png with any converter, such as ImageMagick's convert command, or if you have a GUI, perhaps with the GNU Image Manipulation Program (GIMP).
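For instance, a one-liner with ImageMagick, assuming the dump was saved as tty_name.ppm as above, could be:

# convert tty_name.ppm tty_name.png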

3. Enabling every rsyslog handled message to log to Physical TTY12


To make everything, such as error, notice, debug and warning messages, instantly log to the newly added /dev/tty12:

Open /etc/rsyslog.conf and to the end of the file append below line :
 

daemon,mail.*;\
   news.=crit;news.=err;news.=notice;\
   *.=debug;*.=info;\
   *.=notice;*.=warn   /dev/tty12


To make rsyslog load its new config, check its status and restart it:

 

# systemctl status rsyslog

 

 

 

rsyslog.service – System Logging Service
   Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-08-10 04:09:36 EEST; 2 days ago
     Docs: man:rsyslogd(8)
           https://www.rsyslog.com/doc/
 Main PID: 671 (rsyslogd)
    Tasks: 4 (limit: 4915)
   Memory: 12.5M
   CGroup: /system.slice/rsyslog.service
           └─671 /usr/sbin/rsyslogd -n -iNONE

 

авг 12 00:00:05 pcfreak rsyslogd[671]:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="671" x-info="https://www.rsyslo
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

 

# systemctl restart rsyslog
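To verify that messages really end up on the new console, you can emit a test syslog line with the standard logger utility and watch it appear on tty12 (the daemon facility matches the rule added above):

# logger -p daemon.info "Test message that should show up on /dev/tty12"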


That's all folks. Navigate by pressing CTRL + ALT + F12 simultaneously to get to tty12, or use ALT + LEFT / ALT + RIGHT ARROW (the console switch keys) until you get to the console, where everything should now be logged.

Enjoy, and if you like this article, share it to tell your sysadmin friends about this nice hack! 🙂

 

 

 

Linux Send Monitoring Alert Emails without Mail Server via relay SMTP with ssmtp / msmtp

Friday, July 10th, 2020

ssmtp-linux-server-sending-email-without-a-local-mail-server-mta-relay-howto

If you have to set up a new Linux server where you need to monitor certain locally running daemons with custom scripts on the local machine (Nagios / Zabbix / Grafana etc.) that should notify you when a certain criterion is matched for locally running custom programs or services, or you simply want your local existing UNIX accounts to be able to send outbound Emails to the Internet, read on.

Then usually you would need to install a fully functional SMTP Email server, that was Sendmail or QMAIL in the old times of the early 21st century and is usually Postfix or Exim in recent days, and configure it to use some kind of SMTP as a relay mail server.

The common relay SMTP setting would be something such as Google's smtp.gmail.com, Yahoo!'s smtp.mail.yahoo.com relay host, mail.com, an externally configured physical MTA server with proper PTR / MX records, or an SMTP hosted on a virtual machine living in Amazon's AWS or m$ Azure that is capable of delivering Emails to the Internet.

Configuring the locally installed Mail Transport Agent (MTA) to use a relay server is a relatively easy task to do, but of course why should you have a fully stacked MTA service with a number of unnecessary components such as an Email queue, locally created mailboxes, firewall rules, DNS records, SMTP Auth, DKIM keys etc. and even the ability to accept emails back, if you just want to simply send and forget with a confirmation that the remote email was sent successfully?

This is often the case for some machines, and especially with the inclusion of technologies such as Kubernetes / clustered environments / virtual machines, small proggies such as ssmtp / msmtp that can send mail without a fully functional mail server installed on localhost ( 127.0.0.1 ) are true gems.

The ssmtp program is a simple send-only sendmail emulator that has been around in Debian GNU / Linux, Ubuntu, CentOS and mostly all Linuxes for quite some time, but recently the Debian package has been orphaned, so on a newer deb based server host you need to use msmtp instead.
 

1. Install ssmtp on CentOS / Fedora / RHEL Linux

In RPM distributions you can't install it until the epel-release repository is enabled:

[root@centos:~]# yum --enablerepo=extras install epel-release

[root@centos:~]# yum install ssmtp


2. Install ssmtp / msmtp on Debian / Ubuntu Linux

If you run older version of Debian based distribution the package to install is ssmtp, e.g.:

root@debian:~# apt-get install --yes ssmtp


On newer Debians, as of Debian 10.0 Buster onwards, install instead:

root@debian:~# apt install --yes msmtp-mta

This can save you a lot of effort keeping an eye on a separate MTA hanging around and running as a local service, eating up resources that could be spared.
 

3. Configure Relay host for ssmtp


A simple configuration to make ssmtp use gmail.com SMTP servers as a relay host is shown below:

linux:~# cat << EOF > /etc/ssmtp/ssmtp.conf
# /etc/ssmtp/ssmtp.conf
# The user that gets all the mails (UID < 1000, usually the admin)
root=user@host.name
# The full hostname.  Must be correctly formed, fully qualified domain name or GMail will reject connection.
hostname=host.name
# The mail server (where the mail is sent to), either port 465 or 587 should be acceptable
# See also https://support.google.com/mail/answer/78799
mailhub=smtp.gmail.com:587
#mailhub=smtp.host.name:465

# The address where the mail appears to come from for user authentication.
rewriteDomain=gmail.com
# Email 'From:' headers can override the default domain

FromLineOverride=YES

# Username/Password
AuthUser=username@gmail.com
AuthPass=password
AuthMethod=LOGIN
# Use SSL/TLS before starting negotiation
UseTLS=YES
UseSTARTTLS=YES

EOF

This configuration is very basic and it is useful only if you don't need to get mails delivered back to the host, a functionality that full MTAs support but that is rarely used by most.

One downside of ssmtp is that the mail password will be stored in plain text, so make sure you set proper permissions on /etc/ssmtp/ssmtp.conf.
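
For example, a common restriction (the Debian package typically uses root:mail ownership, adjust the group to whatever your distro ships) is:

# chown root:mail /etc/ssmtp/ssmtp.conf
# chmod 640 /etc/ssmtp/ssmtp.conf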
 

– If your Gmail account is secured with two-factor authentication, you need to generate a unique App Password to use in ssmtp.conf. You can do so on your App Passwords page. Use Gmail username (not the App Name) in the AuthUser line and use the generated 16-character password in the AuthPass line, spaces in the password can be omitted.

– If you do not use two-factor authentication, you need to allow access for less secure apps.
 

4. Configuring different msmtp for separate user profiles


msmtp is capable of respecting multiple relays for different local UNIX users, assuming each of them has a separate home under /home/your-username.

To set a certain user, let's say georgi, to relay SMTP sent emails with the mail or mailx command, create ~/.msmtprc:

 

linux:~# vim ~/.msmtprc


Append configuration like:

# Set default values for all following accounts.
defaults
port 587
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
account gmail
host smtp.gmail.com
from <user>@gmail.com
auth on
user <user>
passwordeval gpg --no-tty -q -d ~/.msmtp-gmail.gpg
# Set a default account

account default : gmail


To add it for any different user, modify the respective fields and set a different mail hostname etc.
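
The passwordeval line above expects the Gmail password to be stored GPG-encrypted. A minimal sketch of preparing that file, assuming you already have a GPG key pair and you@example.com is a placeholder for its recipient ID, would be:

linux:~$ echo 'your-app-password' | gpg --encrypt --output ~/.msmtp-gmail.gpg --recipient you@example.com
linux:~$ chmod 600 ~/.msmtp-gmail.gpg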
 

5. Using mail address aliases


msmtp also supports mail aliases; to make them work you will need to have in the file /etc/msmtprc the line:
 

aliases               /etc/aliases


Standard aliases should then work:

linux:~# cat /etc/aliases
# Example aliases file
     
# Send root to Joe and Jane
root: georgi_georgiev@example.com, georgi@example.com
   
# Send everything else to admin
default: admin@domain.example
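
With the aliases file in place, a quick sanity check (assuming msmtp is configured as shown earlier and the addresses above are real mailboxes) is to send a short mail to the unqualified root alias and see where it lands:

linux:~$ echo -e "Subject: alias test\n\nhello from msmtp" | msmtp root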

 

6. Get updated when your Debian servers have new packages to update 

msmtp can be used for multiple things; one example use would be to combine it with cron to get daily notifications when there are new Debian issued security or errata pending update packages. To do so you can use the apticron shell script.

To use it on debian install the apticron pack:
 

root@debian:~# apt-get install --yes apticron

apticron has the capability to:

 * send daily emails about pending upgrades in your system;
 * give you the choice of receiving only those upgrades not previously notified;
 * automatically integrate to apt-listchanges in order to give you by email the
   new changes of the pending upgrade packages;
 * handle and warn you about packages put on hold via aptitude/dselect,
   avoiding unexpected package upgrades (see #137771);
 * give you all these stuff in a simple default installation;

 

To configure it you have to place a config: copy the one from /usr/lib/apticron/apticron.conf to /etc/apticron/apticron.conf.
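
For example:

root@debian:~# cp /usr/lib/apticron/apticron.conf /etc/apticron/apticron.conf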

The only important value to modify in the config is the email address to which the apt-listchanges info about new installable debs from the apt-get dist-upgrade command will be delivered. Output from it will be sent to the configured EMAIL field in apticron.conf.
 

EMAIL="<your-user@email-addr-domain.com>"


The timing at which the offered new pending package update reminder will be sent is controlled by /etc/cron.d/apticron
 

debian:~# cat /etc/cron.d/apticron
# cron entry for apticron

48 * * * * root if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi

apticron will use the previously configured local ssmtp / msmtp program to deliver to the configured mailbox.
To manually trigger apticron run:
 

root@debian:~# if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi


7. Test whether local mail sending to the Internet works

To test mail sending we can use either the mail / mailx or sendmail command, or some more advanced mailer such as alpine or mutt.

Below are a few examples.

linux:~$ echo -e "Subject: this is the subject\n\nthis is the body" | mail user@your-recipient-domain.com

To test sending a mail whose body is read from a file run:

linux:~$ mail -s "Subject" recipient-email@domain.com < mail-content-to-attach.txt

or

Prepare the mail you want to send and send it with sendmail

linux:~$ vim test-mail.txt
To: username@example.com
From: youraccount@gmail.com
Subject: Test Email

This is a test mail.

linux:~$ sendmail -t < test-mail.txt
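
If a more featureful mailer such as mutt is installed, sending a real binary attachment is also straightforward (report.pdf and body.txt here are just example files):

linux:~$ mutt -s "Monthly report" -a report.pdf -- user@example.com < body.txt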

Sending encoded attachments with uuencode is also possible but you will need the sharutils Deb / RPM package installed.

To attach, let's say, a simple text file uuencoded:

linux:~$ uuencode file.txt myfile.txt | sendmail user@example.com

echo "

To: username@domain.com From: username@gmail.com Subject: A test Hello there." > test.mail

linux:~$ cat test.mail | msmtp -a default <username>@domain.com


That's all folks, hope you learned something. If you know of some better stuff like ssmtp please share it.

Stop SSH Bruteforce authentication attempt Attacks with fail2ban

Monday, July 6th, 2020

Fail2ban stop restrict ssh bruteforce authentication attempt attacks

Most webmasters today have some kind of SSH console remote access to their servers, and the OpenSSH Secure Shell service is usually not filtered for specific networks but fully accessible on the Internet. This is especially true for home brew Linux web servers as well as small to mid sized websites and blogs hosted on cheap dedicated servers at UK2 / Contabo / RackSpace etc.

Brute force password guessing attack tools such as Hydra and distributed password dictionary files have been circulating for quite a while, and if the attacker has enough time, a solid dictionary base and a relatively weak password to hit, you can expect that sooner or later some of the local UNIX accounts will be broken and the script kiddie will get access to your server and quickly wreak havoc, if he is lucky enough to be able to exploit some local vulnerability and get root access …

If you're a sysadmin who has to manage a Linux server and you do routine log reading on the machine, you will soon get annoyed by the ever growing number of different users that are trying to log in unsuccessfully to the SSH (TCP port 22) service, filling up the logs with junk and eating disk space for nothing, as well as consuming some CPU and memory resources for nothing, so you will need some easy solution to get brute force attacks from an IP filtered out after a few unsuccessful login attempts.
 

The common way to protect SSH would then be to ban an IP address from logging in if there are too many failed login attempts, based on automatic firewall inclusion of any IP that tried to unsuccessfully log in, let's say, 5 or 10 recurring times.

In Linux there is a tool called “fail2ban” (F2B) used to limit brute force authentication attempts.

F2B works with minimal configuration and besides being capable of protecting the SSH service, it can be set to protect a lot of other server applications like:

 Apache / NGINX Web Servers with PHP / Mail Servers (Exim, Postfix, Qmail, Sendmail), POP3 IMAP / AUTH services (Dovecot, Courier, Cyrus), DNS Bind servers, MySQL DBs, Monitoring tools such as Nagios, FTP servers (ProFTPD / PureFTP), Proxy servers (Squid), WordPress sites (wp-login) brute force attacks, Web Mail services (Horde / Roundcube / OpenWebmail), jabber servers etc.

To get a better overview below is F2B package description:
 

linux:~# apt-cache show fail2ban|grep -i description-en -A 21
Description-en: ban hosts that cause multiple authentication errors
 Fail2ban monitors log files (e.g. /var/log/auth.log,
 /var/log/apache/access.log) and temporarily or persistently bans
 failure-prone addresses by updating existing firewall rules.  Fail2ban
 allows easy specification of different actions to be taken such as to ban
 an IP using iptables or hostsdeny rules, or simply to send a notification
 email.
 .
 By default, it comes with filter expressions for various services
 (sshd, apache, qmail, proftpd, sasl etc.) but configuration can be
 easily extended for monitoring any other text file.  All filters and
 actions are given in the config files, thus fail2ban can be adopted
 to be used with a variety of files and firewalls.  Following recommends
 are listed:
 .
  – iptables/nftables — default installation uses iptables for banning.
    nftables is also suported. You most probably need it
  – whois — used by a number of *mail-whois* actions to send notification
    emails with whois information about attacker hosts. Unless you will use
    those you don't need whois
  – python3-pyinotify — unless you monitor services logs via systemd, you
    need pyinotify for efficient monitoring for log files changes

 

Using fail2ban is easy as there is a multitude of preexisting filters and actions (that get triggered on a filter match) already written by different people, usually found in /etc/fail2ban/filter.d and /etc/fail2ban/action.d, and writing custom actions / filters / jails is easy to do.
Fail2ban is available as a standard distro RPM / DEB package
on most modern versions of Ubuntu (16.04 and later), Debian, Mint, CentOS 7, OpenSuSE, Fedora etc.
 

1. Install Fail2ban package


– On Deb based distros do the usual:

 

linux:~# apt update
linux:~# apt install --yes fail2ban


On RPM distros Fedora / CentOS / SuSE etc.
 

linux:~# sudo yum -y install epel-release
linux:~# sudo yum -y install fail2ban

 

2. Enable fail2ban ban rules for SSH failed authentication filtering

 

Create the file /etc/fail2ban/jail.local :
 

# cat > /etc/fail2ban/jail.local


Paste below content:
 

[DEFAULT]
# Ban hosts for one hour:
bantime = 3600
# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport

[sshd]
enabled = true
maxretry = 10
findtime = 43200
# ban for 1 day
bantime = 86400

 

This config will ban for 1 day any IP that tries to access the sshd daemon unsuccessfully more than 10 times.
This works through IPTABLES, indicated by the banaction = iptables-multiport setting, which makes fail2ban automatically add a new iptables block rule valid for 1 day.

To close the file press CTRL + D simultaneously.

  • maxretry controls the maximum number of allowed retries.
  • findtime specifies the time window (in seconds) which should be considered for banning an IP. (43200 seconds is 12 hours)
  • bantime specifies the time window (in seconds) for which the IP address will be banned (86400 seconds is 24 hours).

If SSH listens on a different port from 22 on the machine, you can specify the port number with  port = <port_number>  in this file.
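
For example, a hypothetical jail for an sshd moved to port 2222 would look like:

[sshd]
enabled  = true
port     = 2222
maxretry = 10
findtime = 43200
bantime  = 86400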

 

3. Start fail2ban service


Either use the /etc/init.d/fail2ban start script or systemd systemctl
 

linux:~# systemctl enable fail2ban
linux:~# systemctl restart fail2ban
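
To verify the service came up and the jails got loaded, check with:

linux:~# systemctl status fail2ban
linux:~# fail2ban-client status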

 

4. Fail2ban operational principle in short


Fail2ban uses Filters, Actions And Jails:

Filters specify certain patterns of text that Fail2ban should recognize in log files.
Actions are things Fail2ban can do once a filter is matched.
Jails tell Fail2ban to match a filter on some logs. When the number of matches goes beyond a certain limit specified in the jail, Fail2ban takes an action specified in the jail.

If you still wonder what a Fail2ban jail is: each configured jail tells fail2ban to look at system logs and take actions against attacks on a configured service, in our case the OpenSSH service.
 

5. Blocking repeated unsuccessful password authentication attempts for longer periods


If a large number of failed ssh logins happens to occur (let's say 35 recurring ones within 3 days) in /var/log/auth.log (the Debian failed ssh login file) or /var/log/secure (the Red Hat distros' failed ssh log file), you will perhaps want to block the offending IP for a longer period, say a week, here is how:
 

banaction = iptables-multiport
maxretry  = 35
findtime  = 259200
bantime   = 608400
enabled   = true
filter    = sshd

 

Add to this the [sshlongterm] section header plus the port and logpath options to make it finally look like this:
 

[sshlongterm]
port      = ssh
logpath   = %(sshd_log)s
banaction = iptables-multiport
maxretry  = 35
findtime  = 259200
bantime   = 608400
enabled   = true
filter    = sshd


Of course, to load the new config restart fail2ban:
 

# /etc/init.d/fail2ban restart


Default jail as well as the sshlongterm jail should now work together. Short term attacks will be handled by the default jail under [sshd], and the long term attacks handled by our own second jail [sshlongterm].
 

6. Checking which intruders were blocked by fail2ban


Fail2ban creates a separate chain f2b-sshd to which it adds each blocked IP for the period of time preset in the config, to list it:

linux:~# /sbin/iptables -L f2b-sshd
Chain f2b-sshd (1 references)
target     prot opt source               destination         
REJECT     all  —  165.22.2.95          anywhere             reject-with icmp-port-unreachable
REJECT     all  —  ec2-13-250-58-46.ap-southeast-1.compute.amazonaws.com  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  cable200-116-3-133.epm.net.co  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  mail.antracite.org   anywhere             reject-with icmp-port-unreachable
REJECT     all  —  139.59.57.2          anywhere             reject-with icmp-port-unreachable
REJECT     all  —  111.68.98.152.pern.pk  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  216.126.59.61        anywhere             reject-with icmp-port-unreachable
REJECT     all  —  104.248.4.138        anywhere             reject-with icmp-port-unreachable
REJECT     all  —  210.211.119.10       anywhere             reject-with icmp-port-unreachable
REJECT     all  —  85.204.118.13        anywhere             reject-with icmp-port-unreachable
REJECT     all  —  broadband.actcorp.in  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  104.236.33.155       anywhere             reject-with icmp-port-unreachable
REJECT     all  —  114.ip-167-114-114.net  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  142.162.194.168.rfc6598.dynamic.copelfibra.com.br  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  server.domain.cl     anywhere             reject-with icmp-port-unreachable
REJECT     all  —  ip202.ip-51-68-251.eu  anywhere             reject-with icmp-port-unreachable
REJECT     all  —  188.166.185.157      anywhere             reject-with icmp-port-unreachable
REJECT     all  —  93.49.11.206         anywhere             reject-with icmp-port-unreachable
RETURN     all  —  anywhere             anywhere            
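
Another way to see the currently banned IPs for a given jail, as well as to manually unban an address (1.2.3.4 is just a placeholder), is through fail2ban-client:

linux:~# fail2ban-client status sshd
linux:~# fail2ban-client set sshd unbanip 1.2.3.4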

 

Conclusion


What we have seen here is how to make fail2ban protect an Internet-exposed, firewall unrestricted SSHD service and filter out 1337 script kiddie 'hackers' from your machine. With a bit of tuning it could not only break the occasional SSH brute force bot scanners that crawl the net but could even mitigate massive botnet initiated brute force attacks against servers.
Of course fail2ban is not a panacea, and to make sure you won't get hacked one day it's better to allow access to the SSH service only from certain IP addresses or IP address ranges that belong to your own PCs.
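
A minimal sketch of such a restriction with plain iptables, assuming 192.168.1.0/24 is a placeholder for your trusted management network, would be to accept SSH only from that range and drop the rest:

linux:~# iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT
linux:~# iptables -A INPUT -p tcp --dport 22 -j DROP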