Posts Tagged ‘use’

Get dmesg command kernel log report with human date / time timestamp on older Linux distributions

Friday, June 18th, 2021

how-to-get-dmesg-human-readable-timestamp-kernel-log-command-linux-logo

If you're a sysadmin, you surely love to take a look at the dmesg kernel log output. On many Linux distributions dmesg output is also logged to files such as /var/log/dmesg or /var/log/kern.log. But if you inherit some old Linux servers, it is quite possible that the previous machine maintainer did not enable logging of kernel messages to /var/log/{dmesg,kern.log,kernel.log}, or even disabled the kernel log for some reason. The dmesg output often contains interesting events such as hard drives on their way to failure, bad reads, crashing system processes and other information that could help you prevent severe server downtime if caught early. However, on an old Linux distribution, let's say Red Hat 5 / Debian 6 or an old CentOS / Fedora, the shipped dmesg command does not support the '-T' option that is present in the util-linux package shipped with newer versions such as Red Hat 7.x / 8.x, SuSE etc.

 -T, --ctime
              Print human readable timestamps.  The timestamp could be inaccurate!
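
Before bothering with any workaround, you can quickly check whether the dmesg binary on a given box already understands -T; here is a minimal check, nothing distribution-specific assumed:

# old util-linux dmesg versions reject the -T option with a non-zero exit code
if dmesg -T > /dev/null 2>&1; then
    echo "dmesg -T is supported - no workaround needed"
else
    echo "dmesg -T is NOT supported - use one of the wrapper scripts below"
fi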

To illustrate what I mean, here is an example of the non-human readable timestamps produced by the older dmesg command:

root@web-server~:# dmesg |tail -n 5
[4505913.361095] hid-generic 0003:1C4F:0002.000E: input,hidraw1: USB HID v1.10 Device [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input1
[4558251.034024] Process accounting resumed
[4615396.191090] r8169 0000:03:00.0 eth1: Link is Down
[4615397.856950] r8169 0000:03:00.0 eth1: Link is Up - 100Mbps/Full - flow control rx/tx
[4644650.095723] Process accounting resumed

Thankfully, using the few lines of shell or perl script below, the dmesg -T functionality can be added to such a system, so you can easily get a proper timestamp out of the obscure default output in the same manner as on newer distros.

Here is how to do it with a bash script:

#!/bin/bash
# paste in .bashrc and use dmesgt to get human readable timestamps
dmesg_with_human_timestamps () {
    FORMAT="%a %b %d %H:%M:%S %Y"

    now=$(date +%s)
    cputime_line=$(grep -m1 "\.clock" /proc/sched_debug)

    if [[ $cputime_line =~ [^0-9]*([0-9]*).* ]]; then
        cputime=$((BASH_REMATCH[1] / 1000))
    fi

    dmesg | while IFS= read -r line; do
        if [[ $line =~ ^\[\ *([0-9]+)\.[0-9]+\]\ (.*) ]]; then
            stamp=$((now-cputime+BASH_REMATCH[1]))
            echo "[$(date +"${FORMAT}" --date=@${stamp})] ${BASH_REMATCH[2]}"
        else
            echo "$line"
        fi
    done
}

Copy the script somewhere, let's say under /usr/local/bin or wherever you like on the server, and add into your HOME ~/.bashrc an alias like:
 

alias dmesgt=dmesg_with_timestamp.sh


You can get a copy of the dmesg_with_timestamp.sh script from here

Or you can use the few-line perl script below to get the proper dmesg kernel date / time:

 

#!/bin/bash
# on old Linux distros CentOS 6.0 etc. with dmesg (part of util-linux-ng-2.17.2-12.28.el6_9.2.x86_64) dmesg -T is not available
# a workaround is the little perl wrapper below
dmesg_with_human_timestamps () {
    $(type -P dmesg) "$@" | perl -w -e 'use strict;
        my ($uptime) = do { local @ARGV="/proc/uptime";<>}; ($uptime) = ($uptime =~ /^(\d+)\./);
        foreach my $line (<>) {
            printf( ($line=~/^\[\s*(\d+)\.\d+\](.+)/) ? ( "[%s]%s\n", scalar localtime(time - $uptime + $1), $2 ) : $line )
        }'
}


Again, to make use of the script, put it under /usr/local/bin/check_dmesg_timestamp.pl and set an alias:

alias dmesgt=dmesg_with_human_timestamps

root@web-server:~# dmesgt | tail -n 20

[Sun Jun 13 15:51:49 2021] usb 2-1.3: USB disconnect, device number 9
[Sun Jun 13 15:51:50 2021] usb 2-1.3: new low-speed USB device number 10 using ehci-pci
[Sun Jun 13 15:51:50 2021] usb 2-1.3: New USB device found, idVendor=1c4f, idProduct=0002, bcdDevice= 1.10
[Sun Jun 13 15:51:50 2021] usb 2-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[Sun Jun 13 15:51:50 2021] usb 2-1.3: Product: USB Keyboard
[Sun Jun 13 15:51:50 2021] usb 2-1.3: Manufacturer: SIGMACHIP
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/0003:1C4F:0002.000D/input/input25
[Sun Jun 13 15:51:50 2021] hid-generic 0003:1C4F:0002.000D: input,hidraw0: USB HID v1.10 Keyboard [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input0
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard Consumer Control as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:1C4F:0002.000E/input/input26
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard System Control as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:1C4F:0002.000E/input/input27
[Sun Jun 13 15:51:50 2021] hid-generic 0003:1C4F:0002.000E: input,hidraw1: USB HID v1.10 Device [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input1
[Mon Jun 14 06:24:08 2021] Process accounting resumed
[Mon Jun 14 22:16:33 2021] r8169 0000:03:00.0 eth1: Link is Down
[Mon Jun 14 22:16:34 2021] r8169 0000:03:00.0 eth1: Link is Up - 100Mbps/Full - flow control rx/tx

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

If you haven't tried it yet: Red Hat, CentOS and other RPM-based Linux operating systems that use the anaconda installer generate a kickstart file after installation under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg}, immediately after the OS installation completes. Using this kickstart file template you can automate a Red Hat installation with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all the questions normally asked by the text / graphical installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without any need for intervention from the user. This is especially useful when deploying a Redhat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general pretty useful if you're into the field of so-called "DevOps" system administration and you need to provision a certain OS to a multitude of physical servers, or easily create / recreate virtual machines with a certain configuration.
 

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the hypervisor host which will hold the future virtual machines is to create the location where they will reside:

[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir -p /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view what is the situation with Logical Volumes and  VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:0

 

  — Logical volume —
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time lpgblu01f.ffm.de.int.atosorigin.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have this size physically available on a SAS / SSD hard drive connected to the hypervisor host.

To make the Virtual Machines storage location directory permanently mounted, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab
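
Before relying on the fstab entry after a future reboot, it does not hurt to let mount process the fstab and confirm the filesystem is visible (a quick optional sanity check):

[root@redhat ~]# mount -a
[root@redhat ~]# df -h /vmprivate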

 

2. Second we need to install the following set of RPM packages on the Hypervisor Hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
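
Optionally, to have libvirtd running right away without a reboot and to confirm it responds, you can also do (not strictly required, just a sanity check):

[root@redhat ~]# systemctl start libvirtd
[root@redhat ~]# systemctl status libvirtd --no-pager
[root@redhat ~]# virsh list --all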

4. Configure network bridging br0 interface on Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 you need to include:

NM_CONTROLLED=no

Next use nmcli, the Red Hat network configuration tool, to create the bridge (you could use the ip command instead), but since nmcli is the Red Hat way to do it, let's do it their way:

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
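
Once the bridge is up, a quick way to double-check that br0 got the expected address and that eno3 is enslaved to it (the interface names are the same as used above):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# nmcli device status
[root@redhat ~]# ip addr show br0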

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa).

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda
# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS
# Use text mode install
text
#graphical
# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8
# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted
# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain
# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
# skipx
skipx
# Firewall configuration
firewall --disabled
# System timezone
timezone Europe/Berlin
# Clear the Master Boot Record
##zerombr
# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos      --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream   --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos        --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream     --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS
# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug
# reboot automatically
reboot
###
%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improve overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf
# Disable kdump
systemctl disable kdump.service
%end

An important note to make here is the encrypted root password string on the (rootpw) line; this SHA-512 hash string can be generated with the openssl or mkpasswd commands:

Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 password hash, -5 a SHA-256 one and -6 a SHA-512 hashed string (the latter logically recommended for better security).

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
There is also a kickstart file generator web interface on Redhat's site here, however I have never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


To roll out the new preconfigured VM based on the above ks template file, use a one-liner command like the one below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For some clarity and better automation in case you plan to repeat VM creation you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram="$RAM" --vcpus="$CPUS" --location="$ISO_LOCATION" --disk path="$VM_IMG_FILE_LOC",bus=virtio,size="$VM_IMG_SIZE" --graphics none --initrd-inject=/root/$KS_FILE --extra-args "console=ttyS0 ks=file:/$KS_FILE"


A copy of virt-install.sh script can be downloaded here

Wait for the installation to finish; its progress will be visualized on the serial console, and if everything goes smoothly you should get a login prompt. Use the password generated with the openssl tool to test logging in, then disconnect from the machine by pressing CTRL + ] and try to log in again via the console with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
---------------------------------------
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing: I recommend you check the official documentation on Kickstart2 from the CentOS official website.

In case you later need to destroy the VM and the respective created image file, you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine

Don't forget to celebrate the success and give this nice article credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

Configure rsyslog buffering on Linux to avoid message lost to Central Logging server

Wednesday, January 13th, 2021

rsyslog-Centralized-Logging-System-using-Rsyslog_logo

1. Rsyslog Buffering

One of the best practices in log management is to send syslog to a central server. However, a logging system should be capable of avoiding message loss in situations where the server is not reachable. To do so, unsent data needs to be buffered at the client while the central server is not available. You might have noticed that many servers forwarding log messages to a central server do not have buffering functionality activated. Thus I strongly advise you to have a look at this documentation to learn how to check your configuration: http://www.rsyslog.com/doc/rsyslog_reliable_forwarding.html

Rsyslog buffering with TCP/UDP configured

In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that "the action is not ready", which means the remote server is offline. This can be detected with plain TCP syslog and RELP, but not with UDP. So you need to use either of the two. In this howto, we use plain TCP syslog.

– Version requirement

Please note that we are using rsyslog-specific features. They are required on the client, but not on the server. So the client system must run rsyslog (at least version 3.12.0), while on the server another syslogd may be running, as long as it supports plain tcp syslog.

How To Setup rsyslog buffering on Linux

First, you need to create a working directory for rsyslog. This is where it stores its queue files (should the need arise). You may use any location on your local system. Next, instruct rsyslog to use a disk queue and then configure your action. There is nothing else to do. With the following simple config file, you forward anything you receive to a remote server and buffering is applied automatically when it goes down. This must be done on the client machine.

# Example client configuration:
$ModLoad imuxsock             # local message reception
$WorkDirectory /rsyslog/work  # default location for work (spool) files
$ActionQueueType LinkedList   # use asynchronous processing
$ActionQueueFileName srvrfwd  # set file name, also enables disk mode
$ActionResumeRetryCount -1    # infinite retries on insert failure
$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
*.*       @@server:port
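
On newer rsyslog versions (v7 and above) the same forwarding action can also be written in the RainerScript syntax; below is a minimal sketch assuming the same spool file name and a placeholder server and port:

*.* action(type="omfwd" target="server" port="514" protocol="tcp"
           queue.type="LinkedList" queue.filename="srvrfwd"
           action.resumeRetryCount="-1" queue.saveOnShutdown="on")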

How to check Linux server power supply state is Okay / How to find out a Linux Power Supply is broken

Wednesday, January 6th, 2021

2U-power-supplies-get-status-if-Power-supply-broken-information-linux-ipmitool

If you're a sysadmin managing Linux servers remotely, every now and then when a machine is hanging for no obvious reason it is useful to check the server Power Supply state. I say that because often, when the machine is mysteriously hanging, a standard Root Cause Analysis (RCA) of /var/log/messages, /var/log/dmesg, /var/log/boot etc. does not bring you to any conclusion. The next step, after you send a technician to reboot the machine, is to check on the Linux OS level whether the Power Supply Unit (PSU) hardware on the machine has some issues.
As blogged earlier on how to use ipmitool to manage remote iLO boards etc., the ipmitool can also be used to check the status of server PSUs.

Below is example output from a 2 PSU server whose Power Supplies are functioning normally.
 

[root@linux-server ~]# ipmitool sdr type "Power Supply"

PS Heavy Load    | 2Bh | ok  | 19.1 | State Deasserted
Power Supply 1   | 70h | ok  | 10.1 | Presence detected
Power Supply 2   | 71h | ok  | 10.2 | Presence detected
PS Configuration | 72h | ok  | 19.1 |
PS 1 Therm Fault | 75h | ok  | 10.1 | Transition to OK
PS 2 Therm Fault | 76h | ok  | 10.2 | Transition to OK
PS1 12V OV Fault | 77h | ok  | 10.1 | Transition to OK
PS2 12V OV Fault | 78h | ok  | 10.2 | Transition to OK
PS1 12V UV Fault | 79h | ok  | 10.1 | Transition to OK
PS2 12V UV Fault | 7Ah | ok  | 10.2 | Transition to OK
PS1 12V OC Fault | 7Bh | ok  | 10.1 | Transition to OK
PS2 12V OC Fault | 7Ch | ok  | 10.2 | Transition to OK
PS1 12Vaux Fault | 7Dh | ok  | 10.1 | Transition to OK
PS2 12Vaux Fault | 7Eh | ok  | 10.2 | Transition to OK
Power Unit       | 7Fh | ok  | 19.1 | Fully Redundant

Now if you have a server, let's say an old ProLiant DL360e Gen8, whose Power Supply is damaged, you will get output from ipmitool similar to:

[root@linux-server  systemd]# ipmitool sdr type "Power Supply"
Power Supply 1   | 30h | ok  | 10.1 | 100 Watts, Presence detected
Power Supply 2   | 31h | ok  | 10.2 | 0 Watts, Presence detected, Failure detected, Power Supply AC lost
Power Supplies   | 33h | ok  | 10.3 | Redundancy Lost
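
If you need to check this regularly, the failure keywords from the ipmitool outputs shown here can be grepped by a tiny shell script and hooked into your alerting; a minimal sketch (the keyword list is only an assumption based on the sample outputs above, adjust it to your hardware):

#!/bin/bash
# warn if ipmitool reports a power supply failure keyword
if ipmitool sdr type "Power Supply" | grep -Eiq 'Failure detected|AC lost|Redundancy Lost'; then
    echo "WARNING: possible power supply problem detected on $(hostname)" >&2
    exit 1
fi
echo "Power supplies look OK"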


If you don't have ipmitool installed, due to security or whatever reason, but you have the hardware detection software dmidecode, you can use it too to get the Power Supply state:

[root@linux-server  systemd]# dmidecode -t chassis
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

 

Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
        Manufacturer: HP
        Type: Rack Mount Chassis
        Lock: Not Present
        Version: Not Specified
        Serial Number: CZJ38201ZH
        Asset Tag:
        Boot-up State: Critical
        Power Supply State: Critical

        Thermal State: Safe
        Security Status: Unknown
        OEM Information: 0x00000000
        Height: 1 U
        Number Of Power Cords: 2
        Contained Elements: 0

To show only the Power Supply status information on a server with dmidecode:

# dmidecode --type 39

monitoring-power-supply-hardware-information-linux-ipmitool

Plug between the power supply and the mainboard voltage / coms ATX specification

This can also be used on a normal Linux desktop PC, which usually has only one power supply; on many Ubuntu and other Linux desktops where lshw (list hardware information) is installed, you can get the machine's power status with lshw:

 root@ubuntu:~# lshw -c power
  *-battery               
       product: 45N1111
       vendor: SONY
       physical id: 1
       slot: Front
       capacity: 23200mWh
       configuration: voltage=11.1V


Finally, to get extensive information on the voltages of the Power Supply you can use the good old lm_sensors:

# apt-get install lm-sensors
# sensors-detect 
# service kmod start

# sensors
# watch sensors


As manually monitoring Power Supplies and other data is tedious, you might finally want to use some centralized monitoring. For one example of that, you might want to check my prior article on using Zabbix to Monitor Hardware Hard Drive / Temperature and Disk with lm_sensors / smartd on Linux.

SEO: Best day and time to write new articles and tweet to get more blog reads – Social Network Timing

Tuesday, June 17th, 2014

what-is-best-time-to-write-articles-to-increase-your-blog-traffic

I'm trying to blog regularly, as this gives me a roadmap of what I'm into and how I spend my time. When I have free time, I blog almost daily except on weekends (on weekends I'm trying to stay away from computers). So if you want to attract more readers to your blog, the interesting question arises:
 

What time is best to hit publish on your posts?

writing-in-the-mogning-on-the-internet-timing-morning-is-best-for-your-posts
Now there are different angles from which you can draw conclusions on the best timing for a blog post. One major thing to always consider when posting is that the highest percentage of users read blogs in the morning with their morning coffee. Here are some more facts on when web content is read the most:

  • 70% of users say they read blogs in the morning
  • More men read blogs at night than women
  • Mondays are the highest traffic days for average blogs
  • 11 a.m. is normally the highest traffic hour for blogs
  • Usually most comments are posted on Saturdays
  • Blogs with more than one post a day have a higher chance of inbound links and usually get more unique visitors

As my blog is more technically oriented, most of my visitors are men and therefore posting my blogs at night doesn't interfere much with my readers.
However, I've noticed that for me personally, posting in the time interval from 13:00 to 17:00 positively influences the number of unique visitors the blog gets.

According to research done by Social Fresh, Thursday is the best day to publish an article if you want to get more social shares.

Best-Day-to-Blog-to-get-more-shares-in-social-networks

As a rule of thumb, Thursday wins 10% more shares than all other days. In fact, 31% of the top 100 social share days in 2011 fell on Thursday.
My logical explanation of this phenomenon is that people tend to get more and more bored with their work and try to entertain themselves more as the week progresses.

To get more attention on what I'm writing I use a bit of social networking, but I prefer using only a micro-blogging social network. I use Twitter to share what I'm into. When I write a new article on my blog I tweet its title with a link to the article, because this draws people's attention to what I have to say.

Overall I am skeptical about social sites like Facebook and MySpace because they have a negative impact on how people use their time, and an especially negative one on youngsters. The other reason why I don't like friend networks is that sharing what you have to say on sites like FB, Google+ or "the Russian Facebook" Vkontakte (VK.com) does not respect the privacy of your data.

 

You write fresh content for their website for free and you get nothing!

 

Moreover, by daily posting the latest buzz you read / watched on Facebook etc., or simply saying what's happening with you, where you are situated now etc., you slowly get addicted to posting; for good or bad, people tend to be maniacal.

By placing all of your personal or impersonal stuff online, you're helping these sites get better indexed in the Google / Yahoo / Yandex search engines, making them profitable and highly ranked websites on the internet, giving your personal time for Facebook's profit, plus you lose control over your data (your data is not physically on your side but situated on some remote server, somewhere on the internet).
 

Best average time to post on Twitter, Facebook, Google+ and LinkedIn

best-time-and-day-to-write-new-articles-schedule-content-at-the-right-time-on-social-media-to-get-high-trafficrank

So What is Best Day timing to Post, Pin or Tweet?

Below is an infographic I found on this blog (the visual data is originally compiled by SurePayroll), showing visualized results from some extensive research on the topic.

best-time-to-post-and-tweet-blog-articles-social-media-infographic


Here are the most important facts this infographic reveals:


The average best time to post, tweet and pin your new articles is about 15:00 h
 

  • Best timing to post on Twitter is Monday to Thursday from 13:00 to 15:00 h
  • Best timing to post on Facebook is between 13:00 and 16:00 h
  • For LinkedIn it is best to publish between Tuesday and Thursday


Peak times on Facebook, Twitter and Linkedin

  • Peak time for use of Facebook is on Wednesdays about 15:00 h
  • Peak time for use of Twitter is Monday to Thursday from 9:00 to 15:00 h
  • LinkedIn peak time is from 17:00 to 18:00 h
  • Including images in your articles increases traffic; tweets with images increase visits, favorites and leads


Worst time (when users will probably not view your content) on FB, Twitter and Linkedin

  • Weekends before 08:00 and after 20:00 h
  • Every day after 20:00 and Fridays after 15:00 h
  • Mondays and Fridays from 22:00 to 06:00 in the morning

Facts about Google+
 

  • Google+ is the fastest growing demographic social network for people aged 45 to 54
  • Best time to share your posts on Google+ is from 09:00 to 10:00 in the morning
  • Including images to your articles increases traffic, tweets with images increase visits, favorites and leads
     

Images generate more traffic and engagement

  • Including images to your articles increases traffic, tweets with images increase visits, favorites and leads


I'm aware that, as with every research, the above info on the best time to tweet and post is just a generalization, and depending on the field of information posted the suggested time could differ from the optimal time for an individual writer; however, as a general direction the info is very useful and gives you some idea.
Twitter engagement for brands is 17% higher on weekends according to Dan Zarrella's research. Tweets posted on Friday, Saturday and Sunday had a higher CTR (Click Through Rate) than those posted during the rest of the week.

tweet-on-the-weekends-is-better-for-high-click-through-rate

Another good day to tweet, other than weekends, is mid-week on Wednesday.
Whether your site or blog uses retweets to generate more traffic to the website, the best time to retweet is said to be around 5 pm, when CTR is higher.

How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Postfix_mailserver-logo-howto-forward-email-with-regular-expression-or-maildrop

Let's say you're a sysadmin doing an email migration of a clustered SMTP setup and, because of that, you want to capture all incoming email traffic for a while and redirect (forward) it towards a single mailbox, where you can review the mail flow for a few hours and analyze it more deeply. This approach is useful if you have small or middle-sized mail servers and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways, but if you don't want to just intercept the traffic and create a copy of the email flow using the always_bcc integrated postfix option (as pointed out in my previous article postfix copy every email to a central mailbox), you can make a copy of the email flow via some custom written dispatcher script set to be run by the MTA on each mail arrival, or use the maildrop filtering functionality. Below is a very simple example with maildrop in case you want to filter out and deliver to an external email address only email targetted at a specific domain.

If you use maildrop as the local delivery agent, to copy email targetted at a specific domain to another defined email address use a rule like:

if ( /^From:.*domain\.com/:h ) {
  cc "!someothermail@domain2.com"
}


To use maildrop to just forward incoming email from a specific sender, addressed to a local existing mailbox on the postfix, towards an external email address, use something like:

if ( /^From: .*linus@mail.example.com.*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail linuxbox@collector.example.com"
        }
}

Then to make the filter active, assuming the user has a physical unix mailbox, paste the above into the local user's $HOME/.mailfilter.

But what if the mails delivered via your Email-Server.com are sent by monitoring and alarming scripts towards many mailboxes that no longer exist after the migration?

To achieve capturing all traffic normally attempted to be sent via the mail server and forward it towards a single external mail address, we can use the nice capability of postfix to understand PCRE (perl compatible regular expressions). Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation here, as well as check the Postfix Regex Tester / Debugger online tool, useful to validate a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relayhost towards an external mail server?

 

In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

The virtual file can be used to define any of your virtual domains you want to appear as present on the local postfix; the regexp: map loads the file where you can type the regular expression applied to every incoming email via SMTP port 25 or the encrypted submission ports 465 / 587.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp

/.+@.+/ external-forward-email@gmail.com

Next build the map file (this will generate /etc/postfix/virtual-regexp.db):
 

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run postmap again to generate the .db file:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual


Note that in /etc/postfix/virtual you can add the postfix mail domains for which you want the MTA to accept mail as local mail.

In case you need to view all postfix-defined virtual domains configured to accept mail locally on the mail server:
 

$ postconf -n | grep virtual
virtual_alias_domains = mydomain.com myanotherdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual


The applied regexp /.+@.+/ external-forward-email@gmail.com will start forwarding mails immediately after you restart the MTA with:

# systemctl restart postfix


If you want to exclude target mail domains from being captured by the above regexp, place in /etc/postfix/virtual-regexp:

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com

Time for a test. The next step is to send a test email and verify that mail forwarding works as expected:
 

# echo -e "Tseting body" | mail -s "testing subject" -r "testing@test.com" whatevertest-user@mail-recipient-domain.com

How to convert .CRT SSL Certificate to .PFX format (with openssl Linux command) and Import newly generated .PFX to Windows IIS Webserver

Tuesday, September 27th, 2016

IIS8_Windows_Webserver_logo_convert_CRT_and_import_PFX-certificate

1. Converting to .CRT to.PFX file format with OpenSSL tool on GNU / Linux to import in Windows (for example, IIS)

Assuming you have generated already a certificate using the openssl Linux command and you have issued the .CRT SSL Certificate issuer file
and you need to have the new .CRT SSL Certificate installed on Windows Server (lets say on Windows 2012) with IIS Webserver version 8.5, you will need a way to convert the .CRT file to .PFX, there is plenty of ways to do that including using online Web Site SSL Certificate converter or use a stand alone program on the Windows server or even use a simple perl / python / ruby script to do the conversion but anyways the best approach will be to convert the new .CRT file to IIS supported binary Certificate format .PFX on the same (Linux certificate issuer host where you have first generated the certificate issuer request .KEY (private key file used with third party certificate issuer such as Godaddy or Hostgator to receive the .CRT / PEM file).

Here is how to generate the .PFX file based on the .CRT file for an internal SSL certificate:

 

openssl pkcs12 -export -in server.crt -inkey server.key -out server.pfx

At the password prompt that appears, make sure to set an export password, because otherwise the later IIS Webserver certificate import will not work.
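
Before copying the file over to the Windows host, you can sanity-check the freshly created .PFX on the Linux side; this will prompt for the export password you just set:

openssl pkcs12 -info -in server.pfx -noout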
 

To also include the certificate chain in the export (for a certificate that will be accessed from the internet):

 

openssl pkcs12 -export -in server.crt -inkey server.key -out server.pfx -certfile internet v2.crt

2. Import the PFX file in Windows


Run: mmc, add the Certificates snap-in (Computer account, Local Computer); in the Console:

Certificates (Local Computer) > Personal > Certificates: Select All Tasks > Import File

Enter previously chosen password.
You should then get the message "Import was successful."

Alternatively you can import the PFX file by simply copying it to the server where you want it imported and double-clicking it; this will open the Windows Import Wizard.

Then select the IIS:

 

Site, Properties, Directory Security, Server Certificate, Replace the current certficate, select proper Certificate. Done.

Alternatively, to complete the IIS Webserver certificate import in one step when a new certificate is to be imported:

In IIS Manager interface go to :

Site, Properties, Directory Security, Server Certificate, Server Certificate Wizard


Click on

Next

Choose

import a certificate from a .pfx file, select and enter password.

Internet_Information_Server_IIS_Windows-SSL_Certificate-import-PKF-file

3. Import the PFX file into a Java keystore


Another thing you might need, if the IIS Webserver uses a backend Java Virtual Machine on the same or a different Windows server, is to import the newly generated .PFX file into the Java VM keystore.

To import with keytool command for Java 1.6 type:

 

keytool -importkeystore -deststorepass your_pass_here -destkeypass changeit -destkeystore keystore.jks -srckeystore server.pfx -srcstoretype PKCS12 -srcstorepass 1234 -srcalias 1 -destalias xyz
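
To verify the import worked, you can list the keystore contents afterwards; a small sketch reusing the keystore path, password and alias from the example above:

keytool -list -v -keystore keystore.jks -storepass your_pass_here -alias xyz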


The .CRT file could also be directly imported into the Java keystore:

 

Import a .crt in a Java keystore


/usr/java/jre/bin/keytool -import -keystore /webdienste/java/jdk/jre/lib/security/cacerts -file certificate.crt -alias "Some alias"

 

 

4. Get a list of Windows locally installed certificates

To manage installed certificates on a Windows 7 / 8 / 2012 Server OS, run the following command via

Start -> Run

 

certmgr.msc

certmgr_trca_windows_check-windows-installed-ssl-certificates

 

One other way to see the installed certificates on your Windows server is checking within

Internet Explorer

Go to Tools (Alt+X) → Internet Options → Content → Certificates.

 

To get a complete list of the installed Certificate Chain on Windows you can use PowerShell:

 

Get-ChildItem -Recurse Cert:

 

That's all folks ! 🙂

 

Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

Deny DHCP addresses by MAC ignore MAC to not be DHCPD leased on GNU / Linux howto

I have not blogged for a long time due to being on a few weeks' vacation and being at home with a small cute baby. However, as a hardcore and a bit of a dumb system administrator, I have spent some of my vacation working on bringing up www.pc-freak.net and the other websites hosted here as highly available ones, living on 2 webservers running on a Master-to-Master MySQL Replication backend database. This is all hosted on servers set to run as round robin DNS hosts on 2 machines: one old Lenovo ThinkCentre Edge71 as well as a brand new real Lenovo server, a ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure the Internet connectivity has a good degree of redundancy and to ensure the websites hosted on both machines are not going to die if one of the 2 configured Fiber Optics Internet Providers (Bergon.NET) has some issues, I've rented another Internet provider line from the VIVACOM Mobile Fiber Internet provider, a 1 Gigabit Fiber Optics line.
Next to that, to guarantee that the Database, Webserver, MailServer, Memcached and other running services do not hit downtimes due to electricity power outages, two powerful Uninterruptable Power Supplies (UPS) FPS Fortron devices are connected to the servers, each of which can keep the machine and the connected switches and servers up for up to 1 hour.

The machines are configured to use dhcpd to distribute IP addresses and the main node is set to distribute IPs; however, there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines on the network, and the IPs are being distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly I had again a slightly weird non-standard requirement: to give some of the computers within the network static IP addresses while the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and to add a filter for the MAC addresses of the machines configured with static IPs, to prevent the DHCP server from automatically reassigning IPs to these machines.

After a bit of googling and pondering I've done it for some of the machines; therefore, to save others the effort to look around for how to configure certain computers' / servers' network card (interface) MAC addresses on the LAN to use static IPs and instruct the DHCP server to ignore any broadcast IP address lease requests destined to this set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy-paste efforts, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course I have dumped the real MAC addresses to avoid leaking data, but I guess the idea behind the MAC address ignore is quite clear.

The main configuration trick, to ignore certain MAC addresses that are reachable on the hardware switch connected to the device, is like so:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC is described on the isc.org distribution lists here, but it seems the documentation on how to Deny / IGNORE DHCP Addresses by MAC Address on Linux is quite obscure and limited online.

As you can see in the above config, the time after which an IP is freed up and a new lease is handed out by the server is heavily increased; DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time to something like 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;


As you see, to guarantee that name resolution always works as expected I have configured the Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make is that, after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process:

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
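
If the test run reports no errors, restart the running daemon so it picks up the new config; on Debian 10 the service is named isc-dhcp-server (adjust the name if your distribution packages it differently):

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server --no-pager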
 

That's all folks. With this sample config, the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need colocation of a server or website hosting for a really cheap price on this newly set up high availability pair of machines described above, open an inquiry on https://web.www.pc-freak.net.

 

Installing usual Software Tools and Development header files and libraries on a newly installed Debian Server

Thursday, February 11th, 2010

installing-usual-software-tools-and-development-header-files-and-libraries-on-newly-installed-Debian-Ubuntu-server
Today I start my work as a system administrator for a new IT company.
My first duties include configuration and installation of some of the usual programs used in the everyday sysadmin job.
In that line of thought, I realized long ago that there is a common group of tools and software I have to install on almost each and every newly configured Debian GNU / Linux server.
Here is a list of packages I usually install on new Debian systems. Even though these exact commands are expected to be executed on Debian (5.0) Lenny, I believe they are quite accurate for Debian Testing and Debian Testing/Unstable, the bleeding edge distributions.
Before I show you the apt-get lines with all the packages, I would advise you to install and use netselect-apt to select the fastest Debian package mirror near you.
To install and use it, run the following commands:

aptitude install netselect-apt
netselect-apt -n lenny

Now as netselect-apt will have tested for the fastest mirror and created a sources.list file in your current directory, open that sources.list file and decide what should enter your official /etc/apt/sources.list file, or in other words merge the two files as you like.
Good, now that we have a fast mirror to download our packages from, let's continue further with the packages to install.
Execute the following command to install some of the basic tools and packages:

# install some basic required tools, software and header files
debian-server:~# apt-get install tcpdump mc ncurses-dev htop iftop iptraf nmap apache2 apachetop \
mysql-server-5.0 phpmyadmin vnstat rsync traceroute tcptrace e2fsprogs hddtemp finger mtr-tiny \
netcat screen imagemagick flex snort sysstat lm-sensors alien rar unrar util-linux curl \
vim lynx links elinks sudo autoconf gcc build-essential dpkg-dev webalizer awstats

Herein I'll explain just a few of the installed packages and their purpose, as they could be unknown to some of the people out there.

apachetop - monitors apache log file in real time similar to gnu top
iftop - display bandwidth usage on selected interface interactively
vnstat - show inbound & outbound traffic usage on selected network interfaces
e2fsprogs - some general tools for creation of ext2 file systems etc.
hddtemp - Utility to monitor hard drive temperature
mtr-tiny - Matt's traceroute, a great traceroute program
netcat - TCP/IP swiss army knife, quite helpful for network maintenance tasks
snort - an Intrusion Detection System
build-essential - installs basic stuff required for most applications compiled from source code
sysstat - generates statistics about server load each and every ten minutes, check man for more
lm-sensors - enables you to track your system hardware sensors information and warn on CPU heat-ups etc.

I believe the rest of them need no explanation; if you're not familiar with them, check the manuals.
So far so good, but this is not all I had to install. As you probably know, most Apache webservers nowadays are running PHP and use a dozen PHP libraries / extensions not originally bundled with the PHP install.
Therefore here are some more PHP-related packages to install that will add some more PHP goodies.

# install some packages required for many php enabled applications
debian-server:~# apt-get install php-http php-db php-mail php-net-smtp php-net-socket php-pear php-xml-parser \
php5-curl php5-gd php5-imagick php5-mysql php5-odbc php5-recode php5-sybase php5-xmlrpc php5-dev

As I said, that is mostly the basic stuff that is a must-have on most of the Debian servers I have configured these days. Of course this is not applicable to all situations, however I hope this will be of use to somebody out there.

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging to a file in GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or by piping to the tee command:

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full-featured, robust logging bash shell script function that will run continuously as a background daemon process and output all its content to an external log file?
In the article below, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used to later store output to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/, i.e. /var/log/syslog, /var/log/user, /var/log/daemon, /var/log/mail etc.
 

1. Bash script function for logging write_log();


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';
write_log ()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo $LOGTIME": $text";
      else
          LOG=$LOG_FILE.`date +%Y%m%d`
          touch $LOG
          if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}

 

  •  Using the function from within the script itself, or from an external script, to write out to the defined log file:

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – Small intro on what is Unix Named Pipe.


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Now that named pipes have been briefly explained for those who hear about them for the first time, it's time to say that a named pipe in unix / linux is created with the mkfifo command; the syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linux distributions with an older bash and older bash shell scripts were using mknod.
So the idea behind the logging script is to read input from a simple named pipe and use the date command to log the exact time each line was received; here is the script.

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

if [ -p $named_pipe ]; then
rm -f $named_pipe
fi
mkfifo $named_pipe

while true; do
read LINE <$named_pipe
echo $(date): "$LINE" >>$output_named_log
done
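
The reader loop has to stay running for the pipe to keep working, so in practice you would start it in the background (or from a systemd unit / cron @reboot entry); a minimal sketch, assuming the script above was saved as /usr/local/bin/named-pipe-logger.sh:

chmod +x /usr/local/bin/named-pipe-logger.sh
nohup /usr/local/bin/named-pipe-logger.sh > /dev/null 2>&1 &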


To get any other script's output logged now, with a nice current date timestamp prepended, just write the output content to the logging buffer (the named pipe) like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use named pipes for logging: they perform better (are generally quicker) than Unix sockets.

 

3. Logging files to system log files with logger

 

If you need a quick one-time way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message and it will log your specified output, by default to the common /var/log/syslog logfile.

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can also print messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"
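
Another handy logger option is -t, which prefixes entries with a tag so they are easy to grep for later; a small example (the tag and message are arbitrary, made up for illustration):

logger -t backup-script -p user.info "Nightly backup finished successfully"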


A whole list of the facility names and priority levels supported by logger (as taken from its current Linux manual page) is as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the main Linux log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type:

 

logger 'The reason to reboot the server currently was a system security update'

 

So what else is logger useful for?

 In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs with their respective priorities work as expected. This is especially useful, as I've seen on Cloud-hosted OpenXEN based servers as a SAP consultant, that sometimes logging to the basic log files stops working for months or even years due to syslog and syslog-ng problems or hangs caused by other third party scripts and programs.
To test all basic logging facilities and priorities on the system logs, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
# note: each brace expansion must stay on one line - no spaces or line breaks inside the braces
for i in {auth,auth-priv,cron,daemon,kern,lpr,mail,mark,news,syslog,user,uucp,local0,local1,local2,local3,local4,local5,local6,local7}
do

for k in {debug,info,notice,warning,err,crit,alert,emerg}
do

logger -p $i.$k "Test daemon message, facility $i priority $k"

done

done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get

logger: unknown facility name: {auth,auth-priv,cron,daemon,kern,lpr,mail,mark,news, \
syslog,user,uucp,local0,local1,local2,local3,local4, \
local5,local6,local7}

check and set the proper naming as described in the logger man page.

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained:

          exec 3>&1 4>&2

  •     Saves the current stdout and stderr file descriptors (as 3 and 4) so they can later be restored, or used directly, to write to wherever output went before the redirects below.

          trap 'exec 2>&4 1>&3' 0 1 2 3

  •     Restores the file descriptors on exit and for particular signals. Not generally necessary since they should be restored when the sub-shell exits.

          exec 1>log.out 2>&1

  •     Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post "how can I fully log all bash script actions" (unfortunately on the latest Debian 10 Buster Linux, which comes with bash 5.0.3(1)-release, the code doesn't behave exactly as expected, but on older bash versions it works fine).

Sum it up


To briefly summarize: there are plenty of ways to do logging from a shell script, but using a function or a named pipe is the most classic. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.