Posts Tagged ‘security’

iSH, the best free SSH / Telnet client for iOS (iPhone / iPad), an equivalent of MobaXterm and a fully functional Alpine Linux emulator

Wednesday, February 8th, 2023

ish-linux-terminal-emulator-for-iphone-ipad-ios-logo-screenshoticon

A few months ago I switched from my old BLU R1 HD phone (a great old low-budget phone for its price) to an iPhone 10 ( X ), which a friend gifted to me. Coming from the Android world, anyone with experience knows the migration is a pain in the ass, as some of the apps in Google's Play Store have no equivalent in Apple's AppStore. One of the crucial tools I was interested in as a freshly migrated Android-to-iPhone user was a decent SSH / Telnet client and terminal, with which I can easily connect to my Linux servers both at home and at work.

As an Android phone user, to connect and manage my SSH sessions I most often used some of the most popular clients: ConnectBot / SSHDroid / JuiceSSH.
On Android I usually installed all of these tools, but most frequently used ConnectBot, which quickly became my favourite SSH client for Android over time.

The reasons why I really loved Connectbot and used it on Android OS in short:

  • Completely free
  • Ad-free
  • Open-source (too bad it is not Free Software, but still a step better)
  • Copy and paste of text between applications
  • Customizable interface (i.e. font size, keyboard layout, SSH auth agent, etc.)


connectbot-android-ssh-remote-connect-client-screenshot

I've seen some people use and prefer Termius, but I never really liked this client myself, as it included some advertisements, or for some other reason I no longer remember.
Switching to the iOS mobile operating system was of course quite a shock, especially the moment I found out the standard beloved SSH remote client programs are either not available or have only a paid version. Thus it took me quite a while of research and googling until I found some decent stuff.

termius-ssh-telnet-client-ios-screenshot

I tried Termius for a time as well but again, its ads and lack of some functionality pissed me off, so I moved on to Shelly.

shelly-iphone-ssh-telnet-client-ios-screenshot

Shelly is really not a bad tool, but it has a limit on the number of SSH sessions you can add, plus other limitations, which can only be unlocked with an "Upgrade" to its paid version. Thus, after a few weeks of attempts to make it my mobile remote server management tool for iPhone, I dropped it as well.

Then I found the Blink Shell App – Blink Shell is a professional, desktop-grade terminal for iOS. Overall the tool is really great and easy to use, but again, to use it at its full power you need the paid version, and until you pay for it, every now and then your shell gets interrupted by some really annoying ads.
Thus, even though I used these few tools for a while and could do basic remote ssh / telnet session operations with them, I eventually started looking for a better free SSH client alternative for iPhone users.

Then my dear friend Milen (Static) came home for a dinner and showed me iSH.
The moment I saw this tool I totally loved it, for its simplicity, its resemblance to the classical physical TTY Linux consoles I used back in the days, and its ability to easily gain improved functionality (multiple session management) through the simple screen command tool or tmux.

Wait, what's iSH? And why is it the best SSH / Telnet client to manage your servers remotely on iOS mobiles (iPhones and iPads)?

iSH is a project to get a Linux shell environment running locally on your iOS device, using a usermode x86 emulator.


In other words, iSH is a Linux emulator with busybox and package ports for many of the standard Linux tools you would get with a simple apt-get / yum, or, if I have to compare, what you get via MobaXterm's advanced apt-cyg (Cygwin packages) tool.

Once iSH is installed, it comes with the apk command line package management tool pre-installed, with which you can install stuff like openssh-client / screen / tmux / mc (midnight commander) etc. apk is an apt-like command line tool which uses the Alpine Linux repositories as the basis for installing its packages.
Alpine Linux is perhaps little known, as it is not one of the mainstream distributions such as Fedora or Ubuntu, but to those more concerned about security Alpine Linux is well known, as it is a security-oriented, lightweight Linux distribution based on musl libc and busybox. What makes this distribution even more attractive, and perhaps the reason why the iSH developers decided to use it as the basis for their emulator, is that it is actively developed, and its tightened security makes it a good complement to the quite closed and security-focused mobile platform iOS.
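
For those who have never touched Alpine, apk's day-to-day usage maps almost one-to-one to apt. A minimal sketch of the basic operations (these are real apk subcommands; tmux is just an example package):

# apk update                 # refresh the package index from the Alpine repositories
# apk search tmux            # search the index for a package
# apk add tmux               # install a package
# apk del tmux               # remove it again
# apk info                   # list everything currently installed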

iSH is available straight from the AppStore, so to use it, install it and run it (it is really great news that iSH does not require the iPhone to be jailbroken – it is an ordinary installable piece of software straight from the AppStore):
iSH already comes with some of the standard programs you would expect in a Linux environment, such as Vi, wget, zip / unzip, and tar.
However, to fit it better for my use over ssh and improve its capabilities, as well as to support multiple virtual ssh windows just like you would in a Linux xterm,
run from the ish shell:

# apk add openssh-client
# apk add screen
# apk add vim
# apk add mc


ish-screenshot-terminal3-linux-emulator-iphone-alpine

ish-screenshot-terminal2-linux-emulator-iphone-alpine

ish-screenshot-terminal1-linux-emulator-iphone-alpine-linux

I also like to have Midnight Commander and the VIM text editor installed out of the box, to be able to move around in an Ncurses interface on my iPhone.

ish-iphone-keyboard-key-shortcuts

Note that, unlike most GNU / Linux distributions where the default shell is bash, the iSH shell is the busybox ash shell (as you can see from the "ash" window names in the screenshots below); bash itself is an apk add bash away if you prefer it.
From there on, to use iSH as my default SSH client, and to give my freshly installed GNU screen some windowing beauty for readability when I use screen with multiple ssh logins to different servers, as well as to make scroll back / scroll up of console text work in screen's virtual consoles, I set up the following .screenrc inside my home directory.

The .screenrc to set up on iSH to ease your work with screen is as follows:
 

# An alternative hardstatus to display a bar at the bottom listing the
# windownames and highlighting the current windowname in blue. (This is only
# enabled if there is no hardstatus setting for your terminal)
hardstatus on
hardstatus alwayslastline
hardstatus string "%{.bW}%-w%{.rW}%n %t%{-}%+w %=%{..G} %H %{..Y} %m/%d %C%a "
# Enable scrolling fix the annoying screen scrolling problem
termcapinfo xterm* ti@:te@
# Scroll up
bindkey -d "^[[5S" eval copy "stuff 5\025"
bindkey -m "^[[5S" stuff 5\025

# Scroll down
bindkey -d "^[[5T" eval copy "stuff 5\004"
bindkey -m "^[[5T" stuff 5\004

# Scroll up more
bindkey -d "^[[25S" eval copy "stuff \025"
bindkey -m "^[[25S" stuff \025

# Scroll down more
bindkey -d "^[[25T" eval copy "stuff \004"
bindkey -m "^[[25T" stuff \004

You can download the same .screenrc file from here straight with wget from the console:

# wget https://www.pc-freak.net/files/.screenrc


Run GNU screen manager

 

 # screen

You will end up with a screen session; to open a new virtual terminal inside it, use the iSH virtual keyboard and press

CTRL + A + C

To open further virtual windows inside screen, just press CTRL + A + C as many times as you need; each session will appear in a small window in the bottom corner, as you can see in the screenshot

ish-terminal-with-screen-multiple-virtual-terminals-screenshot-iphone-ios

To move across the three unnamed screen virtual windows (0 ash, 1 ash and 2 ash) use the virtual keyboard.

For the next window use the key combination:
 

CTRL + A + N (where + just indicates you press the keys one after another, not that you actually press the + key 🙂 )


For Previous Window use:

CTRL + A + P

Or press CTRL + A and then directly type the window number, e.g.:

CTRL + A + 2 (to jump straight to window number 2)
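
Putting it all together, a typical multi-server workflow from iSH looks roughly like this (the server names and the admin user below are hypothetical placeholders):

# screen -S servers                      # start a named screen session
# ssh admin@server1.example.com          # window 0: first server
(press CTRL + A + C to open window 1)
# ssh admin@server2.example.com          # window 1: second server
(press CTRL + A + D to detach; later re-attach with:)
# screen -r servers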

The commands available in iSH without adding any further packages (all part of the busybox install) are as follows:

Available /bin/ directory commands:

arch  ash  base64  bbconfig  busybox  cat  chgrp  chmod  chown  conspy  cp  date  dd  df  dmesg  dnsdomainname  dumpkmap  echo  ed  egrep  false  fatattr  fdflush  fgrep  fsync  getopt  grep  gunzip  gzip  hostname  ionice  iostat  ipcalc  kbd_mode  kill  link  linux32  linux64  ln  login  ls  lzop  makemime  mkdir  mknod  mktemp  more  mount  mountpoint  mpstat  mv  netstat  nice  pidof  ping  ping6  pipe_progress  printenv  ps  pwd  reformime  rev  rm  rmdir  run-parts  sed  setpriv  setserial  sh  sleep  stty  su  sync  tar  touch  true  umount  uname  usleep  watch  zcat  


Available /usr/bin/ commands:    

awk  basename  beep  blkdiscard  bunzip2  bzcat  bzip2  cal  chvt  cksum  clear  cmp  comm  cpio  crontab  cryptpw  cut  dc  deallocvt  diff  dirname  dos2unix  du  dumpleases  eject  env  expand  expr  factor  fallocate  find  flock  fold  free  fuser  getconf  getent  groups  hd  head  hexdump  hostid  iconv  id  install  ipcrm  ipcs  killall  ldd  less  logger  lsof  lsusb  lzcat  lzma  lzopcat  md5sum  mesg  microcom  mkfifo  mkpasswd  nc  nl  nmeter  nohup  nproc  nsenter  nslookup  od  passwd  paste  patch  pgrep  pkill  pmap  printf  pscan  pstree  pwdx  readlink  realpath  renice  reset  resize  scanelf  seq  setkeycodes  setsid  sha1sum  sha256sum  sha3sum  sha512sum  showkey  shred  shuf  smemcap  sort  split  ssl_client  strings  sum  tac  tail  tee  test  time  timeout  top  tr  traceroute  traceroute6  truncate  tty  ttysize  udhcpc6  unexpand  uniq  unix2dos  unlink  unlzma  unlzop  unshare  unxz  unzip  uptime  uudecode  uuencode  vi  vlock  volname  wc  wget  which  whoami  whois  xargs  xxd  xzcat  yes  


If you're a maniac developer you can even use iSH to do some program development with vim in Python / Perl or PHP, as these are available from the Alpine repositories and installable via a simple apk add packagename. For security experts, nmap and some other security tools are also available, but unfortunately not everything works yet, as this project is in active development and iOS has some security limitations if the OS is not jailbroken 🙂
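
If you want to try that out, here is a minimal sketch (python3, perl and nmap are real Alpine package names; PHP package names vary between Alpine releases, so do an apk search php first):

# apk add python3 perl nmap
# python3 --version    # confirm the interpreter actually starts under the emulator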

Hence, some of the packages you can install via the apk manager will actually fail to run.
There is a list of what works and what doesn't yet on iSH on the project's GitHub wiki, check it out here.

There is much more fun stuff you can do with it; actually my quick research on how people use iSH on their phones led me to some videos talking about iOS and ethical hacking etc., but I'll stop here as I don't have the time to dig deeper into it.
If you know some good use of iSH, or some other goody you are using as a hack, please share it in the comments.

Enjoy ! 🙂

Install and configure rkhunter for improved security on a PCI DSS Linux / BSD servers with no access to Internet

Wednesday, November 10th, 2021

install-and-configure-rkhunter-with-tightened-security-variables-rkhunter-logo

rkhunter or Rootkit Hunter scans systems for known and unknown rootkits. The tool is not new, and most system administrators who have to maintain security-hardened servers perhaps already use it in their daily sysadmin tasks.

It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, suspicious strings in kernel modules, common backdoors, sniffers and exploits, as well as other special tests, mostly for Linux and FreeBSD, though ports for other UNIX operating systems like Solaris etc. are perhaps available. rkhunter is notable due to its inclusion in popular mainstream FOSS operating systems (CentOS, Fedora, Debian, Ubuntu etc.).

Even though rkhunter has not improved rapidly over the last 3 years (its last official version release was on the 20th of February 2018), it is a good tool that helps to strengthen security even further, and it is often a requirement for Unix server systems that should follow the PCI DSS standards (Payment Card Industry Data Security Standards).

Configuring rkhunter is pretty straightforward if you don't have too many requirements, but I decided to write this article because there are a few interesting options you might want to adopt in the configuration: how to whitelist files that are reported as Warnings, as well as how to set a configuration with stricter security checks than the installation defaults.

1. Install rkhunter .deb / .rpm package depending on the Linux distro or BSD

  • If you have to place it on a Redhat based distro CentOS / Redhat / Fedora

[root@Centos ~]# yum install -y rkhunter

 

  • On Debian distros the package name is equivalent, so to install just exec the usual:

root@debian:~# apt install --yes rkhunter

  • On FreeBSD / NetBSD or other BSD forks you can install it from the BSD "World" ports system or install it from a precompiled binary.

freebsd# pkg install rkhunter

One important note to make here: to have fully functional alarming from rkhunter, you need a fully configured postfix / exim / qmail (whatever) mail server relaying via an official SMTP, so the Warning alarm emails are able to reach your preferred alarm email address. If you haven't installed and configured postfix, for example, you might do:

– On RPM based distros

[root@Centos ~]# yum install postfix


– On Deb based distros

root@debian:~# apt-get install --yes postfix


and, as a minimum, further on configure some functional email relay server within /etc/postfix/main.cf
 

# vi /etc/postfix/main.cf
relayhost = [relay.smtp-server.com]
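
Once postfix is restarted with the new relayhost, it is worth verifying that mail actually leaves the box before trusting rkhunter alarms to it. A quick hedged test (assumes the mail command from bsd-mailx / mailutils is installed; the address is a placeholder):

# echo "rkhunter relay test from $(hostname)" | mail -s "SMTP relay test" Your-Alarm-Address@example.com
# tail /var/log/mail.log     # watch the relay attempt succeed or bounce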

2. Prepare rkhunter.conf initial configuration


Depending on what kind of files are present on the filesystem, for some reason certain standard package binaries may have to be excluded from verification, because they have unusual permissions due to manual sysadmin modification; this is done with the rkhunter variable PKGMGR_NO_VRFY.

If remote logging is configured on the system via something like rsyslog, you will want to specifically tell rkhunter about it, so this check is skipped as a possible security issue, via ALLOW_SYSLOG_REMOTE_LOGGING=1.

In case remote root login via the SSH protocol is disabled via the /etc/ssh/sshd_config
PermitRootLogin no directive, the variable to include is ALLOW_SSH_ROOT_USER=no

It is also useful to strengthen the hashing algorithm used for the checks: the default is SHA256, and you might want to change it to SHA512; this is done via the rkhunter.conf variable HASH_CMD=SHA512

Triggering of new email Warnings has to be configured so that you receive the mails at a preconfigured mailbox of your choice, via the variable
MAIL-ON-WARNING=SetMailAddress

 

# vi /etc/rkhunter.conf

PKGMGR_NO_VRFY=/usr/bin/su

PKGMGR_NO_VRFY=/usr/bin/passwd

ALLOW_SYSLOG_REMOTE_LOGGING=1

# Needed for corosync/pacemaker since update 19.11.2020

ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# enabled ssh root access skip

ALLOW_SSH_ROOT_USER=no

HASH_CMD=SHA512

# Email address to sent alert in case of Warnings

MAIL-ON-WARNING="Your-Customer@Your-Email-Server-Destination-Address.com Your-Second-Personal-Email-Address@SMTP-Server.com"

DISABLE_TESTS=os_specific


Optionally, if you're using something specific such as a corosync / pacemaker High Availability cluster, or some specific software that creates /dev/ files identified as potential risks, you might want to add more rkhunter.conf options like:
 

# Allow PCS/Pacemaker/Corosync
ALLOWDEVFILE=/dev/shm/qb-attrd-*
ALLOWDEVFILE=/dev/shm/qb-cfg-*
ALLOWDEVFILE=/dev/shm/qb-cib_rw-*
ALLOWDEVFILE=/dev/shm/qb-cib_shm-*
ALLOWDEVFILE=/dev/shm/qb-corosync-*
ALLOWDEVFILE=/dev/shm/qb-cpg-*
ALLOWDEVFILE=/dev/shm/qb-lrmd-*
ALLOWDEVFILE=/dev/shm/qb-pengine-*
ALLOWDEVFILE=/dev/shm/qb-quorum-*
ALLOWDEVFILE=/dev/shm/qb-stonith-*
ALLOWDEVFILE=/dev/shm/pulse-shm-*
ALLOWDEVFILE=/dev/md/md-device-map
# Needed for corosync/pacemaker since update 19.11.2020
ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# tomboy creates this one
ALLOWDEVFILE="/dev/shm/mono.*"
# created by libv4l
ALLOWDEVFILE="/dev/shm/libv4l-*"
# created by spice video
ALLOWDEVFILE="/dev/shm/spice.*"
# created by mdadm
ALLOWDEVFILE="/dev/md/autorebuild.pid"
# 389 Directory Server
ALLOWDEVFILE=/dev/shm/sem.slapd-*.stats
# squid proxy
ALLOWDEVFILE=/dev/shm/squid-cf*
# squid ssl cache
ALLOWDEVFILE=/dev/shm/squid-ssl_session_cache.shm
# Allow podman
ALLOWDEVFILE=/dev/shm/libpod*lock*

 

3. Set the proper mirror database URL location to internal network repository

 

Usually the file /var/lib/rkhunter/db/mirrors.dat contains the Internet server address from where the latest version of mirrors.dat can be fetched; below is how it looks by default on Debian 10 Linux.

root@debian:/var/lib/rkhunter/db# cat mirrors.dat 
Version:2007060601
mirror=http://rkhunter.sourceforge.net
mirror=http://rkhunter.sourceforge.net

As you can guess, a machine might not have access to the Internet either directly or via some kind of secure proxy, because it sits in a paranoid Demilitarized Zone (DMZ) network behind many firewalls. What you can do then is set up another mirror server (Apache / Nginx) within the local PCI-secured LAN that regularly fetches the database from the official one on http://rkhunter.sourceforge.net/ (by installing rkhunter and running the rkhunter --update command on the mirror webserver, then copying the data under some directory structure on that locally accessible LAN server; to keep the DB up to date you might want to set up a cron job to periodically copy the latest available rkhunter database towards http://mirror-url/path-folder/).
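
A hedged sketch of such a sync job on the mirror host (all paths below are hypothetical; adapt them to your webroot layout):

# cat /etc/cron.weekly/rkhunter-mirror-sync
#!/bin/sh
# Refresh the local rkhunter DB from the official mirrors, then publish it
# under the webroot that the DMZ machines point their mirrors.dat to
/usr/bin/rkhunter --update
cp -r /var/lib/rkhunter/db/. /var/www/html/rkhunter/1.4/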

# vi /var/lib/rkhunter/db/mirrors.dat

local=http://rkhunter-url-mirror-server-url.com/rkhunter/1.4/


A mirror copy of the entire db files from Debian 10.8 (Buster), ready for download, is here.

Update the entire file property DB and check for rkhunter DB updates:

 

# rkhunter --update && rkhunter --propupd

[ Rootkit Hunter version 1.4.6 ]

Checking rkhunter data files…
  Checking file mirrors.dat                                  [ Skipped ]
  Checking file programs_bad.dat                             [ No update ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ No update ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ No update ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ No update ]
  Checking file i18n/tr.utf8                                 [ No update ]
  Checking file i18n/zh                                      [ No update ]
  Checking file i18n/zh.utf8                                 [ No update ]
  Checking file i18n/ja                                      [ No update ]

 

rkhunter-update-propupdate-screenshot-centos-linux


4. Initiate a first-time check and see whether something triggers Warnings

# rkhunter --check

rkhunter-checking-for-rootkits-linux-screenshot

As you might have to run rkhunter multiple times, there is an annoying "Press Enter" prompt between checks. The idea of it is that you're able to inspect what went on; but since inspecting /var/log/rkhunter/rkhunter.log is usually much easier, I prefer to skip it with the --skip-keypress option.

# rkhunter --check --skip-keypress
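
Once the manual runs come back clean, you will probably want rkhunter to run unattended and only mail you when something is wrong. On Debian the package usually ships an /etc/cron.daily/rkhunter script driven by /etc/default/rkhunter, but an explicit cron entry is also an option; a hedged sketch (the time of day is arbitrary; --cronjob and --rwo, report warnings only, are real rkhunter options):

# cat /etc/cron.d/rkhunter-check
# m  h dom mon dow user  command
30   3  *   *   *  root  /usr/bin/rkhunter --cronjob --update --rwo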


5. Whitelist additional files and dev triggering false warnings alerts


You have to keep in mind that many files considered not officially PCI compatible and potentially dangerous, such as the lynx browser, curl, telnet etc., might trigger a Warning. After checking them thoroughly with some antivirus software such as ClamAV, and comparing their MD5 checksums against a cleanly installed .deb / .rpm package on another known-clean (rootkit-, virus- and spyware-free) system (be it a virtual machine or a Testing / Staging machine), you might want to simply whitelist the files which are incorrectly detected as dangerous for the system security.

Again this can be achieved with

PKGMGR_NO_VRFY=

Some cluster software that prepares its own /dev/ temporary files, such as Pacemaker / Corosync, might also trigger alarms, so you might want to suppress those as well with ALLOWDEVFILE

ALLOWDEVFILE=/dev/shm/qb-*/qb-*


If Warnings are found, check what the issue is and, if necessary, whitelist the files with incorrect permissions in /etc/rkhunter.conf.

rkhunter-warnings-found-screenshot

Re-run the check until all appears clean as in below screenshot.

rkhunter-clean-report-linux-screenshot

Fixing Checking for a system logging configuration file [ Warning ]

If you happen to get a message like the one below (it appears when rkhunter -C is run on legacy CentOS release 6.10 (Final) servers):

[13:45:29] Checking for a system logging configuration file [ Warning ]
[13:45:29] Warning: The 'systemd-journald' daemon is running, but no configuration file can be found.
[13:45:29] Checking if syslog remote logging is allowed [ Allowed ]

To fix it, you will have to disable the SYSLOG_CONFIG_FILE check altogether:
 

SYSLOG_CONFIG_FILE=NONE

Linux: Howto Fix “N: Repository ‘http://deb.debian.org/debian buster InRelease’ changed its ‘Version’ value from ‘10.9’ to ‘10.10’” error to resolve apt-get release update issue

Friday, August 13th, 2021

Linux's surprises and disorganization are continuously growing day by day, and I am starting to realize it is becoming mostly impossible to easily support this piece of hackware bundled together.
Usually, during the last 5 – 7 years, I rarely had any general issues with using:

 apt-get update && apt-get upgrade && apt-get dist-upgrade 

to raise a server's working stable Debian Linux version packages, e.g. version X.Y to version X.Z (for example to bump a Debian Jessie release from 8.1 to 8.2).

Today I just tried to follow this well-known and established procedure that, of course, nowadays is better done with the newer "apt" command instead of the legacy "apt-get".
And the set of

 

# apt-get update && apt-get upgrade && apt-get dist-upgrade

 

has triggered the shitty error below:
 

root@zabbix:~# apt-get update && apt-get upgrade
Get:1 http://security.debian.org buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://security.debian.org buster/updates/non-free Sources [688 B]
Get:4 http://repo.zabbix.com/zabbix/5.0/debian buster InRelease [7096 B]
Get:5 http://security.debian.org buster/updates/main Sources [198 kB]
Get:6 http://security.debian.org buster/updates/main amd64 Packages [300 kB]
Get:7 http://security.debian.org buster/updates/main Translation-en [157 kB]
Get:8 http://security.debian.org buster/updates/non-free amd64 Packages [556 B]
Get:9 http://deb.debian.org/debian buster/main Sources [7836 kB]
Get:10 http://repo.zabbix.com/zabbix/5.0/debian buster/main Sources [1192 B]
Get:11 http://repo.zabbix.com/zabbix/5.0/debian buster/main amd64 Packages [4785 B]
Get:12 http://deb.debian.org/debian buster/non-free Sources [85.7 kB]
Get:13 http://deb.debian.org/debian buster/contrib Sources [42.5 kB]
Get:14 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:15 http://deb.debian.org/debian buster/main Translation-en [5968 kB]
Get:16 http://deb.debian.org/debian buster/main amd64 Contents (deb) [37.3 MB]
Get:17 http://deb.debian.org/debian buster/contrib amd64 Packages [50.1 kB]
Get:18 http://deb.debian.org/debian buster/non-free amd64 Packages [87.7 kB]
Get:19 http://deb.debian.org/debian buster/non-free Translation-en [88.9 kB]
Get:20 http://deb.debian.org/debian buster/non-free amd64 Contents (deb) [861 kB]
Fetched 61.1 MB in 22s (2774 kB/s)
Reading package lists… Done
N: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.9' to '10.10'


As I've come to realize nowadays, since Linux originally started as a 'hackers' operating system, its legacy is just one big hack, and everything from simple maintenance up to the higher and more sophisticated things requires a workaround 'hack'.

 

This time the hack to resolve the error:
 

N: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.9' to '10.10'


is to run the cmd:
 

debian-server:~# apt-get update --allow-releaseinfo-change
Hit:1 http://ftp.de.debian.org/debian buster-backports InRelease
Hit:2 http://ftp.debian.org/debian stable InRelease
Hit:3 http://security.debian.org stable/updates InRelease
Get:5 https://packages.sury.org/php buster InRelease [6837 B]
Get:6 https://download.docker.com/linux/debian stretch InRelease [44.8 kB]
Get:7 https://packages.sury.org/php buster/main amd64 Packages [317 kB]
Ign:4 https://attic.owncloud.org/download/repositories/production/Debian_10  InRelease
Get:8 https://download.owncloud.org/download/repositories/production/Debian_10  Release [964 B]
Get:9 https://packages.sury.org/php buster/main i386 Packages [314 kB]
Get:10 https://download.owncloud.org/download/repositories/production/Debian_10  Release.gpg [481 B]
Err:10 https://download.owncloud.org/download/repositories/production/Debian_10  Release.gpg
  The following signatures were invalid: DDA2C105C4B73A6649AD2BBD47AE7F72479BC94B
Err:11 https://ookla.bintray.com/debian generic InRelease
  403  Forbidden [IP: 52.39.193.126 443]
Reading package lists… Done
N: Repository 'https://packages.sury.org/php buster InRelease' changed its 'Suite' value from '' to 'buster'
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://download.owncloud.org/download/repositories/production/Debian_10  Release:
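
If you would rather not remember to pass the flag each time a point release bumps the version, apt can also be told to accept such release info changes via apt.conf. A hedged sketch (the Acquire::AllowReleaseInfoChange options exist in buster's apt; the file name is arbitrary):

# cat /etc/apt/apt.conf.d/99allow-releaseinfo-change
Acquire::AllowReleaseInfoChange::Version "true";
Acquire::AllowReleaseInfoChange::Suite "true";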

apt-get-update-allow-releaseinfo-change-debian-linux-screenshot

Onwards, to upgrade the system to the latest .deb packages, as usual run:

# apt-get -y update && apt-get upgrade -y

 

and updates should be applied as usual, with some prompts on whether you prefer to keep or replace existing service configuration, and some information on general changes that might affect your installed services. In a few minutes and after a few prompts, your Debian OS should hopefully be up to the latest stable.

Find largest files on AIX system root / show biggest files and directories in AIX folder howto

Friday, November 6th, 2020

ibm-aix-logo-find-largest-files-and-directories-on-system-to-free-space-if-disk-is-full

If the root directory ( / ) on an AIX server gets completely full and the running AIX services are unable to write their pid files and logs, for example in /tmp, /admin, /home, /var/tmp, /var/log/ and the rest of the directory structure, or the system is almost full, with mounted filesystems showing 90% or 95%+ usage on the main partition, the system is either already stuck or on its way to stop functioning normally. Hence the only way to recover the IBM AIX machine to normal behavior is to clean up some files (if you can't extend the partition) or add another physical hard drive, just as we usually do on Linux.

So How can we clean up largest files on AIX?


Let's say we want to find all files on AIX larger than 1 MB. With AIX find, -size counts in 512-byte blocks, so +2048 blocks means larger than 1 MB:

aix-system:/ $ find / -xdev -size +2048 -ls | sort -r +6
12579 1400 -rw-r-----  1 root      security   1433534 Jun 26  2019 /etc/security/tsd/tsd.dat
 9325 20361 -rw-r-----  1 root      system    20848752 Nov  6 16:02 /etc/security/failedlogin
21862 7105 -rwxr-xr-x  1 root      system     7274915 Aug 24  2017 /sbin/zabbix_agentd
   72 7005 -rw-rw----  1 root      system     7172962 Nov  6 16:19 /audit/stream.out
24726 2810 -rw-------  1 root      system     2876944 Feb 29  2012 /etc/syslog-ng/core
29314 2391 -r-xr-xr-x  1 root      system     2447454 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.16
21844 2391 -r-xr-xr-x  1 root      system     2447414 Jun 25  2019 /sbin/helpers/jfs2/logredo64
21843 2219 -r-xr-xr-x  1 root      system     2271971 Jun 25  2019 /sbin/helpers/jfs2/logredo
29313 2218 -r-xr-xr-x  1 root      system     2270835 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.15
22279 1800 -rw-r--r--  1 root      system     1843200 Nov  4 08:03 /root/smit.log
12577 1399 -rw-r--r--  1 root      system     1431685 Jun 26  2019 /etc/security/tsd/.tsd.bk
21837 1325 -r-xr-xr-x  1 root      system     1356340 Jun 25  2019 /sbin/helpers/jfs2/fsck64
29307 1325 -r-xr-xr-x  1 root      system     1356196 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.9
   12 1262 -rw-------  1 root      system     1291365 Aug  8  2011 /core

 

The above finds all files greater than 1 MB and sorts them in reverse order, with the largest files first.

To search all files larger than 64 Megabytes under root ( / )

aix-system:/ $ find / -xdev -size +131072 -ls | sort -r +6
65139 97019 -rw-r--r--  1 root      system    99347181 Mar 31  2017 /admin/archive.zip


Display the 10 largest files and directories under a given directory

aix-system:/ $ du -a /dir | sort -n -r | head -10
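
Since du -ak reports in kilobytes, a small awk filter makes the top-10 listing easier to read at a glance (generic POSIX tooling, nothing AIX-specific assumed; paths with spaces would need extra care):

aix-system:/ $ du -ak /dir | sort -n -r | head -10 | awk '{printf "%.1f MB\t%s\n", $1/1024, $2}'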


Show biggest files and directories in a directory

 

aix-system:/ $ du -sk * | sort -n
4       Mail
4       liste
4       my_user
4       syslog-ng.conf
140     smit.script
180     smit.transaction
1804    smit.log

The above du displays the size of all files and directories in the current directory, with the biggest at the bottom.

 

List the largest files in a directory in decreasing order; if one of the matches is a directory, drill into it to show that sub-directory's largest files.

aix-system:/ $ ls -A . | while read name; do du -sk "$name"; done | sort -nr

The above ls + while loop command sorts disk usage for all files in the current directory by size, in decreasing order. If a file we suspect happens to be a directory, we can then change into that directory and re-run the preceding command to determine what is taking up space within it.

Continue these steps until you find the desired file or files, at which point you can take appropriate actions.

If the biggest item is a directory, then cd into that directory and run the du command again. Keep drilling down until you find the biggest files on your system and get rid of them to save some space.

Improve SSL security: Generate and add Diffie Hellman key to SSL certificate for stronger line encryption

Wednesday, June 10th, 2020

Diffie–Hellman key exchange (DH) is a method of securely exchanging cryptographic keys over a public channel and was one of the first public-key protocols as conceived by Ralph Merkle and named after Whitfield Diffie and Martin Hellman. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography.

Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher.

DH has been widely used on the Internet for improving the encryption between authenticated parties. The only note to make is that it is useful if both communicating sides A and B are under your control: what DH does is just strengthen the already established connection between client A and B, and it does not protect against Man-in-the-Middle attacks. If some malicious user could connect to B pretending to be A, the encryption would still be established.

diffie-hellman-explained

Alternatively, the Diffie-Hellman key exchange can be combined with an algorithm like the Digital Signature Standard (DSS) to provide authentication, key exchange, confidentiality and check the integrity of the data. In such a situation, RSA is not necessary for securing the connection.

TLS, which is a protocol that is used to secure much of the internet, can use the Diffie-Hellman exchange in three different ways: anonymous, static and ephemeral. In practice, only ephemeral Diffie-Hellman should be implemented, because the other options have security issues.

Anonymous Diffie-Hellman – This version of the Diffie-Hellman key exchange doesn’t use any authentication, leaving it vulnerable to man-in-the-middle attacks. It should not be used or implemented.

Static Diffie-Hellman – Static Diffie-Hellman uses certificates to authenticate the server. It does not authenticate the client by default, nor does it provide forward secrecy.

Ephemeral Diffie-Hellman – This is considered the most secure implementation because it provides perfect forward secrecy. It is generally combined with an algorithm such as DSA or RSA to authenticate one or both of the parties in the connection.

Ephemeral Diffie-Hellman uses different key pairs each time the protocol is run. This gives the connection perfect forward secrecy, because even if a key is compromised in the future, it can’t be used to decrypt all of the past messages.

diffie-hellman-dh-revised

A DH encryption key can be generated with the openssl command, using a 1024 / 2048 or 4096-bit length depending on your preference.
Of course it is best to have the strongest encryption possible, i.e. 4096.

The Logjam attack 

The Diffie-Hellman key exchange was designed on the basis of the discrete logarithm problem being difficult to solve. The most effective publicly known mechanism for finding the solution is the number field sieve algorithm.

The capabilities of this algorithm were taken into account when the Diffie-Hellman key exchange was designed. By 1992, it was known that for a given group, G, three of the four steps involved in the algorithm could potentially be computed beforehand. If this progress was saved, the final step could be calculated in a comparatively short time.

This wasn’t too concerning until it was realized that a significant portion of internet traffic uses the same groups that are 1024 bits or smaller. In 2015, an academic team ran the calculations for the most common 512-bit prime used by the Diffie-Hellman key exchange in TLS.

They were also able to downgrade 80% of TLS servers that supported DHE-EXPORT, so that they would accept a 512-bit export-grade Diffie-Hellman key exchange for the connection. This means that each of these servers is vulnerable to an attack from a well-resourced adversary.

The researchers went on to extrapolate their results, estimating that a nation-state could break a 1024-bit prime. By breaking the single most-commonly used 1024-bit prime, the academic team estimated that an adversary could monitor 18% of the one million most popular HTTPS websites.

They went on to say that a second prime would enable the adversary to decrypt the connections of 66% of VPN servers, and 26% of SSH servers. Later in the report, the academics suggested that the NSA may already have these capabilities.

“A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.”

Despite this vulnerability, the Diffie-Hellman key exchange can still be secure if it is implemented correctly. As long as a 2048-bit key is used, the Logjam attack will not work. Updated browsers are also secure from this attack.

Is the Diffie-Hellman key exchange safe?

While the Diffie-Hellman key exchange may seem complex, it is a fundamental part of securely exchanging data online. As long as it is implemented alongside an appropriate authentication method and the numbers have been selected properly, it is not considered vulnerable to attack.

The Diffie-Hellman key exchange was an innovative method for helping two unknown parties communicate safely when it was developed in the 1970s. While we now implement newer versions with larger keys to protect against modern technology, the protocol itself looks like it will continue to be secure until the arrival of quantum computing and the advanced attacks that will come with it.

Here is how easy it is to add this extra encryption to make the SSL tunnel between A and B stronger.

On a Linux / Mac / BSD OS machine install and use openssl client like so:
 

# openssl dhparam -out dhparams1.pem 2048
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
.................+...........+.....................+..............
...................................++*++*

Be aware that the Diffie-Hellman key exchange would be insecure with small parameters; as covered above in the Logjam section, 2048 bits is the practical minimum nowadays, which is why the example generates a 2048-bit prime.

 

# cat dhparams1.pem
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAwG85wZPoVAVhwR23H5cF81Ml4BZTWuEplrmzSMOR9UNMnKjURf10
JX9xe/ZaqlwMxFYwZLyqtFQB2zczuvp1j+tKkSi4/TbD6Qm6gtsTeRghqunfypjS
+c4dNOVSbo/KLuIB5jDT31iMUAIDJF8OBUuqazRsg4pmYVHFm1KLHCcgcTk5kXqh
m8vXoCTlaLlmicC9pRTgQLuAQRXAF8LnVLCUvGlsyynTdc0yUFePWkmeYHMYAmWo
aBS6AMFNDvOxCubWv9cULkOouhPzd8k0wWYhUrrxMJXc1bSDFCBA7DiRCLPorefd
kCcNJFrh7rgy1lmu00d3I5S9EPH/EyoGSwIBAg==
-----END DH PARAMETERS-----


Copy the generated DH PARAMETERS block to the end of your combined .PEM certificate file and save it:

 

# vim /etc/haproxy/cert/ssl-cert.pem
….
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAwG85wZPoVAVhwR23H5cF81Ml4BZTWuEplrmzSMOR9UNMnKjURf10
JX9xe/ZaqlwMxFYwZLyqtFQB2zczuvp1j+tKkSi4/TbD6Qm6gtsTeRghqunfypjS
+c4dNOVSbo/KLuIB5jDT31iMUAIDJF8OBUuqazRsg4pmYVHFm1KLHCcgcTk5kXqh
m8vXoCTlaLlmicC9pRTgQLuAQRXAF8LnVLCUvGlsyynTdc0yUFePWkmeYHMYAmWo
aBS6AMFNDvOxCubWv9cULkOouhPzd8k0wWYhUrrxMJXc1bSDFCBA7DiRCLPorefd
kCcNJFrh7rgy1lmu00d3I5S9EPH/EyoGSwIBAg==
-----END DH PARAMETERS-----

…..

Restart the webserver or proxy service where the Diffie-Hellman key was installed, and voila, you should be a bit more secure.
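
To convince yourself the DH parameters are actually being offered, you can probe the service with the openssl client. A hedged check (the host is a placeholder; the "Server Temp Key" line is printed by reasonably recent openssl s_client versions, and restricting the cipher list to DHE forces a classic DH exchange instead of ECDHE):

# openssl s_client -connect www.your-server.com:443 -cipher 'DHE' < /dev/null 2>/dev/null | grep -i 'Server Temp Key'
Server Temp Key: DH, 2048 bits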

 

 

Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019

proftpd-linux-logo

Earlier I've blogged about an encounter problem that prevented Active mode FTP connections on CentOS
As I'm working for a client building a brand new dedicated server purchased from Contabo Dedi Host provider on a freshly installed Debian 10 GNU / Linux, I've had to configure a new FTP server, since some time I prefer to use Proftpd instead of VSFTPD because in my opinion it is more lightweight and hence better choice for a small UNIX server setups. During this once again I've encounted the same ACTIVE FTP not working from FTP server to FTP client host machine. But before shortly explaining, the fix I find worthy to explain briefly what is ACTIVE / PASSIVE FTP connection.

 

1. What is ACTIVE / PASSIVE FTP connection?
 

In active mode, the client specifies which client-side port the data channel has been opened on, and the server starts the connection. In other words, the default FTP client communication, for historical reasons, is ACTIVE MODE: the client, once connected to the server, tells the server to open an extra port or ports locally, via which the overall FTP data transfer will occur. In the early days of networking, when the FTP protocol was developed, security was not such a big concern: networks usually did not have firewalls at all, and the FTP data transfer host was running just a single FTP server and nothing more. In those early days, when FTP was not even used over the Internet and FTP data transfers happened on local networks, this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever-increasing complexity of the Internet / networks, and the ever-tightening firewalls due to viruses and worms that try to own and exploit networks, creating unnecessary bulk loads, this has changed …

active-passive-ftp-explained-diagram
 

2. Installing and configure ProFTPD server Public ServerName

I've installed the server with the common cmd:

 

apt --yes install proftpd

 

And the only configuration changed in the default configuration file /etc/proftpd/proftpd.conf was
ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the many FTP vulnerability-scanning script kiddie crawlers from learning the exact OS version of the server, so this was changed to:

 

ServerName "MyServerHostname"

 

Though security through obscurity is generally considered bad practice, in this case it costs nothing and is still worth doing.
 

3. Create iptable firewall rules to allow ACTIVE FTP mode


But anyway, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 from incoming source ports in the range 1024:65535 (to enable ACTIVE FTP), on the firewall level with iptables INPUT and OUTPUT chain rules, like this:

 

iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT


Once the firewall has FTP Active / Passive connections enabled and the FTP server is listening, to test that all is properly configured, check the iptables rules and the FTP listener:
 

/sbin/iptables -L INPUT | grep ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN

 

4. Loading the nf_conntrack_ftp / nf_nat_ftp modules and net.netfilter.nf_conntrack_helper (for backward compatibility)


The next step of course was to add the necessary modules nf_conntrack_ftp and nf_nat_ftp, which make FTP properly forward ports with the respective firewall states on any of the above source ports usually allowed by firewalls. Note that the given port range 1024:65535 might be too liberal for paranoid sysadmins; if you are a security freak and ports are not otherwise filtered, you can use some smaller range such as 60000:65535.

 

Here it is worth saying, for sysadmins who haven't recently had the task of configuring a new (unencrypted) File Transfer Server (as nowadays Secure FTP is almost always used for file transfers for the sake of security), that they might be puzzled to find out the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP Active connections work, is nowadays substituted by nf_conntrack_ftp and nf_nat_ftp.

To make the 2 modules permanently loaded on next boot on Debian Linux they have to be added to /etc/modules

Here is how sample /etc/modules that loads the modules on next system boot looks like

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
softdog
nf_nat_ftp
nf_conntrack_ftp
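
To load the two modules immediately, without waiting for a reboot, and to confirm they are in place:

# modprobe nf_conntrack_ftp
# modprobe nf_nat_ftp
# lsmod | grep -E 'conntrack_ftp|nat_ftp'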


Next to say is that in newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_ftp behaviour changed, so simply loading the modules will not work, and if you do the stupidity of testing it with some FTP client (I used gFTP / ncftp from my Linux desktop), you are about to get FTP "No route to host" errors like:

 

Cannot create a data connection: No route to host

 

cannot-create-a-data-connection-no-route-to-host-linux-error-howto-fix


Sometimes, instead of the "No route to host" error, the error the FTP client returns might be:

 

227 entering passive mode FTP connect connection timed out error


To make the nf_nat_ftp module work on newer Linux kernels, you hence have to enable the backwards-compatibility kernel variable

 

 

/proc/sys/net/netfilter/nf_conntrack_helper

 

echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

 

To make it permanent, you can place the echo line in /etc/rc.local, if you have enabled that legacy single-file boot place as I do on servers (for how to enable rc.local on newer Linuxes, check here),

or alternatively set it via sysctl

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make change permanent (e.g. be loaded on next boot)

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf

 

5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD


Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default) and next to it add the following line:

 

PassivePorts 60000 65534
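
If the firewall from step 3 is restrictive, the chosen passive range must be opened as well, otherwise PASV data transfers will stall. A hedged one-liner matching the range above:

iptables -A INPUT -p tcp --dport 60000:65534 -m state --state NEW,ESTABLISHED -j ACCEPT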

 

Just for information, if you experience the error on PureFTPD instead of ProFTPD, the configuration value to set in /etc/pure-ftpd.conf is:
 

PassivePortRange 30000 35000


That's all folks. Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it: the client should now talk to the remote server in ACTIVE FTP mode as expected (the automatic fallback to passive mode will not be triggered anymore), and you will no longer get strange errors and connection failures in FTP clients such as gFTP.

Cheers 🙂

Tips & Tricks for Passing CompTIA Security+ Certification Exams with PrepAway Dumps

Monday, September 30th, 2019

comptia-security-certifications

Introduction

Many candidates aiming to land a career in IT choose to be certified by CompTIA. This reputable organization is rightly considered a leader on the IT market that offers various credentials, and its position is justified: CompTIA has been in service for more than 20 years and in that time has certified more than 2,000,000 IT specialists all over the world. It offers various vendor-neutral certifications for all tastes, Security+ and Project+ among many others. Their goal is to develop and advance the IT industry.

 

So, since CompTIA aims to serve as a voice of the Information technology industry, it keeps the pace with the rapidly changing IT field and makes its best to equip candidates with the most updated skills.

In this article, we'll cover one of the exams that leads you to the SECURITY+ certification, which allows you to build a successful career in IT and more.

 

Overview of CompTIA SECURITY+ Certification

The CompTIA SECURITY+ certification is one of the core credentials offered by this vendor; the rest include ITF+, Network+, and A+. To obtain the SECURITY+ credential, you need to pass two exams. They are either the 900-series (220-901 and 220-902) or the new ones, Core 1 (220-1001) and Core 2 (220-1002).

220-901 exam tests your skills to complete the tasks that deal with hardware and its troubleshooting, peripherals, as well as network connectivity problems. 220-902 test is all about working with operating systems (Windows, Linux, iOS, etc.). As for the new exams, the topics included refer to cloud computing, virtualization and expanded security.

After succeeding and getting the CompTIA SECURITY+ certification, you are eligible to work as a support specialist, field service technician or desktop support analyst and earn annually from $46K to $60K.

 

Details of 220-902 Exam

To gain the SECURITY+ certification, you have to undertake two exams separately. These are the 220-901 and 220-902 tests. The CompTIA 220-902 exam verifies your fundamental skills required to install and configure operating systems such as Windows, Apple OS X, Linux, iOS, and Android. The exam also covers security as well as the essentials of cloud computing and operational procedures.

 

Exam 220-902 consists of 90 questions which you have 90 minutes to complete. The question types included are multiple-choice, drag and drop, and performance-based. To gain the CompTIA SECURITY+ certification you must attain a score of 700. The exam fee is $219.


Exam objectives

The CompTIA SECURITY+ 220-902 exam objectives are based on a thorough review made by experts and professionals in the IT field. This is done to make the exam relevant and useful for entry-level IT specialists. The objectives are divided into five domains which are listed below:

  • Windows operating system (29%)
  • Other operating systems and technologies (12%)
  • Security (22%)
  • Software troubleshooting (24%)
  • Operational procedures (13%)

 

Importance of CompTIA 220-902 Exam

Passing the CompTIA 220-902 exam will equip you with the expertise to do the following;

  • Assemble components as per the customer requirements
  • Effective customer support
  • PCs installation, configuring and maintenance services
  • Basic understanding of networking and security
  • Skills in troubleshooting
  • Diagnose, resolve, and document normal hardware and software problems
  • Understanding the basics of Virtualization, desktop imaging, and deploying


CompTIA Training material

Since preparation is key to your success, start from checking the CompTIA official website. The vendor offers prep materials that you need to be adequately ready for your exam. The options include virtual labs, e-learning, video training, exam preps, study guides, and instructor-led training. All the options offered contain the study material to hone your skills in taking 220-902 exam and bring your certification goals to fruition. You simply need to choose the method/s that suit/s you best.

PrepAway – the best platform for your Exam preparation

 

In addition to the official study materials offered by CompTIA, you should hunt for the best resources searching the internet to sharpen your skills in order to pass 220-902 exam. For that, you can visit the PrepAway website. It’s a popular provider of the most updated exam dumps to your certification exam/s among test takers.

PrepAway offers free downloadable practice questions with answers, shared by recent exam takers. It means that you'll get the latest exam questions to practice on. Thus, for exam 220-902 you'll find a great variety of such dumps.

If you would like to get the practice questions and answers checked by IT experts, the 220-902 premium bundle has been designed for such a purpose. At an affordable price you can get a bundle that contains an updated premium file, a study guide, and a training course. The files offered at PrepAway are in vce format and can be opened with the help of the VCE Software. This modern exam simulator has been created by the Avanset team and helps you practice exam questions in an interesting and interactive manner. Moreover, the VCE testing engine resembles the real exam environment and equips you with skills to tackle any type of question, as well as the ability to manage your time wisely. Besides, you can track your results, define your weak areas and focus on them before taking the real 220-902 exam. You can download the files in VCE Player anytime you wish and practice on the go as well. Note that the download is limited to a maximum of 15 files to prevent commercial selling, which is prohibited.

So, the benefit of using the 220-902 premium bundle is that it contains an updated file verified by IT experts who have worked in this field for a long time and are thoroughly in their element. An extremely informative video course, along with a study guide, will complete your thorough preparation for the CompTIA 220-902 exam.

 

Conclusion

This article has taken you through the necessity of passing 220-902 exam and earning the SECURITY+ certification. You get a detailed picture of the benefits waiting for you afterwards. Since the proper preparation is key to your success, utilize the materials offered on the CompTIA website and PrepAway online platform, choose the study options that suit you best, and you won’t have a reason to fail. Wish you luck!
 

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging into file in GNU / Linux and FreeBSD is as simple as simply redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or by piping to the tee command:

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full-featured, robust logging bash shell script function that will run continuously as a background daemon process and output all its content to an external log file?
In the article below, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used to later store output to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/ – i.e. /var/log/syslog /var/log/user /var/log/daemon /var/log/mail etc.
 

1. Bash script function for logging write_log();


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';
write_log()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo $LOGTIME": $text";
      else
          # Append the current date to the log file name, one file per day
          LOG=$LOG_FILE.`date +%Y%m%d`
          touch $LOG
          if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}

 

  •  Using the function from within the script itself, or from outside it, to write out to the defined log file:

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – Small intro on what is Unix Named Pipe.


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Now that named pipes have been briefly explained for those hearing of them for the first time, it's time to say that a named pipe in UNIX / Linux is created with the mkfifo command, whose syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linuxes with older bash versions and older bash shell scripts used mknod instead.
The idea behind the logging script is to use a simple named pipe, read input from it, and use the date command to log the exact time each line was received. Here is the script:

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

# Recreate the named pipe on every start
if [ -p $named_pipe ]; then
    rm -f $named_pipe
fi
mkfifo $named_pipe

# Read lines from the pipe forever and prepend a timestamp to each
while true; do
    read LINE <$named_pipe
    echo $(date): "$LINE" >>$output_named_log
done


To get any other script output logged now, with a nice date-command-generated timestamp, write the output content to the logging buffer like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use Named pipes for logging, they perform better (are generally quicker) than Unix sockets.
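
Since the while loop above appends to the log file forever, it is worth pairing it with log rotation. A hedged logrotate sketch (these are standard logrotate directives; the path matches the one used above, and because the loop reopens the file on every echo, no copytruncate is needed):

# cat /etc/logrotate.d/output-named-log
/tmp/output-named-log.txt {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}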

 

3. Logging files to system log files with logger

 

If you need a quick one-off way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message, and it will log your specified output, by default, to the /var/log/syslog common logfile:

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can also print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"


The whole list of facility names and priority levels supported by logger (as taken from its current Linux manual) is as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the main Linux log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type your message (quoting is optional for simple text):

 

logger 'The reason to reboot the server Currently was a System security Update'

 

So what else is logger useful for?

In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs, with their respective priorities, work as expected. This is especially useful, as I've seen as a SAP consultant on Cloud-hosted OpenXEN-based servers, where logging to basic log files sometimes stops for months or even years due to syslog and syslog-ng hangs caused by other third-party scripts and programs.
To test all basic logging facilities and priorities on the system logs, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
# Loop over all standard syslog facilities and, for each one,
# log a test message at every priority level
for i in auth authpriv cron daemon kern lpr mail news syslog user uucp \
local0 local1 local2 local3 local4 local5 local6 local7
do
    for k in debug info notice warning err crit alert emerg
    do
        logger -p $i.$k "Test daemon message, facility $i priority $k"
    done
done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get an error like

logger: unknown facility name: ...

check and set the proper naming as described in the logger man page.

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained:

  •     exec 3>&1 4>&2
    Saves the file descriptors so they can be restored to whatever they were before the redirection, or used themselves to output to whatever they were before the redirect.
  •     trap 'exec 2>&4 1>&3' 0 1 2 3
    Restores the file descriptors on particular signals. Not generally necessary, since they should be restored when the sub-shell exits.
  •     exec 1>log.out 2>&1
    Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post "how can I fully log all bash script actions" (though unfortunately, on the latest Debian 10 Buster Linux, which comes pre-bundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly as expected; on older bash versions it works fine).

Sum it up


To shortly summarize, there are plenty of ways to do logging from a shell script: the logger command, a function, or a named pipe, the latter two being the most classic. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.