Posts Tagged ‘information’

How to monitor Haproxy Application server backends with Zabbix userparameter autodiscovery scripts

Friday, May 13th, 2022


Haproxy does quite a good job in High Availability setups where traffic towards multiple backend servers has to be redirected to whichever backend is currently available.

Lets say haproxy is configured to proxy traffic for App backend machine1 and App backend machine2.

Usually companies configure monitoring with something like Icinga or Zabbix / Grafana to keep track that the Application server is always up and running. Sometimes however, due to network problems (like a burned Network Switch / router, a firewall misconfiguration, or even an IP duplicate), it might happen that an Application server reports as reachable from some monitoring tool running on it, but is unreachable from the Haproxy server itself – e.g. Haproxy server -> App backend machine2 is unreachable while App backend machine1 is reachable. And even though haproxy will automatically switch the traffic from backend machine2 to App machine1, it is a good idea to monitor and be aware that one of the backends is offline from the Haproxy host.
In this article I'll show you how this is possible, by using 2 shell scripts and userparameter keys configured through the legacy zabbix autodiscovery feature.
For the setup to work you will need, as a minimum, a Zabbix server installation of version 5.0 or higher.

1. Create the required haproxy_discovery.sh and haproxy_stats.sh scripts

You will have to install the two scripts under some location; for clarity we will put them under /etc/zabbix/scripts

[root@haproxy-server1 ]# mkdir /etc/zabbix/scripts

[root@haproxy-server1 scripts]# vim haproxy_discovery.sh 
#!/bin/bash
#
# Get list of Frontends and Backends from HAPROXY
# Example: ./haproxy_discovery.sh [/var/lib/haproxy/stats] FRONTEND|BACKEND|SERVERS
# First argument is optional and should be used to set location of your HAPROXY socket
# Second argument should be either FRONTEND, BACKEND or SERVERS; it will default to FRONTEND if not set
#
# !! Make sure the user running this script has Read/Write permissions to that socket !!
#
## haproxy.cfg snippet
#  global
#  stats socket /var/lib/haproxy/stats  mode 666 level admin

HAPROXY_SOCK="/var/run/haproxy/haproxy.sock"
[ -n "$1" ] && echo $1 | grep -q ^/ && HAPROXY_SOCK="$(echo $1 | tr -d '\040\011\012\015')"

if [[ "$1" =~ (25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?):[0-9]{1,5} ]];
then
    HAPROXY_STATS_IP="$1"
    QUERYING_METHOD="TCP"
fi

QUERYING_METHOD="${QUERYING_METHOD:-SOCKET}"

query_stats() {
    if [[ ${QUERYING_METHOD} == "SOCKET" ]]; then
        echo "show stat" | socat ${HAPROXY_SOCK} stdio 2>/dev/null
    elif [[ ${QUERYING_METHOD} == "TCP" ]]; then
        echo "show stat" | nc ${HAPROXY_STATS_IP//:/ } 2>/dev/null
    fi
}

get_stats() {
        echo "$(query_stats)" | grep -v "^#"
}

[ -n "$2" ] && shift 1
case $1 in
        B*) END="BACKEND" ;;
        F*) END="FRONTEND" ;;
        S*)
                for backend in $(get_stats | grep BACKEND | cut -d, -f1 | uniq); do
                        for server in $(get_stats | grep "^${backend}," | grep -v BACKEND | grep -v FRONTEND | cut -d, -f2); do
                                serverlist="$serverlist,\n"'\t\t{\n\t\t\t"{#BACKEND_NAME}":"'$backend'",\n\t\t\t"{#SERVER_NAME}":"'$server'"}'
                        done
                done
                echo -e '{\n\t"data":[\n'${serverlist#,}']}'
                exit 0
        ;;
        *) END="FRONTEND" ;;
esac

for frontend in $(get_stats | grep "$END" | cut -d, -f1 | uniq); do
    felist="$felist,\n"'\t\t{\n\t\t\t"{#'${END}'_NAME}":"'$frontend'"}'
done
echo -e '{\n\t"data":[\n'${felist#,}']}'

 

[root@haproxy-server1 scripts]# vim haproxy_stats.sh 
#!/bin/bash
set -o pipefail

if [[ "$1" = /* ]]
then
  HAPROXY_SOCKET="$1"
  shift 1
else
  if [[ "$1" =~ (25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?):[0-9]{1,5} ]];
  then
    HAPROXY_STATS_IP="$1"
    QUERYING_METHOD="TCP"
    shift 1
  fi
fi

pxname="$1"
svname="$2"
stat="$3"

DEBUG=${DEBUG:-0}
HAPROXY_SOCKET="${HAPROXY_SOCKET:-/var/run/haproxy/haproxy.sock}"
QUERYING_METHOD="${QUERYING_METHOD:-SOCKET}"
CACHE_STATS_FILEPATH="${CACHE_STATS_FILEPATH:-/var/tmp/haproxy_stats.cache}"
CACHE_STATS_EXPIRATION="${CACHE_STATS_EXPIRATION:-1}" # in minutes
CACHE_INFO_FILEPATH="${CACHE_INFO_FILEPATH:-/var/tmp/haproxy_info.cache}" ## unused
CACHE_INFO_EXPIRATION="${CACHE_INFO_EXPIRATION:-1}" # in minutes ## unused
GET_STATS=${GET_STATS:-1} # set to 0 when you update the stats cache outside of the script
SOCAT_BIN="$(which socat)"
NC_BIN="$(which nc)"
FLOCK_BIN="$(which flock)"
FLOCK_WAIT=15 # maximum number of seconds that "flock" waits for acquiring a lock
FLOCK_SUFFIX='.lock'
CUR_TIMESTAMP="$(date '+%s')"

debug() {
  [ "${DEBUG}" -eq 1 ] && echo "DEBUG: $@" >&2 || true
}

debug "SOCAT_BIN        => $SOCAT_BIN"
debug "NC_BIN           => $NC_BIN"
debug "FLOCK_BIN        => $FLOCK_BIN"
debug "FLOCK_WAIT       => $FLOCK_WAIT seconds"
debug "CACHE_FILEPATH   => $CACHE_FILEPATH"
debug "CACHE_EXPIRATION => $CACHE_EXPIRATION minutes"
debug "HAPROXY_SOCKET   => $HAPROXY_SOCKET"
debug "pxname   => $pxname"
debug "svname   => $svname"
debug "stat     => $stat"

# check if socat (or nc, for the TCP querying method) is available in path
if [ "$GET_STATS" -eq 1 ] && [[ $QUERYING_METHOD == "SOCKET" && -z "$SOCAT_BIN" ]] || [[ $QUERYING_METHOD == "TCP" &&  -z "$NC_BIN" ]]
then
  echo 'ERROR: cannot find socat (or nc) binary'
  exit 126
fi

# if we are getting stats:
#   check if we can write to stats cache file, if it exists
#     or cache file path, if it does not exist
#   check if HAPROXY socket is writable
# if we are NOT getting stats:
#   check if we can read the stats cache file
if [ "$GET_STATS" -eq 1 ]
then
  if [ -e "$CACHE_STATS_FILEPATH" ] && [ ! -w "$CACHE_STATS_FILEPATH" ]
  then
    echo 'ERROR: stats cache file exists, but is not writable'
    exit 126
  elif [ ! -w "${CACHE_STATS_FILEPATH%/*}" ]
  then
    echo 'ERROR: stats cache file path is not writable'
    exit 126
  fi
  if [[ $QUERYING_METHOD == "SOCKET" && ! -w $HAPROXY_SOCKET ]]
  then
    echo "ERROR: haproxy socket is not writable"
    exit 126
  fi
elif [ ! -r "$CACHE_STATS_FILEPATH" ]
then
  echo 'ERROR: cannot read stats cache file'
  exit 126
fi

# index:name:default
MAP="
1:pxname:@
2:svname:@
3:qcur:9999999999
4:qmax:0
5:scur:9999999999
6:smax:0
7:slim:0
8:stot:@
9:bin:9999999999
10:bout:9999999999
11:dreq:9999999999
12:dresp:9999999999
13:ereq:9999999999
14:econ:9999999999
15:eresp:9999999999
16:wretr:9999999999
17:wredis:9999999999
18:status:UNK
19:weight:9999999999
20:act:9999999999
21:bck:9999999999
22:chkfail:9999999999
23:chkdown:9999999999
24:lastchg:9999999999
25:downtime:0
26:qlimit:0
27:pid:@
28:iid:@
29:sid:@
30:throttle:9999999999
31:lbtot:9999999999
32:tracked:9999999999
33:type:9999999999
34:rate:9999999999
35:rate_lim:@
36:rate_max:@
37:check_status:@
38:check_code:@
39:check_duration:9999999999
40:hrsp_1xx:@
41:hrsp_2xx:@
42:hrsp_3xx:@
43:hrsp_4xx:@
44:hrsp_5xx:@
45:hrsp_other:@
46:hanafail:@
47:req_rate:9999999999
48:req_rate_max:@
49:req_tot:9999999999
50:cli_abrt:9999999999
51:srv_abrt:9999999999
52:comp_in:0
53:comp_out:0
54:comp_byp:0
55:comp_rsp:0
56:lastsess:9999999999
57:last_chk:@
58:last_agt:@
59:qtime:0
60:ctime:0
61:rtime:0
62:ttime:0
"

_STAT=$(echo -e "$MAP" | grep :${stat}:)
_INDEX=${_STAT%%:*}
_DEFAULT=${_STAT##*:}

debug "_STAT    => $_STAT"
debug "_INDEX   => $_INDEX"
debug "_DEFAULT => $_DEFAULT"

# check if requested stat is supported
if [ -z "${_STAT}" ]
then
  echo "ERROR: $stat is unsupported"
  exit 127
fi

# method to retrieve data from haproxy stats
# usage:
# query_stats "show stat"
query_stats() {
    if [[ ${QUERYING_METHOD} == "SOCKET" ]]; then
        echo $1 | socat ${HAPROXY_SOCKET} stdio 2>/dev/null
    elif [[ ${QUERYING_METHOD} == "TCP" ]]; then
        echo $1 | nc ${HAPROXY_STATS_IP//:/ } 2>/dev/null
    fi
}

# a generic cache management function, that relies on 'flock'
check_cache() {
  local cache_type="${1}"
  local cache_filepath="${2}"
  local cache_expiration="${3}"  
  local cache_filemtime
  cache_filemtime=$(stat -c '%Y' "${cache_filepath}" 2> /dev/null)
  if [ $((cache_filemtime+60*cache_expiration)) -ge ${CUR_TIMESTAMP} ]
  then
    debug "${cache_type} file found, results are at most ${cache_expiration} minutes stale.."
  elif "${FLOCK_BIN}" –exclusive –wait "${FLOCK_WAIT}" 200
  then
    cache_filemtime=$(stat -c '%Y' "${cache_filepath}" 2> /dev/null)
    if [ $((cache_filemtime+60*cache_expiration)) -ge ${CUR_TIMESTAMP} ]
    then
      debug "${cache_type} file found, results have just been updated by another process.."
    else
      debug "no ${cache_type} file found, querying haproxy"
      query_stats "show ${cache_type}" > "${cache_filepath}"
    fi
  fi 200> "${cache_filepath}${FLOCK_SUFFIX}"
}

# generate stats cache file if needed
get_stats() {
  check_cache 'stat' "${CACHE_STATS_FILEPATH}" ${CACHE_STATS_EXPIRATION}
}

# generate info cache file
## unused at the moment
get_info() {
  check_cache 'info' "${CACHE_INFO_FILEPATH}" ${CACHE_INFO_EXPIRATION}
}

# get requested stat from cache file using INDEX offset defined in MAP
# return default value if stat is ""
get() {
  # $1: pxname/svname
  local _res="$("${FLOCK_BIN}" --shared --wait "${FLOCK_WAIT}" "${CACHE_STATS_FILEPATH}${FLOCK_SUFFIX}" grep $1 "${CACHE_STATS_FILEPATH}")"
  if [ -z "${_res}" ]
  then
    echo "ERROR: bad $pxname/$svname"
    exit 127
  fi
  _res="$(echo $_res | cut -d, -f ${_INDEX})"
  if [ -z "${_res}" ] && [[ "${_DEFAULT}" != "@" ]]
  then
    echo "${_DEFAULT}"  
  else
    echo "${_res}"
  fi
}

# not sure why we'd need to split on backslash
# left commented out as an example to override default get() method
# status() {
#   get "^${pxname},${svnamem}," $stat | cut -d\  -f1
# }

# this allows for overriding default method of getting stats
# name a function by stat name for additional processing, custom returns, etc.
if type get_${stat} >/dev/null 2>&1
then
  debug "found custom query function"
  get_stats && get_${stat}
else
  debug "using default get() method"
  get_stats && get "^${pxname},${svname}," ${stat}
fi


! NB ! Substitute /var/run/haproxy/haproxy.sock in both scripts with your haproxy socket location
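Also make sure both scripts are executable, otherwise the agent checks will return nothing:

[root@haproxy-server1 scripts]# chmod +x /etc/zabbix/scripts/haproxy_discovery.sh /etc/zabbix/scripts/haproxy_stats.sh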

You can download the haproxy_stats.sh here and haproxy_discovery.sh here

2. Create the userparameter_haproxy_backend.conf

[root@haproxy-server1 zabbix_agentd.d]# cat userparameter_haproxy_backend.conf 
#
# Discovery Rule
#

# HAProxy Frontend, Backend and Server Discovery rules
UserParameter=haproxy.list.discovery[*],sudo /etc/zabbix/scripts/haproxy_discovery.sh SERVER
UserParameter=haproxy.stats[*],sudo /etc/zabbix/scripts/haproxy_stats.sh  $2 $3 $4

# support legacy way

UserParameter=haproxy.stat.downtime[*],sudo /etc/zabbix/scripts/haproxy_stats.sh  $2 $3 downtime

UserParameter=haproxy.stat.status[*],sudo /etc/zabbix/scripts/haproxy_stats.sh  $2 $3 status

UserParameter=haproxy.stat.last_chk[*],sudo /etc/zabbix/scripts/haproxy_stats.sh  $2 $3 last_chk
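
As the keys above invoke the scripts through sudo, the user the agent runs as needs a passwordless sudo rule for them; a minimal sketch, assuming the agent runs as the usual zabbix user:

# /etc/sudoers.d/zabbix
zabbix ALL=(ALL) NOPASSWD: /etc/zabbix/scripts/haproxy_discovery.sh, /etc/zabbix/scripts/haproxy_stats.sh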

 

3. Create a new simple template for the Application backend monitoring and link it to the monitored host

[Screenshot: create a configuration template for the backend monitoring]

[Screenshot: define the template macros for the backend monitoring]

 

Go to Configuration -> Hosts (find the host) and Link the template to it


4. Restart zabbix-agent and check the autodiscovery data is arriving in the Zabbix Server

[root@haproxy-server1 ]# systemctl restart zabbix-agent
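
Before digging into the web interface you can test the new keys locally with the agent's test mode (the backend / server names below are just example placeholders):

[root@haproxy-server1 ]# zabbix_agentd -t 'haproxy.list.discovery[]'
[root@haproxy-server1 ]# zabbix_agentd -t 'haproxy.stat.status[,backend1,server1]'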


Check in zabbix that the userparameter data arrives; it should not be required to add any Items or Triggers manually, as the zabbix autodiscovery feature should automatically create in the server everything required for the backend data to flow in.

To view the arriving data go to the Zabbix config menus:

Configuration -> Hosts -> Hosts: (look up the haproxy-server1 hostname)

[Screenshot: the Zabbix discovery rules]

The autodiscovery should have automatically created the following prototypes

[Screenshot: the automatically created Zabbix item prototypes]
Now if you look inside Latest Data for the Host you should find some information like:

HAProxy Backend [backend1] (3 Items)
        
HAProxy Server [backend-name_APP/server1]: Connection Response
2022-05-13 14:15:04            History
        
HAProxy Server [backend-name/server2]: Downtime (hh:mm:ss)
2022-05-13 14:13:57    20:30:42        History
        
HAProxy Server [bk_name-APP/server1]: Status
2022-05-13 14:14:25    Up (1)        Graph
        ccnrlb01    HAProxy Backend [bk_CCNR_QA_ZVT] (3 Items)
        
HAProxy Server [bk_name-APP3/server1]: Connection Response
2022-05-13 14:15:05            History
        
HAProxy Server [bk_name-APP3/server1]: Downtime (hh:mm:ss)
2022-05-13 14:14:00    20:55:20        History
        
HAProxy Server [bk_name-APP3/server2]: Status
2022-05-13 14:15:08    Up (1)

To get alerting in case a backend is down (which you usually want), the only thing left is to configure an Action to deliver alerts to some email address.

How to redirect TCP port traffic from Internet Public IP host to remote local LAN server, Redirect traffic for Apache Webserver, MySQL, or other TCP service to remote host

Thursday, September 23rd, 2021

 

 


 

 

1. Use the good old times rinetd – internet “redirection server” service


Perhaps many younger people wouldn't remember rinetd; its use was pretty common on old Linuxes in the age when iptables was not yet on the scene and its predecessor ipchains was in widespread use.
With the rise of mass internet, rinetd started losing its popularity: the service was exposed to the outer world, and due to security holes and the many exploits circulating the script kiddie communities, many servers got hacked, "pwned" in the jargon of the script kiddies.

rinetd is still available even in modern Linuxes, and over the last years I have not heard of any severe security concerns regarding it; but the old paranoia, and it being set to oblivion, perhaps make it still an unpopular solution for port redirects today in year 2021.
However, for a local secured DMZ LAN I can tell you that it is mostly useful, and I choose to use it myself every now and then due to its simplicity to configure and use.
rinetd is pretty standard among Unixes and is also available in old SunOS / Solaris and the BSDs, and pretty much everything else on the Unix scene.

Below is excerpt from 'man rinetd':

 

DESCRIPTION
     rinetd redirects TCP connections from one IP address and port to another. rinetd is a single-process server which handles any number of connections to the address/port pairs
     specified in the file /etc/rinetd.conf.  Since rinetd runs as a single process using nonblocking I/O, it is able to redirect a large number of connections without a severe
     impact on the machine. This makes it practical to run TCP services on machines inside an IP masquerading firewall. rinetd does not redirect FTP, because FTP requires more than
     one socket.

     rinetd is typically launched at boot time, using the following syntax:

          /usr/sbin/rinetd

     The configuration file is found in the file /etc/rinetd.conf, unless another file is specified using the -c command line option.

To use rinetd on any Linux distro you have to install and enable it with apt or yum as usual. For example, on my Debian GNU / Linux home machine, to use it I had to install the .deb package and enable and start it via systemd:

 

server:~# apt install --yes rinetd

server:~#  systemctl enable rinetd


server:~#  systemctl start rinetd


server:~#  systemctl status rinetd
● rinetd.service
   Loaded: loaded (/etc/init.d/rinetd; generated)
   Active: active (running) since Tue 2021-09-21 10:48:20 EEST; 2 days ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 1 (limit: 4915)
   Memory: 892.0K
   CGroup: /system.slice/rinetd.service
           └─1364 /usr/sbin/rinetd


rinetd does the traffic redirection via a separate daemon process; once you have the service up, check that the daemon is up as well.

root@server:/home/hipo# ps -ef|grep -i rinet
root       359     1  0 16:10 ?        00:00:00 /usr/sbin/rinetd
root       824 26430  0 16:10 pts/0    00:00:00 grep -i rinet

+ Configuring a new port redirect with rinetd

 

It is pretty straightforward: everything is handled via one single configuration file – /etc/rinetd.conf

The format (syntax) of a forwarding rule is as follows:

     [bindaddress] [bindport] [connectaddress] [connectport]


Besides that, rinetd could be used as a primitive firewall substitute for iptables; the general syntax to allow or deny an IP address uses the (allow, deny) keywords:
 

allow 192.168.2.*
deny 192.168.2.1?


To enable logging to an external file, you'll have to include in the configuration:

# logging information
logfile /var/log/rinetd.log

Here is an example rinetd.conf configuration: it redirects TCP MySQL 3306, nginx on ports 80 / 443, a second web service frontend for ILO reachable via port 8888, and traffic from the external IP to a local IP SMTP server.

 

#
# this is the configuration file for rinetd, the internet redirection server
#
# you may specify global allow and deny rules here
# only ip addresses are matched, hostnames cannot be specified here
# the wildcards you may use are * and ?
#
# allow 192.168.2.*
# deny 192.168.2.1?


#
# forwarding rules come here
#
# you may specify allow and deny rules after a specific forwarding rule
# to apply to only that forwarding rule
#
# bindadress    bindport  connectaddress  connectport


# logging information
logfile /var/log/rinetd.log
83.228.93.76        80            192.168.0.20       80
192.168.0.2        3306            192.168.0.19        3306
83.228.93.76        443            192.168.0.20       443
# enable for access to ILO
83.228.93.76        8888            192.168.1.25 443

127.0.0.1    25    192.168.0.19    25


83.228.93.76 is my external ( Public ) internet IP address; 192.168.0.20, 192.168.0.19 and 192.168.1.25 are the DMZ-ed LAN internal IPs running the various services.
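
Note that after changing /etc/rinetd.conf the simplest way to apply the new forwarding rules is to restart the service:

server:~# systemctl restart rinetd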

To identify the services for which rinetd is properly configured to redirect / forward traffic, you can check with netstat or the newer ss command
 

root@server:/home/hipo# netstat -tap|grep -i rinet
tcp        0      0 www.pc-freak.net:8888   0.0.0.0:*               LISTEN      13511/rinetd      
tcp        0      0 www.pc-freak.n:http-alt 0.0.0.0:*               LISTEN      21176/rinetd        
tcp        0      0 www.pc-freak.net:443   0.0.0.0:*               LISTEN      21176/rinetd      

 

+ Using rinetd to redirect External interface IP to loopback's port (127.0.0.1)

 

If you have the need to redirect an externally connectable living service, be it apache, mysql / privoxy / squid or whatever, rinetd is perhaps the tool of choice (especially since doing this kind of redirect towards the loopback with iptables alone is not trivial).

If you want traffic reaching the machine's external IP 11.5.8.1 on TCP ports 1083 and 1888 to be forwarded to services listening only on Linux's loopback interface (localhost), use the below config

# bindadress    bindport  connectaddress  connectport
11.5.8.1        1083            127.0.0.1       1083
11.5.8.1        1888            127.0.0.1       1888

 

For a quick and dirty solution to redirect traffic rinetd is very useful; however, you'll have to keep in mind that if you redirect traffic for tens of thousands of connections constantly originating from the internet, you might end up with some disconnects, as well as notice an increased rinetd CPU use as the number of forwarded connections grows.

 

2. Redirect TCP / IP port using DNAT iptables firewall rules

 

Lets say you have some proxy, webservice or whatever service running on port 5900 to be redirected with iptables.
The easiest legacy way is to simply add the redirection rules to /etc/rc.local. In newer Linuxes rc.local is no longer enabled by default, so if you decide to use it,
you'll have to enable rc.local first; I've written earlier a short article on how to enable rc.local on newer Debian, Fedora, CentOS

 

# redirect 5900 TCP service 
sysctl -w net.ipv4.conf.all.route_localnet=1
iptables -t nat -I PREROUTING -p tcp --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -I OUTPUT -p tcp -o lo --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 5900 -j DNAT --to-destination 192.168.1.8:5900
iptables -t nat -I OUTPUT --source 0/0 --destination 0/0 -p tcp --dport 5900 -j REDIRECT --to-ports 5900
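
To double check the rules really landed in the nat table, list it with line numbers:

iptables -t nat -L -n -v --line-numbers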

 

Here are another two example rules, which redirect port 2208 (which has a bind listener for SSH configured on the internal local LAN host 192.168.0.209:2208) from the external Internet IP address (XXX.YYY.ZZZ.XYZ):
 

# Port redirect for SSH to VM on openxen internal Local lan server 192.168.0.209 
-A PREROUTING  -p tcp --dport 2208 -j DNAT --to-destination 192.168.0.209:2208
-A POSTROUTING -p tcp --dst 192.168.0.209 --dport 2208 -j SNAT --to-source 83.228.93.76
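
The above two rules are in iptables-save format for the nat table; applied by hand they would look like the lines below. Remember that DNAT towards another LAN host also requires IP forwarding to be enabled:

iptables -t nat -A PREROUTING  -p tcp --dport 2208 -j DNAT --to-destination 192.168.0.209:2208
iptables -t nat -A POSTROUTING -p tcp --dst 192.168.0.209 --dport 2208 -j SNAT --to-source 83.228.93.76
sysctl -w net.ipv4.ip_forward=1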

 

3. Redirect TCP traffic connections with redir tool

 

If you are looking for an easy, straightforward way to redirect TCP traffic, installing and using redir (a ready compiled program) might be a good idea.


root@server:~# apt-cache show redir|grep -i desc -A5 -B5
Version: 3.2-1
Installed-Size: 60
Maintainer: Lucas Kanashiro <kanashiro@debian.org>
Architecture: amd64
Depends: libc6 (>= 2.15)
Description-en: Redirect TCP connections
 It can run under inetd or stand alone (in which case it handles multiple
 connections).  It is 8 bit clean, not limited to line mode, is small and
 light. Supports transparency, FTP redirects, http proxying, NAT and bandwidth
 limiting.
 .
 redir is all you need to redirect traffic across firewalls that authenticate
 based on an IP address etc. No need for the firewall toolkit. The
 functionality of inetd/tcpd and "redir" will allow you to do everything you
 need without screwy telnet/ftp etc gateways. (I assume you are running IP
 Masquerading of course.)

Description-md5: 2089a3403d126a5a0bcf29b22b68406d
Homepage: https://github.com/troglobit/redir
Tag: interface::daemon, network::server, network::service, role::program,
 use::proxying
Section: net
Priority: optional

 

 

server:~# apt-get install --yes redir

Here is a short description taken from its man page 'man redir'

 

DESCRIPTION
     redir redirects TCP connections coming in on a local port, [SRC]:PORT, to a specified address/port combination, [DST]:PORT.  Both the SRC and DST arguments can be left out,
     redir will then use 0.0.0.0.

     redir can be run either from inetd or as a standalone daemon.  In --inetd mode the listening SRC:PORT combo is handled by another process, usually inetd, and a connected
     socket is handed over to redir via stdin.  Hence only [DST]:PORT is required in --inetd mode.  In standalone mode redir can run either in the foreground, -n, or in the
     background, detached like a proper UNIX daemon.  This is the default.  When running in the foreground log messages are also printed to stderr, unless the -s flag is given.

     Depending on how redir was compiled, not all options may be available.

 

+ Use redir to redirect TCP traffic one time

 

Lets say you have MySQL running on a remote machine on some internal or external IP address, lets say 192.168.0.200, and you want to redirect all traffic towards it through the machine 192.168.0.50, where you run your Apache Webserver, which you want to configure to reach MySQL as localhost TCP port 3306.

This assumes there are no firewall restrictions between Host A (192.168.0.50) and Host B (192.168.0.200), i.e. connectivity on TCP/IP port 3306 between the two machines is already permitted.

To open redirection from localhost on 192.168.0.50 -> 192.168.0.200:

 

server:~# redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306

 

If you need other third party hosts to additionally reach 192.168.0.200 via 192.168.0.50 on TCP 3306:

root@server:~# redir --laddr=192.168.0.50 --lport=3306 --caddr=192.168.0.200 --cport=3306


Of course, once you close the /dev/tty or /dev/vty console, the connection redirect will be cancelled.

 

+ Making TCP port forwarding from Host A to Host B permanent


One solution to make the redir setup rules permanent is to run it under inetd (see the --inetd mode above) or to simply background the process; nevertheless, I prefer to use GNU Screen instead.
If you don't know it, screen is a virtual console emulation manager with VT100/ANSI terminal emulation; if you don't have screen present on the host, install it with whatever Linux OS package manager is present and run:

 

root@server:~# screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306'

 

That runs it in a detached screen session which you can later connect to; if you want, you can make redir also log connections via syslog with the ( -s ) option.

I also found it useful to be able to track in real time what's going on with the opened redirect socket, by changing redir's log level.

The accepted log levels are:

 

  -l, --loglevel=LEVEL
             Set log level: none, err, notice, info, debug.  Default is notice.

 

root@server:/ # screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3308 --caddr=192.168.0.200 --cport=3306 -l debug'

 

To test that connectivity works as expected, use telnet:
 

root@server:/ # telnet localhost 3308
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
g
5.5.5-10.3.29-MariaDB-0+deb10u1-log�+c2nWG>B���o+#ly=bT^]79mysql_native_password

6#HY000Proxy header is not accepted from 192.168.0.19 Connection closed by foreign host.

Once you attach to the screen session with

 

root@server:/home #  screen -r

 

you will see the connectivity attempt from localhost logged:
 

redir[10640]: listening on 127.0.0.1:3306
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10793]: peer IP is 127.0.0.1
redir[10793]: peer socket is 25592
redir[10793]: target IP address is 192.168.0.200
redir[10793]: target port is 3306
redir[10793]: Connecting 127.0.0.1:25592 to 127.0.0.1:3306
redir[10793]: Entering copyloop() – timeout is 0
redir[10793]: Disconnect after 1 sec, 165 bytes in, 4 bytes out

The downside of using redir is that the redirection is handled by a separate process hanging in the process list all the time; also, the redirection of incoming connections might be at least about 30% slower compared to a software (firewall) redirect such as iptables, especially if combined with something like kernel IP sets ( ipset ). If you hear of ipset for the first time and you wonder what it is, below is a short package description.

 

root@server:/root# apt-cache show ipset|grep -i description -A13 -B5
Maintainer: Debian Netfilter Packaging Team <pkg-netfilter-team@lists.alioth.debian.org>
Architecture: amd64
Provides: ipset-6.38
Depends: iptables, libc6 (>= 2.4), libipset11 (>= 6.38-1~)
Breaks: xtables-addons-common (<< 1.41~)
Description-en: administration tool for kernel IP sets
 IP sets are a framework inside the Linux 2.4.x and 2.6.x kernel which can be
 administered by the ipset(8) utility. Depending on the type, currently an
 IP set may store IP addresses, (TCP/UDP) port numbers or IP addresses with
 MAC addresses in a  way which ensures lightning speed when matching an
 entry against a set.
 .
 If you want to
 .
  * store multiple IP addresses or port numbers and match against the
    entire collection using a single iptables rule.
  * dynamically update iptables rules against IP addresses or ports without
    performance penalty.
  * express complex IP address and ports based rulesets with a single
    iptables rule and benefit from the speed of IP sets.

 .
 then IP sets may be the proper tool for you.
Description-md5: d87e199641d9d6fbb0e52a65cf412bde
Homepage: http://ipset.netfilter.org/
Tag: implemented-in::c, role::program
Section: net
Priority: optional
Filename: pool/main/i/ipset/ipset_6.38-1.2_amd64.deb
Size: 50684
MD5sum: 095760c5db23552a9ae180bd58bc8efb
SHA256: 2e2d1c3d494fe32755324bf040ffcb614cf180327736c22168b4ddf51d462522

Reinstall all Debian packages with a copy of apt deb package list from another working Debian Linux installation

Wednesday, July 29th, 2020


Few days ago, in a hurry in the small hours of the night, I did something extremely stupid. Wanting to move a .tar.gz binary copy of a qmail installation to /var/lib/qmail with all the dependent qmail items, instead of extracting it to the admin user's home directory (/root), I extracted it to the main Operating system root directory ( / ).
Not noticing this, I quickly executed rm -rf var with the idea to delete the whole directory tree under /root/var; just 3 seconds later I realized I was issuing rm -rf var in the wrong location, WITH the root user !!!! Scared of what I had done, I quickly pressed CTRL+C to immediately cancel the deletion operation on my /var.


But as you can guess, since the machine has a Solid State Drive, and SSDs are much faster in I/O operations than the classical ATA / SATA disks, I was not quick enough to cancel the operation, and I noticed some part of my /var had already been R.I.P-ped into the heaven of directories.

This was of course upsetting, so for a while I rethought the situation for ideas on what I could do to recover my system ASAP!!!, and I had the idea, of course, to try to reinstall all my installed .deb Debian packages to restore the system as close to normal as possible before my stupid mistake.

Imagine my unpleasant surprise when I realized that dpkg, and respectively the apt-get / apt / aptitude package management tools, could no longer handle packages, as Debian Linux's package dependency database had been damaged due to the missing dpkg directory

 

/var/lib/dpkg 

 

Oh man, that was unpleasant, especially since I had installed plenty of custom stuff on my Mate based desktop, and generally reinstalling it all and updating the system to the latest Debian security updates etc. would be a time consuming and painful process I wanted to omit.

So of course the logical thing to do here was to try to somehow recover a copy of the /var/lib/dpkg database, if that was possible. That led me to look for a way to recover /var/lib/dpkg from a backup; but since I did not maintain any backup copy of my OS anywhere, that was not really possible, so I wondered whether dpkg keeps some kind of database backup somewhere in case something goes wrong with its database.
This led me to a nice Ubuntu thread which pointed me to part of my root rm -rf dpkg DB disaster recovery solution.
Luckily the .deb package management creators have thought about situations similar to mine, and give the user a restore point for a damaged /var/lib/dpkg database:

/var/lib/dpkg is periodically backed up in /var/backups

A typical /var/lib/dpkg on Ubuntu and Debian Linux looks like so:
 

hipo@jeremiah:/var/backups$ ls -l /var/lib/dpkg
total 12572
drwxr-xr-x 2 root root    4096 Jul 26 03:22 alternatives
-rw-r--r-- 1 root root      11 Oct 14  2017 arch
-rw-r--r-- 1 root root 2199402 Jul 25 20:04 available
-rw-r--r-- 1 root root 2199402 Oct 19  2017 available-old
-rw-r--r-- 1 root root       8 Sep  6  2012 cmethopt
-rw-r--r-- 1 root root    1337 Jul 26 01:39 diversions
-rw-r--r-- 1 root root    1223 Jul 26 01:39 diversions-old
drwxr-xr-x 2 root root  679936 Jul 28 14:17 info
-rw-r----- 1 root root       0 Jul 28 14:17 lock
-rw-r----- 1 root root       0 Jul 26 03:00 lock-frontend
drwxr-xr-x 2 root root    4096 Sep 17  2012 parts
-rw-r--r-- 1 root root    1011 Jul 25 23:59 statoverride
-rw-r--r-- 1 root root     965 Jul 25 23:59 statoverride-old
-rw-r--r-- 1 root root 3873710 Jul 28 14:17 status
-rw-r--r-- 1 root root 3873712 Jul 28 14:17 status-old
drwxr-xr-x 2 root root    4096 Jul 26 03:22 triggers
drwxr-xr-x 2 root root    4096 Jul 28 14:17 updates

Before proceeding with the radical stuff of moving /var/lib/dpkg/info over from another machine to the mistakenly wiped /var, I tried to recover with the well known:

  • extundelete
  • foremost
  • recover
  • ext4magic
  • ext3grep
  • gddrescue
  • ddrescue
  • myrescue
  • testdisk
  • photorec

Linux file deletion recovery tools, booted from a USB stick loaded with a number of LiveCD distributions, i.e. I tested recovery with:

  • Debian LiveCD
  • Ubuntu LiveCD
  • KNOPPIX
  • SystemRescueCD
  • Trinity Rescue Kit
  • Ultimate Boot CD


but unfortunately none of them could recover the deleted files … 

The reason why the standard file recovery tools could not recover anything?

My assumption is this: after I had done the rm -rf var; from the sysroot, I issued the sync command ( if you haven't used it, check out man sync – it synchronizes cached writes to persistent storage ) and did a restart from the power-off PC button. This should have worked, as I have recovered like that in the past on a normal Sys V system with a classical block filesystem without a journal, such as EXT2. However, as the machine runs an EXT4 filesystem with a journal, this did not work, perhaps because something was not updated properly in /lib/systemd/systemd-journal; that led to the situation where all recently deleted files were totally unrecoverable.

1. First step was to restore the directory skeleton of /var/lib/dpkg

# mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}

 

2. Recover missing /var/lib/dpkg/status  file

The main file that gives information to dpkg of the existing packages and their statuses on a Debian based systems is /var/lib/dpkg/status

# cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
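
dpkg keeps several rotated copies of its status database under /var/backups, so if dpkg.status.0 happens to be damaged too, you can fall back to an older (gzipped) rotation:

# ls -l /var/backups/dpkg.status*
# zcat /var/backups/dpkg.status.1.gz > /var/lib/dpkg/status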

 

3. Reinstall dpkg package manager to make package management working again

Say a warm prayer to the Merciful God ! and do:

# apt-get download dpkg
# dpkg -i dpkg*.deb

 

4. Reinstall the base-files .deb package, which provides the basis of a Debian system

Hopefully everything will be okay and your dpkg / apt pair will be in a normal working state; the next step is to:

# apt-get download base-files
# dpkg -i base-files*.deb

 

5. Do a package sanity and consistency check and try to update OS package list

Check whether packages have been installed only partially on your system, or have missing, wrong or obsolete control data or files; dpkg should suggest what to do with them to get them fixed.

# dpkg --audit

Then resynchronize (fetch) the package index files from their sources described in /etc/apt/sources.list

# apt-get update


Do an apt db consistency check:

#  apt-get check


check is a diagnostic tool; it updates the package cache and checks for broken dependencies.
 

Take a deep breath ! …

Do :

ls -l /var/lib/dpkg
and compare with the above list. If some -old file is not present, don't worry, it will be there tomorrow.

Next time don't forget to do a regular backup, with a simple rsync backup script or something like Bacula / Amanda / Time Vault or Clonezilla.
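
Even a single nightly rsync cron line would have saved me hours here; a minimal sketch (the destination host and path are just assumptions):

# copy the dpkg database to another box every night
rsync -avz /var/lib/dpkg/ backup-host:/backups/$(hostname)/dpkg/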
 

6. Copy dpkg database from another Linux system that has a working dpkg / apt Database

Well, this was however not the end of the story … There were still many things missing from my /var/, and luckily I had another Debian 10 Buster install on another properly working machine with a similar set of .deb packages installed. Therefore, to get most of my programs working again, I copied over /var from that machine with the similar package set to my messed up machine with the deleted /var.

To do so …
On the functioning Debian 10 machine (the working host in the local network, with IP 192.168.0.50), I archived the content of /var:

linux:~# tar -czvf var_backup_debian10.tar.gz /var

Then I sftp-ed it from the working host towards the broken one with the deleted /var; in my case this machine's hostname is jericho, and it luckily still had the SSHD and SFTP server processes loaded in memory:

jericho:~# sftp root@192.168.0.50
sftp> get var_backup_debian10.tar.gz

Now, before extracting the archive, it is a good idea to make a backup of the old /var remains somewhere, for example in /root,
just in case we need a copy of the dpkg backup dir /var/backups:

jericho:~# cp -rpfv /var /root/var_backup_damaged

 
jericho:~# tar -zxvf /root/var_backup_debian10.tar.gz 
jericho:/# mv /root/var/ /

Then, to make my /var/lib/dpkg contain the package list of my broken Linux install, I overwrote /var/lib/dpkg with the files backed up earlier, before the .tar.gz was extracted:

jericho:~# cp -rpfv /root/var_backup_damaged/lib/dpkg/ /var/lib/

 

7. Scripts to completely reinstall all Debian packages

 

I then tried to reinstall each and every package, first using aptitude; with aptitude this is done with

# aptitude reinstall '~i'

However, as this failed, I tried using a simple shell loop like below:

for i in $(dpkg -l |awk '{ print $2 }'); do echo apt-get install --reinstall --yes $i; done

Alternatively, reinstalling all .deb packages is also possible with dpkg --get-selections and awk, with the below cmds:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log;
awk '$1=$1' ORS=' ' list.log > newlist.log;
apt-get install --reinstall $(cat newlist.log)

It can also be run as one liner for simplicity:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log; awk '$1=$1' ORS=' ' list.log > newlist.log; apt-get install --reinstall $(cat newlist.log)

This produced a lot of warning messages reporting "package has no files currently installed" (virtually for all installed packages), indicating a severe packages problem; below is sample output produced after each and every package reinstall … :

dpkg: warning: files list file for package 'iproute' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'brscan-skey' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libapache2-mod-php7.4' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-readline' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'linux-headers-4.19.0-5-amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbonoboui2-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'gksu' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblogging-stdlog0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'python-gconf' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-cli' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mixer.app' missing; assuming package has no files currently installed

After some attempts I found a way to work around the warning message for each package, by simply reinstalling the package reporting the issue with

apt install --reinstall $package_name


Though the reinstallation started well and many packages got reinstalled, unfortunately some packages such as apache2-mod-php5.6 and other php related ones started failing during reinstall, ending up in unfixable states right after the binaries from the packages were successfully placed in their expected locations on disk. The failures occurred during the package setup stage ( dpkg --configure $packagename ) …

The logical thing to do here was a recovery attempt with something like the usual commands, well known to any Debian admin:

apt-get install --fix-missing

As well, manually requesting a reconfigure (re-setup) of all installed packages also did not produce a positive result

dpkg --configure -a

But many packages were still failing, due to dpkg's inability to execute some post installation scripts from the respective .deb files.
To work around that and continue installing the rest of the packages, I had to manually delete all files related to the failing package located under the directory 

/var/lib/dpkg/info

For example, to omit the post installation failure of libapache2-mod-php5.6 and have a successful install of the package the next time I tried the reinstall, I had to delete the /var/lib/dpkg/info/libapache2-mod-php5.6.postrm and /var/lib/dpkg/info/libapache2-mod-php5.6.postinst scripts, and sometimes even everything matching libapache2-mod-php5.6* present in the /var/lib/dpkg/info dir, as sketched below.
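
A sketch of that workaround for the example package (the file names follow the failing package's maintainer scripts):

# rm -v /var/lib/dpkg/info/libapache2-mod-php5.6.*
# apt-get install --reinstall --yes libapache2-mod-php5.6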

The problem with this solution, however, was that the package then reported as properly installed, but the post install script hooks were still not in place; important things such as setting permissions of binaries after install, or applying some configuration changes right after install, were missing, leading to programs failing to fully behave properly or even breaking, even though showing as finely installed …

The final solution to this problem was radical.
I used the /var/lib/dpkg database (directory) from the other working Linux machine with the intact dpkg DB, found in var_backup_debian10.tar.gz (from the linux:~# host with the working dpkg database), and then, with the correct package database now present on jericho:~#, reinstalled each and every package on the system using a Debian System Reinstaller script taken from the internet.
The Debian System Reinstaller works, but while reinstalling the many packages I was prompted again and again whether to overwrite a package's configuration or keep the present one.
To omit the annoying [Y / N] text prompts I made a slight modification to the script, so it finally looked like this:
 

#!/bin/bash
# Debian System Reinstaller
# Copyright (C) 2015 Albert Huang
# Copyright (C) 2018 Andreas Fendt

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# —
# This script assumes you are using a Debian based system
# (Debian, Mint, Ubuntu, #!), and have sudo installed. If you don't
# have sudo installed, replace "sudo" with "su -c" instead.

pkgs=`dpkg --get-selections | grep -w 'install$' | cut -f 1 |  egrep -v '(dpkg|apt)'`

for pkg in $pkgs; do
    echo -e "\033[1m   * Reinstalling:\033[0m $pkg"    

    apt-get --reinstall -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install $pkg || {
        echo "ERROR: Reinstallation of $pkg failed."
        exit 1
    }
done

 

debian-all-packages-reinstall.sh, the working modified version of the Albert Huang and Andreas Fendt script, can also be downloaded here.

Note ! Omitting the text confirmation prompts on whether to install the newest config or keep the maintainer configuration is handled by the argument:

 

-o Dpkg::Options::="--force-confold"


I however still got a few ncurses console selection prompts during the reinstall of the roughly 3200+ .deb packages, so even with this mod the reinstall was not completely automatic.

Note ! During the reinstall a few of the packages from the list failed due to being old unsupported packages; these were ejabberd, ircd-hybrid and 2 / 3 more.
This failure was easily solved by completely purging those packages with the usual

# dpkg --purge $packagename

and rerunning debian-all-packages-reinstall.sh on each of the failing packages.

Note ! The failing packages were just old ones, left over from Debian 8 and Debian 9 before the apt-get dist-upgrade towards 10 Buster.
Eventually, by God's grace, after a few hours of pains and trials I got to a success: a working package database and a complete set of freshly reinstalled packages.

The only thing I finally had to do was 2 hours of figuring out why GNOME did not automatically start after the system reboot, due to a failing gdm.
Until I fixed that I temporarily used lightdm (x-display-manager); to switch the display manager I ran:

dpkg-reconfigure gdm3

[Screenshot: selecting lightdm as x-display-manager while reconfiguring gdm3]

To work around this I also had to reinstall a few libraries, the xorg-server, gdm and the meta package for GNOME, using the below set of commands:
 

apt-get install --reinstall libglw1-mesa libglx-mesa0
apt-get install --reinstall libglu1-mesa-dev
apt install --reinstall gsettings-desktop-schemas
apt-get install --reinstall xserver-xorg-video-intel
apt-get install --reinstall xserver-xorg
apt-get install --reinstall xserver-xorg-core
apt-get install --reinstall task-desktop
apt-get install --reinstall task-gnome-desktop

 

As some packages did not end up reinstalled on the system, because the original host from which the /var/lib/dpkg DB was copied did not have them, I eventually had to manually trigger a reinstall for those too:

 

apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes thunderbird
apt-get install --reinstall --yes audacity
apt-get install --reinstall --yes gajim
apt-get install --reinstall --yes slack remmina
apt-get install --yes k3b
apt-get install --yes gbgoffice
apt-get install --reinstall --yes skypeforlinux
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes libcurl3-gnutls libcurl3-nss
apt-get install --yes virtualbox-5.2
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes alsa-tools-gui
apt-get install --reinstall --yes gftp
apt install ./teamviewer_15.3.2682_amd64.deb --yes

 

Note that some of the above packages require properly configured third party repositories; other people might have other packages missing from the dpkg list that need reinstalling, so just decide according to your own case which binaries present on the working system don't belong to any dpkg installed package.

After a bit of struggle everything is back to normal Thanks God! 🙂 !
 

 

Monitoring Linux hardware Hard Drives / Temperature and Disk with lm_sensors / smartd / hddtemp and Zabbix Userparameter lm_sensors report script

Thursday, April 30th, 2020


I'm part of a SysAdmin team that is doing some minor Zabbix improvements on a custom corporate Zabbix installation, in an ongoing project to substitute the previous HP OpenView monitoring for a bunch of legacy Linux hosts.
As one of the necessary checks to have concerns the system hardware, the task was to invent some simplistic way to monitor the hardware with the Zabbix monitoring tool. Monitoring the bare metal server hardware of HP / Dell / Fujitsu etc. servers in Linux is usually done with third party software provided by the hardware vendor; but as this requires additional services to run, it is sometimes not desired, and it was interesting to find out some alternative Linux-native ways to do the system hardware monitoring.
Monitoring statistics from the system hardware components can be obtained directly from the server components with ipmi / ipmitool (for more info on it check my previous article Reset and Manage intelligent Platform Management remote board).
With ipmi, hardware health info can be received straight from the ILO / IDRAC / HPMI of the server. However, as often the Admin-LAN of the server is in a separate DMZ secured network, available only via a certain set of routed IPs, ipmitool can't always be used.

So what are the other options to use to implement Linux Server Hardware Monitoring?

The tools to use are perhaps many, but I know of two which give you most of the information you will ever need, to have a preliminary hardware damage warning system before the crash; these are:
 

1. smartmontools (smartd)

Smartd is part of the smartmontools package, which contains two utility programs (smartctl and smartd) to control and monitor storage systems using the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into most modern ATA/SATA, SCSI/SAS and NVMe disks.

Disk monitoring is handled by a special service the package provides, called smartd, which queries the Hard Drives periodically, aiming to find warning signs of hardware failures.
The downside of smartd use is that it implies a little bit of extra load on Hard Drive reads / writes and, if misconfigured, could reduce the Hard Disk's lifetime.
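
A minimal smartd configuration sketch, assuming all you want is every detected disk monitored and a mail to root on problems (DEVICESCAN, -a and -m are standard /etc/smartd.conf directives):

# /etc/smartd.conf
# monitor all SMART attributes (-a) on all detected disks, mail root on failure
DEVICESCAN -a -m root@localhost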

 

linux:~#  /usr/sbin/smartctl -a /dev/sdb2
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-5-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     KINGSTON SA400S37240G
Serial Number:    50026B768340AA31
LU WWN Device Id: 5 0026b7 68340aa31
Firmware Version: S1Z40102
User Capacity:    240,057,409,536 bytes [240 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Thu Apr 30 14:05:01 2020 EEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (  120) seconds.
Offline data collection
capabilities:                    (0x11) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        No Selective Self-test supported.
SMART capabilities:            (0x0002) Does not save SMART data before
                                        entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  10) minutes.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   100   100   000    Old_age   Always       -       100
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       2820
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       21
148 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
149 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
167 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
168 Unknown_Attribute       0x0012   100   100   000    Old_age   Always       -       0
169 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
170 Unknown_Attribute       0x0000   100   100   010    Old_age   Offline      -       0
172 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
173 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0000   100   100   000    Old_age   Offline      -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0012   100   100   000    Old_age   Always       -       16
194 Temperature_Celsius     0x0022   034   052   000    Old_age   Always       -       34 (Min/Max 19/52)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       0
218 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
231 Temperature_Celsius     0x0000   097   097   000    Old_age   Offline      -       97
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always       -       2104
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       1857
242 Total_LBAs_Read         0x0032   100   100   000    Old_age   Always       -       1141
244 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       32
245 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       107
246 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      -       15940

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

Selective Self-tests/Logging not supported

 

2. hddtemp

 

Usually if smartd is used, it is useful to also use hddtemp, which relies on smartd data.
The hddtemp program monitors and reports the temperature of PATA, SATA or SCSI hard drives by reading the Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) information on drives that support this feature.
 

linux:~# /usr/sbin/hddtemp /dev/sda1
/dev/sda1: Hitachi HDS721050CLA360: 31°C
linux:~# /usr/sbin/hddtemp /dev/sdc6
/dev/sdc6: KINGSTON SV300S37A120G: 25°C
linux:~# /usr/sbin/hddtemp /dev/sdb2
/dev/sdb2: KINGSTON SA400S37240G: 34°C
linux:~# /usr/sbin/hddtemp /dev/sdd1
/dev/sdd1: WD Elements 10B8: S.M.A.R.T. not available

 

 

3. lm-sensors / i2c-tools 

 Lm-sensors is a hardware health monitoring package for Linux. It allows you
 to access information from temperature, voltage, and fan speed sensors.
i2c-tools was historically bundled in the same package as lm_sensors but has been separated, because not all hardware monitoring chips are I2C devices, and not all I2C devices are hardware monitoring chips.
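
On a freshly installed system you usually need to detect the available sensor chips first; this is done once with the interactive sensors-detect tool shipped with lm-sensors, run as root (the offered defaults are usually safe to accept):

linux:~# sensors-detect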

The most basic use of lm-sensors is with the sensors command

 

linux:~# sensors
i350bb-pci-0600
Adapter: PCI adapter
loc1:         +55.0 C  (high = +120.0 C, crit = +110.0 C)

 

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 0:         +26.0 C  (high = +78.0 C, crit = +88.0 C)
Core 1:         +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 2:         +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 3:         +28.0 C  (high = +78.0 C, crit = +88.0 C)

 


On CentOS Linux a useful tool is also lm_sensors-sensord.x86_64, a daemon that periodically logs sensor readings to syslog or a round-robin database and warns of sensor alarms.

In Debian Linux there is also the psensor-server (an HTTP server providing a JSON Web service, which can be used by a GTK+ application to remotely monitor sensors), useful for developers
psesors-server

psensor-linux-graphical-tool-to-check-cpu-hard-disk-temperature-unix

If you have an X server installed on the server, accessed with an X client or via VNC (though that is quite rare),
you can use xsensors or Psensor, a GTK+ (Widget Toolkit for creating Graphical User Interfaces) application.

With these 3 tools it is pretty easy to script one-liners and use the Zabbix UserParameter functionality to send hardware report data to a company's Zabbix Server. Zabbix already ships some templates to do so, but in my case I couldn't import those templates because I don't have Zabbix Super-Admin credentials; a simple workaround is to use a script that monitors for temperatures considered high or critical.
Here is a tiny sample script I came up with in a minute; it can be used as a one-liner UserParameter and built upon for something more complex.

SENSORS_HIGH=`sensors | awk '{ print $6 }'| grep '^+' | uniq`;
SENSORS_CRIT=`sensors | awk '{ print $9 }'| grep '^+' | uniq`;
SENSORS_STAT=`sensors | grep -E 'Core\s' | awk '{ print $1" "$2" "$3 }' | grep -E "$SENSORS_HIGH|$SENSORS_CRIT"`;
if [ ! -z "$SENSORS_STAT" ]; then
echo 'Temperature HIGH';
else 
echo 'Sensors OK';
fi 
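
For instance, assuming the above one-liner is saved as /etc/zabbix/scripts/temp_check.sh (both the path and the temp.status key below are only illustrative), a minimal UserParameter definition for zabbix_agentd.conf could look like:

UserParameter=temp.status,/etc/zabbix/scripts/temp_check.sh

After restarting the zabbix-agent service, the temp.status item key can then be polled from the Zabbix Server like any other agent item.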

Of course there is much more sophisticated stuff to use for monitoring out there


The above script can be easily adapted and used on other monitoring platforms such as Nagios / Munin / Cacti / Icinga, and there are plenty of paid solutions, but for anyone who wants to develop something from scratch just like me, I hope this
article will be a good short introduction.
If you know some other Linux hardware monitoring tools, please share.

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging to a file in GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or with piping to the tee command

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full-featured, robust logging bash shell script function that will run as a daemon continuously as a background process and output
all content from itself to an external log file?
In the below article, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used to later store output to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/, i.e. /var/log/syslog, /var/log/user, /var/log/daemon, /var/log/mail etc.
 

1. Bash script function for logging write_log();


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';
write_log()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
    echo $LOGTIME": $text";
      else
        # Append today's date to the log file name
        LOG=$LOG_FILE.`date +%Y%m%d`
    touch $LOG
        if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
    echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}

 

  •  Using the function from within the script itself, or from outside it, to write out to the defined log file

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – Small intro on what is Unix Named Pipe.


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Once named pipes are shortly explained for those who hear about them for the first time, it's time to say that a named pipe in Unix / Linux is created with the mkfifo command; the syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linux-es with an older bash and older bash shell scripts were using mknod.
So the idea behind the logging script is to use a simple named pipe to read input, and use the date command to log the exact time each line was received; here is the script.

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

# Recreate the named pipe on every start
if [ -p $named_pipe ]; then
rm -f $named_pipe
fi
mkfifo $named_pipe

# Read from the pipe forever and prepend a timestamp to each line
while true; do
read LINE <$named_pipe
echo $(date): "$LINE" >>$output_named_log
done
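
Note that the reader loop has to be running in the background before anything is written to the pipe; assuming you saved the above script as, say, /usr/local/bin/named-pipe-logger.sh (a purely illustrative name), start it first like so:

/usr/local/bin/named-pipe-logger.sh &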


Now, to get any other script output logged with a nice current-date stamp, write the output content to the logging buffer (the named pipe) like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use Named pipes for logging, they perform better (are generally quicker) than Unix sockets.

 

3. Logging messages to system log files with logger

 

If you need a quick one-off way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message and it will log your specified output, by default to the common /var/log/syslog logfile

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can also print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"


The whole list of facility and priority level names supported by logger (as taken from its current Linux manual page) goes as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the Linux main log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type the message, even without any shell quoting:

 

logger The reason to reboot the server currently was a system security update

 

So what else is logger useful for?

 In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs with their respective priorities work as expected. This is especially
useful, as I've seen on Cloud hosted OpenXen based servers as a SAP consultant, because sometimes logging to basic log files stops working for months or even years due to
syslog and syslog-ng problems caused by hangs from other third-party scripts and programs.
To test all basic logging facilities and priorities on system logs, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
# Note: the facility list must be a single word for brace expansion to work,
# hence the backslash line continuations and no spaces inside the braces.
for i in {auth,authpriv,cron,daemon,kern,\
lpr,mail,mark,news,syslog,user,uucp,local0,\
local1,local2,local3,local4,local5,local6,local7}

do

# For each facility, loop over all priority levels

for k in {debug,info,notice,warning,err,crit,alert,emerg}
do

logger -p $i.$k "Test daemon message, facility $i priority $k"

done

done
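
After running it, you can quickly verify that the test messages landed where expected with something like:

grep 'Test daemon message' /var/log/syslog | tail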

Note that on different Linux distribution versions the facility and priority names might differ, so if you get

logger: unknown facility name: {auth,auth-priv,cron,daemon,kern,lpr,mail,mark,news, \
syslog,user,uucp,local0,local1,local2,local3,local4, \
local5,local6,local7}

check and set the proper naming as described in logger man page.

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained

          exec 3>&1 4>&2

  •     Saves the current stdout and stderr file descriptors (as 3 and 4) so they can be restored later, or used to keep writing to wherever they pointed before the following redirects.

          trap 'exec 2>&4 1>&3' 0 1 2 3

  •     Restores the file descriptors on exit or on particular signals. Not generally necessary, since they should be restored when the sub-shell exits.

          exec 1>log.out 2>&1

  •     Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to still see output on the console (maybe), you can simply redirect to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post "how can I fully log all bash script actions". Unfortunately, on the latest Debian 10 Buster Linux, which comes prebundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly as expected, but on older bash versions it works fine.

Sum it up


To shortly summarize: there are plenty of ways to do logging from a shell script, but using a function or a named pipe is the most classic one. If a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.

 

What is inode and how to find out which directory is eating up all your filesystem inodes on Linux, Increase inode count on a ext3 ext4 and ufs filesystems

Tuesday, August 20th, 2019

what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram

If you're a system administrator of multiple Linux servers used for web serving / mail delivery, a database admin, an admin of high-volume data storage used for backup infrastructure or data repositories (such as Linux hosted Samba / CIFS shares), or you're using some Linux hosting provider to host your website or any other UNIX-like infrastructure server that demands storing a high number of files under a directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined, limited, and depends on the configured filesystem size).

In case the files stored in a directory end up exceeding the amount of possible addressable inodes, this could prevent any data from being further assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on it even though there may be plenty of free space available; the first time it happened to me, a very long time ago, I was completely puzzled how this is possible, as I was not aware of the existence of inodes …

Reaching the maximum inode number (e.g. inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Networks (CDN, website image caching servers) which contain many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.

 

Hence ending up out of inodes could cause various oddities in how stored data behaves or is communicated to other connected microservices, and could lead to random application disruptions and odd results, costing you many hours of debugging to find the root cause of inodes (index nodes) being out of order.

In the below article, I will try to give an overall explanation of what an i-node on a filesystem is, how the inodes of an FS unit can be seen, how to diagnose a possible inode problem (e.g. see the maximum amount of inodes available per filesystem) and how to prepare (format) a new filesystem with an increased set of maximum inodes.

 

What are filesystem i-nodes?

 

This is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The data structure described by the inodes might vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the index to the block that contains the attributes and disk block location(s) of the object's data.
– Yes, for those who are not aware of how a filesystem is structured on *nix: it allocates all stored data in logically separated structures called data blocks. Each file stored on a local filesystem has a file descriptor, there are virtual unit structures called file tables, and each of the inodes, which are reference numbers, has its own data structure (the inode table).

Inodes ("index nodes") are a slightly unusual file system structure that stores the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from it, as explained by Unix creator and pioneer Dennis Ritchie (who passed away a few years ago).

what-is-inode-very-simplified-explanation-diagram-data

Simplified explanation of file descriptors, the file table and the inode table on a common Linux filesystem

Here is another description of what an i-node is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie, described in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


 

What is the typical content of an inode, and how do i-nodes play along with the rest of the filesystem units?


The inode is just a reference index to a data block (unit) that contains File-system object attributes. It may include metadata information such as (times of last change, access, modification), as well as owner and permission data.

 

On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.

Structure-of-inode-table-on-Linux-Filesystem-diagram

 

Structure of the inode table on a Linux filesystem diagram (picture source: GeeksForGeeks.org)

  • Information about files (data) is sometimes called metadata. So you could even put it another way: "An inode is metadata of the data."
  •  Inode: a complex data structure that contains all the necessary information to specify a file. It includes the memory layout of the file on disk, file permissions, access time, number of different links to the file etc.
  •  Global file table: contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read/write will start, and the access rights allowed to the opening process.
  • Process file descriptor table: maintained by the kernel; it in turn indexes into a system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file – thus allowing access to the file.

  •     Inodes do not contain their hardlink names, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory looking for a particular filename and then convert the filename to the correct corresponding inode number.

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.


But enough technical specifics, let's get into some practical experience with managing filesystem inodes.
 

Listing inodes on a Filesystem


Let's say we want to list the inode reference numbers for the Linux kernel files:

 

root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64


To list an inode of all files in the kernel specific boot directory /boot:

 

root@linux: # ls -id /boot/
26091521 /boot/


Listing inodes for all files stored in a directory is also done by adding the -i ls command flag.

Note that the '-1' flag was added to show files in one column, without the ownership / permissions info:

 

root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
26091756 System.map-3.2.0-4-amd64
 3055703 System.map-4.19.0-5-amd64
26091899 System.map-4.9.0-7-amd64
26095805 System.map-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64

 

To get more information about a Linux directory or file, such as the blocks used by the file unit, the last Access / Modify / Change times, and the current symbolic or static links for the filesystem object:
 

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   catalog
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –

 

Within a POSIX system (Linux-es, and the *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call (a selective-print example follows the list):

   – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – File serial numbers.
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
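
Several of these fields can also be printed selectively with GNU stat's -c format option; the format string below is only an illustration (%i is the inode number, %h the hard link count, %s the size in bytes, %n the file name):

stat -c 'inode: %i links: %h size: %s name: %n' /etc/hosts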

 

Getting more extensive information on a mounted filesystem


Most Linuxes have the tune2fs installed by default (in debian Linux this is through e2fsprogs) package, with it one can get a very good indepth information on a mounted filesystem, lets say about the ( / ) root FS.
 

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks


An important note to make here is that a file's inode number stays the same when it is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which don't have a way of storing this invariance when both a file's directory entry and its data are moved around. Also, several file names (hard links) can point to one and the same inode; below is an example:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2

A good thing to know is that inodes are always unique values, so you can't have the same inode number duplicated. If a directory is damaged, only the names of the things are lost and the inodes become so-called "orphans", e.g. inodes without names, but luckily this is recoverable. As the theory behind inodes is quite complicated and too long to explain here, I warmly recommend you read Ian Dallen's Unix / Linux / Filesystems - directories inodes hardlinks tutorial, which is among the best academic tutorials explaining the various specifics of inodes online.

 

How to Get inodes per mounted filesystem

 

root@linux:/home/hipo# df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
dev             2041439     481   2040958   1% /dev
tmpfs            2046359     976   2045383   1% /run
tmpfs            2046359       4   2046355   1% /dev/shm
tmpfs            2046359       6   2046353   1% /run/lock
tmpfs            2046359      17   2046342   1% /sys/fs/cgroup
/dev/sdb5        1221600    2562   1219038   1% /usr/var/lib/mysql
/dev/sdb6        6111232  747460   5363772  13% /var/www/htdocs
/dev/sdc1      122093568 3083005 119010563   3% /mnt/backups
tmpfs            2046359      13   2046346   1% /run/user/1000


As you see in the above output, the inodes reported for each mounted filesystem are a specific number. The IFree count on every FS mounted locally on this physically installed OS Linux looks good.


Here is an example of how to recognize depleted inodes on an OpenXen virtual machine with attached virtual hard disks.

linux:~# df -i
Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on
/dev/xvda         2080768    2080768     0      100%    /
tmpfs             92187      3          92184   1%     /lib/init/rw
varrun            92187      38          92149   1%    /var/run
varlock            92187      4          92183   1%    /var/lock
udev              92187     4404        87783   5%    /dev
tmpfs             92187       1         92186   1%    /dev/shm

 

Finding files with a certain inode


In some cases, if you want to check all the copies of a certain file that share the same i-node pointer, it is useful to find them all by their shared inode. This is possible with a simple find (the below example is for the /usr/bin/perl binary sharing the same inode as perl5.28.1):

 

ls -i /usr/bin/perl
23798851 /usr/bin/perl*

 

 find /usr/bin -inum 23798851 -print
/usr/bin/perl5.28.1
/usr/bin/perl
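
Alternatively, GNU find can match against a reference file directly with -samefile, saving you the inode lookup step:

find /usr/bin -samefile /usr/bin/perl -print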

 

How to find a directory that has a large number of files in it?

To get the overall number of inodes allocated by certain directories, let's say /usr and /var:

 

root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var/

To get a list of inode usage for a directory and each of its main contained sub-directories, sorted from the lowest to the highest count, use:
 

du -s --inodes * 2>/dev/null |sort -g

 

Usually running out of inodes means there is a directory / FS mount that holds too many (small) files which are depleting the maximum count of possible inodes.

The most simple way to list directories and number of files in them on the server root directory is with a small bash shell loop like so:
 

for i in /*; do echo $i; find $i |wc -l; done


Another way to identify the exact directory that is most likely the bottleneck for the inode depletion, in a sorted-by-file-count, human readable form:
 

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n


This will dump a list of every directory on the root (/) filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.

 

The -xdev switch is used to instruct find to narrow its search to only the device where you're initiating the search (any other sub-mounted NAS / NFS filesystems from a different device will be omitted).

 

Print top 10 subdirectories with Highest Inode Usage

 

Once you have identified the directory with the largest number of files that is perhaps the issue, to further get a list of the top subdirectories in it with the highest amount of inodes used, use the below command:

 

for i in `ls -1A`; do echo "`find $i | sort -u | wc -l` $i"; done | sort -rn | head -10

 

To list more than 10 of the top inode-using dirs, change the head -10 to whatever number is needed.

N.B.! Be very cautious when running the above 2 find commands on very large filesystems, as they are I/O excessive, and on filesystems that have some failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is possible to also use du plus a rather complex sed expression:
 

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'


The results are returned with the largest inode consumers listed first.

 

How to Increase the amount of Inodes count on a new created volume EXT4 filesystem

Some FS-es (XFS, JFS) do have an auto-increase inode feature in case there is physical space, while others such as reiserfs do not have inodes at all, but still report an inode field when queried for errors. The classical Linux ext3 / ext4, however, does not have a way to increase the inode number on a live filesystem. Instead, the way to do it is to prepare a brand new filesystem on a disk / NAS / attached storage.

The number of inodes at format time of the block storage can be as high as 4 billion. Before you create the new FS, you have to partition the new block storage as ext4 with, let's say, the parted command, or nullify the content of an existing volume with dd to clean up any previously existing data on it:
 

parted /dev/sda


dd if=/dev/zero of=/dev/path/to/volume


  then format it with this additional parameter:

 

mkfs.ext4 -N 3000000000 /dev/path/to/volume
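
Once the new filesystem is created, you can verify the resulting maximum with tune2fs, as shown earlier in this article:

tune2fs -l /dev/path/to/volume | grep -i 'inode count'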

 

Here in the above example the newly created EXT4 filesystem will be created with 3 billion inodes! For setting a higher max inode count on the older ext3 filesystem, mkfs.ext3 can be used instead.

Bear in mind that 3 billion is a very high number; if you plan to have some large number of files / directories / links structures, just raise it to your pre-planned requirements for the FS. In most cases it will rarely be anyone who wants this number higher than 1 or 2 billion inodes.

On FreeBSD / NetBSD / OpenBSD, setting the maximum inode number for a UFS / UFS2 filesystem (the current FreeBSD default FS) can be done via the newfs filesystem creation command, after the disk has been labeled with disklabel:

 

freebsd# newfs -i 1024 /dev/ada0s1d

 

Increase the Max Count of Inodes for a /tmp filesystem

 

Sometimes on some machines it is necessary to be able to store a very high number of small files (e.g. have a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by Apache + PHP or Nginx + Perl-FastCGI are written in a bad manner, so they keep tons of temporary files in /tmp, leading to issues with an exceeded amount of inodes.
If that's the case, as a temporary workaround you can increase the count of inodes for /tmp to a very high number like 2 billion using:

 

mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent on next boot, don't forget to put the nr_inodes=whatever_bignum as a mount option for the temporary FS in /etc/fstab; an example follows below.
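
A sample /etc/fstab line might look like the below (the options besides nr_inodes are only an illustration; keep whatever mount options your system already uses):

tmpfs   /tmp   tmpfs   defaults,nr_inodes=2000000000   0 0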

Eventually, if you face such issues, it is best to immediately track which application produced the mess and ask the developer to fix the messed-up program architecture.

 

Conclusion

 

This article explained the very common issue of having the maximum amount of inodes on a filesystem depleted, and the unpleasant consequence of being unable to create new files on a living FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, what the typical content of an inode is, and how inode addressing is handled on an FS. Further, it was explained how to get basic information about the available inodes on a filesystem, how to get a filename based on an inode number (with find), the well-known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information about an FS with tune2fs.
It was also explained how to identify directories containing multitudes of files, in order to determine the sub-directories consuming most of the inodes on a filesystem. Finally, it was explained very roughly how to prepare an ext4 filesystem from scratch with a predefined number of inodes much higher than the usual defaults of mkfs.ext3 / mkfs.ext4 and the *BSDs' newfs, as well as how to raise the number of inodes of the /tmp tmpfs temporary RAM filesystem.

Play Midis on Linux / Make Linux MIDI Ready for the Future – Enable embedded MIDI music to play in a Browser, Play MIDIs with VLC and howto enjoy Midis in Text Console

Wednesday, October 4th, 2017

how-to-play-midi-on-gnu-linux-in-graphic-environment-console-and-browser-midi-synthesizer-and-linux-tux-together

 

Play Midis on Linux or Make Linux MIDI Ready for the Future – Enable embedded MIDI music to play in a Browser, Play MIDIs with VLC and howto enjoy Midis in Text Console HOWTO

 

Playing MIDI has been quite a lot of fun historically,

if you grew up in the days when personal computers were still young and the Sound Blaster was a luxury, before the rise of the Mp3 music format, you have certainly enjoyed the beeping of the PC Speaker, and later on, during the 386 and 486 / 586 computers, the enjoyment of playing tracked music such as S3M and MOD;

in those good days playing MIDI music was the only alternative for PC maniacs who didn't own a CD drive (which itself was another luxury), and even those who had a CD-ROM device were mainly playing music in CD audio format (.CDA).
Anyhow, MIDI was a cheap and CPU-unintensive way to listen to equivalents of favourite popular audio songs, and for those who still remember, many of the songs were recreated in MIDI format, just with a number of synthesized instruments, without any voice (as MIDI usually is).

The same was true in the good old days of the rise of mobile phones, when polyphonic ringtones were the standard. As CPU power was low, MIDI was a perfect substitute for the CPU-heavy encoded MP3s / OGGs and other formats that required a, for that time, modern Intel CPU running at 50+ MHz; usually 100 / 166 MHz was perfect in those days to play Mp3, but even on those PCs we listened to MIDI songs.

Therefore, if you're one of those people like me who still enjoy playing some MIDI music in the year 2017, feeling a bit like in the Back to the Future movie, and you are a Free Software fan and user, especially a novice GNU / Linux one, you will be unpleasantly surprised that most of today's default Linux distributions don't have an easy way to play the MIDI music format out of the box, right after install.

Hence, the below article aims to give you an understanding of

How you can play Midi Music on GNU / Linux Operating System

First, let's prepare to load the necessary Linux kernel modules to make sure MIDI can be played by the soundcard:

In /etc/modules make sure you have the following list of modules loaded:
 

linux-desktop:~# cat /etc/modules
3c59x
snd-emu10k1
snd-pcm-oss
snd-mixer-oss
snd-seq-oss

!Note: the modules are working as of the time of writing and can in time change to some other modules, depending on how the development of ALSA (Advanced Linux Sound Architecture) goes, and whether the developers decide to rename the above-mentioned modules.

If you have just added the modules to /etc/modules with vim / nano, to load them into the Linux kernel run:

 

linux-desktop:~# modprobe -a 3c59x snd-emu10k1 snd-pcm-oss snd-mixer-oss snd-seq-oss
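
You can then verify that the sound modules got loaded with:

linux-desktop:~# lsmod | grep snd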


Secondly, Installing a whole bunch of MIDI music related program tools can be achieved in Debian by installing the multimedia-midi package, e.g.:

 

linux-desktop:~# apt-get install --yes multimedia-midi

 

1. Playing Midi in Graphical environment with a double click using VLC


How to make MIDI easily listenable in a Linux graphical environment like the GNOME / KDE / XFCE desktop?

 

If you want to make MIDI music execution as easy as just clicking on the .MIDI file on Linux, you can do that with a MIDI extension available for VLC (VideoLAN Client), the universal multi-platform media player

To install it on Debian Ubuntu GNU / Linux
 

# apt-get install --yes vlc-plugin-fluidsynth

 

Need to get 6754 B of archives.
After this operation, 35.8 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 vlc-plugin-fluidsynth amd64 2.2.6-1~deb9u1 [6754 B]
Fetched 6754 B in 0s (33.6 kB/s)           
Selecting previously unselected package vlc-plugin-fluidsynth:amd64.
(Reading database … 382976 files and directories currently installed.)
Preparing to unpack …/vlc-plugin-fluidsynth_2.2.6-1~deb9u1_amd64.deb …
Unpacking vlc-plugin-fluidsynth:amd64 (2.2.6-1~deb9u1) …
Setting up vlc-plugin-fluidsynth:amd64 (2.2.6-1~deb9u1) …
Processing triggers for libvlc-bin:amd64 (2.2.6-1~deb9u1) …


Besides making your MIDI play in the GUI environment as easy as a point and click, VLC will also be able to play MIDIs on GNU / Linux from your favourite browser (no matter whether Firefox / Chrome or Opera). Even though the player plays in a new pop-up window, it is easy to select a MIDI file from a random website, for example here is a directory listing of a webserver with the Doom II soundtrack in MIDI format; click on any file from the list and choose the option for VLC to always remember that MIDI files have to be opened with the VLC player.
 


2. Enable Firefox / IceWeasel browser to Support Website embedded MIDI files

 

 

So VLC could make you listen to the downloadable MIDIs from web pages, but,
 

What if you have stumbled upon an old website, configured with very old HTML code to play some nice music (or even different MIDI songs for each part of the website, i.e. for each webpage), and you want the websites created with embedded MIDIs to automatically play on Linux once you visit the site?


Sadly, default browser support for MIDI across all the GNU / Linux distributions I've used so far has never worked out of the box. Not that anyone still develops modern websites with MIDIs, but for the sake of backward compatibility and interactivity it is worth enabling embedded MIDI support in Linux.

But with a couple of tunings, as usual, GNU / Linux can do almost everything, so here is how to enable embedded browser support for MIDI on Linux (this should work with minor modifications not only on Debian / Ubuntu / ArchLinux but also on Fedora, CentOS etc.).
If you try it on any of these distributions, please drop a short comment and tell me in a few lines how you made embedded MIDI work on those distros.

 

apt-get install --yes timidity mozplugger

Next, restart Firefox.

Sometimes, in order for this to work, you might need to delete /home/[YOUR_USERNAME]/.mozilla/pluginreg.dat and restart Firefox again; e.g. make a backup and give it a try:

 

cp -rpf /home/hipo/.mozilla/pluginreg.dat /home/hipo/.mozilla/pluginreg.dat.bak
rm -f /home/hipo/.mozilla/pluginreg.dat

 

Another good tip while talking about embedding is to embed XPDF to render PDF pages inside the browser. By default this is done by GNOME's Evince PDF reader, but as it is sometimes buggy and might crash, it is generally a good idea to switch to xpdf instead. If for some reason PDFs do not display directly in the browser, or that suddenly stopped working after some distro upgrade, you might want to do the below as well:
 

apt-get install xpdf

vim /etc/mozpluggerrc

Find and comment out the line starting with 'repeat swallow(documentShell) fill: acroread'.

It should look like this afterwards:
 

text/x-pdf: pdf: PDF file
#      repeat swallow(documentShell) fill: acroread -geometry +9000+9000 +useFrontEndProgram "$file"
        repeat noisy swallow(Xpdf) fill: xpdf -g +9000+9000 "$file"
        repeat noisy swallow(gv) fill: gv --safer --quiet --antialias -geometry +9000+9000 "$file"


 

3. Play Midi music in Linux text console / terminal


There is a console tool that historically has been like the Linux standard for playing MIDIs over the years, as I remember; it's called timidity.

 


To install timidity on .Deb based Linux:
 

linux-desktop:~$ su root
Password:
linux-desktop:~# apt-get install --yes timidity

Need to get 0 B/580 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database … 382981 files and directories currently installed.)
Preparing to unpack …/timidity_2.13.2-40.5_amd64.deb …
Unpacking timidity (2.13.2-40.5) over (2.13.2-40.5) …
Processing triggers for menu (2.1.47+b1) …
Processing triggers for man-db (2.7.6.1-2) …
Setting up timidity (2.13.2-40.5) …
Processing triggers for menu (2.1.47+b1) …

 

To test your new MIDI synthesizer tool and make the enjoyment complete, you can download the Doom 2 extracted MIDI soundtrack from here
 

Once you have downloaded the above metal MIDI DOOM old school arcade soundtrack, untar it into your home directory, say ~/doom-midis.

A remark to make here is that timidity is quite CPU intensive, but on modern dual and quad-core PC notebooks the CPU load is not of big concern.

To test and play with timidity:
 

linux-desktop~$ timidity ~/mp3/midis/*


timidity-playing-doom-midi-bunny-song-on-debian-stretch-gnome-terminal-screenshot
 

hipo@jericho:~/mp3/midis$ aplaymidi -l
 Port    Client name                      Port name
 14:0    Midi Through                     Midi Through Port-0
128:0    TiMidity                         TiMidity port 0
128:1    TiMidity                         TiMidity port 1
128:2    TiMidity                         TiMidity port 2
128:3    TiMidity                         TiMidity port 3

 


We also have playmidi (a simple MIDI text console / terminal player), which historically worked quite decently; I used it in the past on my RedHat 6.0 and RedHat 7.0 to listen to my .MID format files. Unfortunately, as of the time of writing something is wrong with it, so when I try to play MIDIs with it instead of timidity I get this error:

 

$ playmidi *.mid
Playmidi 2.4 Copyright (C) 1994-1997 Nathan I. Laredo, AWE32 by Takashi Iwai
This is free software with ABSOLUTELY NO WARRANTY.
For details please see the file COPYING.
open /dev/sequencer: No such file or directory

Even though I tried hard to resolve that error, by loading various MIDI related kernel modules and following a lot of the online suggestions on how to make /dev/sequencer work again, it was all to no luck.
 

Some people, back in the distant year 2005, reported the problem was solved by simply loading snd-seq.

But as of the time of writing, this did not solve it for me:

 

# modprobe snd-seq

 

Some people said in ArchLinux's forum that

open /dev/sequencer: No such file or directory

is solved by loading the snd-seq-oss kernel module, but on my Debian Linux 9.1 Stretch this doesn't work either:

 

root@jericho:/home/hipo/mp3/midis# modprobe snd-seq-oss
modprobe: FATAL: Module snd-seq-oss not found in directory /lib/modules/4.9.0-3-amd64
root@jericho:/home/hipo/mp3/midis# uname -a;
Linux jericho 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux


Another invention of mine was to try to link /dev/snd/seq to /dev/sequencer, but this produced no positive result either:

 

# ln -sf /dev/snd/seq /dev/sequencer
# ls -al /dev/sequencer
lrwxrwxrwx 1 root root 12 Oct  4 16:48 /dev/sequencer -> /dev/snd/seq


Note that after linking in that way, I got the following error on my attempt to play MIDIs with playmidi:

# playmidi *.mid
Playmidi 2.4 Copyright (C) 1994-1997 Nathan I. Laredo, AWE32 by Takashi Iwai
This is free software with ABSOLUTELY NO WARRANTY.
For details please see the file COPYING.
there is no soundcard


Anyhow, on some other Linux distributions (especially ones with older kernel versions) some of the above 3 suggested fixes might work perfectly fine, so if you have some time please give them a try and drop me a comment on how it went; you will help the GNU / Linux community out there that way.

Well never mind the bollocks, so

Now back to where I started: timidity, even though it plays fine, will not give any indication of the length of the MIDI song (precious information such as how much time is left until the end).

Hence, if you prefer a player that gives you an indicator of how much is left towards the end of each played MIDI file, you can give a try to wildmidi:

 

linux-desktop:~$ apt-cache show wildmidi|grep -i description -A 2

Description-en: software MIDI player
 Minimal MIDI player implementation based on the wildmidi library that
 can either dump to WAV or playback over ALSA. It is intended to

Description-md5: b4b34070ae88e73e3289b751230cfc89
Homepage: http://www.mindwerks.net/projects/wildmidi/
Tag: implemented-in::c, role::program, sound::midi, sound::player,

Description: software MIDI player
Description-md5: 4673a7051f104675c73eb344bb045607
Homepage: http://wildmidi.sourceforge.net/
Bugs: https://bugs.launchpad.net/ubuntu/+filebug


If not yet installed, install it after becoming the admin user:

 

linux-desktop:~$ su root
Password:

linux-desktop:~# apt-get install --yes wildmidi


wildmidi is much less CPU intensive; it uses GStreamer to play (GStreamer is an open source multimedia framework).

And next give it a try by running:

 

linux-desktop:~$ wildmidi ~/mp3/midis/*

 

wildmidi-midi-lenght-status-text-console-player-for-linux-ubuntu-debian-fedora-suse

 

 

4. Editting MIDI files with Free Software and Proprietary MIDI Editor Programs

 


If you want professional software that can play MIDI in a fuzzy, interactive GUI way and has some extra possibilities to edit MIDIs and other formats, give a try to the MusE sequencer:

 

linux-desktop:~$ sudo apt-get install --yes muse

The following NEW packages will be installed:
  muse
0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
Need to get 5814 kB of archives.
After this operation, 21.0 MB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 muse amd64 2.1.2-3+b1 [5814 kB]
Fetched 5814 kB in 2s (2205 kB/s)                             
    are supported and installed on your system.
Preconfiguring packages …
Selecting previously unselected package muse.
(Reading database … 382981 files and directories currently installed.)
Preparing to unpack …/muse_2.1.2-3+b1_amd64.deb …
Unpacking muse (2.1.2-3+b1) …
Processing triggers for mime-support (3.60) …
Processing triggers for desktop-file-utils (0.23-1) …
Processing triggers for doc-base (0.10.7) …
Processing 1 added doc-base file…
Registering documents with scrollkeeper…
Processing triggers for man-db (2.7.6.1-2) …
Processing triggers for shared-mime-info (1.8-1) …
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Processing triggers for gnome-menus (3.13.3-9) …
Setting up muse (2.1.2-3+b1) …
Processing triggers for hicolor-icon-theme (0.15-1) …


 

Below is a short description of what MusE can do for you:

 

MusE is a MIDI/audio sequencer with recording and editing capabilities.
 Some Highlights:
 

  * Standard midifile (smf) import-/export.
  * Organizes songs in tracks and parts which you can arrange with
    the part editor.
  * MIDI editors: pianoroll, drum, list, controller.
  * Score editor with high quality postscript printer output.
  * Realtime: editing while playing.
  * Unlimited number of open editors.
  * Unlimited undo/redo.
  * Realtime and step-recording.
  * Multiple MIDI devices.
  * Unlimited number of tracks.
  * Sync to external devices: MTC/MMC, Midi Clock, Master/Slave.
  * Audio tracks, LADSPA host for master effects.
  * Multithreaded.
  * Uses raw MIDI devices.
  * XML project file.
  * Project file contains complete app state (session data).
  * Application spanning Cut/Paste Drag/Drop.

 

linux-desktop:~$ muse

muse-advanced-midi-editor-free-software-for-linux

 

Below is another, non-free program that you might try if MusE doesn't fit your needs (i.e. is not rich enough in editing capabilities): bitwig (though I don't recommend it, since it is not free software).

bitwig – Bitwig Studio is a multi-platform music-creation system for production, performance and DJing, with a focus on flexible editing tools and a super-fast workflow.
 


bitwig-midi-and-audio-non-free-software-advanced-useful-sound-editor-for-linx


 

5. Some examples of text editing and MIDI conversion to the CSV and ABC file formats

For the MIDI extremists, or people who create MIDIs and want to learn how a MIDI is made (what its content is etc.), I suggest you take a look at these 3 command line MIDI editing / conversion tools:
 

  • midi2abc - a little tool to convert MIDI files to the ABC format
  • midi2csv - convert your favourite MIDI files to CSV for educational purposes, to see from what channels, tracks and time intervals a MIDI song is made
  • midicopy - copy a selected track, channel or time interval of a MIDI file to another MIDI file

 

Well, that's all folks; now enjoy your MIDIs, and don't forget to donate, as I'm jobless at the moment and the only profit I make is just a few bucks out of advertisement on this blog.
 

Flight from Sofia to Minsk via Moscow Sheremetevo airport: a few impressions from Russia and Belarus

Sunday, April 17th, 2016

Sofia-Sheremetevo-Minsk-my-impressions-on-Russian-and-Belarusian-airport-what-it-is-like-in-Eurasian-Union

Thank God, today we had a safe flight one more time, with my wife Svetlana, from Sofia airport Vrazhdebna (Terminal 2) to Minsk National Airport in Belarus.
We have travelled 2 times to Belarus from Bulgaria (the European Union), and though in summer time tickets are cheaper, more regular and more convenient direct flights are available from Bulgaria to Minsk (Varna -> Minsk and Burgas -> Minsk) because of tourism, the only way to travel quickly by plane to Minsk from Bulgaria is either via Moscow or via Istanbul. As lately the political situation in Turkey is so ignited, and the problem with the refugees and crazy bomb jihadists is escalating, we decided not to fly through Istanbul Ataturk airport, even despite the slightly lower ticket prices.

The trip started from Sofia airport; there we had to go through the regular metal detector scan, removing all metal things and passing through the scanning "gateway" doors. It seems that after the last terrorist acts in Brussels, Belgium, and the risk of many other Islamic ones, security in airports has been raised even further.

The distrust towards the regular flight traveller like me has reached crazy levels; this time, besides the regular metal detector, we were asked to give our hands, and with a small device called "Trivka" we were checked on whether our skin had recently been exposed to explosive materials and the like …

For both flights, Sofia -> Moscow Sheremetevo and Sheremetevo -> Minsk, we flew with Aeroflot. This is not the first time we fly with Aeroflot, and I think it will definitely not be the last. The airplanes we flew with were new Sukhoi (SU) airplanes, and I enjoyed the flight.

My impression was that the Russian pilots are "driving" / flying the plane very much like they would fly an ordinary military airplane, and that's quite fun, as the airplane lift-off and landing are done very rapidly, exactly like with a military aircraft. Of course that could be my own gut feeling, but it feels that way; just to compare, the aviators of western airplanes such as the Boeing 737 / 777 speed up before lift-off much more gently, so the flight doesn't boost up so much adrenaline 🙂

The food on the Sofia -> Moscow plane was quite decent too; before the meal we were served the juice called RICH, so common and valued in Russia, Belarus and the Eurasian Union (EAC), together with an option for a beef or a chicken meal. The meal itself was warm and served on a board holding 2 cardboard boxes, one containing the beef together with some spaghetti with steamed vegetables, and the other one containing a tiny soured corn dish with a few pieces of beef (or pork) meat, with two pieces of healthy "black" rye bread, one of which was with healthy sesame seeds, accompanied by a small package of butter and some other sauce. For dessert, a delicious waffle-like bar made of dried mashed fruits.

To be honest, my first impression on seeing that strange meal combination was not very positive, not to say that I honestly wanted to puke, but after tasting it I would say I really liked it. Aeroflot also offers a vegetarian menu, but this has to be pre-ordered in advance; as I didn't order it earlier, next time, if it is a fasting period before Easter or Christmas, I would definitely pre-order a vegetarian meal.

What I truly liked about Aeroflot's Sukhoi SuperJet 100-95, with which we flew, is the simplistic design: no extras inside the plane, no annoying monitors that show you the altitude all the time and report over which country you're flying, and generally no useless and often unwanted information.
However, the pilot did report some information a couple of times, such as that we were flying over Brest and Minsk, though of course the pilot's English was hard to understand. Well, the radio from the pilot on the airplane is definitely something that didn't impress me, as on any western airplane the radio connection from the pilot is heard much more clearly, but I guess this can be solved quickly.
The toilets in the airplane were also normal ones, as in any other western-built Boeing. Here is the place to say it was always interesting to me how the toilet gets wiped: once the flush button is pressed, a small hole opens, letting in some air directly from outside to pull off what is inside the toilet, and that quickly cleans it up 🙂

Entering the airplane was done through an attachable cordon (air tunnel) and not like on my earlier flights, on which we were driven to the airplane by an ordinary bus.  

moscow-sheremetevo-terminal-2

Sheremetevo is a really huge airport, and what impressed me is the fact that Sheremetevo's airport letters are written in pure Cyrillic, something surely unique to see for any person coming from the west.

A first impression of Moscow Sheremetevo, if compared to Schiphol airport in Holland or Heathrow airport in London, is that Sheremetevo is much calmer and quieter; the airport looks and feels relaxing and cozy, even though its outlook is a bit old-fashioned. The marketing and advertisement all around the place, and the complexity of Sheremetevo, are much less than at any other huge international airport, and some things are made in a typically Russian manner; even the number of products being sold in the airport is much smaller than in any western airport, though there are plenty of cafeterias and restaurants to have lunch / dinner at.

Moscow-Shermeetevo-free-duty-shops-and-terminal-D-red-cooridor-signatures

The simplicity of Sheremetevo airport is really a great thing, as even the monitors showing flight information display it in a very understandable and simple way, even though possessing a generally old-fashioned, DOS-like outlook if compared to the modern European Union / United States airports.
While waiting at Terminal D for the onboarding of the flight Moscow -> Minsk, I had an amazing view of the airport's airplanes moving all around.
This is the first time I saw so many airplanes gathered in one place; even though I've flown via Sheremetevo before, this is the first time I'm starting to understand how big Sheremetevo really is.

One unfortunate fact about the flight was that our luggage was not transferred directly to Minsk, Belarus, but sent from Sofia to Sheremetevo, and then we had to wait for some time to pick it up right after we had been checked at the border control kiosk.
There, for the Russian border police lady, I had to answer a few questions, and she filled in for me a small blank called a "Migration card" for the transfer visa.
Oh yes, I almost forgot: in order to fly through Russia to another country inside the Eurasian Union such as Belarus, you need to have a Russian transfer visa, which is applied for at the Russian embassy in Bulgaria.
The Russian transit flight visa costs 60 EUR (if it is to be made within 4 to 10 days), and for a 3-day creation of the visa it costs 95 EUR.

Sheremetevo_airport_Saint_Nicolas_Eastern_Orthodox-Chapel

One very great thing about Sheremetevo which I liked so much, as I'm an Eastern Orthodox Christian, is the existence of an Eastern Orthodox chapel in honour of Saint Nicolas the Myrrh-Bearer, who in our Christian faith is considered to be a protector of all travelers. Thus if like me you happen to be a Christian and you're flying via Moscow, it is very nice to drop by for a few minutes in St. Nicolas chapel to light a candle and pray to the saint with a plea for a safe flight.

 

Sheremetevo-airport-Saint_Nicolas_Chapel-iconostas-icons-of-the-Savior-Jesus-Christ-and-Virgin_Mary
Here in Minsk the airport is also very cozy and warm (especially the old terminal), so one has a relaxing feeling once in Minsk.
Minsk airport is also very well organized and well maintained, so to be honest it personally looks more beautiful to me than Sheremetevo.

Minsk-National-airport-logo

To transfer to the 2nd airplane that flew from Moscow Sheremetevo to Minsk we needed to move from Terminal F (where we arrived) to Terminal D, which is the terminal that runs the local Eurasian Union flights from Russia to Belarus (note that Russia and Belarus don't have a flight border, so anyone flying from Russia to Belarus can fly freely, and once you reach Belarus you're not checked at all by any border control; that's pretty cozy because we didn't have to be checked a second time once we reached Minsk).

minsk-inside-airport-shops-and-infrastructure

My impression of Minsk National airport is that it is a nice mixture of leftover communist architecture and modern architecture.
There are plenty of private buses (marshrutki) that go every 15 minutes from Minsk airport, which is 43 km from the city center, as well as an ordinary state bus that goes to the train station and city center.
The train station in Minsk is also at a very Western level and in some things it is even better, as it has a cheap shop where you can buy food at the same prices as in any other supermarket chain in Belarus.
We travelled to the train station and there what struck me was the touch screen interface allowing you to see various info about Minsk infrastructure and the train timetables; there was even a video call to Train Station staff to find out anything you can't find out yourself.

 

Check Windows Operating System install date, Full list of installed and uninstalled programs from command line / Check how old is your Windows installation?

Tuesday, March 29th, 2016

when-was-windows-installed-check-howto-from-command-line
Sometimes when you have some inherited Windows / Linux OS servers or desktops, it is useful to know the Operating System install date. Usually the install date of the OS is close to the date of purchase of the system; this is especially true for Windows but not necessarily true for Linux based installs.

Knowing the install date is useful especially if you're not sure how outdated a certain operating system is. Knowing how long ago the current installation was performed can give you hints on whether to make re-install plans in order to keep the system security up to date, and can give you an idea whether the system is prone to common errors or security flaws from the time of installation.

 

1. Check out how old your Windows install is

Finding out the age of a Windows installation can be done on almost all NT 4.0 based Windows versions and onwards; getting the Winblows install date is obtained the same way on Windows XP / Vista / 7 and 8.

Besides many useful things, such as detailed information about the configuration of your PC / notebook, systeminfo can also provide you with the install date; to get it just run from the command line (cmd.exe):
 

C:\Users\hipo> systeminfo | find /i "install date"
Original Install Date:     09/18/13, 15:23:18 PM


check-windows-os-install-date-from-command-line-howto-screenshot
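For scripting purposes you can cut just the date out of the systeminfo output with a for /f loop. Below is a rough sketch; note it assumes an English Windows locale, where the label is "Original Install Date", and that the date starts at the 4th whitespace-separated token of that line:

C:\> for /f "tokens=4-6" %a in ('systeminfo ^| find /i "install date"') do @echo %a %b %c
09/18/13, 15:23:18 PM

(In a batch file double the percent signs, i.e. %%a %%b %%c.)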

Since the systeminfo labels and date format are locale dependent, if you need the initial Windows system install date in a machine friendly way it might be much better to use the WMIC command to get the info:

 

C:\Users\hipo>WMIC OS GET installdate
InstallDate
20130918152318.000000+180


The only downside of using WMIC, as you can see, is that it provides the Windows OS install date in a raw unparsed format (YYYYMMDDhhmmss followed by a fraction and UTC offset), but for scripters that's actually great.
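If you want that raw WMIC date turned into something human readable inside a batch script, a small sketch like the one below can slice it up with substring expansion. This is a rough example of mine, not battle tested on every Windows version; the offsets assume the standard YYYYMMDDhhmmss prefix shown above:

@echo off
rem parse_install_date.bat - rough sketch: turn WMIC's raw InstallDate
rem (e.g. 20130918152318.000000+180) into YYYY-MM-DD HH:MM:SS
setlocal
set "RAW="
rem skip=1 skips the "InstallDate" header line; the if guard ignores
rem the junk blank-ish lines wmic appends after the data line
for /f "skip=1 tokens=1" %%d in ('wmic os get installdate') do (
    if not defined RAW set "RAW=%%d"
)
echo OS installed on: %RAW:~0,4%-%RAW:~4,2%-%RAW:~6,2% %RAW:~8,2%:%RAW:~10,2%:%RAW:~12,2%
endlocal

Save it as e.g. parse_install_date.bat and run it; with the output above it would print: OS installed on: 2013-09-18 15:23:18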

2. Check Windows installed / uninstalled software and uptime from command line

Another common thing to check next to the Windows install date is the Windows uptime; the easiest way to see that is in Task Manager, which from the command line you can start by running taskmgr:

windows-task-manager-how-to-check-windows-operating-system-uptime-easily

For those who want to get the uptime from the Windows command line for scripting purposes, this can be done again with the systeminfo cmd, i.e.:

 

C:\> systeminfo | find "System Boot Time:"
System Boot Time:          03/29/16, 08:48:59 AM


windows-os-command-to-get-system-uptime-screenshot
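Another old school trick to get the boot time from the command line, in case you don't want to wait for systeminfo (which can be slow as it gathers a lot of data), is to ask the Workstation service when it was started; normally that matches the boot time, unless the service was restarted later:

C:\> net statistics workstation | find "Statistics since"
Statistics since 3/29/2016 8:48:59 AM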

Other helpful Windows command liners you might want to know about get you the full list of installed programs from the command line; this again is done with WMIC:

 

C:\> wmic /OUTPUT:my_software.txt product get name

 


get-a-full-list-of-installed-software-programs-on-windows-xp-vista-7-8-command-howto-screenshot
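If you also want the version of each program, or output you can feed to a spreadsheet / script, WMIC can emit CSV as well; keep in mind wmic product enumerates the MSI database, so it can take a while to complete:

C:\> wmic /OUTPUT:my_software.csv product get name,version /format:csv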

An alternative way to get a full list of installed software on Windows is to use the Microsoft / Sysinternals psinfo command (psinfo is part of the Sysinternals PsTools suite, which has to be downloaded separately):

 

C:\> psinfo -s > software.txt
C:\> psinfo -s -c > software.csv


If you need to get the complete list of software uninstall entries using the command line (e.g. for batch scripting purposes), you can query the Uninstall key from the Windows registry (despite its name, the key holds the uninstall information of currently installed programs), like so:

 

C:\>reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall


The command output will be something like in the below shot:

windows-OS-show-get-full-list-of-uninstalled-programs-using-a-command-line-screenshot
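The raw output above lists just the registry key names; if you want the human readable program names, you can recursively query the DisplayName value of each uninstall entry. On 64-bit Windows also check the Wow6432Node branch, where 32-bit programs register themselves:

C:\>reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall /s /v DisplayName

C:\>reg query HKEY_LOCAL_MACHINE\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall /s /v DisplayName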

Well that's all folks 🙂