Archive for August, 2019

How to change default Text editor in Linux

Saturday, August 31st, 2019

This is a very trivial question, but as I thought someone starting with Linux Operating System basics might be interested, I will shortly explain in this small article how to change the default text editor on Linux.

Changing the default text editor is especially helpful if you have to administer newly purchased dedicated servers that come with a default Operating System preinstalled.

By default many Linux distributions such as Debian / Ubuntu come with nano (an enhanced free Pico editor clone) as the default text editor; many people like me are irritated by that and prefer to use vim (Vi Improved), mcedit (the Midnight Commander editor), joe or emacs as a default instead.
 

1. Changing default console text editor on Debian based Linux


On Debian / Ubuntu / Mint and other deb based distributions the easiest way to change text editor is with update-alternatives cmd.
 

update-alternatives --config editor


changing-default-text-editor-in-linux

Using Debian update-alternatives is useful as it makes the change OS wide, and the default mc viewer program mcview will also understand the change in the default text editor, which makes it the preferable way to do it on the deb based OS family.

An alternative way to set the default programs system wide is to create the respective symbolic link in /etc/alternatives; actually what the update-alternatives wrapper script does is exactly this, it creates the required symlink.
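For illustration, here is roughly what the underlying change looks like (a sketch only – the exact target binary, e.g. /usr/bin/vim.basic below, varies per distribution, so prefer the update-alternatives wrapper):

# check where the editor alternative currently points
ls -l /etc/alternatives/editor
# repoint it manually to vim (update-alternatives --config editor does this for you)
ln -sf /usr/bin/vim.basic /etc/alternatives/editor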

2. Changing default text editor on any Linux


To change the text editor for only a single existing system user (one listed in /etc/passwd), you need to edit $HOME/.bashrc, e.g. ~/.bashrc on Debian based Linux, or on Fedora / RHEL / CentOS add it to ~/.bash_profile:
 

vim ~/.bashrc


And add

alias editor=vim

or

export EDITOR='/path/to/text-editor/program'
export VISUAL='/path/to/text-editor/program'

For example, to change to mcedit so it opens in any program that triggers the default text editor:

export EDITOR='/usr/bin/mcedit'
export VISUAL='/usr/bin/mcedit'


To make the change system wide on any Linux distribution, you have to add the export EDITOR / export VISUAL lines at the end of /etc/bash.bashrc.
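For example, to append them in one go (a minimal sketch assuming mcedit; substitute whichever editor you prefer):

echo "export EDITOR='/usr/bin/mcedit'" >> /etc/bash.bashrc
echo "export VISUAL='/usr/bin/mcedit'" >> /etc/bash.bashrc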

To load the newly included .bashrc* instructions use the source command:
 

source ~/.bashrc
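To verify the change took effect (assuming mcedit was exported as above), print the variable back:

echo $EDITOR
/usr/bin/mcedit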

 

3. Changing the default text editor for mcview if all else fails

 

Once mc is running, use the following menu keys order (also visible from the Midnight Commander menus):
 

    F9 Activates the top menu.
    o Selects the Option menu.
    c Opens the configuration dialog.
    i Toggles the use internal edit option.
    s Saves your preferences.

mcview-how-to-change-default-text-editor-screenshot

That's all folks Enjoy !

 

Howto Configure Linux shell Prompt / Setup custom Terminal show Prompt using default shell variables PS1, PS2, PS3, PS4

Tuesday, August 27th, 2019

how-to-configure-lunux-bsd-shell-prompt-ps1-howto-make-your-terminal-console-shell-nice-and-shiny-1

System Console, Command Operation Console or Terminal is a physical device for text (command) input from a keyboard, getting the command output and monitoring the status of a shell or programs' I/O operations, traditionally with an attached screen. With the development of computers, physical consoles have become emulated and the input / output is translated on the monitor, usually via a data transfer protocol – historically mostly over a TCP/IP connection to a remote IP with telnet or rsh, but due to security limitations consoles are nowadays accessed over encrypted network protocols such as SSH (Secure Shell).
The ancestors of physical consoles were just a Terminal (a monitoring / monitor device attached to a Mainframe system computer).

Mainframe-physical-terminal-monitor-Old-Computer

What is Physical Console
A classical TTY (TeleTYpewriter) device looked like this and served the purpose of being just a communication and display device, whereas in reality the actual computing and storage tape devices were in a separate room, communicating with the Terminal.

mainframe-super-computer-computing-tape-machine
TTYs are still present in modern UNIX like GNU / Linux distributions and the Berkeley 4.4 code based BSDs – FreeBSD / NetBSD / OpenBSD – if you have installed the OS on a physical computer; in FreeBSD and Solaris / SunOS there is also a tty command. The tty utility in *nix writes the name of the terminal attached to standard input to standard output; in Linux there is a GNU remake of the same program, GNU tty, part of the coreutils package (try man tty for more).

A pseudo terminal is recognizable in Linux as it is indicated with the three letters pts (pseudo terminal slave), standing for a terminal device which is emulated by another program (example: xterm, screen, or ssh are such programs). A pts is the slave part of a pty; as it is pseudo, there is no separate binary program for it, it is dynamically allocated in memory.
In Cisco Switch / Router devices the physical Serial Console attached to the device is the Console line, while a network connection to the device is emulated with a virtual console session on a VTY (Virtual Terminal Line). In FreeBSD the actual /dev/pts* /dev/tty* temporary devices on the OS are slightly different and have naming such as /dev/ttys001.
But the existence of a tty and pts emulator is not enough for communicating interrupts to the Kernel and userland binaries of the Linux / BSD OS; to send the commands, on top of it runs a system shell such as csh / tcsh / zsh or bash, which is usually the first program set to run after a user logs in over a tty or pseudo tty virtual terminal.
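To check which terminal device your current shell is attached to, just run tty – on a terminal emulator or SSH session you will get a pts device, on a physical text console a tty one (the device number below is just a sample):

hipo@pcfreak:~$ tty
/dev/pts/0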

linux-tty-terminal-explained-brief-intro-to-linux-device-drivers-20-638

 

Setting the Bash Prompt in Terminal / Console on GNU / Linux

Bash has environment variables to control multiple of its aspects, usually visible with the env command. One important variable to change in the past was for example USER / USERNAME, which was read by IRC chat clients such as BitchX / irssi and could be displayed publicly; so if not changed to a separate value, one could learn your Linux login username by a simple /whois query to the Nickname in question (if no inetd / xinetd service was running on the Linux box, and usually inetd was not running).

Below is my custom USER / USERNAME set to a separate value:

hipo@pcfreak:~$ env|grep USER
USERNAME=Attitude
USER=Attitude

There are plenty of variables to tune, such as the MAIL store directory, the terminal used TERM, EDITOR etc., but there are some variables that are not visible with an env query as they're not globally available for all users but just for the single user. To show these you need to use the declare command instead; to get a full list of all single-user and system wide defined variables and functions, type declare in the bash shell. For readability, below are the last 10 returned results:

 

hipo@pcfreak:~$ declare | tail -10
{
    local quoted=${1//\'/\'\\\'\'};
    printf "'%s'" "$quoted"
}
quote_readline ()
{
    local quoted;
    _quote_readline_by_ref "$1" ret;
    printf %s "$ret"
}

 

PS1 is present virtually on any modern Linux distribution and is set through the user home directory's $HOME/.bashrc, ~/.profile or .bash_profile, or system wide globally for all users existing in /etc/passwd (the password database file) from /etc/bash.bashrc.
In Debian / Ubuntu / Mint GNU / Linux this system variable is set in the user home's .bashrc, but in Fedora / RHEL Linux distros PS1 is configured from /home/username/.bash_profile. To find out where PS1 is located for your user:

cd ~
grep -Rli PS1 .bash*

Here is one more example:

hipo@pcfreak:~$ declare|grep -i PS1|head -1
PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
 

hipo@pcfreak:~$ grep PS1 /etc/bash.bashrc
[ -z "$PS1" ] && return
# but only if not SUDOing and have SUDO_PS1 set; then assume smart user.
if ! [ -n "${SUDO_USER}" -a -n "${SUDO_PS1}" ]; then
  PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '


Getting current logged in user shell configured PS1 variable can be done with echo:

hipo@pcfreak:~$ echo $PS1
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$

So let's observe a little bit the meaning of this obscure line of instructions, which is understood by BASH when being read from the PS1 var. To do so, I'll give a list of the meanings of the main understood escape codes, each of which is defined with \.

The ${debian_chroot} shell variable is defined from /etc/bash.bashrc
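On Debian it is typically populated by a snippet like the following (a sketch of the stock code – check your own /etc/bash.bashrc for the exact lines):

if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
    debian_chroot=$(cat /etc/debian_chroot)
fi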

The easiest way to change PS1 is to export the string you like, like so:

 

root@linux:/home/hipo# export PS1='My-Custom_Server-Name# '
My-Custom_Server-Name# echo $PS1
My-Custom_Server-Name#

 

  •     \a : an ASCII bell character (07)
  •     \d : the date in “Weekday Month Date” format (e.g., “Tue May 26”)
  •     \D{format} : the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
  •     \e : an ASCII escape character (033)
  •     \h : the hostname up to the first ‘.’
  •     \H : the hostname
  •     \j : the number of jobs currently managed by the shell
  •     \l : the basename of the shell's terminal device name
  •     \n : newline
  •     \r : carriage return
  •     \s : the name of the shell, the basename of $0 (the portion following the final slash)
  •     \t : the current time in 24-hour HH:MM:SS format
  •     \T : the current time in 12-hour HH:MM:SS format
  •     \@ : the current time in 12-hour am/pm format
  •     \A : the current time in 24-hour HH:MM format
  •     \u : the username of the current user
  •     \v : the version of bash (e.g., 2.00)
  •     \V : the release of bash, version + patch level (e.g., 2.00.0)
  •     \w : the current working directory, with $HOME abbreviated with a tilde
  •     \W : the basename of the current working directory, with $HOME abbreviated with a tilde
  •     \! : the history number of this command
  •     \# : the command number of this command
  •     \$ : if the effective UID is 0, a #, otherwise a $
  •     \nnn : the character corresponding to the octal number nnn
  •     \\ : a backslash
  •     \[ : begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
  •     \] : end a sequence of non-printing characters

The default PS1 prompt set on Debian Linux is:
 

echo $PS1
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$


As you can see, \u (print username), \h (print hostname) and \W (basename of current working dir) or \w (print $HOME/current working dir)
are the most essential; the rest are the bell character, escape character etc.

A very good way to make your life easier, learn the abbreviations and generate exactly the PS1 PROMPT you want is the Easy Bash PS1 Generator Web Utility, with which you can just click over buttons that produce all of the PS1 codes.
 

1. How to show current hour:minute:seconds / print full date in Prompt Shell (PS)


Here is an example setting the Bash shell prompt to include also the current time in hour:minute:seconds format (very useful if you're executing commands on critical servers and you run commands in some kind of virtual terminal like screen or tmux).
 

root@pcfreak:~# PS1="\n\t \u@\h:\w# "
14:03:51 root@pcfreak:/home#


PS1-how-to-setup-date-time-hour-minutes-and-seconds-in-bash-shell-prompt
 

 

export PS1='\u@\H \D{%Y-%m-%d %H:%M:%S%z}] \W ] \$ '

 


export-PS1-Linux-set-full-date-time-clock-prompt-screenshot-console


Make superuser appear in RED color (adding PS1 prompt custom color for a User)
 

root@pcfreak:~$  PS1="\\[$(tput setaf 1)\\]\\u@\\h:\\w #\\[$(tput sgr0)\\]"

 

how-to-change-colors-in-bash-prompt-shell-on-linux-shell-environment

In the above example the shell prompt color is changed for the administrator (root), so the hash symbol # shows in red – green, yellow and blue are shown just for the sake of showing how it is done – however this example can be adapted for any user on the system. Setting different coloring for users is very handy if you have to administer a Mail Server service like Qmail or another application that consists of multiple daemons such as qmail + vpopmail + clamd + mysql etc. Under such circumstances, coloring each of the users in a different color like in the example is very useful for debugging.

Coloring the PS1 system prompt on Linux to different color has been a standard practice in Linux Server environments running Redhat Enterprise Linux (RHEL) and SuSE Enterprise Linux and some Desktop distributions such as Mint Linux.

To make the root prompt red colored only for the system super user (root) on any Linux distribution, add the following to /etc/bashrc, e.g.

vim /etc/bashrc
 


# If id command returns zero, you've root access.
if [ $(id -u) -eq 0 ];
then # you are root, set red colour prompt
  PS1="\\[$(tput setaf 1)\\]\\u@\\h:\\w #\\[$(tput sgr0)\\]"
else # normal
  PS1="[\\u@\\h:\\w] $"
fi

 

 

2. How to make the prompt of a System user appear Green


Add to ~/.bashrc  following line

 

 

PS1="\\[$(tput setaf 2)\\]\\u@\\h:\\w #\\[$(tput sgr0)\\]"
 

 

3. Print New line, username@hostname, base PTY, shell level, history (number), newline and full working directory $PWD

 

export PS1='\n[\u@\h \l:$SHLVL:\!]\n$PWD\$ '

 

4. Showing the number of jobs the shell is currently managing


This is useful if you run and switch between jobs with the fg / bg (foreground / background) commands
and tend to forget some old job.

 

export PS1='\u@\h [jobs: \j] \w \$ '

 

Multi Lines Prompt / Make very colorful Shell prompt full of stats info

PS1="\n\[\033[35m\]\$(/bin/date)\n\[\033[32m\]\w\n\[\033[1;31m\]\u@\h: \[\033[1;34m\]\$(/usr/bin/tty | /bin/sed -e ‘s:/dev/::’): \[\033[1;36m\]\$(/bin/ls -1 | /usr/bin/wc -l | /bin/sed ‘s: ::g’) files \[\033[1;33m\]\$(/bin/ls -lah | /bin/grep -m 1 total | /bin/sed ‘s/total //’)b\[\033[0m\] -> \[\033[0m\]"

 

 

prompt-show-how-many-files-and-virtual-pts-ps1-linux
 

5. Set color change on command failure


If a command fails and ends with a non zero exit status, often accompanied by some kind of nasty message, and you want to make that more apparent by highlighting the prompt in red, here is how:

 

PROMPT_COMMAND='PS1="\[\033[0;33m\][\!]\`if [[ \$? = “0” ]]; then echo “\\[\\033[32m\\]”; else echo “\\[\\033[31m\\]”; fi\`[\u.\h: \`if [[ `pwd|wc -c|tr -d ” “` > 18 ]]; then echo “\\W”; else echo “\\w”; fi\`]\$\[\033[0m\] “; echo -ne “\033]0;`hostname -s`:`pwd`\007"'

 

6. Other beautiful PS1 Color Prompts with statistics

 

PS1="\n\[\e[32;1m\](\[\e[37;1m\]\u\[\e[32;1m\])-(\[\e[37;1m\]jobs:\j\[\e[32;1m\])-(\[\e[37;1m\]\w\[\e[32;1m\])\n(\[\[\e[37;1m\]! \!\[\e[32;1m\])-> \[\e[0m\]"

 

 

another-very-beuatiful-bash-colorful-prompt

 

7. Add Multiple Colors to the Same Shell prompt

 

function prompt {
  local BLUE="\[\033[0;34m\]"
  local DARK_BLUE="\[\033[1;34m\]"
  local RED="\[\033[0;31m\]"
  local DARK_RED="\[\033[1;31m\]"
  local NO_COLOR="\[\033[0m\]"
  case $TERM in
    xterm*|rxvt*) TITLEBAR='\[\033]0;\u@\h:\w\007\]' ;;
    *) TITLEBAR="" ;;
  esac
  PS1="\u@\h [\t]> "
  PS1="${TITLEBAR}$BLUE\u@\h $RED[\t]>$NO_COLOR "
  PS2='continue-> '
  PS4='$0.$LINENO+ '
}

colorful-prompt-blue-and-red-linux-console-PS1
 

8. Setting / Change Shell background Color


changing-background-color-of-bash-shell-prompt-linux

 

export PS1="\[$(tput bold)$(tput setb 4)$(tput setaf 7)\]\u@\h:\w $ \[$(tput sgr0)\]"

 

tput Color Capabilities:

  • tput setab [1-7] – Set a background color using ANSI escape
  • tput setb [1-7] – Set a background color
  • tput setaf [1-7] – Set a foreground color using ANSI escape
  • tput setf [1-7] – Set a foreground color

tput Text Mode Capabilities:

  • tput bold – Set bold mode
  • tput dim – turn on half-bright mode
  • tput smul – begin underline mode
  • tput rmul – exit underline mode
  • tput rev – Turn on reverse mode
  • tput smso – Enter standout mode (bold on rxvt)
  • tput rmso – Exit standout mode
  • tput sgr0 – Turn off all attributes

Color Code for tput:

  • 0 – Black
  • 1 – Red
  • 2 – Green
  • 3 – Yellow
  • 4 – Blue
  • 5 – Magenta
  • 6 – Cyan
  • 7 – White

 

9. Howto use a bash shell function inside the PS1 variable

If you administrate Apache or other HTTPD servers, or any other server whose processes are forked and can raise drastically at times, it is useful to keep an eye on the process count while actively working on the server.

 

function httpdcount { ps aux | grep apache2 | grep -v grep | wc -l; }
export PS1='\u@\h [$(httpdcount)]> '

10. PS2, PS3, PS4 little known variables
 

I'll not get much into detail to PS2, PS3, PS4 but will mention them as perhaps many people are not even aware they exist.
They're rarely used in the daily system administrator's work but useful for Shell scripting purposes of Dev Ops and Shell Scripting Guru Programmers.

  • PS2 – Continuation interactive prompt

A very long unix command can be broken down to multiple lines by giving \ at the end of the line. The default interactive prompt for a multi-line command is "> ". Let us change this default behavior to display "continue->" by using the PS2 environment variable as shown below.

hipo@db-host:~$ myisamchk --silent --force --fast --update-state \
> --key_buffer_size=512M --sort_buffer_size=512M \
> --read_buffer_size=4M --write_buffer_size=4M \
> /var/lib/mysql/bugs/*.MYI
[Note: This uses the default ">" for the continuation prompt]

  • PS3 – Prompt used by "select" inside a shell script (useful if you write scripts with user prompts); see the example after this list.

     

  • PS4 – Used by "set -x" to prefix tracing output.
    The PS4 shell variable defines the prompt that gets displayed when you trace a shell script in debug mode (set -x).
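Here is a minimal sketch demonstrating both PS3 and PS4 (the menu options and the PS4 string are made up for illustration):

#!/bin/bash
# PS3 demo: the prompt displayed by the select builtin
PS3='Choose an editor: '
select ed in vim nano mcedit; do
    echo "You picked: $ed"; break
done

# PS4 demo: prefix each traced line with the script name and line number
PS4='+ $0:$LINENO: '
set -x
echo "this line is traced"
set +x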

You can find more examples with small shell scripts demonstrating PS2, PS3, PS4 use in thegeekstuff's article Take control of PS1, PS2, PS3, PS4 – read it here.

 

Summary


In this article, I've shortly reviewed what a TTY is, how it evolved into the pseudo TTY and how it relates to current shells, which are the interface communicating with the modern UNIX like Operating System's userland and kernel.
It was also reviewed shortly how the current definitions of shell variables can be viewed with the declare cmd, how to display the PS1 variable and how to modify PS1 to put different statistics and monitoring parameters straight into the command shell. I've shown some common PS1 strings that report the current date, hour, minute and seconds, modify the coloring of the bash prompt shell and show process counts, and some PS1 examples were given that combine beautiful shell coloring, as well as how the prompt background color can be changed.
Finally it was shown how a combination of commands can be executed by exporting to PS1, to update the process count of Apache on every shell prompt iteration.
Other shell goodies are mostly welcome.

 

 

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging into a file in GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or with piping to the tee command

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full featured, robust logging bash shell script function that will run as a daemon continuously as a background process and will output
all content from itself to an external log file?
In the below article, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used that will later store to a file of choice.
Finally I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/ – i.e. /var/log/syslog /var/log/user /var/log/daemon /var/log/mail etc.
 

1. Bash script function for logging: write_log()


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';
write_log()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo $LOGTIME": $text";
      else
          LOG=$LOG_FILE.`date +%Y%m%d`
          touch $LOG
          if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}

 

  •  Using the function from within the script itself, or from an external one, to write out to the defined log file:

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – a small intro on what a Unix Named Pipe is


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Now that named pipes were shortly explained for those who hear about them for the first time, it is time to say that a named pipe in unix / linux is created with the mkfifo command; the syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linux-es with an older bash and older bash shell scripts were using mknod instead.
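For completeness, the older mknod equivalent looks like this (the trailing p argument tells mknod to create a named pipe / FIFO):

mknod /tmp/name-of-named-pipe p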
So the idea behind the logging script is to use a simple named pipe, read input from it and use the date command to log the exact time the command was executed; here is the script.

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

if [ -p $named_pipe ]; then
    rm -f $named_pipe
fi
mkfifo $named_pipe

while true; do
    read LINE < $named_pipe
    echo "$(date): $LINE" >> $output_named_log
done


To get any other script output logged now, with a nice current date prepended, write the output content to the logging buffer (the named pipe) like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use named pipes for logging: they perform better (are generally quicker) than Unix sockets.

 

3. Logging to system log files with logger

 

If you need a quick one-off way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message and it will log your specified output, by default to the /var/log/syslog common logfile.

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.
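A minimal sketch of how an init style script typically wraps it (the service name here is made up for illustration; the -t flag sets the log tag):

#!/bin/bash
SERVICE='my-daemon'
case "$1" in
  start)
    logger -t "$SERVICE" "Starting $SERVICE"
    # ... start the actual daemon here ...
    ;;
  stop)
    logger -t "$SERVICE" "Stopping $SERVICE"
    # ... stop the actual daemon here ...
    ;;
esac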

With logger you can print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"


A whole list of the valid facility names and priority levels supported by logger (as taken from its current Linux manual) is as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the Linux main log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type it, even without any shell quoting:

 

logger The reason to reboot the server currently was a system security update

 

So what else is logger useful for?

In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs with their respective priorities work as expected. This is especially
useful, as I've seen on Cloud Hosted OpenXEN based servers as a SAP consultant that sometimes logging to basic log files stops working for months or even years due to
syslog and syslog-ng hangs caused by other third party scripts and programs.
To test that all basic logging facilities and priorities on the system logs work as expected, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
for i in auth authpriv cron daemon kern lpr mail news \
syslog user uucp local0 local1 local2 local3 local4 local5 local6 local7
do
    for k in debug info notice warning err crit alert emerg
    do
        logger -p $i.$k "Test daemon message, facility $i priority $k"
    done
done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get an error such as

logger: unknown facility name: mark

check and set the proper naming as described in the logger man page.

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script:

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained:

exec 3>&1 4>&2

  •     Saves the file descriptors so they can be restored to whatever they were before the redirection, or used themselves to output to whatever they were before the following redirect.

trap 'exec 2>&4 1>&3' 0 1 2 3

  •     Restores the file descriptors for particular signals. Not generally necessary since they should be restored when the sub-shell exits.

exec 1>log.out 2>&1

  •     Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post how can I fully log all bash script actions (unfortunately on the latest Debian 10 Buster Linux, which is prebundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly well, but on older bash versions it works fine).

Sum it up


To shortly summarize: there are plenty of ways to do logging from a shell script – the logger command, a function or a named pipe being the most classic. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.

 

Check weather forecast from console (terminal) on GNU / Linux and FreeBSD howto

Friday, August 23rd, 2019

how to get weather forecast prognosis from command line text terminal / console on Linux and FreeBSD

Doing everything in the Linux console / terminal is something perhaps every Linux / BSD hacker wants, as a Graphical User Interface, web searching or Graphical Environment plugins are an unneeded complexity; plus googling or duckduckgoing for the weather to check your next vacation destination city has become more and more of a terrible experience (for me), as I'm not a big fan of using the OS in a GUI.
In that manner of thoughts, as a Linux console geek and hard core ASCII art fan, I was recently happy to find that it is possible to check the weather forecast in a tty console or Linux terminal, in a beautiful ASCII art way, easily through the wttr.in web service – a web application weather forecast service that supports displaying the current and a few future days' weather forecast, either in a browser as plain text or from the command line, by simply accessing it with your favourite web access / transfer tool such as
wget / curl, any of your favourite text browsers elinks / lynx / w3m, or if on *BSDs use the fetch command.

 

Install the curl data transfer tool if it is not present already


Wget is installed by default across most Linux distributions and fetch is present by default on BSDs. Displaying the forecast in a text browser would perhaps rarely be used, but if you decide to give it a try, maybe try with elinks (to get colorful output); w3m and lynx will display black and white results.

In case if you miss curl, install it:

On Debian distro

 

aptitude install -y curl


or Fedora

yum install -y curl


Of course, as wttr.in is an Internet based Weather Forecast service, the minimum you need is Internet connectivity on your Linux / BSD desktop computer.

The text based Weather Forecast Web App currently supports:

  • Display of the current weather as well as a 3-day weather forecast, split into morning, noon, evening and night

  • Temperature is displayed for morning, noon, evening and night (includes temperature range, wind speed and direction, viewing distance, precipitation amount and probability)
  • Provide results for Weather based on City / town / village location
  • Supports display of Moon Phases Forecast in calendar days
  • Supports multilingual names (Bulgarian phonetic cyrillic / Russian and other exotic UTF-8 encodings such as Chinese and Japanese); 50+ languages are currently supported
  • Has the ability to give a prognosis for a hostname (domain) location, based on its GeoIP location on the Globe
  • Geographical locations / landmarks such as Lakes / Mountains etc. can be easily queried
  • Query result metrics can be configured, e.g. USCS units or the EU and rest of world accepted (SI) metric units
  • The displayed result can be either in ANSI (if from terminal / console), HTML (if queried from a browser) or PNG (if needed)

 

Where could wttr.in be useful?

The best application uses I can think of are for server (shell) / perl scripting automation purposes. It could be useful especially for too hot, too cold or too wet locations – e.g. Linux monitoring hosts in small and middle sized Data Centers or Green Energy (Sun Panel) Parks / Wind Energy sites – to track possible problems with overheating or overcooling of servers due to abnormally excessive temperatures, such as the ones we experienced this summer all across Europe, or in hot DC locations such as deserts in African countries and Saudi Arabia, or cold ones like Chukotka and Siberia in Russia.
Another application is as a backup option to other normal weather report services used by PHP or Python scripts that fetch data from multiple places.
Of course, since this is a third party controlled service, under excessive connection requests the service could get flooded and stop working; but I guess for any commercial use, wttr.in creator Igor Chubin would be happy to sell a specifically crafted service to any end user candidates.
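As a quick illustration of the scripting use case, here is a minimal cron-able sketch (the city, recipient address and use of the mail command are made up for illustration; ?0qT are documented wttr.in options – current weather only, quiet, no terminal colors):

#!/bin/bash
# fetch the current weather for the DC location, no colors, current weather only
WEATHER=$(curl -s 'http://wttr.in/Sofia?0qT')
# mail the report only if fetching succeeded
if [ -n "$WEATHER" ]; then
    echo "$WEATHER" | mail -s "DC weather report" admin@example.com
fi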


Here are a few examples of the beautiful returned ASCII art formatted output of wttr.in.
 

1. Getting a three day Weather Forecast prognosis for a city / town location

To get the current weather in my city of living, Sofia, Bulgaria, just pass the city to the URL address:

curl http://wttr.in/Sofia

text-console-wttr.in-Weather-forecast-Sofia-for-Linux

 

links http://wttr.in/Dobrich

 

curl-Linux-show--Dobrich-Weather-forecast-in-lynx-text-browser


The default links (Linux) www text browser produces ugly black and white output.

2. Displaying Weather forecast with wget

 

wget -O- -q http://wttr.in


getting-weather-forecast-on-linux-terminal-console-with-wget-command

If you're lazy you can even omit the http://, as wget will assume the Hypertext Transfer Protocol by itself:

 

wget -O- -q wttr.in

 

3. Getting Forecast results for a Tourist Destination


Let's get the weather forecast for the popular Bulgarian tourist destination of the Seven Rila Lakes (near the Rila Monastery), situated in the Rila Mountain, BG.

 

curl http://wttr.in/Seven+Rila+Lakes

 

Console-terminal-Weather-forecast-Linux-Seven-Rila-Lakes

 

 

4. Display Forecast for a specific server IP


Displaying information for a specific server IP address as currently situated in the GeoIP database could of course be not really true, as the IP could be just a Load Balancer or a router that does NAT to some internal DMZ-ed location server, but anyway it is a cool feature.

Let's get information on what the weather is at Google's Public DNS server IP 8.8.8.8, so commonly used to guarantee Windows and Linux desktop client machines' Internet connectivity.
 

curl wttr.in/@8.8.8.8

 

wttr.in-Linux-text--forecast-service-curl-screenshot Google Public DNS location weather forecast

5. Download PNG image picture from wttr.in service

 


Let's say you want to get a 3 day standard Weather forecast for the popular Black Sea resort town in Bulgaria, Pomorie (a beautiful sea city which even has a functioning monastery, the Pomorie Monastery, situated near the sea coast).

 

curl http://wttr.in/Pomorie.png
 

 

--2019-08-22 20:15:51--  http://wttr.in/Pomorie.png
Resolving wttr.in (wttr.in)… 5.9.243.187
Connecting to wttr.in (wttr.in)|5.9.243.187|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 42617 (42K) [image/png]
Saving to: ‘Pomorie.png’

Pomorie.png                                     100%[=======================================================================================================>]  41.62K  --.-KB/s    in 0.07s   

2019-08-22 20:15:52 (586 KB/s) – ‘Pomorie.png’ saved [42617/42617]

 

Note: The generated .png is again the ASCII art produced by the text fetch, just rendered in picture format.

 

6. Displaying Current Moon Phase


If you want to enjoy a text based Moon phase picture through wttr.in 🙂

wget -O- -q wttr.in/Moon


Display-current-Phase-of-Moon-in-terminal-console-Linux-wttr.in-service

You can also get a Moon Phase prognosis for a future date, or get the phase for a previous date:

 

curl wttr.in/moon@2019-09-15

Full-Moon-Weather-forecast-text-console-reporting-via-wttr.in-on-Gnu_Linux


Full Moon Madness !! – Vampires are out, beware, and enjoy the ultra kewl ASCII colorful art 🙂
 

7. Getting help for wttr.in terminal Weather Forecast results

 

 

$ curl wttr.in/:help
Usage:

 

    $ curl wttr.in          # current location
    $ curl wttr.in/muc      # weather in the Munich airport

Supported location types:

    /paris                  # city name
    /~Eiffel+tower          # any location
    /Москва                 # Unicode name of any location in any language
    /muc                    # airport code (3 letters)
    /@stackoverflow.com     # domain name
    /94107                  # area codes
    /-78.46,106.79          # GPS coordinates

Special locations:

    /moon                   # Moon phase (add ,+US or ,+France for these cities)
    /moon@2016-10-25        # Moon phase for the date (@2016-10-25)

Units:

    m                       # metric (SI) (used by default everywhere except US)
    u                       # USCS (used by default in US)
    M                       # show wind speed in m/s

View options:

    0                       # only current weather
    1                       # current weather + 1 day
    2                       # current weather + 2 days
    A                       # ignore User-Agent and force ANSI output format (terminal)
    F                       # do not show the "Follow" line
    n                       # narrow version (only day and night)
    q                       # quiet version (no "Weather report" text)
    Q                       # superquiet version (no "Weather report", no city name)
    T                       # switch terminal sequences off (no colors)

PNG options:

    /paris.png              # generate a PNG file
    p                       # add frame around the output
    t                       # transparency 150
    transparency=…        # transparency from 0 to 255 (255 = not transparent)

Options can be combined:

    /Paris?0pq
    /Paris?0pq&lang=fr
    /Paris_0pq.png          # in PNG the file mode are specified after _
    /Rome_0pq_lang=it.png   # long options are separated with underscore

Localization:

    $ curl fr.wttr.in/Paris
    $ curl wttr.in/paris?lang=fr
    $ curl -H "Accept-Language: fr" wttr.in/paris

Supported languages:

    af da de el et fr fa hu id it nb nl pl pt-br ro ru tr uk vi (supported)
    az be bg bs ca cy cs eo es fi ga hi hr hy is ja jv ka kk ko ky lt lv mk ml nl fy nn pt pt-br sk sl sr sr-lat sv sw th te uz zh zu he (in progress)

Special URLs:

    /:help                  # show this page
    /:bash.function         # show recommended bash function wttr()
    /:translation           # show the information about the translators

 


 

 

8. Comparing two cities' weather from the command line

 


One useful use of wttr.in, if you plan to travel from city A to city B, is to compare the temperatures with a simple bash one liner:

 

 

 

diff -Naur <(curl -s http://wttr.in/Sofia ) <(curl -s http://wttr.in/Beograd )
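To reuse this for arbitrary cities, you could wrap it in a tiny shell function (a sketch – the wttrdiff name is made up here):

wttrdiff() { diff -Naur <(curl -s "http://wttr.in/$1") <(curl -s "http://wttr.in/$2"); }
# usage:
wttrdiff Sofia Beograd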

 

 

9. Using the ansiweather command to get Weather Temperature / Wind / Humidity in one line of beautiful text

 


If you go and install the ansiweather Linux package:

 

apt-get install --yes ansiweather


You will get a shell script wrapper with ANSI colors and Unicode symbols support. Weather data comes from OpenWeatherMap; this is useful if wttr.in is not working due to some URL malfunction (e.g. the service is DoS-ed) etc.

 

ansiweather -l Atina

 

ansiweather-Atina-weather-forecast-result-linux-text-console

Let's use ansiweather to print the weather prognosis for the upcoming 5 days near the port of Burgas, BG:
 

ansiweather -F -l Burgas

ansiweather-print-weather-forecast-prognosis-for-5-days-in-Linux-text-terminal

 

10. Get the current Weather forecast for every capital in the world


You can download and use this simple plain text file list of all country capitals in the world (country-capitals-all-world.txt) with ansiweather and a bash loop to get the current day's Weather Forecast displayed for each and every capital in the world; here is how:

 

while read line; do ansiweather -l $line; sleep 3; done < country-capitals-all-world.txt


ansiweather-all-countires-capitals-result

As you can see, some of the very exotic third world capitals do not return data, so 'ERROR: Cannot fetch weather data' is returned.


You can also substitute ansiweather with curl wttr.in/$line to get the beautiful ASCII art 3 day weather forecast via wttr.in:

 

while read line; do curl http://wttr.in/$line; sleep 3; done < country-capitals-all-world.txt


I'll be happy to learn about other nice ASCII art supporting web services to enjoy from a text terminal on Linux (no matter whether useful or just a funny joyful prank), such as watching the text ASCII version remake of the Star Wars classic movie by simply telnetting to towel.blinkenlights.nl (if you haven't, just telnet and enjoy the streamed ASCIIs! 🙂

 

telnet towel.blinkenlights.nl

 

watch-star-wars-ascii-art-version-remake-online-with-telnet-on-linux-console-terminal

 

Talking about fun and ASCII, it's worth mentioning the hollywood Linux package:

hipo@jeremiah:~/Desktop$ apt-cache show hollywood|grep -i desc -A 3
Description-en: fill your console with Hollywood melodrama technobabble
 This utility will split your console into a multiple panes of genuine
 technobabble, perfectly suitable for any Hollywood geek melodrama.
 It is particularly suitable on any number of computer consoles in the


Description-md5: 768f44c76220ea2b35f855ea34c8bc35
Homepage: http://launchpad.net/hollywood
Section: games
Priority: optional


Once installed on Debian with:

aptitude install -y hollywood

You can rapidly get plenty of tmux (screen like virtual console emulator) split screen statistics about your notebook / workstation / server: CPU usage, mlocate.db status, info about plugged in machine voltage, Speedometer (statistics about network bandwidth usage), system load average (CPU count, memory utilization) and some other random info coming out of the dmesg kernel log and more. The information displayed in the split windows changes rapidly, and (assuming you run it at a home desktop with a soundblaster, and not remotely) a James Bond Agent 007 soundtrack is played in the background, which brings up one's adrenaline and makes it look even cooler.

hollywood-melodrama-technobubble-split-console-multiple-panes-for-genuine-technobubble

To give you an idea what to expect, here is a shot of /usr/games/hollywood (the program's binary location) running on Debian GNU / Linux. Enjoy! 🙂
 

A Concise and Complete Strategy to Earn Microsoft MCSE: Core Infrastructure Certification

Wednesday, August 21st, 2019

microsoft-certification-mcse-infrastructure-azure-mcse-boot-camp-499x330

This article is going to stray a bit from Linux, but as recently there are so many jobs offered for Windows administrators, I believe it will be useful for sysadmins who are more interested in a Windows sysadmin job; so let's go through some of the essential Microsoft certificates to give you an idea which certificate you might want in order to enter the world of Windows.


In recent months, Microsoft has been altering its certification program bit by bit. But how does this affect the certification track as a whole? It creates a new breed of Microsoft credentials that are specifically aligned to certain job roles like administrator, solution architect, developer, and functional consultant.

Further, the incorporation of role-based certifications means the phasing out of old certifications tracks like MCSA: Cloud Platform, MCSA: Linux on Azure, MCSE: Mobility and the list continues. All the retired certifications and certification exams are pensioned off to reflect the newest technologies and advancements, which are highly needed by different IT job roles.

But even with the changes, Microsoft hasn’t totally ditched some of their previous certification tracks―simply because these are still significant up to the present time. And one of the limited expert-level Microsoft validations that deserve a mention is, without a doubt, MCSE: Core Infrastructure.

https://www.examsnap.com/microsoft-certification-training.html


microsoft-certified-solutions-master-main-qimg-82c85948f30e27f6eb3f8d5c4eda9915

The Past and the Present Days of MCSE: Core Infrastructure

MCSE: Core Infrastructure is certainly the best way to certify your expertise in managing more complex and modern IT technologies, including data center, system and identity management, storage, virtualization, and networking.

To get you ready, see the functional preparation guide that shows three main steps to earn this MCSE endorsement.

  1. Acquire your MCSA certification

The very first step is to arm yourself with an entry-level credential that declares your foundational understanding of specific IT technologies. This means that you can't just jump directly to the expert tier without gaining valuable groundwork, which in this case is either the MCSA: Windows Server 2012 or the MCSA: Windows Server 2016. Both these certifications are aimed at giving you a significant footing in specific Microsoft infrastructure in an enterprise setting, to further improve the business worth and abate unnecessary expenses.

  2. Choose your preferred MCSE certification exam

The next step is to pick from the five given MCSE certification exams: 70-744, 70-745, 70-413, 70-414, and 70-537. Though there are five listed options, only four are available, since exam 70-537 hasn't been released up to now.

  • Exam 70-744

Dubbed as the exam for Securing Windows Server 2016, 70-744 tests how well you utilize various technologies and methodologies relating to server hardening environments and virtual and network machines infrastructure.

https://www.microsoft.com/en-us/learning/certification-overview.aspx

Featuring topics such as Active Directory, Enhanced Security Administrative Environment, Local Administrator Password Solution, Threat Detection Solutions, Privileged Access Workstations, and such, the exam serves a remarkable way to fully take a grasp of the security needed in Windows Server 2016.

  • Exam 70-745

If securing Windows Server 2016 does not entice you, there’s another option―exam 70-745, which is implementing a Software-Defined Datacenter. This test is suitable for both analysts and data scientists who’ve got a thing for complex processes and data sets as well as virtual machine manager.

Software-defined networking, software-defined data center, and software-defined storage are three main subjects expounded in this exam. You will learn how to implement, manage, secure, and maintain these various solutions. Accordingly, it’s recommended to have background skills in data structures, programming concepts, R functions, and statistical methods for you to easily take up and pass this exam.

  • Exam 70-413

Next on the list is the test that corroborates your capability in designing and implementing a Windows Server 2012 infrastructure. Exam 70-413 is part one of a two-series test that revolves around key functions of a server environment.

If you pass this exam, this means that you are fully-furnished with abilities in core topics related to Windows Server 2012, including network access services, server virtualization, deployment, and infrastructure. This is because your skills in creating and implementing both logical and physical active directory infrastructures will be put into test.

  • Exam 70-414

70-414 is the second test of the two-part series exam about Windows Server 2012. This means that you have to complete and pass both exams 70-413 and 70-414 to earn your MCSE.

In comparison to the first exam, this refers to a more complicated server infrastructure in a highly virtualized setting. The exam sets the seal in your command in managing and maintaining advanced server infrastructure. Furthermore, you get to mug up your skills in planning and implementing highly available enterprise and server virtualization infrastructures along with designing and executing identity and access solutions.

   3. Start practicing the exam

Once you’ve decided what exam/s you’ll take, you need to start gathering essential exam materials. Start with books and Microsoft exam guides so that you’ll acquire a deeper understanding of each topic. Training courses are other imperative resources you shouldn’t miss. These are relevant in mounting your knowledge―in a more stimulating and less stressful manner. Either in an instructor-led or self-paced format, these training courses are carved to give you a more advanced yet highly engaging type of learning. And luckily, there’s no need for you to look further because Microsoft provides candidates with official and vital training courses for every exam.

And to accompany your exam preparation, get assistance from Examsnap’s series of practice tests. Featuring the most updated test questions with answers, the practice tests offered by Examsnap are not just limited to one but a lot of files per exam. They have all the MCSE required and current exams, which are 70-744, 70-745, 70-413, and 70-414. With the various files on offer, these give you several options to expand your knowledge bank before the exam day. Since the tests are offered in .ete format, you can train them with the help of the ETE Simulator. This will give you the insight of what is waiting for you at the exam. Moreover, you can practice the file unlimited times, track your results, improve them, thus you’ll be confident in your skills and knowledge and escape nervousness.

Conclusion

And when you pass the required exam/s, you’ll be rewarded with the ever-famous MCSE: Core Infrastructure to your profile. More than that highly-distinguished international credential, you are now qualified for various job roles like information security specialist, computer support analyst, IT administrator, architect, and such. So, keep the ball rolling and tighten your preparation stage for you to earn this amazing Microsoft validation.

 

What is inode and how to find out which directory is eating up all your filesystem inodes on Linux, Increase inode count on a ext3 ext4 and ufs filesystems

Tuesday, August 20th, 2019

what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram

If you're a system administrator of multiple Linux servers used for web serving / mail delivery, a database admin, an administrator of any high-drive-count data storage used for backup server infra or data repositories such as Linux hosted Samba / CIFS shares, or you use some Linux hosting provider to host your website or any other UNIX like infrastructure server that demands storing a high number of files under a directory, you might end up with the common filesystem inode depletion issue (the maximum inode number for a filesystem is predefined and limited, depending on the filesystem's configured size).

In case the files stored in a directory end up exceeding the number of addressable inodes, no further data can be assigned and stored on the filesystem.

When a device runs out of inodes, new files cannot be created on the device, even though there may be plenty of free space available; the first time this happened to me, a very long time ago, I was completely puzzled how it was possible, as I was not aware of the existence of inodes …

Reaching the maximum inode number (i.e. inode depletion) often happens on busy mail servers (receiving tons of SPAM email messages) or Content Delivery Networks (CDN – website image caching servers), which contain many small files on EXT3 or EXT4 journalled filesystems. File systems such as Btrfs, JFS or XFS escape this limitation with extents or dynamic inode allocation, which can 'grow' the file system or increase the number of inodes.

 

Hence ending up out of inodes could cause various oddities in how stored data behaves or is communicated to other connected microservices, and could lead to random application disruptions and odd results, costing you many hours of debugging to find out that the root cause is the inodes (index nodes) being depleted.

In the below article, I will try to give an overall explanation of what an inode on a filesystem is, how the inodes of an FS unit can be seen, how to diagnose a possible inode problem – e.g. see the maximum amount of inodes available per filesystem – and how to prepare (format) a new filesystem with an increased set of maximum inodes.
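A quick way to check inode usage per mounted filesystem is the -i flag of df (the output below is a sample – the numbers will of course differ on your system); once a filesystem shows a high IUse%, a rough way to spot the directory eating the inodes is to count the files per top level directory:

root@linux:~# df -i /
Filesystem       Inodes   IUsed    IFree IUse% Mounted on
/dev/sda1      30162944 3587290 26575654   12% /

root@linux:~# for i in /*; do echo "$i"; find "$i" 2>/dev/null | wc -l; done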

 

What are filesystem i-nodes?

 

This is a data structure in a Unix-style file system that describes a file-system object such as a file or a directory.
The data structure described in the inodes might vary slightly depending on the filesystem, but usually on EXT3 / EXT4 Linux filesystems each inode stores the index to the block that contains the attributes and disk block location(s) of the object's data.
Yes, for those who are not aware how a filesystem is structured on *nix: it allocates all stored data in logically separated structures called data blocks. Each file stored on a local filesystem has a file descriptor, there are virtual unit structures called file tables, and each inode, which is a reference number, has its own data structure (the inode table).

Inodes / "Index" are slightly unusual on file system structure that stored the access information of files as a flat array on the disk, with all the hierarchical directory information living aside from this as explained by Unix creator and pioneer- Dennis Ritchie (passed away few years ago).

what-is-inode-very-simplified-explanation-diagram-data

Simplified explanation of file descriptors, file table and inode table on a common Linux filesystem

Here is another description of what an inode is, given by Ken Thompson (another Unix pioneer and father of Unix) and Dennis Ritchie, described in their paper published in 1978:

"    As mentioned in Section 3.2 above, a directory entry contains only a name for the associated file and a pointer to the file itself. This pointer is an integer called the i-number (for index number) of the file. When the file is accessed, its i-number is used as an index into a system table (the i-list) stored in a known part of the device on which the directory resides. The entry found thereby (the file's i-node) contains the description of the file:…
    — The UNIX Time-Sharing System, The Bell System Technical Journal, 1978  "


 

What is the typical content of an inode and how do inodes play with the rest of the filesystem units?


The inode is just a reference index to a data block (unit) that contains the file-system object's attributes. It may include metadata information such as times of last change, access and modification, as well as owner and permission data.

 

On a Linux / Unix filesystem, directories are lists of names assigned to inodes. A directory contains an entry for itself, its parent, and each of its children.

Structure-of-inode-table-on-Linux-Filesystem-diagram

 

Structure of inode table-on Linux Filesystem diagram (picture source GeeksForGeeks.org)

  • Information about files (data) is sometimes called metadata. So you can even say it another way: "An inode is metadata of the data."
  •  Inode : It's a complex data structure that contains all the necessary information to specify a file. It includes the memory layout of the file on disk, file permissions, access time, number of different links to the file etc.
  •  Global File table : It contains information that is global to the kernel, e.g. the byte offset in the file where the user's next read / write will start and the access rights allowed to the opening process.
  • Process file descriptor table : maintained by the kernel, it in turn indexes into a system-wide table of files opened by all processes, called the file table.

The inode number indexes a table of inodes in a known location on the device. From the inode number, the kernel's file system driver can access the inode contents, including the location of the file – thus allowing access to the file.

  •     Inodes do not contain their hardlink names, only other file metadata.
  •     Unix directories are lists of association structures, each of which contains one filename and one inode number.
  •     The file system driver must search a directory looking for a particular filename and then convert the filename to the correct corresponding inode number (going the other way, from inode number to filename, is possible too – see the find example below).
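A practical illustration of that mapping (the inode number below is just a sample): take a file's inode number with ls -i, then look it up by number with find's -inum switch – handy e.g. for removing files with unprintable names:

root@linux:~# ls -i /etc/hostname
6365282 /etc/hostname
root@linux:~# find /etc -xdev -inum 6365282
/etc/hostname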

The operating system kernel's in-memory representation of this data is called struct inode in Linux. Systems derived from BSD use the term vnode, with the v of vnode referring to the kernel's virtual file system layer.


But enough technical specifics, let's get into some practical experience managing filesystem inodes.
 

Listing inodes on a Filesystem


Let's say we want to list the inode number reference IDs for the Linux kernel files:

 

root@linux: # ls -i /boot/vmlinuz-*
 3055760 /boot/vmlinuz-3.2.0-4-amd64   26091901 /boot/vmlinuz-4.9.0-7-amd64
 3055719 /boot/vmlinuz-4.19.0-5-amd64  26095807 /boot/vmlinuz-4.9.0-8-amd64


To list the inode of the kernel specific boot directory /boot itself (the -d flag shows the directory, not its contents):

 

root@linux: # ls -id /boot/
26091521 /boot/


Listing inodes for all files stored in a directory is also done by adding the -i ls command flag:

Note that the '-1' flag was added to show files in one column, without ownership / permission info:

 

root@linux:/# ls -1i /boot/
26091782 config-3.2.0-4-amd64
 3055716 config-4.19.0-5-amd64
26091900 config-4.9.0-7-amd64
26095806 config-4.9.0-8-amd64
26091525 grub/
 3055848 initrd.img-3.2.0-4-amd64
 3055644 initrd.img-4.19.0-5-amd64
26091902 initrd.img-4.9.0-7-amd64
 3055657 initrd.img-4.9.0-8-amd64
26091756 System.map-3.2.0-4-amd64
 3055703 System.map-4.19.0-5-amd64
26091899 System.map-4.9.0-7-amd64
26095805 System.map-4.9.0-8-amd64
 3055760 vmlinuz-3.2.0-4-amd64
 3055719 vmlinuz-4.19.0-5-amd64
26091901 vmlinuz-4.9.0-7-amd64
26095807 vmlinuz-4.9.0-8-amd64

 

To get more information about a Linux directory or file – such as the blocks used by the file unit, the last Access / Modify / Change times and the link count of the filesystem object – use stat:
 

root@linux:/ # stat /etc/
  File: /etc/
  Size: 16384         Blocks: 32         IO Block: 4096   directory
Device: 801h/2049d    Inode: 6365185     Links: 231
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2019-08-20 06:29:39.946498435 +0300
Modify: 2019-08-14 13:53:51.382564330 +0300
Change: 2019-08-14 13:53:51.382564330 +0300
 Birth: –

 

On a POSIX system (Linuxes, and the *BSDs are more or less such), a file has the following attributes, which may be retrieved by the stat system call:

   – Device ID (this identifies the device containing the file; that is, the scope of uniqueness of the serial number).
    – File serial number (the inode number itself).
    – The file mode which determines the file type and how the file's owner, its group, and others can access the file.
    – A link count telling how many hard links point to the inode.
    – The User ID of the file's owner.
    – The Group ID of the file.
    – The device ID of the file if it is a device file.
    – The size of the file in bytes.
    – Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).
    – The preferred I/O block size.
    – The number of blocks allocated to this file.
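
Most of these fields can be pulled out selectively with GNU stat's format flag; a small sketch using standard GNU coreutils format specifiers (the path is just an example):

stat -c 'inode=%i links=%h size=%s blocks=%b name=%n' /etc/hostname
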

 

Getting more extensive information on a mounted filesystem


Most Linux distributions have tune2fs installed by default (on Debian Linux it comes with the e2fsprogs package); with it one can get very good in-depth information on a mounted filesystem, let's say about the root ( / ) FS.
 

root@linux:~# tune2fs -l /dev/sda1
tune2fs 1.44.5 (15-Dec-2018)
Filesystem volume name:   <none>
Last mounted on:          /
Filesystem UUID:          abe6f5b9-42cb-48b6-ae0a-5dda350bc322
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              30162944
Block count:              120648960
Reserved block count:     6032448
Free blocks:              13830683
Free inodes:              26575654
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      995
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Thu Sep  6 21:44:22 2012
Last mount time:          Sat Jul 20 11:33:38 2019
Last write time:          Sat Jul 20 11:33:28 2019
Mount count:              6
Maximum mount count:      22
Last checked:             Fri May 10 18:32:27 2019
Check interval:           15552000 (6 months)
Next check after:         Wed Nov  6 17:32:27 2019
Lifetime writes:          338 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       21554129
Default directory hash:   half_md4
Directory Hash Seed:      d54c5a90-bc2d-4e22-8889-568d3fd8d54f
Journal backup:           inode blocks


An important note to make here is that a file's inode number stays the same when it is moved to another directory on the same device, or when the disk is defragmented, which may change its physical location. This also implies that completely conforming inode behavior is impossible to implement with many non-Unix file systems, such as FAT and its descendants, which don't have a way of storing this invariance when both a file's directory entry and its data are moved around. Also, one inode can be pointed to by several names: a file and its hard links share the same inode (a symlink, by contrast, gets an inode of its own); below is an example:

$ ls -l -i /usr/bin/perl*
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl
266327 -rwxr-xr-x 2 root root 10376 Mar 18  2013 /usr/bin/perl5.14.2

Good to know is that inode numbers are always unique within a filesystem, so you can't have the same inode number duplicated on one FS. If a directory is damaged, only the names of the things are lost and the inodes become so-called "orphans", e.g. inodes without names, but luckily this is recoverable. As the theory behind inodes is quite complicated to explain here in full, I warmly recommend you read Ian D. Allen's Unix / Linux / Filesystems – directories inodes hardlinks tutorial – which is among the best academic tutorials explaining the various specifics of inodes online.
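
If a filesystem check detects such orphaned inodes, e2fsck offers to reconnect them, and the recovered nameless files land in the filesystem's lost+found directory. A minimal sketch, assuming an ext filesystem on the illustrative device /dev/sdb1 that can be taken offline:

umount /dev/sdb1      # never fsck a mounted filesystem
e2fsck -f /dev/sdb1   # forced check – reconnects orphaned inodes into lost+found
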

 

How to Get inodes per mounted filesystem

 

root@linux:/home/hipo# df -i
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
dev             2041439     481   2040958   1% /dev
tmpfs            2046359     976   2045383   1% /run
tmpfs            2046359       4   2046355   1% /dev/shm
tmpfs            2046359       6   2046353   1% /run/lock
tmpfs            2046359      17   2046342   1% /sys/fs/cgroup
/dev/sdb5        1221600    2562   1219038   1% /usr/var/lib/mysql
/dev/sdb6        6111232  747460   5363772  13% /var/www/htdocs
/dev/sdc1      122093568 3083005 119010563   3% /mnt/backups
tmpfs            2046359      13   2046346   1% /run/user/1000


As you can see in the above output, the Inodes count reported for each mounted filesystem is a specific number. In the above output the IFree count on every FS mounted locally on this physically installed Linux OS looks good.


Here is an example of how to recognize depleted inodes on a Xen Virtual Machine with attached virtual hard disks.

linux:~# df -i
Filesystem         Inodes     IUsed      IFree     IUse%   Mounted on
/dev/xvda         2080768    2080768     0      100%    /
tmpfs             92187      3          92184   1%     /lib/init/rw
varrun            92187      38          92149   1%    /var/run
varlock            92187      4          92183   1%    /var/lock
udev              92187     4404        87783   5%    /dev
tmpfs             92187       1         92186   1%    /dev/shm

 

Finding files with a certain inode


In some cases, if you want to check all the hard-linked copies of a certain file that share the same i-node pointer, it is useful to find them all by their shared inode. This is possible with a simple find (the below example is for the /usr/bin/perl binary, which shares the same inode as perl5.28.1):

 

ls -i /usr/bin/perl
23798851 /usr/bin/perl*

 

 find /usr/bin -inum 23798851 -print
/usr/bin/perl5.28.1
/usr/bin/perl

 

Find directory that has a large number of files in it?

To get the overall number of inodes allocated under certain directories, let's say /usr and /var:

 

root@linux:/var# du -s --inodes /usr /var
566931    /usr
56020    /var

To get a per-subdirectory inode usage list for a directory and its main contained sub-directories, sorted from 1 up to the highest number, use:
 

du -s --inodes * 2>/dev/null | sort -g

 

Usually running out of inodes means there is a directory / FS mount that holds too many small files, depleting the maximum count of possible inodes.

The simplest way to list the directories under the server root directory, together with the number of files in each, is with a small bash shell loop like so:
 

for i in /*; do echo "$i"; find "$i" | wc -l; done


Another way to identify the exact directory that is most likely the bottleneck for the inode depletion, in a sorted-by-file-count, human readable form:
 

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n


This will dump a list of every directory on the root (/) filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.

 

The -xdev switch is used to instruct find to narrow its search to only the device where you're initiating the search (any other sub-mounted NAS / NFS filesystems from a different device will be omitted).

 

Print top 10 subdirectories with Highest Inode Usage

 

Once you have identified the directory with the largest number of files that is perhaps the issue, to further get a list of the Top subdirectories in it with the highest amount of inodes used, use the below cmd:

 

for i in $(ls -1A); do echo "$(find "$i" | sort -u | wc -l) $i"; done | sort -rn | head -10

 

To list more than the top 10 inode-consuming dirs, change the head -10 to whatever number is needed.

N.B.! Be very cautious when running the above 2 find commands on a very large filesystem, as they are I/O intensive, and on a filesystem that has some failing blocks this could create further problems.

To avoid putting a high I/O load on a production filesystem, it is possible to instead use du plus a rather complex sed expression:
 

cd /backup
du --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1…\2/;p}'


Results returned are from top to bottom.

 

How to Increase the Inode count on a newly created EXT4 filesystem volume

Some FS-es such as XFS and JFS do have an auto-increase inode feature (as long as there is physical space), while others such as reiserfs do not have inodes at all, yet still report a field when queried for errors. But the classical Linux ext3 / ext4 does not have a way to increase the inode number on a live filesystem. Instead, the way to do it there is to prepare a brand new filesystem on a Disk / NAS / attached storage.

The number of inodes at format-time of the block storage can be as high as 4 billion. Before you create the new FS, you have to partition the block storage as ext4 with, let's say, the parted command (or nullify the content of the volume with dd to clean up any previously existing data, if there was any):
 

parted /dev/sda


dd if=/dev/zero of=/dev/path/to/volume


  then format it with this additional parameter:

 

mkfs.ext4 -N 3000000000 /dev/path/to/volume

 

Here in the above example the newly created EXT4 filesystem will have 3 billion inodes! For setting a higher-than-default max inode count on the older ext3 filesystem, mkfs.ext3 could be used instead.

Bear in mind that 3 billion is a very high number; if you plan to have a large number of files / directories / link structures, just raise it up to your pre-planned requirements for the FS. In most cases hardly anyone will want this number higher than 1 or 2 billion inodes.
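
As an alternative to passing an absolute count with -N, the inode density can be set with the -i flag (bytes of disk space per inode); a hedged sketch, the device path being illustrative:

mkfs.ext4 -i 16384 /dev/sdb1    # one inode per 16 KiB of space – denser than the usual default
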

On FreeBSD / NetBSD / OpenBSD, setting the inode density for a UFS / UFS2 filesystem (UFS2 being the current default FreeBSD FS) can be done via the newfs filesystem creation command, after the disk has been labeled with disklabel:

 

freebsd# newfs -i 1024 /dev/ada0s1d

 

Increase the Max Count of Inodes for a /tmp filesystem

 

Sometimes on some machines it is necessary to have the ability to store a very high number of small files (e.g. have a very large number of inodes) on a temporary filesystem kept in memory. For example, some web applications served by Web Server Apache + PHP or Nginx + Perl-FastCGI are written in a bad manner, so they keep tons of temporary files in /tmp, leading to issues with an exceeded amount of inodes.
If that's the case, to work around it temporarily you can increase the count of inodes for /tmp to a very high number like 2 billion using:

 

mount -o remount,nr_inodes=<bignum> /tmp

To make the change permanent across reboots, don't forget to put nr_inodes=whatever_bignum as a mount option for the temporary FS in /etc/fstab.
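
For illustration, such a permanent /etc/fstab entry could look something like the line below (the size and inode count are placeholders to adapt to your needs):

tmpfs   /tmp   tmpfs   defaults,size=2G,nr_inodes=2000000000   0 0
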

Eventually, if you face this issue, it is best to immediately track down which application produced the mess and ask the developer to fix the messed-up program architecture.

 

Conclusion

 

This article explained the very common issue of having the maximum amount of inodes on a filesystem depleted, and the unpleasant consequence of being unable to create new files on a living FS.
Then a general overview was given of what an inode is on a Linux / Unix filesystem, what the typical content of an inode is, and how inode addressing is handled on a FS. Further it was explained how to get basic information about available inodes on a filesystem, how to get a filename/s based on an inode number (with find), the well known way to determine the inode number of a directory or file (with ls), and how to get more extensive inode information on a FS with tune2fs.
Also explained was how to identify directories containing multitudes of files, in order to determine the sub-directories that are consuming most of the inodes on a filesystem. Finally it was explained, very roughly, how to prepare an ext4 filesystem from scratch with a predefined number of inodes much higher than the usual defaults of mkfs.ext3 / mkfs.ext4 and *BSD's newfs, as well as how to raise the number of inodes of the /tmp tmpfs temporary RAM filesystem.

The Non-Hand-Made Image of God "Mandylion" – The Saviour-not-made-by-hands icon in the Eastern Orthodox Church

Friday, August 16th, 2019

Non-Hand-Made-Image-of-Our_Lord-and-Saviour_Jesus-Christ-Eastern-Orthodox-icon

 

Short History of Non Hand Made Image of God

The question of how the Lord Jesus Christ really looked during his earthly life is a question actively intriguing billions of people who have lived over the centuries since year 33 A.D., in which Jesus Christ rose from the dead, as the Gospel testifies. Even for the non-Christian religions of the pagan world before the Incarnation of Christ, and in the Old Hebrew Testament, this question has been of interest, whereas the Abrahamic religions such as the Jewish faith and Islam (as emerged in the 7th century) make it clear that an image of God should not be made. However, these monotheistic religions did not go through the revelation given in the Christian faith that God is Three in Faces but One in Essence – The Holy Trinity. Thus the Icon Image depicted in Christianity is an image showing us a close copy of the image of God The Son (the second Hypostasis / Face of God) during his Saviour mission in humanity, as he was living and seen by our ancestors while still being in a fleshly earthly body on Earth, before his crucifixion / execution on the Cross of Golgotha in 33 A.D. in Jerusalem.

Hence the veneration of the Image of the Lord Jesus Christ is not in reality a breach of the Commandment which says "Thou shalt not make unto thee any graven image" – (Hebrew: לֹא-תַעֲשֶׂה לְךָ פֶסֶל, וְכָל-תְּמוּנָה) as Jewish, Muslim and other people of the Abrahamic religions claim, but is a close visual depiction of how Jesus (The Begotten Son of God before all Ages) looked – there was no other way, as photography was not yet discovered, so Jesus as the Son of God used this miracle to leave us a memory of his physical and spiritual likeness. The Non-Hand-Made Image of God is just an image, very much like a photo: you see an image of someone, but you cannot know exactly who he is unless you already know him, and the photo is just a remembrance of him. In the same manner the Non-Hand-Made image of God is a remembrance of God for those who believe in Him and already know him with their spirits, e.g. Christians, and a testimony memory of his physical existence in the world 2000 years ago.


The Non-Hand-Made image (icon – a word rooted in the Greek εικών), also known as Acheiropoieta (Byzantine Greek: αχειροποίητα, "made without hand"), is a painted copy, surviving over the centuries, of another image, the so-called Image of Edessa, known also under the term Mandylion – a square or rectangle of cloth upon which a miraculous image of the face of Jesus had been imprinted, as Jesus took a simple cloth (of the kind used for painting pictures), put it on his face, and miraculously imprinted an exact copy of his face.

This imprinted image has been very famous throughout the last 21 centuries and has been an object of interest to innumerable Christians and scholars. It is known to have made multiple miracles, been used as a shield for cities, and cured the incurably sick king Abgar V (the King of Edessa), who had attempted to find a cure for his terrible skin disease among all the known and best physicians, pharmacists, herbalists, magicians and other healers with various methods, but none helped.
 

King Abgar healed by Jesus's Non Hand Made Image of God Mandylion

 

The most ancient known reference to the Non-Hand-Made story in History has been recorded by Eusebius of Caesarea in his Ecclesiastical History.

It was retold in elaborated form by Ephrem the Syrian in the fifth-century Syriac Doctrine of Addai.
An early version of the Abgar legend exists in the Syriac Doctrine of Addai, an early Christian document from Edessa. The Epistula Abgari is a Greek recension of the letter of correspondence exchanged between Jesus Christ and Abgar V of Edessa, known as the Acts of Thaddaeus. The letters were likely composed in the early 4th century.
The legend became relatively popular in the Middle Ages (In the West), even though it was well known in the East for centuries. The letters were translated from Syriac into the Greek, Armenian, Latin, Coptic and Arabic languages.

The Letters tell us the story of King Abgar who, being seriously desperate, heard about Jesus, whose fame had been growing as a healer who could heal any disease, because of the already famous multiple miracles done by him: healing people who had suffered diseases from birth, healing lepers, giving sight to people blind from birth, and resurrecting the dead, such as Lazarus, who was dead for 4 days until Jesus resurrected him from the grave.

Hearing about these great things done by this new healer, King Abgar, due to his inability to travel long distances, sent his servant, the portrait-painter Ananias, with a letter addressed to Jesus in which he was begging him to come to Edessa and heal him from his leprosy. Ananias was told by prince Abgar that, in case Jesus was unable to come to Edessa for some reason, he should paint a picture of Jesus's face and bring it back to the king's palace, as Abgar firmly hoped and believed that only seeing the image would immediately heal him.

The servant went to search for Jesus and found him preaching the Good News of Salvation being fulfilled in himself; he stayed, kept himself to one side, and tried to paint Jesus's face a number of times, but each time he started painting the face of Jesus, in a very short while the face changed, so he couldn't complete it and had to start again.
After some time Jesus called him, took the cloth (napkin) and placed it on his face, and an image depicting his face was imprinted on the cloth.

The Image of Christ on the cloth was brought to the King, and as Abgar kissed the image on the napkin with love and faith, the sickness disappeared and he was cured almost completely; only a small part of his face still carried a trace of the disease. After this glorious miracle King Abgar ordered that the idols on the Entry Gate of the City, which were believed to protect the city from evil, should be removed, and in their place the Non-Hand-Made image of Jesus Christ be put on top of the city Entry Gate, with the image stuck onto wood, surrounded with a gold frame and ornamented with pearls. Prince Abgar also wrote above the icon on the gateway:
 

"O Christ our God, no-one who hopes in Thee will be put to shame" !

 

, to let everyone know about the great power of the one depicted on the Mandylion cloth, as well as to serve as a protection sign for the city from all evil and invaders, as he was convinced the image of Christ had the power not only to heal him, but also to protect his city from the numerous barbarian raids that were so common during the 1st century.

In the coming centuries the Non-Hand-Made Image of God has been said to have cured multiple people from all kinds of diseases.

Abgar-with-image-of-Edessa10thcentury-Monastery-Saint_Ekaterina-Sinai

Abgar receiving the Non-Hand-Made Image of Christ cloth icon – Saint Catherine (Ekaterina) Monastery, Sinai, 10th century

According to the Eastern Tradition, Jesus told the servant to convey a message to the King that this image of Him would cure most of the king's disease, but that the complete healing would be accomplished by Jesus's disciple Thaddaeus (who was one of the Seventy Apostles). This letter was brought to King Abgar too. The Orthodox Tradition kept in the Church is that the words of Jesus were fulfilled: the biggest part of King Abgar's leprous body was healed, only a small part on his face remained, and this too was healed by the disciple Thaddaeus who, after the Crucifixion and Resurrection of Christ on the 3rd day after his execution, went to the King, preached the Good News of the Gospel that Christ has resurrected, asked the king to repent, and shortly baptized him into the new Christian faith, in the Name of the Father, The Son and The Holy Spirit. Immediately after Baptism, King Abgar became completely healed. More on the story of the Image Not Made By Hands can be read here.

 

08.21_saint_ap_Thaddeus_rex_Abgar_ikona_Sinai_Sv_Ekaterina
Saint Apostle Thaddeus arriving to King Abgar V of Edessa and healing him – X-th century icon, Saint Catherine's Monastery, Sinai

The Jesus Image of Edessa – a Syrian city in upper Mesopotamia on the banks of the Euphrates (today the city of Urfa in Turkey) – has had painted copies made on a number of occasions through history, before the original cloth image disappeared mysteriously in the 13th century: it was stolen by the Roman Catholic Crusaders during the IV-th Crusade, whose goal was the liberation of the Holy Lands (Jerusalem and the other biblical cities) from the Muslims, in year 1204. After the hungry and mad crusaders destroyed (sacked) Constantinople, they took the Mandylion, which reappeared as a relic in King Louis IX of France's Sainte-Chapelle Church in Paris and is said to have remained there until its total disappearance in the French Revolution (during which many holy relics were stolen by revolutionists, many of whom are known to have been Masons, and many relics were destroyed).

This Non-Hand-Made image of Jesus Christ, known in Church Slavonic under the term (Убрус / Ubrus), is considered the first icon ("image") in the Eastern Orthodox Church (Bulgarian, Russian, Greek, Serbian, Macedonian, Moldovan, Syrian, Antiochian, Jerusalem's Churches) and the rest of the Oriental Orthodox Churches (Armenian, Copts, Syriacs, Jacobites etc.), according to Church Tradition.

Christos_Acheiropoietos-Non-hand-made-image-of-Jesus-Christ-given-to-King-Abgar

Christos Acheiropoietos (Non-hand-made-image)  of Jesus Christ given to King Abgar of Edessa
(Novgorodian Russian Icon circa 1100)

 

The Eastern Orthodox Church observes a feast for this icon on August 16 (August 29 in N.S.), which commemorates its translation from Edessa to Constantinople, and it is observed with a Church service Holy Liturgy.

Ancha_Icon_of_the_Savior_(Art_Museum_of_Georgia,_Tbilisi)

Ancha Icon of the Savior ანჩისხატი (traditionally considered to have been the Keramidion, a tile copy of the Mandylion) – a "holy tile" imprinted with the face of Jesus Christ, miraculously transferred by contact with the Image of Edessa (Mandylion).

The Ancha icon is dated to the 6th-7th century; it was covered with a chased silver riza and partly repainted in the following centuries. The icon derives its name from the Georgian monastery of Ancha in what is now Turkey, whence it was brought to Tbilisi in 1664.

Holy Face of Genoa

The Mandylion image is also a point of high interest in the Roman Catholic Church, as well as to some more conservative Protestant denominations (though perhaps most of them reject the story as a pious myth).
In the Western Tradition, a copy of the Ubrus is the famous Holy Face of Genoa (a donation by the Byzantine Emperor John V Palaiologos to the Doge of Genoa Leonardo Montaldo in the 14th century); it was carefully studied by Colette Dufour Bozzo, who established that the image is on cloth laid on a wooden board (the wooden board is a typical plane on which icons are painted) and is dated also to the 14th century.

Jesus_Christ_Face_Holy-face-of-Genoa-_Genes

Holy face of Genoa 14th Century with face made more visible

 

Holy Face of San Silvestro

 

The Holy Face of San Silvestro (Saint Silvester) image was kept in Rome's Church of San Silvestro in Capite and is now
kept in the Matilda chapel of the Vatican Palace. Its earliest known record is from 1517 (in that year the nuns were forbidden to exhibit it), as people often mixed it up with the Veil of Veronica (another famous cloth imprint of Jesus's face, often called Volto Santo – holy face).

The_San_Silvestro-image-imprint-of-Jesus-non-hand-made-Mandylion_visage

 

Veronica Veil

 

The Western Roman Catholic tradition recounts that Saint Veronica from Jerusalem encountered Jesus along the Via Dolorosa on the way to Calvary. When she paused to wipe the blood and sweat (Latin sudor) off his face with her veil, his image was imprinted on the cloth. The Veronica Veil tradition however is legendary and not accepted in the Eastern Orthodox Churches, as this tradition is quite new – it first occurred around the 16th century and was not known at all in Christendom prior to that.

The act of Saint Veronica wiping the face of Jesus with her veil is celebrated in the sixth Station of the Cross in many Anglican, Catholic, Lutheran rites, Methodist and Western Orthodox churches.

 

Image of the Saviour Other traditional Orthodox Copy icons

It is notable to mention a few very famous Russian iconography interpretations; many of them are taken by the Russian iconography from the sample Novgorodian Russian icon, which itself was an exact copy that the Russian and Greek monks made of the original Non-Hand-Made cloth image that hung on the Entry Gate of Constantinople, the capital of the Byzantine Eastern Empire.

Saviour-Wet-Beard-Spas_Mokraya_Brada_ikona-Russian-icon


Спас Мокрая Брада (Spas Mokraja Brada – The Saviour Wet Beard) Russian Orthodox Icon 16th Century

Even today, many of the Eastern Orthodox Churches have a copy interpretation of the icon hanging on top of the Dveri (the inner Altar Church Doors) – see the below picture for reference:

Ubrus-Mandylion-Non-Hand-Made-icon-Hanging-upper-to-Church-Alter-Walls-Dveri-Eastern-Orthodox-Church-Vlaherna-Tsarigrad-Istanbul-ex-Konstantinopol

Non-Hand-Made Image copy of the Saviour Jesus Christ hanging on top of the Church Altar Walls (Dveri), above the priest's head

Image-of-the-Saviour-traditional-Orthodox-Iconography-Simon_Ushakov_Nerukotvorniy

 

The Mandylion by Simon Ushakov – year 1658.

As well as the somewhat newer but very beautiful Russian iconographic interpretation of the Image of the Saviour from Harkov.

Harkovskij-Spas-Saviour-Orthodox-icon-18th-century-Harkov

Harkovskij Spas icon – the Saviour from Harkov, 18th century Russian Orthodox Icon

Helpful Hints For Starting A Small WordPress Website or Ecommerce Business

Wednesday, August 14th, 2019

hints-for-starting-wordpress-site

WordPress is the web application collection of PHP programs behind thirty-four percent (34%) of the internet's websites, and fifteen percent (15%) of the top one hundred websites in the world, so if you're considering it for your website then you're perhaps thinking in the right direction. Small start-up projects, a community website, or even a small personally owned blog or a mid to even large business presentation site can benefit greatly from setting up their Web Platform or Ecommerce shop on a WordPress website platform (which of itself depends on just a small number of technologies, such as a Linux server with a Web Server installed on it to serve PHP, as well as some kind of Linux-hosted database backend engine such as MySQL / PostgreSQL etc.) …

But if you really want to create a successful ecommerce website on WordPress, that can seem a little intimidating at first, as the general complexity of starting up with WordPress looks very scary in the beginning. However, in this article I'll point to a few helpful hints that should get you off on the right foot, and make your entry into the world of WordPress / WP Ecommerce a little easier and less scary.

This article will be less technical than expected and in that will contrast slightly with many of the articles on this blog; the target audience is more the Web Marketing Manager or a start-up Search Engine Optimization person, either on a small personal project or employed in the big bad corporate world. Nothing new is going to be outlined in this article, but rather general rules that are well known to the professional SEO gurus and are most likely to be helpful for people just starting out.

If you happen to be one of these, you should know you have to follow a set of well known rules on the website structure – texts, descriptions, orientation, ordering of menus and data etc. – in order to have the WordPress based website running at full speed and attracting more visitors to your site.
 

Photos
 

 

Importance of Photos on a Website
Although the text for your website is very important – more on that later – when a user first opens up your website in their browser, their eyes are going to be caught by the images that you have laid out on your website. Not using images is a big mistake, since it bores users' eyes and makes your website seem amateur and basic, but using low quality or irrelevant images can also harm your chances of appearing authentic to a user (yes, here on this blog there are some of these low quality pictures, but this is due to the fact this website is more of an information blog and not ecommerce). Thus always make sure that you find the best, high-quality images for your website – and make sure that you have the correct rights to use the images as well, as copyright infringements could cause you even law suits ending in hundred or thousand dollar fines, and even if this doesn't happen, any publicity of such would reduce your website's indexing rating. The images placed should always be relevant to your website. If you find a breath-taking sunset or tech-gadget picture, that's great, but maybe not for your healthy food ecommerce store – rather for your personal ranting or for describing a personal experience.

 

Product Photos


Assuming that sooner or later, even if you have a community website, you will want to monetize it, to bring back to yourself in material form at least part of the many years of effort spent raising the site to the web rank gained.
Leading on from that point, you're going to be selling or advertising items – that's the whole point of ecommerce. But users often find ads / online shopping frustrating due to not being able to properly see and understand what they're buying before they make their purchase. This can lead to 'buyer's remorse' and, consequently, refunds galore, which is not what you want. Make sure that images of your products are always available and of a high quality – investing in a fairly high quality camera might be a good idea – and consider many pictures from different angles, or even rotating images, so that users can decide for themselves which angle they want to look at.

 

Engaging Descriptions


“I can guarantee that you can’t remember the last five product descriptions you read – not even word-for-word, but the general ideas and vocabulary used will have been tossed into your short-term memory and forgotten in an instant. This is where your website can shine, and become better than ninety percent of those lingering on the internet,” Matthew Kelly, a project manager at WriteMyX and NextCoursework, suggests, “since putting effort into writing your product descriptions and making them lively and engaging will make your website memorable; your subscribers will soon turn into loyal customers who will be more likely to come back time and time again and become repeat business, as well as mention you to their friends (social word-of-mouth marketing), that way working as free advertising for you and making your website incredibly effective.”

 

Mobile-Friendly

 

Which device is most used to check email Laptop / PC or Mobile statistics as of year 2019

These days, with the bloom of Mobile Devices that are currently overrunning the use of normal Desktop PCs, Laptops and Tablets – and this trend is likely to stay and even increase – “If your website isn’t mobile-friendly in this day and age, then you won’t get anywhere with it,” Anne Baker, a marketer at BritStudent and Australia2Write, states. “Most people use their phones when they access websites, especially when they go shopping on the internet.

Statistics on user stay (secs / mins) on a website from Desktop PC and Mobile devices

On WordPress, this means finding a more recent theme – an older theme, maybe four-five years old, will probably not support mobile, and you just can’t afford to lose out on the mobile market.” In short, find yourself a mobile-friendly theme, or install the right WordPress plugin that will serve a mobile friendly theme in case the blog is accessed from a mobile device; otherwise many of your customers will become frustrated with the badly formatted ‘mobile’ version of your website that they end up using, which might for instance be meant for a much larger screen. It can also ruin the atmosphere (experience) for the accessing user and have a negative impact on your audience's opinion of your site or business. This is even more the case if your website or webapp is targeting to be modern and keeping with the times – or especially if it deals with IT and electronics (where the competition is huge)!

 

Registration

 

Registration Ecommerce website

A Registration form (Sign Up) on a website, and the overall business cycle idea behind a web product or business, is of critical importance, as this is the point that will guarantee engagement with the customer; failing to have the person engaged will quickly make your website rank lower and your products less wanted. The general rule here is to make your registration easy (for the user to orient in) and present in a very visible place on the site.

Registration steps should be as few as possible, as too many might annoy the user and repel him from the site before the registration is completed. Showing an opportunity to register with a Pop-Up window (when the user clicks on a place showing interest in the product) might be useful in some cases, but generally might also push the user back, so if you decide to implement it, do it with a lot of care (beware of too much aggressive marketing on your site).

An example


The registration process should be as un-intimidating as possible, to leave joy in the user, who might later return and log in to your site or ecommerce platform, e.g. be interested in staying for a longer time. The marketing tactic of making the user stay longer on the website (dragging his attention / interest to stuff) is nothing new, by the way; it is a well known marketing rule integrated in every supermarket you buy groceries from, where everything is arranged to keep you in the shop for as long as possible. Research has shown that spending a longer time within the supermarket makes the user buy more.

 

Returning customers can be enticed with a membership or a free gift (be it even a virtual one – a picture, a free email, an information store place), or, if products are sold, registration can be made obligatory so that on next login their saved payment method or delivery address simplifies the buy-out process. But if registration is convoluted and forced (e.g. the user is somehow forced to become a member), then many customers will turn away and find another website for their shopping needs. Using a method like Quora’s ‘login to see more’ might in some cases be a good idea, even though for me this is also very irritating – this method however should never be used if you run an ecommerce selling platform; on an ecommerce site gatekeeping will only frustrate customers. Login is good to be implemented as a popup option (not taking too much of the screen). Sign up and Login should be simplistic and self-explanatory – always optional rather than required, and the user should get an understanding of the advantage of being a member of the website, if possible before the sign up procedure. Then, customers are more likely to sign up and won’t feel like they’ve been pushed into the decision – or pushed away, as the case may be.

Katrina Hatchett works as a lifestyle blogger at both Academic Brits and Assignment Help, due to a love of literature and writing, which she has had since youth. Throughout her career, she has become involved with many projects, such as writing for the PhD Kingdom blog.

Howto debug and remount a hanged NFS filesystem on Linux

Monday, August 12th, 2019

nfsnetwork-file-system-architecture-diagram

If you're actively using NFS remote storage attached to your Linux server, it is very useful to check the number of dropped NFS connections, and in that way to make sure you don't have remote NFS server issues or network connectivity dropouts caused by a broken network switch, a Cisco hub or another network hop device that is routing the traffic from Source Host (SRC) to Destination Host (DST). In the perfect case an NFS storage and its mounted Linux network filesystem should be at (0) zero dropped connections, or at least their number should be low. Firewall connectivity between the Source NFS client host and the Destination NFS Server should be in place (set up fine), proper permissions need to be assigned on the server, the DST NFS server should not be experiencing I/O overloads, and no DNS issues should be present (if the NFS is not accessed directly via IP address).
In the below article, which is aimed mostly at NFS novice admins, a few of the nuances of working with NFS are shortly described.
 

1. Check nfsstat and portmap for issues

One indicator that everything is fine with a configured NFS mount is a zero, or at least very low, count of dropped NFS connections. To check them, if you happen to administer NFS, use:

nfsstat

 

linux:~# nfsstat -o net
Server packet stats:
packets    udp        tcp        tcpconn
0          0          0          0  


nfsstat is useful if you have to debug why NFS mounts occasionally become unresponsive.
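
On the NFS client side the client-oriented counters are also worth a look; both flags are standard nfsstat options, and a climbing retrans counter is a quick hint for packet loss on the path to the NFS server:

linux:~# nfsstat -c    # client side NFS / RPC call statistics
linux:~# nfsstat -r    # RPC statistics only – watch the retrans column
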

As NFS is so dependent upon the portmap service for mapping the ports, another point to check in case of hanged NFSes is the portmap service, i.e. whether it has not crashed for some reason.

 

linux:~# service portmap status
portmap (pid 7428) is running…   [portmap service is started.]

 

linux:~# ps axu|grep -i rpcbind
_rpc       421  0.0  0.0   6824  3568 ?        Ss   10:30   0:00 /sbin/rpcbind -f -w


Useful commands to further debug rpc-caused issues are:

On client side:

 

rpcdebug -m nfs -c

 

On server side:

 

rpcdebug -m nfsd -c

 

It might also be useful to check whether the remote NFS permissions did not change, with the good old showmount cmd:

linux:~# showmount -e rem_nfs_server_host


Also it is useful to check whether the /etc/exports file was not somehow modified, and whether the NFS did not hang due to an attempt of the NFS daemon to reload the new configuration from there. Another file to check while debugging is /etc/nfs.conf – are there group / permissions issues? – as well as the usual /var/log/messages and the kernel log with the dmesg command, for weird NFS client / server or network messages.
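
If you have shell access to the remote NFS server, it is also worth comparing what is actually exported right now against what /etc/exports says; exportfs ships with the standard nfs-kernel-server / nfs-utils tooling:

nfs-server:~# exportfs -v     # list the active exports together with their options
nfs-server:~# exportfs -ra    # re-export everything from /etc/exports after a change
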

nfs-utils disabled serving NFS over UDP in version 2.2.1. Arch core updated to 2.3.1 on 21 Dec 2017 (skipping over 2.2.1.) If UDP stopped working then, add udp=y under [nfsd] in /etc/nfs.conf. Then restart nfs-server.service.
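
Following that note, the relevant /etc/nfs.conf stanza would look something like the sketch below (assuming an nfs-utils version recent enough to read /etc/nfs.conf and a systemd managed nfs-server.service):

[nfsd]
udp=y

systemctl restart nfs-server.service
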

If the remote NFS server is also running Linux, it is useful to check its /etc/default/nfs-kernel-server configuration.

In some stall cases it might also be useful to remount the NFS, but as there might be a process on the Linux server trying to read / write data from the remote NFS mounted FS, it is a good idea first to check whether a process / service on the server is doing I/O operations on the NFS, and if such exists, to kill the process in question with fuser:
 

linux:~# fuser -km [mounted-filesystem]
 

 

2. Diagnose the problem interactively with htop


    Htop should be your first port of call. The most obvious symptom will be a maxed-out CPU.
    Press F2, and under "Display options", enable "Detailed CPU time". Press F1 for an explanation of the colours used in the CPU bars. In particular, is the CPU spending most of its time responding to IRQs, or in Wait-IO (wio)?
 

3. Get more extensive Mount info with mountstats

 

The nfs-utils package contains the mountstats command, which is very useful in further debugging the issues identified:

$ mountstats
Stats for example:/tank mounted on /tank:
  NFS mount options: rw,sync,vers=4.2,rsize=524288,wsize=524288,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,proto=tcp,port=0,timeo=15,retrans=2,sec=sys,clientaddr=xx.yy.zz.tt,local_lock=none
  NFS server capabilities: caps=0xfbffdf,wtmult=512,dtsize=32768,bsize=0,namlen=255
  NFSv4 capability flags: bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=notconfigured
  NFS security flavor: 1  pseudoflavor: 0

 

NFS byte counts:
  applications read 248542089 bytes via read(2)
  applications wrote 0 bytes via write(2)
  applications read 0 bytes via O_DIRECT read(2)
  applications wrote 0 bytes via O_DIRECT write(2)
  client read 171375125 bytes via NFS READ
  client wrote 0 bytes via NFS WRITE

RPC statistics:
  699 RPC requests sent, 699 RPC replies received (0 XIDs not found)
  average backlog queue length: 0

READ:
    338 ops (48%)
    avg bytes sent per op: 216    avg bytes received per op: 507131
    backlog wait: 0.005917     RTT: 548.736686     total execute time: 548.775148 (milliseconds)
GETATTR:
    115 ops (16%)
    avg bytes sent per op: 199    avg bytes received per op: 240
    backlog wait: 0.008696     RTT: 15.756522     total execute time: 15.843478 (milliseconds)
ACCESS:
    93 ops (13%)
    avg bytes sent per op: 203    avg bytes received per op: 168
    backlog wait: 0.010753     RTT: 2.967742     total execute time: 3.032258 (milliseconds)
LOOKUP:
    32 ops (4%)
    avg bytes sent per op: 220    avg bytes received per op: 274
    backlog wait: 0.000000     RTT: 3.906250     total execute time: 3.968750 (milliseconds)
OPEN_NOATTR:
    25 ops (3%)
    avg bytes sent per op: 268    avg bytes received per op: 350
    backlog wait: 0.000000     RTT: 2.320000     total execute time: 2.360000 (milliseconds)
CLOSE:
    24 ops (3%)
    avg bytes sent per op: 224    avg bytes received per op: 176
    backlog wait: 0.000000     RTT: 30.250000     total execute time: 30.291667 (milliseconds)
DELEGRETURN:
    23 ops (3%)
    avg bytes sent per op: 220    avg bytes received per op: 160
    backlog wait: 0.000000     RTT: 6.782609     total execute time: 6.826087 (milliseconds)
READDIR:
    4 ops (0%)
    avg bytes sent per op: 224    avg bytes received per op: 14372
    backlog wait: 0.000000     RTT: 198.000000     total execute time: 198.250000 (milliseconds)
SERVER_CAPS:
    2 ops (0%)
    avg bytes sent per op: 172    avg bytes received per op: 164
    backlog wait: 0.000000     RTT: 1.500000     total execute time: 1.500000 (milliseconds)
FSINFO:
    1 ops (0%)
    avg bytes sent per op: 172    avg bytes received per op: 164
    backlog wait: 0.000000     RTT: 2.000000     total execute time: 2.000000 (milliseconds)
PATHCONF:
    1 ops (0%)
    avg bytes sent per op: 164    avg bytes received per op: 116
    backlog wait: 0.000000     RTT: 1.000000     total execute time: 1.000000 (milliseconds)


 

4. Check for firewall issues
 

If all fails, make sure you don't have any kind of firewall issues. Sometimes firewall changes on the remote server, or somewhere on the routing servers, might lead to stalled NFS mounts.

 

To use NFS properly, as a minimum you need to have ports 111 (TCP and UDP) and 2049 (TCP and UDP) opened on the NFS server (side), as well as on any traffic inspection routers on the road from SRC (Linux client host) to the NFS storage destination DST server.

There are also ports for Cluster and client status (Port 1110 TCP for the former, and 1110 UDP for the latter) as well as a port for the NFS lock manager (Port 4045 TCP and UDP), but whether these need to be opened depends on how the NFS is configured. You can further determine which ports you need to allow depending on which services are needed cross-gateway.
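
A quick sanity check that those ports are reachable from the NFS client can be done with rpcinfo and netcat; the server hostname below is just a placeholder:

linux:~# rpcinfo -p nfs-server.example.com      # query the remote portmapper on port 111
linux:~# nc -vz nfs-server.example.com 2049     # probe the NFS port itself over TCP
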
 

5. How to Remount a Stalled unresponsive NFS filesystem mount

 

In many cases the situation with remounting a stalled NFS filesystem is not so easy, but if you're lucky a standard umount and mount should do the trick.

The most simple way to remount the NFS (once you're sure this will not disrupt any service – don't blame me if you break something) is with:
 

umount -l /mnt/NFS_mnt_point
mount /mnt/NFS_mnt_point


Note that the lazy (-l) umount option is provided here, as very often this is the only way to unmount a stalled NFS mount.
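
When even the lazy unmount hangs, it helps to look for processes stuck in uninterruptible sleep (state D) – the classic symptom of a process blocked on a dead NFS mount:

linux:~# ps -eo pid,stat,comm | awk '$2 ~ /D/'    # list D-state (I/O blocked) processes
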

Sometimes, if you have a lot of NFS mounts and all are inaccessible, it is useful to remount all NFS mounts at once; if the remote NFS is responsive, this should be possible with a simple bash for loop:

for P in $(mount | awk '/type nfs/ {print $3}'); do echo "$P"; sudo umount "$P" && sudo mount "$P" && echo "ok :)"; done


If you cd into /mnt/NFS_mnt_point and try ls and you get:

$ ls
.: Stale File Handle

 

You will need to unmount the FS with the forceful umount flag:

umount -f /mnt/NFS_mnt_point
 

Sum it up


In this article, I've shown you a few simple ways to debug what is wrong with a stalled / hanged NFS filesystem exported by an NFS server and mounted on a Linux client server.
Above were explained the common issues caused by the NFS portmap (rpcbind) dependency and how to check that its status is fine; some further diagnosis with htop and mountstats was also pointed out. I've pointed out the minimum set of TCP / UDP ports (2049 and 111) that need to be opened for the NFS communication to work, and finally explained how to remount a stalled single NFS mount, or all attached mounts, on an NFS client to restore normal operations.
As NFS is a whole ocean of things and the number of ways it is used is too extensive, this article is just general info useful for the NFS dummy admin; for more robust configs read some good book on NFS such as Managing NFS and NIS, 2nd Edition – O'Reilly Media, and for kernel related NFS debugging make sure you check, as a minimum, ArchLinux's NFS troubleshooting guide and SourceForge's NFS Troubleshooting and Optimizing NFS Performance guides.

 

How to View and Delete a NetApp Storage qtree, Get statistics about Filer Volume Read / Write operations, and delete and show mounted volumes

Friday, August 2nd, 2019

how-to-delete-volume-qtree-snapmirror-view-netapp-volume-qtree-and-and-view-netapp-cluster-device-statistics-NetAppLogo

I've recently had the trivial decommissioning task to delete some NetApp Storage qtrees on some of the SAP Hana Enterprise Cloud NetApp filers.
If it is the first time you have heard of NetApp: it is a hybrid cloud data services and data management company (ranked among the Fortune 500 companies).

NetApps are hybrid cloud data services for management of applications and data across cloud and on-premises environments, and are a de-facto standard for data storage in many of the existing Internet Clouds and Large Corporations that store many Petabytes of data.

The NetApp storage devices are a kind of proprietary clustered version of the small business NAS storage solution FreeNAS (which of itself is a free FreeBSD based data storage OS – The #1 Storage OS).
NetApps allow plenty of things to be done, such as Data Mirroring (Data Backups), Data Syncing, SnapMirroring, SnapVault and many, many more custom revolutionary data solutions such as StorageGRID.

NetApp supports integration with Kubernetes, Docker, Oracle / SAP DB, Citrix, Xen and KVM, as well as multiple cloud environments such as AWS, Azure and OpenStack, and even has integration with some CI/CD DevOps data provisioning tools – i.e. Jenkins.

In this small article, I'll show you how a Volume / Qtree on a NetApp filer can be viewed, mounted, unmounted and deleted. I'll also show you how to get statistics while logged in remotely to the NetApp console, and finally how to view and delete a configured NetApp snapmirror.

 

View NetApp Qtree

 

Here is how to view the Storage Qtree:

netapp> qtree show -vserver netapp01fv018 -volume VOL_OS_MIG -qtree bck_01v046485_20190108

To view the file content existing on the Storage server, the next step is to mount it from the Linux host with a regular mount:

linux-host:~# mount netappfiler01fv018:/VOL_OS_MIG/bck_01v035527_20190108 /mnt/test
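
Once mounted, the qtree content can be reviewed from the Linux side and the temporary mount dropped again when done (the mount point is the one used above):

linux-host:~# ls -la /mnt/test
linux-host:~# df -h /mnt/test
linux-host:~# umount /mnt/test
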

 

Delete the Qtree from NetApp (Storage) Filer

Become administrator on the device

Once assured the content can go, to delete the qtree it is necessary to become superuser (root) on the NetApp device; to do so, I had to type:

 

netapp> set -privilege advanced

 

Then, to delete the unneeded volume previously used for transferring system update files, when logged in via SSH to the NetApp device's ONTAP proprietary operating system:

 

netapp> qtree delete -vserver netapp01fv018 -volume VOL_OS_MIG -force -qtree bck_01v035527_20190108


Note that this command will return a job ID that stays assigned until the operation is completed. To check the completion status of the generated backgrounded JOB, I've used the command:

netapp> job show 53412

If all is okay you should get a Status of Success; otherwise, if you get a failed status, you have to debug further what's causing it.
 

How to view existing export policies and remove them

 

If you don't want to delete the qtree or volume, but want to prevent a certain Linux server / application from having access to it, it is useful to view the existing export policy for the qtree:
 

netappfc001::> qtree show -exports -volume vol1_vmspace_netapp01v000885 -qtree q_01v002131
                                                   Is Export
Vserver    Volume        Qtree        Policy Name  Policy Inherited
———- ————- ———— ———— —————–
netapp01fv001 vol1_vmspace_netapp01v000885
                         q_01v002131
                                      vol1_vmspace_netapp01v000885.exports
                                                   true

 


To then remove the export policy entirely (so it does not apply at all), this is how:

 

 

netapp> volume qtree modify -vserver hec01fv018 -qtree-path /vol/volume_name/qtree_name -export-policy ""

 

I've also found the following volume qtree commands NetApp ONTAP documentation page helpful to read, and recommend it to anyone who wants to learn more.
 

How to delete a NetApp Volume if it is not used anymore

To delete an unused NetApp volume, you have to do 3 things:
1. Unmount the volume
2. Put it offline
3. Delete it

To do so, run the below 3 cmds:

 

netapp> volume unmount -vserver vserver_name -volume volume_name
netapp> volume offline -vserver vserver_name volume_name
netapp> volume delete -vserver vserver_name volume_name

 

Show mounted Volume junctions (Get Extra Storage Volume information)

 

netapp> volume show -vserver netapp01fv004 -junction
netapp> volume show -vserver netapp01fv004 -volume MUFCF01_BACKUP

 

How to delete a Configured SnapMirror

What is a snapmirror?

 

Recovery-Scenario-Restore-Changes-To-Recovery-site-snapmirror-diagram

SnapMirror is a feature of Data ONTAP that enables you to replicate data. SnapMirror enables you to replicate data from specified source volumes or qtrees to specified destination volumes or qtrees, respectively. You need a separate license to use SnapMirror.

You can use SnapMirror to replicate data within the same storage system or with different storage systems.

After the data is replicated to the destination storage system, you can access the data on the destination to perform the following actions:

  • You can provide users immediate access to mirrored data in case the source goes down.
  • You can restore the data to the source to recover from disaster, data corruption (qtrees only), or user error.
  • You can archive the data to tape.
  • You can balance resource loads.
  • You can back up or distribute the data to remote sites.

 

netapp> snapmirror show -destination-path netapp02fv001:vol1_MUF_PS1_DR

 

netapp> snapmirror delete -destination-path netapp02fv001:vol1_MUF_PS1_DR -force
Operation succeeded: snapmirror delete for the relationship with destination "hec02fv001:vol1_MUF_PS1_DR".
 

If the snapmirror deletion gets scheduled, you can use the snapmirror status command to check its status:
 

netapp> snapmirror status MUF_PS1_PRD
Snapmirror is on.

 

How to telnet from NetApp Storage to another one / check status of configured SMTPs for NetApp Cluster (filer)

 

 

You can use the autosupport and options autosupport commands to change or view AutoSupport configuration, display information about past AutoSupport messages, and send or resend an AutoSupport message.

For example, if the NetApp filers have configured SMTP or SMTPS servers, or other proxy configurations to pass traffic from a DMZ-ed network to external Internet resources or relay servers, these commands will provide information on the connection status of those remote services.

 

rows 0
set diag
node show

autosupport check show
systemshell -node netapp01f0018 -c telnet
autosupport show -fields proxy-url
systemshell -node netappf0018 -c telnet  147.204.148.38 80

netapp09fc001::*> systemshell -node  netapp08f0013 -c telnet  8080
  (system node systemshell)
Trying 100.127.20.4


node show – will provide information about the configured nodes
rows 0 – sets the number of output rows displayed per screen (0 disables paging of the output)
set diag – sets the device into the diagnostic privilege level

As you can see, you can use the systemshell NetApp command to try out telnet connections from the logged-in NetApp source to any remote destination, to make sure the configured Proxy or SMTP is properly reachable.

How to get Statistics about NetApp existing volume Read / Write operations

 

On Netapp side issue:

netapp> statistics volume show -interval 5 -iterations 1 -max 25 -vserver netapp01fv004 -volume MUFCF01_BACKUP

For people starting up with NetApps, it is very useful to get an in-depth read of a quick and dirty Netapp Commandline CheatSheet (for simplicity I've stored it in a netapp-commands-cheatsheat.txt formatted file here).

Conclusion

NetApp storages are used in many governments and large corporations for critical applications with SLA forfeits worth millions of bucks, mostly for application and database storage that is of a very large scale and too critical to be handled by conventional storage computing – simple RAID 1,2,3,5,6 etc. / LVM and so on. ONTAP and NetApp Filers and Filer Clusters are easy to maintain, but due to their high number of features, not many NetApp Storage / Backup system administrators have the knowledge to take good advantage of these beasts. Finally, even my small experience with them shows that even simple things such as critical errors are not always handled properly – at least that was my experience as a SAP consultant with SAP Hana Enterprise Cloud (HEC) and their HANA Converged Cloud, where they serve as the main storage.
This article's goal was pretty simple: to guide the user through a minimum set of commands for simple qtree / volume / snapmirror view and removal decommissioning tasks. NetApp Clusters are a whole ocean of stuff and knowledge, so before doing anything complex, if you're not sure what you're doing, always consult a NetApp storage sysadmin, as some of this animal's features look easy to the common general sysadmin but are not so.