Posts Tagged ‘var’

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging into a file in GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or by piping to the tee command

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full-featured, robust logging bash shell script function that will run continuously as a background daemon process and will write
all of its output to an external log file?
In the article below, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used to later store output to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/ – i.e. /var/log/syslog, /var/log/user, /var/log/daemon, /var/log/mail etc.
 

1. Bash script function for logging – write_log()


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';

write_log()
{
  while read text
  do
      LOGTIME=$(date "+%Y-%m-%d %H:%M:%S")
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo "$LOGTIME: $text";
      else
          # rotate daily by appending the current date to the file name
          LOG="$LOG_FILE.$(date +%Y%m%d)"
          touch "$LOG"
          if [ ! -f "$LOG" ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo "$LOGTIME: $text" | tee -a "$LOG";
      fi
  done
}

 

  •  Using the function from within the script itself, or from an external one, to write out to the defined log file

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – a small intro on what a Unix Named Pipe is


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Now that named pipes have been briefly explained for those hearing about them for the first time, it's time to mention that a named pipe on Unix / Linux is created with the mkfifo command, whose syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linux distributions with older bash versions and older shell scripts used mknod instead.
The idea behind the logging script is to read input from a simple named pipe and use the date command to log the exact time each line arrived; here is the script.

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

# remove any stale pipe and create it fresh
if [ -p $named_pipe ]; then
rm -f $named_pipe
fi
mkfifo $named_pipe

while true; do
read LINE <$named_pipe
echo $(date): "$LINE" >>$output_named_log
done
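Note that the while loop blocks waiting on the pipe forever, so the script has to be started as a background process before anything is written to the pipe. A minimal sketch, with /usr/local/bin/named-pipe-logger.sh being a hypothetical path where you saved the script above:

linux:~# nohup /usr/local/bin/named-pipe-logger.sh &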


To get any other script's output logged now, with a nice date-command-generated timestamp, just write the output content to the logging buffer like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use Named pipes for logging, they perform better (are generally quicker) than Unix sockets.

 

3. Logging to the system log files with logger

 

If you need a quick, one-off way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message and it will log your specified output, by default to the common logfile /var/log/syslog:

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal, non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"


The whole list of valid facility and priority level names supported by logger (as taken from its current Linux manual page) is as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the main Linux log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), you can just type the message, even without any shell quoting:

 

logger The reason to reboot the server currently was a system security update

 

So what else is logger useful for?

In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs with their respective priorities work as expected. This is especially
useful as, working as a SAP consultant on Cloud Hosted OpenXEN based servers, I've seen that sometimes logging to the basic log files stops working for months or even years, due to
syslog and syslog-ng problems caused by hangs from other third-party scripts and programs.
To test that all basic logging and priorities on the system logs work as expected, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
# NB! each brace expansion list must be written without spaces,
# otherwise bash passes it to logger as one literal string
for i in {auth,authpriv,cron,daemon,kern,lpr,mail,mark,news,syslog,user,uucp,local0,local1,local2,local3,local4,local5,local6,local7}
do

for k in {debug,info,notice,warning,err,crit,alert,emerg}
do

logger -p $i.$k "Test daemon message, facility $i priority $k"

done

done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get

logger: unknown facility name: {auth,auth-priv,cron,daemon,kern,lpr,mail,mark,news, \
syslog,user,uucp,local0,local1,local2,local3,local4, \
local5,local6,local7}

check and set the proper naming as described in the logger man page.
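Once the loop completes, you can quickly verify that the test messages really reached the main system log – a minimal check, assuming your distribution logs to /var/log/syslog:

# grep 'Test daemon message' /var/log/syslog | tail -5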

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained

exec 3>&1 4>&2

  •     Saves the original file descriptors, so they can either be restored later to whatever they were before the redirection, or used themselves to write to wherever stdout / stderr pointed before the following redirects.

trap 'exec 2>&4 1>&3' 0 1 2 3

  •     Restores the file descriptors on particular signals. Not generally necessary, since they should be restored when the sub-shell exits.

exec 1>log.out 2>&1

  •     Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to still see a line of output on the console, you can simply redirect it to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post "how can I fully log all bash script actions" (unfortunately on the latest Debian 10 Buster Linux, which comes prebundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly as expected, but on older bash versions it works fine).

Sum it up


To shortly summarize: there are plenty of ways to do logging from a shell script, but using a function or a named pipe is the most classic. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.

 

Why du and df reporting different on a filesystem / How to fix inconsistency between used space on FS and disk showing full strangeness

Wednesday, July 24th, 2019

linux-why-du-and-df-shows-different-result-inconsincy-explained-filesystem-full-oddity

If you're a sysadmin in a large server environment, such as a couple of hundred Virtual Machines running Linux OS on either a physical host or an OpenXen / VmWare hosted guest Virtual Machine, you might sometimes end up in an odd case where some mounted partition's mount point reports its disk use differently when checked with the df command than when checked with the du command, for example:
 

root@sqlserver:~# df -hT /var/lib/mysql
Filesystem   Type  Size Used Avail Use% Mounted On
/dev/sdb5      ext4    19G  3,4G    14G  20% /var/lib/mysql

Here the '-T' argument is used to show us the filesystem type.

root@sqlserver:~# du -hsc /var/lib/mysql
0K    /var/lib/mysql/
0K    total

 

1. Simple debug on what might be the root cause for df / du inconsistency reporting

 

Of course, the basic thing to do when in that weird situation (after being totally shocked that this is possible at all) is to investigate a bit which are the biggest first-level sub-directories that eat up the space on the mounted location, with du:

 

# du -hkx --max-depth=1 /var/lib/mysql/|uniq|sort -n
4       /var/lib/mysql/test
8       /var/lib/mysql/ezmlm
8       /var/lib/mysql/micropcfreak
8       /var/lib/mysql/performance_schema
12      /var/lib/mysql/mysqltmp
24      /var/lib/mysql/speedtest
64      /var/lib/mysql/yourls
144     /var/lib/mysql/narf
320     /var/lib/mysql/webchat_plus
424     /var/lib/mysql/goodfaithair
528     /var/lib/mysql/moonman
648     /var/lib/mysql/daniel
852     /var/lib/mysql/lessn
1292    /var/lib/mysql/gallery

The given output is in kilobytes, so it is a little bit hard to read; if you're used to megabytes instead, do:

 

 # du -hmx --max-depth=1 /var/lib/mysql/|uniq|sort -n|less

 

I've also investigated on the complete /var directory contents sorted by size with:

 

 # du -akx ./ | sort -n
5152564    ./cache/rsnapshot/hourly.2/localhost
5255788    ./cache/rsnapshot/hourly.2
5287912    ./cache/rsnapshot
7192152    ./cache


Even after finding out the bottleneck dirs and trying to clear up a bit, I continued facing that inconsistency shown by the two commands. If you're likely to be stunned like me, you may try to move some files to a different filesystem to free up space or assigned inodes, with the hope that the inconsistent output will be fixed, as it might be caused by some kernel / FS caching, and that this will eventually make the mounted FS refresh …

But unfortunately, if you try it, you'll figure out that clearing up a couple of Megas or Gigas will make no difference in the command output.

In my exact case /var/lib/mysql is a separately mounted ext4 filesystem; however, the same issue was also present on a Network Filesystem (NFS), and thus my first thought that this was caused by a network failure problem or an NFS bug turned out to be wrong.

After a further short investigation of the inodes on the filesystem, it was clear enough that inodes were available:
 

# df -i /var/lib/mysql
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb5      1221600  2562 1219038   1% /var/lib/mysql

 

So the assumed filled-up inode count issue has also been ruled out.
P.S. (if you're not well familiar with inodes, read the manual, i.e. – man 7 inode).
 

– Remounting the mounted filesystem

To make sure the filesystem inconsistency shown between du and df is not due to some hanging network mount or bug, the first logical thing I did was to remount the filesystem showing the differing size; in my case this was done with:
 

# mount -o remount,rw -t ext4 /var/lib/mysql

For machines with NFS remote mounted storage locations, used:

# mount -o remount,rw -t nfs /var/www


The FS remount did not solve it, so I continued to ponder the oddity, and of course I thought of a workaround: in case these issues were caused by a kernel bug or OS library issue, a reboot might be the solution. However, restarting the VMs was unfortunately not a wanted, easy-to-do solution, so I continued investigating what was wrong …

The next check, of course, was to see what kind of network connections are opened to the affected hosts with:
 

# netstat -tupanl


That did not show anything that might point me to the reported differing Megabytes issue, so the next step was to check the situation with currently opened files by running processes on the weirdly df / du reporting system with lsof – and boom, there I observed an oddity, such as multiple deleted files:

 

# lsof -nP | grep '(deleted)'

COMMAND   PID   USER   FD   TYPE DEVICE    SIZE NLINK  NODE NAME
mysqld   2588  mysql    4u   REG 253,17      52     0  1495 /var/lib/mysql/tmp/ibY0cXCd (deleted)
mysqld   2588  mysql    5u   REG 253,17    1048     0  1496 /var/lib/mysql/tmp/ibOrELhG (deleted)
mysqld   2588  mysql    6u   REG 253,17       777884290     0  1497 /var/lib/mysql/tmp/ibmDFAW8 (deleted)
mysqld   2588  mysql    7u   REG 253,17       123667875     0 11387 /var/lib/mysql/tmp/ib2CSACB (deleted)
mysqld   2588  mysql   11u   REG 253,17       123852406     0 11388 /var/lib/mysql/tmp/ibQpoZ94 (deleted)

 

Notice that there were plenty of files in '(deleted)' state still shown as held open in memory – an overall of 438:

 

# lsof -nP | grep '(deleted)' |wc -l
438


As I learned a bit online about the problem, I found it is also possible to list the deleted unlinked files without any greps (to list all deleted files still held open, with lsof arguments only):

 

# lsof +L1|less


The SIZE/OFF field shows that a number of the files still kept open on the filesystem and in memory are really large, totally messing up the filesystem accounting. In my case these were temp files created by the MySQLD daemon, but depending on the service the server provides, this might be Apache's www-data, some custom perl / bash script executed via a cron job, stalled rsync jobs etc.
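To get a rough idea of how much space such zombie files hold, you can sum up the SIZE/OFF column – a hedged one-liner, assuming the lsof output format shown above where SIZE/OFF is the 7th column (the figure is approximate, as for some descriptors lsof reports an offset rather than a size there):

# lsof -nP +L1 | awk 'NR>1 {sum+=$7} END {printf "%.1f MB\n", sum/1024/1024}'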
 

2. Check the list of open files held by the mysql / root user, as part of the server filesystem inconsistency debugging, with:

 

– Grep opened files on server by user

# lsof |grep mysql
mysqld    1312                       mysql  cwd       DIR               8,21       4096          2 /var/lib/mysql
mysqld    1312                       mysql  rtd       DIR                8,1       4096          2 /
mysqld    1312                       mysql  txt       REG                8,1   20336792   23805048 /usr/sbin/mysqld
mysqld    1312                       mysql  mem       REG               8,21      24576         20 /var/lib/mysql/tc.log
mysqld    1312                       mysql  DEL       REG               0,16                 29467 /[aio]
mysqld    1312                       mysql  mem       REG                8,1      55792   14886933 /lib/x86_64-linux-gnu/libnss_files-2.28.so

 

# lsof | grep root
COMMAND    PID   TID TASKCMD          USER   FD      TYPE             DEVICE   SIZE/OFF       NODE NAME
systemd      1                        root  cwd       DIR                8,1       4096          2 /
systemd      1                        root  rtd       DIR                8,1       4096          2 /
systemd      1                        root  txt       REG                8,1    1489208   14928891 /lib/systemd/systemd
systemd      1                        root  mem       REG                8,1    1579448   14886924 /lib/x86_64-linux-gnu/libm-2.28.so

Another command that helped to track the discrepancy between the df and du reported file usage on the FS is:
 

# du -hxa  / | egrep '^[[:digit:]]{1,1}G[[:space:]]*'
 

 

3. Fixing the problem of large deleted files kept open in memory / on the filesystem


What is the real reason for ending up with these file handles kept opened by running backgrounded programs on the Linux OS?
There could be multiple, but most likely it is due to excessive server / client interactions, or to hammering the RAM or HDD by writing plenty of logs on the FS that never get freed and keep space occupied, or to bugs in programming libraries used by a hanged service, leaving the file handles opened on storage.

What is the solution to the problem of deleted files left open in memory?

The best solution is to first fix the custom script or the hanged service, and then if possible simply restart the server to make the kernel / services reload – or, if this is not possible, just restart the problem-creating processes.

Once the process is identified – in my case this was MySQL, on the newer systemd enabled OS distros – just do:

 

 

# systemctl restart mysqld.service


or on older init.d system V ones:

# /etc/init.d/service restart


For custom hanged scripts listed in ps axuwef, you can grep the PID and do a kill -HUP (if the script is written in a good way, so that it recognizes -HUP and restarts its sub-running process properly – BE EXTRA CAREFUL IF YOU'RE RESTARTING BROKEN SCRIPTS, as this might cause disruptions to your running services …).

# pgrep -l script.sh
7977 script.sh


# kill -HUP PID
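If even that is not an option, on Linux the space of a single deleted-but-still-open file can also be released by truncating it through the /proc filesystem – a careful sketch, reusing the PID 2588 and file descriptor 6 from the lsof output above (the process keeps the descriptor and may continue writing to it, so use this only when you know the service tolerates it):

# : > /proc/2588/fd/6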

 

Now finally this should either mitigate, or in the best case completely solve, the reported disagreement between df and du, after which the calculated / reported disk space should be back to normal and show up approximately the same in both (note that the size changes a bit between the two checks, as the mysql service is constantly writing data).

 

# df -hk /var/lib/mysql; du -hskc /var/lib/mysql
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb5       19097172 3472744  14631296  20% /var/lib/mysql
3427772    /var/lib/mysql
3427772    total

 

What did we learn?

What I've explained in this article is why and how it comes that 'zombie' files reside on a filesystem,
appearing to eat disk space on a mounted local or network partition, giving strange inconsistent
reports, leading to system service disruptions and to the impossibility of having correctly shown
information on the used disk space of a mounted drive.

I went through some standard logic for debugging service / filesystem / inode issues, which led me to the finding that deleted files were being kept open on the filesystem, producing the strange filesystem sizing – showing incorrect / filled-up space even after the FS was extended with tune2fs and was supposed to have an extra 50GBs.

Finally, it was shortly explained how to HUP / restart a hanging script / service to fix it.

A few good readings that helped to fix the issue:

What to do when du and df report different usage is here
df in linux not showing correct free space after file removal is here
Why do “df” and “du” commands show different disk usage?
 

Install and use personal Own Cloud on Debian Linux for better shared data security – OwnCloud a Free Software replacement for Google Drive

Thursday, August 23rd, 2018

owncloud-self-hosted-cloud-file-sharing-and-storage-service-for-gnu-linux-howto-install-on-debian

Basically I am against the use of any Cloud type of service, but nowadays Cloud usage is almost inevitable: most of the time you need some kind of service to store and access your data remotely from multiple devices (such as DropBox, Google Drive, iCloud etc.), and using some kind of infrastructure to execute high-performance computing is unavoidable too, just like the paid Private Cloud services online that are booming nowadays. So I decided to research and test what is available as free software in the field of Clouding (your data) 🙂

Undoubtedly, it is a really nice fact that there are Free Software / Open Source alternatives to run your own personal Cloud to store your data from multiple locations on a single point.

The most popular and leading Cloud Collaboration service (which is Open Source but unfortunately not under GPLv2 / GPLv3 – e.g. not fully free software) is OwnCloud.

ownCloud is a flexible self-hosted PHP and Javascript based web application used for data synchronization and file sharing (its remote file access capabilities are realized by Sabre/DAV, an open source WebDAV server).
OwnCloud allows the end user to easily store / manage files, Calendars, Contacts and To-Do lists (with user and group administration via OpenID and LDAP); public URLs can easily be created; users can interact with a browser-based ODF (Open Document Format) word processor; and there is a Bookmarking and URL Shortening service integrated, a Gallery, an RSS Feed and Document Viewer tools such as a PDF viewer etc., which makes it a great alternative to the popular Google Drive, iCloud, DropBox etc.

The main advantage of using a self-hosted Cloud is that your data is hosted and managed by you (on your server and your hard drives) and not by some God-knows-who third party provider such as the above-mentioned ones.
In other words, by using OwnCloud you manage your own data and you don't share it on demand with Security Agencies such as the CIA, MI6, Mossad … (as it is very likely that most of the publicly offered Cloud storage services keep track of the data stored on them).

The other disadvantage of Cloud Computing is that the data stored on such services is usually spread across multiple servers, and you can never know for sure where your data is physically located – which in my opinion is way worse than the option of a Self-Hosted Cloud, where you know where your data lives and you can do whatever you want with it: keep it secret, delete it, or share it on your demand.

OwnCloud has its clients for most popular Mobile (Smart Phone) platforms – an Android client is available in Google Play Store as well as in Apple iTunes besides the clients available for FreeBSD OS, the GNOME desktop integration package and Raspberry Pi.

For those looking for additional advanced features, an Enterprise version of OwnCloud is also available, aimed at business use and including software support.

Assuming you have a homebrew server or have hired a dedicated or VPS server (such as the ones we provide), installing OwnCloud on GNU / Linux is a relatively easy
task and it will take no more than 15 minutes to 2 hours of your life.
In this article I am going to give you specific instructions on how to install on Debian GNU / Linux 9, but installing on RPM based distros is a similar and straightforward process.
 

1. Install MySQL / MariaDB database server backend
 

By default OwnCloud uses SQLite as its backend data storage, but as SQLite stores its data in a file, it quickly becomes slow and is generally speaking slower than relational databases such as MariaDB server (or the now almost obsolete MySQL Community server).
Hence in this article I will explain how to install OwnCloud with MariaDB as a backend.

If you don't have it installed already, e.g. it is a new dedicated server, install MariaDB with:
 

server:~# apt-get install --yes mariadb-server


Assuming you're installing on a brand new, fresh Linux install, you might also want to start / enable the following services:

 

server:~# systemctl start mariadb
server:~# systemctl enable mariadb
server:~# mysql_secure_installation


mysql_secure_installation – finalizes and secures the MariaDB installation and sets the root password.
 

2. Create the necessary database and user for OwnCloud in the database server
 

linux:~# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE owncloud CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'owncloud_passwd';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> \q

 

3. Install the necessary Apache + PHP deb packages
 

As of the time of writing this article, on Debian 9.0 the required packages for a working Apache + PHP install for OwnCloud are as follows:

 

server:~# apt-get install --yes apache2 mariadb-server libapache2-mod-php7.0 \
openssl php-imagick php7.0-common php7.0-curl php7.0-gd \
php7.0-imap php7.0-intl php7.0-json php7.0-ldap php7.0-mbstring \
php7.0-mcrypt php7.0-mysql php7.0-pgsql php-smbclient php-ssh2 \
php7.0-sqlite3 php7.0-xml php7.0-zip php-redis php-apcu

 

4. Install Redis to use as a Memory Cache for accelerated / better performance ownCloud service


Redis is an in-memory key-value database similar to Memcached, so OwnCloud can use it to cache stored data files. To install the latest redis-server on Debian 9:
 

server:~# apt-get install --yes redis-server

5. Install ownCloud software packages on the server

Unfortunately, the default package repositories on Debian 9 do not provide owncloud server packages; only some owncloud-client packages are provided – perhaps because the packages issued by owncloud do not match Debian's packaging requirements.

As of the time of writing this article, the latest available OwnCloud server version package for Debian is OC 10.

a) Add the necessary GPG keys

The repositories to use are provided by owncloud.org; to use them we first need to add the necessary gpg key, to verify the binaries have a legit checksum.
 

server:~# wget -qO- https://download.owncloud.org/download/repositories/stable/Debian_9.0/Release.key | sudo apt-key add -

 

b) Add the owncloud.org repositories in a separate sources.list file

 

server:~# echo 'deb https://download.owncloud.org/download/repositories/stable/Debian_9.0/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list

 

c) Enable https transports for the apt install tool

 

server:~# apt-get --yes install apt-transport-https

 

d) Update the Debian apt cache list files and install the package

 

server:~# apt-get update

 

server:~# apt-get install --yes owncloud-files

 

By default the owncloud file store location is /var/www/owncloud, but on many servers that location is not really appropriate, because /var/www might be situated on a hard drive partition whose size is not big enough. If that's the case, just move the folder to another partition and create a symbolic link at /var/www/owncloud pointing to it, as sketched below.
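A minimal sketch of such a move, with /srv/owncloud standing in as a hypothetical directory on your bigger partition:

server:~# systemctl stop apache2
server:~# mv /var/www/owncloud /srv/owncloud
server:~# ln -s /srv/owncloud /var/www/owncloud
server:~# systemctl start apache2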


6. Create the necessary Apache configuration to make your new self-hosted cloud accessible
 

a) Create Apache config file

 

server:~# vim /etc/apache2/sites-available/owncloud.conf

 

 

Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /var/www/owncloud
SetEnv HTTP_HOME /var/www/owncloud

</Directory>

b) Enable Mod_Dav (WebDAV) if it is not enabled yet

 

server:~# cd /etc/apache2/mods-enabled
server:~# ln -sf ../mods-available/dav_fs.conf
server:~# ln -sf ../mods-available/dav_fs.load
server:~# ln -sf ../mods-available/dav.load
server:~# ln -sf ../mods-available/dav_lock.load

(note the symlinks have to be created inside /etc/apache2/mods-enabled; the same can be achieved the Debian way with a2enmod dav dav_fs dav_lock)

c) Set proper permissions for /var/www/owncloud to make upload work properly

 

chown -R www-data: /var/www/owncloud/


d) Restart the Apache WebServer (to make the new configuration effective)

 

 

server:~# /etc/init.d/apache2 restart


7. Finalize  OwnCloud Install
 

Access OwnCloud Web Interface to finish the database creation and set the administrator password for the New Self-Hosted cloud
 

http://Your_server_ip_address/owncloud/

By default the Web interface is accessible over unencrypted (insecure) http://. It is a recommended practice, if you don't already have an HTTPS SSL certificate installed for the IP or the domain, to install one – either a self-signed certificate or, even better, use the LetsEncrypt CertBot to easily create a valid SSL certificate for your domain for free.
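A hedged sketch of the LetsEncrypt route with certbot's Apache plugin – the package name and domain are assumptions to adapt to your release (on Debian 9 the certbot packages live in the stretch-backports repository):

server:~# apt-get install --yes python-certbot-apache
server:~# certbot --apache -d your-domain.com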

 

installing-OwnCloud-Web-Config-User-Pass-interface-Owncloud-10-on-Debian-9-Linux-howto

Just fill in your desired user / pass, and provide the database user / password / db name (if required you can also set a different location for the data directory than the default /var/www/owncloud/data).

Click Finish Setup and That's all folks!

owncloud-server-web-ui-interface

OwnCloud is successfully installed on the server; you can now go and download a Mobile App or Desktop application for whatever OS you're using and start using it as a Dropbox replacement. At a certain moment you might want to also consult the official UserManual documentation, as you would probably need further information on how to manage your owncloud.

Enjoy !

Qmail redirect mail to another one and keep local Mailbox copy with .qmail file – Easy Set up email forwarding Qmail

Saturday, August 11th, 2018

Qmail redirect mail box to another one with .Qmail file dolphin artistic logo

QMail (considered to be the most secure Mail server out there, a modified version of which is running on Google's Gmail.com, Yahoo! Mail and Yandex EMail (SMTP) servers) has nowadays been highly neglected and is considered obsolete, so most people prefer to use the postfix SMTP server or EXIM. But still, you might happen to be running a number of old rack Qmail Mail servers (running a bunch of Email addresses and Virtual Domains straight on the filesystem – very handy, by the way, for administration, and much better than a Qmail Mail server configured to store its Mailboxes within a MySQL / PostgreSQL or other Database server: a simple vpopmail configured to play nice with Qmail and store all user emails directly on the Filesystem, though considered more insecure – the email correspondence can easily be read if the server is hacked – is much more manageable for a small and mid-sized mailserver), or you may have inherited them from another sys admin, and you wonder how to redirect a single Mailbox:

Let's say mail for an address under the local domain myserver1.com should forward to an address at the remote SMTP domain server-whatever2.com (e.g. your-email-username@myserver1.com is supposed to forward to your-email-username2@server-whatever2.com).
To achieve it, create a new file called .qmail

Under the Qmail or VirtualDomain location for example:

/var/qmail/mailnames/myserver1.com/username/.qmail

 

e.g
 

root@qmail-server:~# vim /var/qmail/mailnames/myserver1.com/your-email-username/.qmail

&your-email-username2@server-whatever2.com
/home/vpopmail/domains/myserver1.com/your-email-username/Maildir/

The &address line forwards the message to the remote mailbox, while the Maildir/ line also drops a local copy.


!!! NOTE N.B. !!! The last slash / after Maildir (…Maildir/) is important to be there, otherwise mail will not get delivered.
That's all; now send a test email just to make sure the redirection works properly. Assuming the .qmail file was created by root, by default its permissions will be with root:root ownership.
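A quick way to generate that test email from the server itself – a small sketch, assuming a command line mail client such as bsd-mailx is installed:

root@qmail-server:~# echo 'forward test body' | mail -s 'forward test' your-email-username@myserver1.com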

Note that the root:root ownership shouldn't be a problem at all. That's all – now enjoy emails being dropped out to the second mailbox 🙂

 

OSCommerce how to change / reset lost admin password

Monday, October 16th, 2017

reset-forgotten-lost-oscommerce-password-howto-Os_commerce-logo.svg

How to change / reset OSCommerce lost / forgotten admin password?

The password in OSCommerce is kept in the table "admin", so to reset the password, connect to MySQL with the mysql cli client.

The first thing to do is to generate the new hash string; you can do that with a simple php script using the md5() function:

 

root@pcfreak:/var/www/files# cat 1.php
<?php
$pass=md5('password');
echo $pass;
?>

 

root@pcfreak:/var/www/files# php 1.php
5f4dcc3b5aa765d61d8327deb882cf99
root@pcfreak:/var/www/files#

 

The string we just generated (for the plain-text password password) is the hash: 5f4dcc3b5aa765d61d8327deb882cf99
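By the way, the same hash can be produced without a helper file – a one-liner sketch using the php CLI binary:

root@pcfreak:/var/www/files# php -r "echo md5('password');"
5f4dcc3b5aa765d61d8327deb882cf99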

Next, to update the new hash string into SQL, we connect to MySQL:

 

$ mysql -u root -p

 


And issue the following command to modify the encrypted hash string (here DB is a placeholder for your actual database name, and the admin_id should match your admin's id):

 

UPDATE `DB`.`admin` SET `admin_password` = '5f4dcc3b5aa765d61d8327deb882cf99' WHERE `admin`.`admin_id` = 6;

Linux: /var/log/wtmp – No such file or directory quick fix and why it might be missing on a server

Thursday, May 4th, 2017

fix-var-log-wtmp-btmp-no-such-file-or-directory-linux_last_command-howto-quick-fix

If you occasionally have to log into some client's old, inherited (not installed by you) Linux servers, and just out of curiosity and for security's sake you decide to do a quick security (last user login) evaluation, e.g. you issue the last command, you might find out you just get the error:

last: /var/log/wtmp: No such file or directory

Perhaps this file was removed by the operator to prevent logging last info.

This might be a sure indicator that some malicious script kiddie (hax0r) activity has been run on the server, or that the ex-system administrator, if recently fired, decided to wipe out all his login tracks along with installing some other nasty rootkit or backdoor.

Under some circumstances the error might also be caused by bugs in a badly written end-user log rotation script (a shell or perl script), or by a buggy deployment of the Linux OS virtual machine.
The last: /var/log/wtmp: No such file or directory error is likely to happen on Ubuntu / Debian / Redhat / CentOS Linux distributions running on a Cloud PaaS service such as Amazon EC2. Some of the Cloud services vendors choose to explicitly remove /var/log/wtmp, for the reason that many of their end customers use their Linux VM servers (Xen Virtualization / OpenVZ / LXC – Linux Containers etc.) irresponsibly and hence become victims of script kiddie attacks, and the failed login attempts logged in /var/log/wtmp grow to many gigabytes.

Even some Linux distributions, or system administrators of Linux login hosts that have to keep tens of thousands of login records monthly, or who are concentrating on simplicity and attempting to reduce size, have purposefully deleted the last login entry file /var/log/wtmp to save space.

But anyway, if you happen to be missing this file, always bear in mind that you might have been a victim of intrusion, and you'd better run chkrootkit and rkhunter.

Run the below commands to fix the missing /var/log/wtmp:

touch /var/log/wtmp
chmod 0664 /var/log/wtmp
chown root:utmp /var/log/wtmp

On some Linux distributions such as Ubuntu and Fedora you might also want to create /var/log/btmp (which is used to log failed login attempts to the server):

touch /var/log/btmp
chmod 0660 /var/log/btmp
chown root:utmp /var/log/btmp
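Note btmp is kept non-world-readable (0660), since mistyped passwords can end up in it as usernames. Once /var/log/btmp exists, the failed login attempts recorded in it can later be reviewed with the lastb command, which reads btmp the same way last reads wtmp – e.g. to show the last 10 failed attempts:

linux:~# lastb -10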

Once the files are created, the last command will start logging the server's logins and logouts as it is supposed to again, e.g.:
 

linux:~# last -15
root pts/0 192.168.0.15 Fri May 5 16:41 still logged in


This article was inspired by a prior article found on root.bg. The site is in Bulgarian, so unfortunately you might not be able to read it, but in content and concept it is pretty similar to pc-freak.net. Actually the site's author, Nikolay Nikolov (known on Internet Relay Chat IRC under the pseudonym Joni-B), happened to be an old friend from my youth geek IT years 🙂

Enjoy

Briefly unavailable for scheduled maintenance. Fix WordPress after interrupted upgrade

Thursday, March 2nd, 2017

briefly-unavaiable-for-a-scheduled-maintenance-wordpress-website-fix-howto_1

I recently tried to update my WordPress blog sites, and being unattentive I selected all the plugins possible for upgrade by checking the "Select All" check box on the Update dialogs, and in the hurry, almost automatically, pressed the Update button. However, all of a sudden I realized I could screw up my websites brutally, as some of the plugins to be upgraded might be lacking 100% compatibility with their prior versions.

I've made a mess out of my blog many times during upgrades because of choosing to upgrade the wrong, not 100% compatible plugins, and I know well how painful and hard to track a misbehaving incompatible plugin could be, or how it could cause severe sluggishness to the blog, which automatically reflects on how well the website is ranked / indexed by the Google / Yahoo / Bing search engines etc.

Thus, as an almost unconscious reaction to prevent future troubles, I tried to cancel the update request in the Firefox browser and reload the Update page, with the hope that I might be quick enough for the Apache / WP / MySQL backend Update queries to still be waiting for processing – but I was too slow and bang! I ended up with the following unpleasant message in my browser:

briefly-unavaiable-for-a-scheduled-maintenance-wordpress-website-fix-howto
Briefly unavailable for scheduled maintenance.
 

As you could guess, that message caused me quite a lot of worries at hand, especially since I've already broken my sites many times by doing quick unmindful reactions, and because of the fact that there are Google Adsense ads appearing, which do give me some Return on Investment cents every now and then …

It took me a few minutes of research online to find out what really happened and how to fix / restore the websites' normal operations.
 


So what causes the Briefly unavailable for scheduled maintenance. message to appear?

When WordPress does some of its integrated maintenance jobs – a plugin enable / disable, or any task that has to modify crucial configurations inside the database – WordPress disables access to itself for all end clients, in order to protect its sensitive data from appearing to browser requestors, as showing some unexpected information to an end client browser could later be used by crackers / hackers, or possibly open a security hole for an attacker.

The message is a WordPress-generated notice, and it is pretty normal for the end user to see it during a WP site installation update. Depending on how many plugins are installed and loaded on the site, and how long it takes the backend Linux / Windows server to fetch the archived .zips of plugins, substitute them with the new ones, update the files by extracting them to wp-content/plugins, and update the respective required SQL database / tables, it could be showing to end users anywhere from a few seconds to a few minutes.

However, under some circumstances – a browser request timeout to the remote wordpress site due to a network connectivity issue, a bad configuration of Apache request timeouts (or a slow remote server Apache response time due to server Hardware / Mem overload), or a stupid browser "Stop" / cancel request like in my case – you end up with the Briefly unavailable for scheduled maintenance message, and you can no longer access your https://siteurl.com/wp-admin Admin Panel.

The message is triggered by a WP created file .maintenance inside /var/www/blog-site/, e.g. the WordPress PHP scripts check for the existence of /var/www/blog-site/.maintenance, and if it is found, the WP scripts generate the Briefly unavailable … message.
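For the curious, the file itself is tiny; a look at its typical contents (the number is an example Unix timestamp – recent WordPress versions also ignore the file once that timestamp is older than about 10 minutes, so often the message clears on its own):

$ cat /var/www/blog-site/.maintenance
<?php $upgrading = 1488456789; ?>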


How to resolve the "Briefly unavailable for scheduled maintenance. Check back in a minute" WordPress error ?

As you might guess, removing the maintenance "coming soon"-like message in most of the cases comes down to just deleting the .maintenance file; to do so:

1. Login to remote server via FTP or SFTP
2. Locate your WP website root folder, which should be something like /var/www/blog-site/, and issue something like:
 

$ rm -f /var/www/blog-site/.maintenance


Assuming that no plugin Update .zip extraction or SQL update query ended up half installed / executed, that should solve the error.
To check whether all is back to normal, just refresh your browser pointing to the "broken" site. If it appears well, you can thank God for that 🙂
If not, check the apache error logs and the php error logs, see which of the php scripts is failing, then try to manually fetch and unzip the WP .zip package to the wp-content/plugins folder and give it another try – and if God blesses so, it will work as before 🙂

How to prevent your WP based business from such nasty errors in the future, using a Staging (test) version of your blog?

Just run a duplicate of your website under a separate folder on your hosting, enable the same plugins as on the primary website, and copy over the MySQL / PostgreSQL Database from your Live site to the Staging one. Once it is enabled, before doing any crucial WordPress version updates or Plugin Updates, always try the Upgrade first on your Test Staging site. If it executes fine there, in most of the cases the result should be the same on the Production host, and that could save you the efforts and nerves of debugging hard-to-track failure errors or faulty plugins, without affecting what your End users see.

If you're not hosting the WordPress install under your own hosting like me, you can always use some of the publicly available hostings like BlueHost or WPEngine.

 

How to force logrotate to process logs / Make logrotate changes take effect immediately

Sunday, April 10th, 2016

how-to-force-logrorate-to-process-logs-make-logrorate-changes-take-effect-immediately-log-rotate-300x299

Dealing with logrotate as admins, we need to change or add new logrotate configurations (on most Linux distributions the configs live under /etc/logrotate.d/).
 

logrotate uses crontab to work. It is scheduled work, not a daemon, so usually there is no need to reload its configuration.
When the crontab executes logrotate, it will use your new config file automatically.

Most of the logrotate setups I've seen on various distros run out of /etc/cron.daily:

$ ls -l /etc/cron.daily/logrotate 
-rwxr-xr-x 1 root root 180 May 18  2014 /etc/cron.daily/logrotate

Here is the content of the cron job scheduled script:

$ cat /etc/cron.daily/logrotate

#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

Changes to logrotate configs take effect on the next crontab run,
but if you need to test your config, you can also execute logrotate
on your own with the below command:

 

logrotate -vf /etc/logrotate.conf 

If you encounter some issues with a just-modified or newly added logrotate config, to check out the status of the last logrotate execution over the bunch of logrotate scripts, run on Debian / Ubuntu etc. deb based Linux:

cat /var/lib/logrotate/status

Or on RHEL, Fedora, CentOS Linux


cat /var/lib/logrotate.status

logrotate state -- version 2

 

"/var/log/syslog" 2016-4-9
"/var/log/dpkg.log" 2016-4-1
"/var/log/unattended-upgrades/unattended-upgrades.log" 2012-9-20
"/var/log/unattended-upgrades/unattended-upgrades-shutdown.log" 2013-5-17
"/var/log/apache2/mailadmin.pc-freak.net-access.log" 2012-9-19
"/var/log/snort/portscan.log" 2012-9-12
"/var/log/apt/term.log" 2016-4-1
"/var/log/squid/access.log" 2015-3-21
"/var/log/mysql/mysql-slow.log" 2016-4-9
"/var/log/debug" 2016-4-3
"/var/log/mysql.log" 2016-4-9
"/var/log/squid/store.log" 2015-3-21
"/var/log/apache2/mailadmin.pc-freak.net-error.log" 2012-9-19
"/var/log/daemon.log" 2016-4-3
"/var/log/munin/munin-update.log" 2016-4-9
"/var/log/unattended-upgrades/unattended-upgrades*.log" 2013-5-16
"/var/log/razor-agent.log" 2015-2-19
"/var/log/btmp" 2016-4-1
"/var/log/squid/*.log" 2014-11-24
"/var/log/munin/munin-graph.log" 2016-4-9
"/var/log/mysql/mysql.log" 2012-9-12
"/var/log/munin/munin-html.log" 2016-4-9
"/var/log/clamav/freshclam.log" 2016-4-3
"/var/log/munin/munin-node.log" 2016-1-23
"/var/log/mail.info" 2016-4-3
"/var/log/apache2/other_vhosts_access.log" 2016-4-3
"/var/log/exim4/rejectlog" 2012-9-12
"/var/log/squid/cache.log" 2015-3-21
"/var/log/messages" 2016-4-3
"/var/log/stunnel4/stunnel.log" 2012-9-19
"/var/log/apache2/php_error.log" 2012-10-21
"/var/log/ConsoleKit/history" 2016-4-1
"/var/log/rsnapshot.log" 2013-4-15
"/var/log/iptraf/*.log" 2012-9-12
"/var/log/snort/alert" 2012-10-17
"/var/log/privoxy/logfile" 2016-4-3
"/var/log/auth.log" 2016-4-3
"/var/log/postgresql/postgresql-8.4-main.log" 2012-10-21
"/var/log/apt/history.log" 2016-4-1
"/var/log/pm-powersave.log" 2012-11-1
"/var/log/proftpd/proftpd.log" 2016-4-3
"/var/log/proftpd/xferlog" 2016-4-1
"/var/log/zabbix-agent/zabbix_agentd.log" 2016-3-25
"/var/log/alternatives.log" 2016-4-7
"/var/log/mail.log" 2016-4-3
"/var/log/kern.log" 2016-4-3
"/var/log/privoxy/errorfile" 2013-5-28
"/var/log/aptitude" 2015-5-6
"/var/log/apache2/access.log" 2016-4-3
"/var/log/wtmp" 2016-4-1
"/var/log/pm-suspend.log" 2012-9-20
"/var/log/snort/portscan2.log" 2012-9-12
"/var/log/mail.warn" 2016-4-3
"/var/log/bacula/log" 2013-5-1
"/var/log/lpr.log" 2012-12-12
"/var/log/mail.err" 2016-4-3
"/var/log/tor/log" 2016-4-9
"/var/log/fail2ban.log" 2016-4-3
"/var/log/exim4/paniclog" 2012-9-12
"/var/log/tinyproxy/tinyproxy.log" 2015-3-25
"/var/log/munin/munin-limits.log" 2016-4-9
"/var/log/proftpd/controls.log" 2012-9-19
"/var/log/proftpd/xferreport" 2012-9-19
"/var/spool/qscan/qmail-queue.log" 2013-5-15
"/var/log/user.log" 2016-4-3
"/var/log/apache2/error.log" 2016-4-3
"/var/log/exim4/mainlog" 2012-10-16
"/var/log/privoxy/jarfile" 2013-5-28
"/var/log/cron.log" 2016-4-3
"/var/log/clamav/clamav.log" 2016-4-3

 

The timestamp date next to each of the rotated service logs is when the respective log was last rotated.

It is also a handy thing to rotate only a certain service's log – let's say clamav-server, mysql-server, apache2 and nginx:
 


logrotate /etc/logrotate.d/clamav-server
logrotate /etc/logrotate.d/mysql-server
logrotate /etc/logrotate.d/nginx
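If you first want to see what would be rotated without actually touching any log, logrotate also has a debug / dry-run mode – e.g.:

logrotate -d /etc/logrotate.d/nginx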

Migrate Webserver and SQL data from old SATA Hard drive to SSD to boost websites performance / Installing new SSD KINGSTON 120GB hard disk on Linux

Monday, March 28th, 2016

ssd-linux-migrate-webserver-and-mysql-from-old-SATA-to-SSD-Kingston-Hard-drive-to-boost-performance-installing-new-SSD-on-Debian-linux
The blog and websites hosted on one of my servers were giving bad performance lately, and the old SATA Hard Disk on the Lenovo Edge server seemed to be overloaded with In / Out operations, thus slowing down the websites' opening time as well as the SQL queries (especially the ones from the Related Posts WordPress plugin were quite slow). Sometimes my blog site's opening times were up to 8-10 seconds.

To deal with the issue I obviously needed a better I/O speed from the hard drive, thus – as I had never used SSD hard drives before – I decided to buy a new SSD (Solid State Drive) KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133 hard disk.
Mounting the hard disk physically in the computer tower case wasn't a big deal: as there are no rotating elements in an SSD, it doesn't really matter how it is mounted, the main thing is that it is hooked up somewhere to the case.

I was not sure whether the SSD HDD was supported by my Debian GNU / Linux, so to check whether the Linux Operating System has properly detected your hard disk, use dmesg.

1. Check if SSD Hard drive is supported in Linux

 

linux:~# dmesg|grep -i kingston
[    1.182734] ata5.00: ATA-8: KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133
[    1.203825] scsi 4:0:0:0: Direct-Access     ATA      KINGSTON SV300S3 605A PQ: 0 ANSI: 5

 

linux:~# dmesg|grep -i sdb
[    1.207819] sd 4:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/111 GiB)
[    1.207847] sd 4:0:0:0: [sdb] Write Protect is off
[    1.207848] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    1.207860] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.207928]  sdb: unknown partition table
[    1.208319] sd 4:0:0:0: [sdb] Attached SCSI disk

 

Well, great news: as you see from the above output, the Kingston SSD HDD was obviously detected by the kernel.
I also inspected whether the proper dimensions of the hard drive are seen (all 120 Gigabytes detected by the OS):

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Even better, the proper HDD sizing was detected by the Linux kernel.
The next thing to do was of course to create an ext4 filesystem on the SSD HDD.
I wanted to have 2 separate partitions: one for my Webserver's Websites DocumentRoot directories, which all lay under the standard Apache location inside /var/www, and one for the MySQL data folder, which on Debian based Linuces is also under the standard location /var/lib/mysql. As the SQL data directory was just 3.3 GB in size, I decided to reserve 20 gigabytes for MySQL and another 100 GB for my PHP / CSS / JS / HTML and other data files in /var/www.
 

2. Create SSD partitions with cfdisk

Hence I needed to create:

1. SSD partition of 100GB
2. SSD partition of 20GB

I have cfdisk installed, and I believe the easiest way to create the partitions is using an interactive partitioner such as CFDISK instead of fdisk, so in order to make the proper partitions I ran:

 

linux:~# cfdisk /dev/sdb


I will skip explaining the details of how to use CFDISK, as it is a pretty standard display-and-manipulate-disk-partition-table tool.
Just press the NEW button (moving with the arrow keys) and choose the 2 partition sizes, 100000 and 20000 MB (one thing to note here is that you have to choose between Primary and Logical creation of partitions; as my SSD is a secondary drive, I created them as Logical partitions inside an Extended one, as the fdisk output below shows), and then press the
WRITE button to save all the partition changes.

!!! Be very careful here, as you might break your other disks' data; make sure you're really modifying the SSD Hard Drive and not your other /dev/sda or another attached external Hard drive or ATA / SATA disk.
Press the WRITE button only once you're absolutely sure; you do it at your own risk (always create a backup of your other data, and don't blame me if something goes wrong) …
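As an extra safety net, before writing anything you can dump the partition table of the disk(s) you are not going to touch – a small sketch with sfdisk (the dump can later be restored with sfdisk /dev/sda < sda-partition-table.bak):

linux:~# sfdisk -d /dev/sda > sda-partition-table.bak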

Once created the two partitions will look like in the screenshot below:
creating-linux-partitions-with-cfdisk-linux-partitioning-tool.png

 


3. Create ext4 filesystem 100 and 20 GB partitions

The next thing to do, before the two partitions are ready to mount under the Webserver's files documentroot /var/www and /var/lib/mysql, is to create the ext4 filesystems. Though some might prefer to stick to ext3 or a reiserfs partition, I would recommend you use ext4, for the reason that, according to my quick research, ext4 is said to perform much better with SSD Hard Drives.

The tool to create the ext4 filesystems is mkfs.ext4; it is provided by the debian package e2fsprogs. I have it already installed on my server; if you don't have it, just go on and install it with:
 

linux:~# apt-get install --yes e2fsprogs

 

To create the two ext4 partitions run:
 

linux:~# mkfs.ext4 /dev/sdb5

 

linux:~# mkfs.ext4 /dev/sdb6


Here the EXT4 filesystem creation on the partition that is supposed to be 100 Gigabytes will take 2-3 minutes, as the dimensions of the partition are a bit bigger, so if you don't want to get bored, go grab a coffee. Once the partitions are ready, you can evaluate whether everything was properly created with fdisk; you should get output like the one below:

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   234441647   117220792+   5  Extended
/dev/sdb5             126    39070079    19534977   83  Linux
/dev/sdb6        39070143   234441647    97685752+  83  Linux

 

4. Mount newly created SSD partitions under /var/www and /var/lib/mysql

Before mounting the new partitions under /var/www and /var/lib/mysql, in order to be able to mount over the already existing directories, I had to:

1. Stop Apache and MySQL server
2. Move Mysql and Apache Documentroot and Data directories to -bak
3. Create new empty /var/www and /var/lib/mysql direcotries
4. Copy the backups ( /var/www-bak and /var/lib/mysql-bak ) to the newly mounted ext4 SSD partitions

To achieve that I had to issue following commands:
 

linux:~# /etc/init.d/apache2 stop
linux:~# /etc/init.d/mysql stop

linux:~# mv /var/www /var/www-bak
linux:~# mv /var/lib/mysql /var/lib/mysql-bak

linux:~# mkdir /var/www
linux:~# mkdir /var/lib/mysql
linux:~# chown -R mysql:mysql /var/lib/mysql


Then to manually mount the SSD partitions:
 

linux:~# mount  /dev/sdb5 /var/lib/mysql
linux:~# mount /dev/sdb6 /var/www


To check that the folders are mounted onto the SSD drive, I ran the mount cmd:

 

linux:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/sdc1 on /backups type ext4 (rw)
/dev/sdb5 on /var/lib/mysql type ext4 (rw,relatime,discard,data=ordered)
/dev/sdb6 on /var/www type ext4 (rw,relatime,discard,data=ordered)

 

That's great – now the filesystems mount fine. However, as it is an SSD drive, and SSD drives are famous for having a limited number of writes to disk before the drive's lifetime is over, it is a good idea to increase the lifetime of the SSD a bit by mounting the SSD partitions with noatime and errors=remount-ro (in order not to log file access times to the filesystem table, and to remount the FS read-only in case of some physical errors on the drive).

5. Configure SSD partitions to boot every time Linux reboots

Now great, the filesystems get mounted fine, so the next thing to do is to make them mount automatically every time the Linux OS boots up. On GNU / Linux this is done through /etc/fstab; for my 2 ext4 partitions this is the content to add at the end of /etc/fstab:

 

/dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro,discard       0       1
/dev/sdb6               /var/www            ext4        noatime,errors=remount-ro,discard       0       1

 

The quickest way to add it without a text editor is to echo it to the end of the file:
 

linux:~# cp -rpf /etc/fstab /etc/fstab.bak_25_03_2016
linux:~# echo ' /dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro,discard       0       1' >> /etc/fstab
linux:~# echo ' /dev/sdb6               /var/www        ext4    noatime,errors=remount-ro,discard       0       1 ' >> /etc/fstab


Then mount again all the filesystems, including the 2 newly created SSD (100 and 20 GB) partitions:
 

linux:~# umount /var/www
linux:~# umount /var/lib/mysql
linux:~# mount -a


To make sure they are properly mounted with the noatime and errors=remount-ro options:


linux:~# mount | grep -i sdb
/dev/sdb5 on /var/lib/mysql type ext4 (rw,noatime,errors=remount-ro)
/dev/sdb6 on /var/www type ext4 (rw,noatime,errors=remount-ro)

 

It is also a good idea to check the statistics with the disk free command:
 

linux:~# df -h|grep -i sdb
/dev/sdb5         19G  0G    19G  0% /var/lib/mysql
/dev/sdb6         92G   0G    92G  0% /var/www


6. Copy all Webserver and SQL data from backupped directories to new SSD mounted

Last but not least is to copy all the original content files from /var/www-bak and /var/lib/mysql-bak to the newly created SSD partitions. Though the copying can be done with the normal linux copy command (cp),
I personally prefer rsync, because rsync is much quicker and more efficient at copying a large number of files – in my case this was 48 Gigabytes.

To copy the files with rsync (note the trailing slashes – without them rsync would create www-bak and mysql-bak sub-directories inside the targets):

 

linux:~# rsync -av --log-file=/var/log/backup.log  /var/www-bak/ /var/www/
linux:~# rsync -av --log-file=/var/log/backup.log  /var/lib/mysql-bak/ /var/lib/mysql/


Then of course, finally, to restore my websites' normal operation, I had to bring up the Apache Webserver and the MySQL service:

 

linux:~# /etc/init.d/apache2 start
linux:~# /etc/init.d/mysql start


7. Optimizing SSD performance with periodic trim (discard of unused blocks on a mounted filesystem)

As I dug deeper into how to further optimize SSD drive performance, I learned about the TRIM cleaning action on the partitions, needed for proper long-term operation; to understand it better, think of trimming as something like the Windows defragment operation.
 

NAME
fstrim – discard unused blocks on a mounted filesystem

SYNOPSIS
fstrim [-o offset] [-l length] [-m minimum-free-extent] [-v] mountpoint

DESCRIPTION
fstrim is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem. This is useful for
solid-state drives (SSDs) and thinly-provisioned storage.

By default, fstrim will discard all unused blocks in the filesystem. Options may be used to modify this behavior based on range or
size, as explained below.


Trimming is really necessary; otherwise an SSD becomes very slow after some time. All modern SSDs support TRIM, but older SSDs from before 2010 usually don't.
Thus for an older SSD you'll want to check this on the website of the manufacturer.

As I mentioned earlier, TRIM is not supported by all SSD drives; to check whether TRIM is supported by your SSD:

linux:~# hdparm -I /dev/sdb|grep -i -E 'trim|discard'
                  *          Data set Management TRIM supported (limit 1 block)

It's easiest to let the system perform an automatic TRIM. That can be done in several ways.

The quickest way to set up trimming is to place fstrim commands into /etc/rc.local; in my case these were the following commands:

 

fstrim -v /var/lib/mysql
fstrim -v /var/www

To add them I used my favourite vim text editor.
Adding the commands to rc.local makes the SSD trimming execute at boot time, which prolongs the boot a bit while the trim runs, so for those like me who are running crucially important websites, a better option exists.

An alternative way is to schedule a daily cron job – just place a new job in /etc/cron.daily/trim, e.g.:
 

linux:~# vim /etc/cron.daily/trim

 

#!/bin/sh
fstrim -v /var/lib/mysql
fstrim -v /var/www

linux:~# chmod +x /etc/cron.daily/trim

However, the best way to enable automatic trimming of the SSD is to just add the discard parameter to /etc/fstab – I've already done that earlier in this article.

Not really surprisingly, the websites' opening (page load) times decreased dramatically: the page-loading waiting time fell 2 to 2.5 times, so the moral of the story for me is, from now on, to always use SSDs when possible, in order to have superb website opening times.

To sum up what was achieved by moving my data onto the SSD Drive: before moving the websites and SQL data to the SSD drive, the websites were opening in 6 to 10 seconds; now the sites open in 2 to 4.5 seconds, which is below 5 seconds (the normal waiting time for a user to see your website).
By the way, it should not be news for people who are into Search Engine Optimization, but it might be for some inexperienced new Admins and Webmasters, that any page opening time that exceeds 5 secs is considered a slow website (and therefore perhaps not worth reading).
High page load times (>5 secs) also make the website less interesting not only for end users but also for search engines (Google / Yahoo / Bing / Baidoo etc.), which are said to crawl it less if the website is slow.
Search Engines are said to index much better and crawl more frequently the more responsive websites.
Hence implementing an SSD in a server and decreasing the page load time should bring up my visitor stats a bit too.

Well that's all for today, hope you enjoyed 🙂

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

sysstast-no-such-file-or-directory-fix-Debian-Ubuntu-Linux-howto
I really love sysstat, and as a console maniac I tend to install it on every server; however, by default there is some sysstat tuning to do once it is installed to make it work. For those unfamiliar with sysstat, I warmly recommend checking it out; here in short is the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


, and you try to get some statistics with the sar command, but you get some ugly error output:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


And you wonder how to resolve it and make the server periodically log, in its text databases, the nice sar stats (load averages – %idle, %iowait, %system, %nice, %user); then, to FIX that Cannot open /var/log/sysstat/sa20: No such file or directory:

You need to:

server:~# vim /etc/default/sysstat


By default you will find out that sysstat stats collection is disabled, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart

However, for those who prefer to do things from menu Ncurses-like interfaces and are not familiar with Vi Improved, the easiest way is to run a dpkg reconfigure of sysstat:

server:~# dpkg-reconfigure sysstat


sysstat-reconfigure-on-gnu-linux

 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01 CPU %user %nice %system %iowait %steal %idle
00:15:01 all 24,32 0,54 3,10 0,62 0,00 71,42
01:15:01 all 18,69 0,53 2,10 0,48 0,00 78,20
10:05:01 all 22,13 0,54 2,81 0,51 0,00 74,01
10:15:01 all 17,14 0,53 2,44 0,40 0,00 79,49
10:25:01 all 24,03 0,63 2,93 0,45 0,00 71,97
10:35:01 all 18,88 0,54 2,44 1,08 0,00 77,07
10:45:01 all 25,60 0,54 3,33 0,74 0,00 69,79
10:55:01 all 36,78 0,78 4,44 0,89 0,00 57,10
16:05:01 all 27,10 0,54 3,43 1,14 0,00 67,79
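By default sar displays today's statistics; to inspect an earlier day, point it at that day's binary data file with the -f flag – sa15 here stands for whatever day-of-month file exists under /var/log/sysstat on your system:

server:~# sar -f /var/log/sysstat/sa15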


Well, that's it – now the sysstat error is resolved and the text reporting of stats data works again. Hooray! 🙂