Posts Tagged ‘var’

Install and use your personal OwnCloud on Debian Linux for better shared data security – OwnCloud, a Free Software replacement for Google Drive

Thursday, August 23rd, 2018

owncloud-self-hosted-cloud-file-sharing-and-storage-service-for-gnu-linux-howto-install-on-debian

Basically I am against the use of any Cloud type of service, but nowadays Cloud usage is almost inevitable: most of the time you need some kind of service to store and access your data remotely from multiple devices, such as DropBox, Google Drive, iCloud etc., and using some kind of infrastructure to execute high-performance computing is unavoidable, just like the paid Private Cloud services that are booming online nowadays. Hence I decided to research and test what is available as free software in the field of Clouding (your data) 🙂

Undoubtedly, it is a really nice fact that there are Free Software / Open Source alternatives that let you run your own personal Cloud and store your data from multiple locations in a single point.

The most popular and leading Cloud Collaboration service (which is Open Source but unfortunately not under GPLv2 / GPLv3 – e.g. not fully free software) is OwnCloud.

ownCloud is a flexible self-hosted PHP and JavaScript based web application used for data synchronization and file sharing (its remote file access capabilities are realized by Sabre/DAV, an open source WebDAV server).
OwnCloud allows the end user to easily store / manage files, Calendars, Contacts and To-Do lists (with user and group administration via OpenID and LDAP), public URLs can easily be created, users can interact with a browser-based ODF (Open Document Format) word processor, and there is a Bookmarking and URL Shortening service integrated, a Gallery, RSS Feed and Document Viewer tools (such as a PDF viewer) etc., which makes it a great alternative to the popular Google Drive, iCloud, DropBox etc.

The main advantage of using a self-hosted Cloud is that your data is hosted and managed by you (on your server and your hard drives) and not by some God knows who third party provider such as the aforementioned.
In other words, by using OwnCloud you manage your own data and you don't share it on demand with Security Agencies such as the CIA, MI6, Mossad … (as it is very likely that most publicly offered Cloud storage services keep track of the data stored on them).

The other disadvantage of Cloud Computing is that the stored data is usually spread across multiple servers and you can never know for sure where your data is physically located, which in my opinion is way worse than the Self-Hosted Cloud option, where you know where your data lives and you can do whatever you want with it – keep it secret, delete it or share it on your demand.

OwnCloud has clients for the most popular Mobile (Smart Phone) platforms – an Android client is available in the Google Play Store as well as one in Apple iTunes – besides the clients available for FreeBSD, the GNOME desktop integration package and Raspberry Pi.

For those looking for additional advanced features, an Enterprise version of OwnCloud is also available, aimed at business use and including software support.

Assuming you have a homebrew server or have hired a dedicated or VPS server (such as the ones we provide), installing OwnCloud on GNU / Linux is a relatively easy task and it will take no more than 15 minutes to 2 hours of your life.
In this article I am going to give you specific instructions on how to install on Debian GNU / Linux 9, but installing on RPM based distros is a similar and straightforward process.
 

1. Install MySQL / MariaDB database server backend
 

By default OwnCloud uses SQLite as a backend data storage, but as SQLite stores its data in a file it quickly becomes slow and is, generally speaking, slower than a relational database such as MariaDB server (or the now almost obsolete MySQL Community server).
Hence in this article I will explain how to install OwnCloud with MariaDB as a backend.

If you don't have it installed already (e.g. it is a new dedicated server), install MariaDB with:
 

server:~# apt-get install --yes mariadb-server


Assuming you're installing on a brand new fresh Linux install, you might also want to run the following set of commands to start / enable the service and secure the installation:

 

server:~# systemctl start mariadb
server:~# systemctl enable mariadb
server:~# mysql_secure_installation


mysql_secure_installation – finalizes and secures the MariaDB installation and sets the root password.
 

2. Create the necessary database and user for OwnCloud on the database server
 

linux:~# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE owncloud CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'owncloud_passwd';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> \q

 

3. Install Apache + PHP necessary deb packages
 

As of the time of writing this article, on Debian 9.0 the required packages for a working Apache + PHP install for OwnCloud are as follows:

 

server:~# apt-get install --yes apache2 mariadb-server libapache2-mod-php7.0 \
openssl php-imagick php7.0-common php7.0-curl php7.0-gd \
php7.0-imap php7.0-intl php7.0-json php7.0-ldap php7.0-mbstring \
php7.0-mcrypt php7.0-mysql php7.0-pgsql php-smbclient php-ssh2 \
php7.0-sqlite3 php7.0-xml php7.0-zip php-redis php-apcu

 

4. Install Redis to use as a Memory Cache for an accelerated / better performing ownCloud service


Redis is an in-memory key-value database similar to Memcached, so OwnCloud can use it to cache stored data files. To install the latest redis-server on Debian 9:
 

server:~# apt-get install --yes redis-server

5. Install ownCloud software packages on the server

Unfortunately, the default package repositories on Debian 9 do not provide owncloud server packages – only some owncloud-client packages are provided – perhaps because the packages issued by owncloud do not match Debian's packaging requirements.

As of the time of writing this article, the latest available OwnCloud server version package for Debian is OC 10.

a) Add the necessary GPG keys

The repositories to use are provided by owncloud.org; to use them, we first need to add the necessary GPG key so apt can verify that the binaries have a legit checksum.
 

server:~# wget -qO- https://download.owncloud.org/download/repositories/stable/Debian_9.0/Release.key | sudo apt-key add -

 

b) Add the owncloud.org repositories in a separate sources.list file

 

server:~# echo 'deb https://download.owncloud.org/download/repositories/stable/Debian_9.0/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list

 

c) Enable https transports for the apt install tool

 

server:~# apt-get --yes install apt-transport-https

 

d) Update the Debian apt cache list files and install the package

 

server:~# apt-get update

 

server:~# apt-get install --yes owncloud-files

 

By default the owncloud file store location is /var/www/owncloud, but on many servers that location is not really appropriate, because /var/www might be situated on a hard drive partition whose size is not big enough. If that's the case, just move the folder to another partition and create a symbolic link at /var/www/owncloud pointing to it …
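For example, a minimal sketch of such a move, assuming /srv/bigdisk is the mount point of your larger partition (the path is just an illustration):

server:~# mv /var/www/owncloud /srv/bigdisk/owncloud
server:~# ln -s /srv/bigdisk/owncloud /var/www/owncloud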


6. Create the necessary Apache configuration to make your new self-hosted cloud accessible
 

a) Create Apache config file

 

server:~# vim /etc/apache2/sites-available/owncloud.conf

 

 

Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
    Options +FollowSymlinks
    AllowOverride All

    <IfModule mod_dav.c>
        Dav off
    </IfModule>

    SetEnv HOME /var/www/owncloud
    SetEnv HTTP_HOME /var/www/owncloud
</Directory>
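Then enable the new site configuration (assuming the standard Debian Apache layout, where configs in sites-available are switched on via a2ensite):

server:~# a2ensite owncloud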

b) Enable Mod_Dav (WebDAV) if it is not enabled yet

The relative symlinks below have to be created from inside /etc/apache2/mods-enabled (alternatively, just run: a2enmod dav dav_fs dav_lock):

server:~# cd /etc/apache2/mods-enabled
server:~# ln -sf ../mods-available/dav.load
server:~# ln -sf ../mods-available/dav_fs.conf
server:~# ln -sf ../mods-available/dav_fs.load
server:~# ln -sf ../mods-available/dav_lock.load

c) Set proper permissions for /var/www/owncloud to make upload work properly

 

chown -R www-data: /var/www/owncloud/


d) Restart the Apache WebServer (to make the new configuration effective)

 

 

server:~# /etc/init.d/apache2 restart


7. Finalize  OwnCloud Install
 

Access the OwnCloud Web Interface to finish the database creation and set the administrator password for the new Self-Hosted cloud:
 

http://Your_server_ip_address/owncloud/

By default the Web interface is accessible over unencrypted (insecure) http://. It is a recommended practice (if you don't already have an HTTPS SSL certificate installed for the IP or the domain) to install one – either a self-signed certificate or, even better, use LetsEncrypt CertBot to easily create a valid SSL certificate for your domain for free.
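A minimal sketch with certbot, assuming your domain already points at the server and the stretch-backports repository is enabled in your sources.list (on Debian 9 the python-certbot-apache package lives in backports):

server:~# apt-get install --yes -t stretch-backports python-certbot-apache
server:~# certbot --apache -d your-domain.com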

 

installing-OwnCloud-Web-Config-User-Pass-interface-Owncloud-10-on-Debian-9-Linux-howto

Just fill in your desired user / pass and pass on the database user / password / DB name (if required, you can also set a different location for the data directory than the default /var/www/owncloud/data).

Click Finish Setup and That's all folks!

owncloud-server-web-ui-interface

OwnCloud is successfully installed on the server. You can now go and download a Mobile App or Desktop application for whatever OS you're using and start using it as a Dropbox replacement. At a certain moment you might want to consult the official User Manual documentation, as you will probably need further information on how to manage your owncloud.

Enjoy !

Qmail: redirect a mailbox to another one and keep a local Mailbox copy with a .qmail file – Easy set up of email forwarding in Qmail

Saturday, August 11th, 2018

Qmail redirect mail box to another one with .Qmail file dolphin artistic logo

QMail (considered to be one of the most secure Mail servers out there – modified versions of it have been running on Google's Gmail.com, Yahoo! Mail and Yandex EMail (SMTP) servers) has nowadays been highly neglected and is considered obsolete, thus most people prefer to use the Postfix SMTP server or EXIM. Still, you might happen to be running a number of old rack QMail Mail servers (hosting a bunch of Email addresses and Virtual Domains straight on the filesystem – which, by the way, is very handy for administration and much better than a Qmail Mail server configured to store its Mailboxes within a MySQL / PostgreSQL or other Database server: a simple vpopmail configured to play nice with Qmail and store all user emails directly on the filesystem, though considered more insecure since the email correspondence can easily be read if the server is hacked, is much more manageable for a small or mid-sized mailserver), or you may have inherited such servers from another sys admin, and you wonder how to redirect a single Mailbox:

(let's say the mailbox your-email-username@my-server1.com under domain my-server1.com is supposed to forward to your-email-username2@my-server2.com).
To achieve it, create a new file called .qmail

Under the Qmail or VirtualDomain location for example:

/var/qmail/mailnames/my-server1.com/your-email-username/.qmail

 

e.g
 

root@qmail-server:~# vim /var/qmail/mailnames/my-server1.com/your-email-username/.qmail

&your-email-username2@my-server2.com
/home/vpopmail/domains/my-server1.com/your-email-username/Maildir/


!!! NOTE N.B. !!! The last slash after Maildir (…Maildir/) is important to be there, otherwise mail will not get delivered.
That's all – now send a test email, just to make sure the redirection works properly.

Note that if the .qmail file was created by root, by default it will be owned root:root; that shouldn't be a problem at all. That's all, now enjoy the emails being dropped out to the second mailbox 🙂

 

OSCommerce how to change / reset lost admin password

Monday, October 16th, 2017

reset-forgotten-lost-oscommerce-password-howto-Os_commerce-logo.svg

How to change / reset OSCommerce lost / forgotten admin password?

The password in OSCommerce is kept in the table "admin", so to reset the password connect to MySQL with the mysql CLI client.

The first thing to do is to generate the new hash string; you can do that with a simple PHP script using the md5() function:

 

root@pcfreak:/var/www/files# cat 1.php
<?php
// print the MD5 hash of the new password
$pass=md5('password');
echo $pass;
?>

 

root@pcfreak:/var/www/files# php 1.php
5f4dcc3b5aa765d61d8327deb882cf99
root@pcfreak:/var/www/files#

 

Our just generated hash (for the plain-text password password) is: 5f4dcc3b5aa765d61d8327deb882cf99
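By the way, you can also generate the hash in one line, without a script file, using php -r which executes the given code directly:

root@pcfreak:/var/www/files# php -r "echo md5('password');"
5f4dcc3b5aa765d61d8327deb882cf99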

Next, to write the new hash string into the SQL database, we connect to MySQL:

 

$ mysql -u root -p

 


And issue the following command to update the password hash (adjust the admin_id value to match your administrator account):

 

UPDATE `DB`.`admin` SET `admin_password` = '5f4dcc3b5aa765d61d8327deb882cf99' WHERE `admin`.`admin_id` = 6;

Linux: /var/log/wtmp – No such file or directory quick fix and why it might be missing on a server

Thursday, May 4th, 2017

fix-var-log-wtmp-btmp-no-such-file-or-directory-linux_last_command-howto-quick-fix

If you occasionally have to log into some old inherited (not installed by you) client Linux servers, and out of curiosity and for security's sake you decide to do a quick security (last user login) evaluation, e.g. you issue the
last command, you might find you get the error:

last: /var/log/wtmp: No such file or directory

Perhaps this file was removed by the operator to prevent logging last info.

This might be a sure indicator that some malicious script kiddie (hax0r) activity took place on the server, or that the ex-system administrator (if recently fired) decided to wipe out all his login tracks along with installing some other nasty rootkit or backdoor.

Under some circumstances the error might also be caused by bugs in a badly written log rotation script (shell or perl) or by a buggy deployment of a Linux OS virtual machine.
The last: /var/log/wtmp: No such file or directory error is likely to happen on Ubuntu / Debian / RedHat / CentOS Linux distributions running on a Cloud PaaS service such as Amazon EC2. Some of the Cloud service vendors choose to explicitly remove /var/log/wtmp, for the reason that many end customers use their Linux VM servers (Xen Virtualization / OpenVZ / LXC – Linux Containers etc.) irresponsibly, hence become victims of script kiddie attacks, and the failed login attempts logged in /var/log/wtmp grow to many gigabytes.

Even some Linux distributions, or system administrators of Linux login hosts that have to keep tens of thousands of login records monthly, concentrating on simplicity or attempting to reduce size, have purposefully deleted the last login entry file /var/log/wtmp to save space.

But anyways, if you happen to be missing this file, always bear in mind that you might have been a victim of an intrusion, and you'd better run chkrootkit and rkhunter.
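A minimal sketch of such a check on a deb based distro (assuming the packages are available in your repositories):

server:~# apt-get install --yes chkrootkit rkhunter
server:~# chkrootkit
server:~# rkhunter --update
server:~# rkhunter --check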

Run the below commands to recreate the missing /var/log/wtmp:

touch /var/log/wtmp
chmod 0664 /var/log/wtmp
chown root:utmp /var/log/wtmp

On some Linux distributions such as Ubuntu and Fedora you might also want to create /var/log/btmp (which is used to log failed login attempts to the server):

touch /var/log/btmp
chmod 0664 /var/log/btmp
chown root:utmp /var/log/btmp

Once the files are created, the last command will start logging server logins and logouts again as it is supposed to, e.g.:
 

linux:~# last -15
root pts/0 192.168.0.15 Fri May 5 16:41 still logged in


This article was inspired by a prior article found on root.bg. The site is in Bulgarian, so unfortunately you might not be able to read it, but in content and concept it is pretty similar to pc-freak.net; actually the site author Nikolay Nikolov (known on Internet Relay Chat (IRC) under the pseudonym Joni-B) happened to be an old friend from my youth geek IT years 🙂

Enjoy

Briefly unavailable for scheduled maintenance. Fix WordPress after interrupted upgrade

Thursday, March 2nd, 2017

briefly-unavaiable-for-a-scheduled-maintenance-wordpress-website-fix-howto_1

I recently tried to update my WordPress blog sites, and being unattentive I selected all the plugins possible for Upgrade by checking the "Select All" check box in the Update dialogs, almost automatically pressing the Update button in the hurry. All of a sudden I realized I could screw up my websites brutally, as some of the plugins to be upgraded might lack 100% compatibility with their prior versions.

I've made a mess out of my blog many times during upgrades because of choosing to upgrade the wrong, not 100% compatible plugins, and I know well how painful and hard to track a misbehaving incompatible plugin can be, or how it can cause severe sluggishness to the blog, which automatically reflects on how well the website is ranked and indexed in Google / Yahoo / Bing etc.

Thus, as an almost unconscious reaction to prevent future troubles, I tried to cancel the update request in the Firefox browser and reload the Update page, with the hope that I might be quick enough for the Apache / WP / MySQL backend Update queries to still be waiting for processing, but I was too slow and bang! I ended up with the following unpleasant message in my browser:

briefly-unavaiable-for-a-scheduled-maintenance-wordpress-website-fix-howto
Briefly unavailable for scheduled maintenance.
 

As you could guess, that message caused me quite a lot of worries, especially since I've already broken my sites many times by doing quick unmindful reactions, and since there are Google Adsense ads appearing there which do give me some Return on Investment cents every now and then …

It took me a few minutes of research online to find out what really happened and how to fix / restore the websites' normal operations.
 


So what causes the Briefly unavailable for scheduled maintenance. message to appear?

When WordPress does some of its integrated maintenance jobs – a plugin enable / disable or any task that has to modify crucial configurations inside the database – WordPress disables access to itself for all end clients in order to protect its sensitive data, because showing some unexpected information to an end client browser could later be used by crackers / hackers or could possibly open a security hole for an attacker.

The message is a WordPress generated notice and it is pretty normal for the end user to see it during a WP site update. Depending on how many plugins are installed and loaded on the site, and on how long it takes the backend Linux / Windows server to fetch the archived .zips of the plugins, substitute them with the new ones, extract the files to wp-content/plugins and update the respective required SQL database / tables, it can show to end users from a few seconds to a few minutes.

However, under some circumstances – a browser request timeout to the remote wordpress site due to a network connectivity issue, a bad Apache configuration for request timeouts (or a slow remote server Apache response time due to server Hardware / Mem overload), or a stupid browser "Stop" / cancel request like in my case – you end up with the Briefly unavailable for scheduled maintenance message and you can no longer access your https://siteurl.com/wp-admin Admin Panel.

The message is triggered by a WP created file .maintenance inside /var/www/blog-site/, e.g. the WordPress PHP scripts check for the existence of /var/www/blog-site/.maintenance and if it is matched, the WP scripts generate the Briefly unavailable … message.


How to resolve the "Briefly unavailable for scheduled maintenance. Check back in a minute" WordPress error ?

As you might guess, removing the maintenance "coming soon" like message in most of the cases comes down to just deleting the .maintenance file; to do so:

1. Login to remote server via FTP or SFTP
2. Locate your WP website root folder, which should contain a file like /var/www/blog-site/.maintenance, and issue something like:
 

$ rm -rf /var/www/blog-site/.maintenance


Assuming that no plugin Update .zip extraction or SQL update query ended up half installed / executed, that should solve the error.
To check whether all is back to normal, just refresh your browser pointing to the "broken" site. If it appears well, you can thank God for that 🙂
If not, check the Apache error logs and PHP error logs to see which of the PHP scripts is failing, then try to manually fetch and unzip the WP .zip package to the wp-content/plugins folder and give it another try, and if God blesses so, it will work as before 🙂

How to prevent your WP based business from such nasty errors in the future, using a Staging (test) version of your blog?

Just run a duplicate of your website under a separate folder on your hosting, enable the same plugins as on the primary website and copy over the MySQL / PostgreSQL Database from your Live site to the Staging one (see the sketch below). Then, before doing any crucial WordPress version updates or Plugin Updates, always try the Upgrade first on your Test Staging site. If it executes fine there, in most of the cases the result should be the same on the Production host, and that could save you the effort and nerves of debugging hard to track failure errors or faulty plugins without affecting what your End users see.
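A rough sketch of cloning a site to staging from the shell – the paths and database names below are just example assumptions:

$ rsync -av /var/www/blog-site/ /var/www/blog-site-staging/
$ mysqladmin -u root -p create blog_staging
$ mysqldump -u root -p blog_db > /tmp/blog_db.sql
$ mysql -u root -p blog_staging < /tmp/blog_db.sql

Don't forget to point the staging copy's wp-config.php to the blog_staging database and adjust the site URL, so the staging site doesn't touch your production data.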

If you're not hosting the WordPress install under your own hosting like me, you can always use some of the publicly available hostings like BlueHost or WPEngine.

 

How to force logrotate to process logs / Make logrotate changes take effect immediately

Sunday, April 10th, 2016

how-to-force-logrorate-to-process-logs-make-logrorate-changes-take-effect-immediately-log-rotate-300x299

Dealing with logrotate as admins, we sometimes need to change or add new logrotate configurations (on most Linux distributions the configs live under /etc/logrotate.d/).
 

logrotate is run from crontab – it is scheduled work, not a daemon – so usually there is no need to reload its configuration.
When the crontab executes logrotate, it will use your new config file automatically.

Most of the logrotate setups I've seen on various distros run out of /etc/cron.daily:

$ ls -l /etc/cron.daily/logrotate 
-rwxr-xr-x 1 root root 180 May 18  2014 /etc/cron.daily/logrotate

Here is content of cron job scheduled script:

$ cat /etc/cron.daily/logrotate

#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

Changes to logrotate configs take effect on the next crontab run,
but if you need to test your config you can also execute logrotate
on your own with the below command:

 

logrotate -vf /etc/logrotate.conf 
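If you only want to see what logrotate would do, without actually rotating anything, it also has a debug (dry-run) mode:

logrotate -d /etc/logrotate.conf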

If you encounter some issues with a just modified or newly added logrotate script, to check the status of the last logrotate execution of the bunch of logrotate scripts, run on Debian / Ubuntu and other deb based Linux:

cat /var/lib/logrotate/status

Or on RHEL, Fedora, CentOS Linux


cat /var/lib/logrotate.status

logrotate state -- version 2

 

"/var/log/syslog" 2016-4-9
"/var/log/dpkg.log" 2016-4-1
"/var/log/unattended-upgrades/unattended-upgrades.log" 2012-9-20
"/var/log/unattended-upgrades/unattended-upgrades-shutdown.log" 2013-5-17
"/var/log/apache2/mailadmin.pc-freak.net-access.log" 2012-9-19
"/var/log/snort/portscan.log" 2012-9-12
"/var/log/apt/term.log" 2016-4-1
"/var/log/squid/access.log" 2015-3-21
"/var/log/mysql/mysql-slow.log" 2016-4-9
"/var/log/debug" 2016-4-3
"/var/log/mysql.log" 2016-4-9
"/var/log/squid/store.log" 2015-3-21
"/var/log/apache2/mailadmin.pc-freak.net-error.log" 2012-9-19
"/var/log/daemon.log" 2016-4-3
"/var/log/munin/munin-update.log" 2016-4-9
"/var/log/unattended-upgrades/unattended-upgrades*.log" 2013-5-16
"/var/log/razor-agent.log" 2015-2-19
"/var/log/btmp" 2016-4-1
"/var/log/squid/*.log" 2014-11-24
"/var/log/munin/munin-graph.log" 2016-4-9
"/var/log/mysql/mysql.log" 2012-9-12
"/var/log/munin/munin-html.log" 2016-4-9
"/var/log/clamav/freshclam.log" 2016-4-3
"/var/log/munin/munin-node.log" 2016-1-23
"/var/log/mail.info" 2016-4-3
"/var/log/apache2/other_vhosts_access.log" 2016-4-3
"/var/log/exim4/rejectlog" 2012-9-12
"/var/log/squid/cache.log" 2015-3-21
"/var/log/messages" 2016-4-3
"/var/log/stunnel4/stunnel.log" 2012-9-19
"/var/log/apache2/php_error.log" 2012-10-21
"/var/log/ConsoleKit/history" 2016-4-1
"/var/log/rsnapshot.log" 2013-4-15
"/var/log/iptraf/*.log" 2012-9-12
"/var/log/snort/alert" 2012-10-17
"/var/log/privoxy/logfile" 2016-4-3
"/var/log/auth.log" 2016-4-3
"/var/log/postgresql/postgresql-8.4-main.log" 2012-10-21
"/var/log/apt/history.log" 2016-4-1
"/var/log/pm-powersave.log" 2012-11-1
"/var/log/proftpd/proftpd.log" 2016-4-3
"/var/log/proftpd/xferlog" 2016-4-1
"/var/log/zabbix-agent/zabbix_agentd.log" 2016-3-25
"/var/log/alternatives.log" 2016-4-7
"/var/log/mail.log" 2016-4-3
"/var/log/kern.log" 2016-4-3
"/var/log/privoxy/errorfile" 2013-5-28
"/var/log/aptitude" 2015-5-6
"/var/log/apache2/access.log" 2016-4-3
"/var/log/wtmp" 2016-4-1
"/var/log/pm-suspend.log" 2012-9-20
"/var/log/snort/portscan2.log" 2012-9-12
"/var/log/mail.warn" 2016-4-3
"/var/log/bacula/log" 2013-5-1
"/var/log/lpr.log" 2012-12-12
"/var/log/mail.err" 2016-4-3
"/var/log/tor/log" 2016-4-9
"/var/log/fail2ban.log" 2016-4-3
"/var/log/exim4/paniclog" 2012-9-12
"/var/log/tinyproxy/tinyproxy.log" 2015-3-25
"/var/log/munin/munin-limits.log" 2016-4-9
"/var/log/proftpd/controls.log" 2012-9-19
"/var/log/proftpd/xferreport" 2012-9-19
"/var/spool/qscan/qmail-queue.log" 2013-5-15
"/var/log/user.log" 2016-4-3
"/var/log/apache2/error.log" 2016-4-3
"/var/log/exim4/mainlog" 2012-10-16
"/var/log/privoxy/jarfile" 2013-5-28
"/var/log/cron.log" 2016-4-3
"/var/log/clamav/clamav.log" 2016-4-3

 

The timestamp date next to each of the rotated service logs is when the respective log was last rotated.

It is also a handy thing to force the rotation of only a certain service's log, let's say clamav-server, mysql-server or nginx:
 


logrotate -vf /etc/logrotate.d/clamav-server
logrotate -vf /etc/logrotate.d/mysql-server
logrotate -vf /etc/logrotate.d/nginx

Migrate Webserver and SQL data from old SATA Hard drive to SSD to boost websites performance / Installing new SSD KINGSTON 120GB hard disk on Linux

Monday, March 28th, 2016

ssd-linux-migrate-webserver-and-mysql-from-old-SATA-to-SSD-Kingston-Hard-drive-to-boost-performance-installing-new-SSD-on-Debian-linux
The blog and websites hosted on my server were giving bad performance lately, and the old SATA Hard Disk on the Lenovo Edge server seemed to be overloaded with In/Out operations, thus slowing down the websites' opening time as well as SQL queries (especially the ones from the Related Posts WordPress plugin, which were quite slow). Sometimes my blog site opening times were up to 8-10 seconds.

To deal with the issue I obviously needed better hard drive I/O speed, thus – as I had never used SSD hard drives so far – I decided to buy a new SSD (Solid State Drive) KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133 hard disk.
Mounting the hard disk physically in the computer tower case wasn't a big deal: as there are no rotating elements in an SSD, it doesn't really matter how it is mounted, the main thing is that it is hooked up somewhere to the case.

I was not sure whether the SSD HDD was supported by my Debian GNU / Linux, so to check whether the Linux Operating System has properly detected the hard disk, use dmesg:

1. Check if SSD Hard drive is supported in Linux

 

linux:~# dmesg|grep -i kingston
[    1.182734] ata5.00: ATA-8: KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133
[    1.203825] scsi 4:0:0:0: Direct-Access     ATA      KINGSTON SV300S3 605A PQ: 0 ANSI: 5

 

linux:~# dmesg|grep -i sdb
[    1.207819] sd 4:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/111 GiB)
[    1.207847] sd 4:0:0:0: [sdb] Write Protect is off
[    1.207848] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    1.207860] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.207928]  sdb: unknown partition table
[    1.208319] sd 4:0:0:0: [sdb] Attached SCSI disk

 

Well, great news – as you see from the above output, the Kingston SSD HDD was obviously detected by the kernel.
I've also inspected whether the proper size of the hard drive (all 120 Gigabytes) is being detected by the OS:

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Even better, the proper HDD size was detected by the Linux kernel.
The next thing to do was, of course, to create ext4 filesystems on the SSD HDD.
I wanted to give 2 separate partitions: one for my Webserver's Website DocumentRoot directories, which all lay under the standard Apache location /var/www, and one for the MySQL data folder, which on Debian based Linuces is under the standard location /var/lib/mysql. As the SQL data directory was just 3.3 GB in size, I decided to reserve 20 gigabytes for MySQL and another 100 GB for my PHP / CSS / JS / HTML and other data files in /var/www.
 

2. Create SSD partitions with cfdisk

Hence I needed to create:

1. SSD partition of 100GB
2. SSD partition of 20GB

I have cfdisk installed, and I believe the easiest way to create the partitions is using an interactive partitioner such as cfdisk instead of fdisk, so in order to make the proper partitions I ran:

 

linux:~# cfdisk /dev/sdb


I will skip explaining the details of how to use cfdisk, as it is a pretty standard display / manipulate disk partition table tool.
Just press the NEW button (moving around with the arrow keys) and choose the 2 partition sizes, 100000 and 20000 MB (one thing to note here is that you have to choose between Primary and Logical creation of partitions; as my SSD is a secondary drive needing more than one partition, I created an Extended partition holding the two as Logical ones), and then press the
WRITE button to save all the partition changes.

!!! Be very careful here, as you might destroy the data on your other disks. Make sure you're really modifying the SSD Hard Drive and not your other /dev/sda or another attached external Hard drive or ATA / SATA disk.
Press the WRITE button only once you're absolutely sure; you do it at your own risk (always create a backup of your other data and don't blame me if something goes wrong) …
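To double check which device really is the SSD before writing anything, a quick sanity check could be (lsblk is part of util-linux and should be present on any recent Debian):

linux:~# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

The 120 GB disk without any mounted partitions should be the new Kingston drive.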

Once created the two partitions will look like in the screenshot below:
creating-linux-partitions-with-cfdisk-linux-partitioning-tool.png

 


3. Create ext4 filesystem 100 and 20 GB partitions

The next thing to do, before the two partitions are ready to mount under the Webserver's files documentroot /var/www and /var/lib/mysql, is to create the ext4 filesystems. Though some might prefer to stick to an ext3 or reiserfs partition, I would recommend you use ext4, for the reason that, according to my quick research, ext4 is said to perform much better with SSD Hard Drives.

The tool to create the ext4 filesystems is mkfs.ext4; it is provided by the Debian package e2fsprogs. I have it already installed on my server, if you don't have it just go on and install it with:
 

linux:~# apt-get install --yes e2fsprogs

 

To create the two ext4 partitions run:
 

linux:~# mkfs.ext4 /dev/sdb5

 

linux:~# mkfs.ext4 /dev/sdb6


Creating the EXT4 filesystem on the partition that is supposed to be 100 Gigabytes will take 2-3 minutes, as the dimensions of the partition are a bit bigger, so if you don't want to get bored go grab a coffee. Once the partitions are ready you can evaluate whether everything was properly created with fdisk; you should get output like the one below:

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   234441647   117220792+   5  Extended
/dev/sdb5             126    39070079    19534977   83  Linux
/dev/sdb6        39070143   234441647    97685752+  83  Linux

 

4. Mount newly created SSD partitions under /var/www and /var/lib/mysql

Before mounting to /var/www and /var/lib/mysql, in order to be able to mount under the already existing directories, I had to:

1. Stop the Apache and MySQL servers
2. Move the MySQL and Apache DocumentRoot and Data directories to -bak
3. Create new empty /var/www and /var/lib/mysql directories
4. Copy the backups ( /var/www-bak and /var/lib/mysql-bak ) to the newly mounted ext4 SSD partitions

To achieve that I had to issue following commands:
 

linux:~# /etc/init.d/apache2 stop
linux:~# /etc/init.d/mysql stop

linux:~# mv /var/www /var/www-bak
linux:~# mv /var/lib/mysql /var/lib/mysql-bak

linux:~# mkdir /var/www
linux:~# mkdir /var/lib/mysql
linux:~# chown -R mysql:mysql /var/lib/mysql


Then to manually mount the SSD partitions:
 

linux:~# mount  /dev/sdb5 /var/lib/mysql
linux:~# mount /dev/sdb6 /var/www


To check that the folders are mounted on the SSD drive, I ran the mount cmd:

 

linux:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/sdc1 on /backups type ext4 (rw)
/dev/sdb5 on /var/lib/mysql type ext4 (rw,relatime,discard,data=ordered)
/dev/sdb6 on /var/www type ext4 (rw,relatime,discard,data=ordered)

 

That's great, now the filesystems mount fine. However, as it is an SSD drive, and SSD drives are famous for having a limited number of disk writes before the drive's lifetime is over, it is a good idea to increase the lifetime of the SSD a bit by mounting the SSD partitions with noatime and errors=remount-ro (in order to not log file access times to the filesystem table, and to remount the FS read-only in case of some physical errors on the drive).

5. Configure SSD partitions to boot every time Linux reboots

Now great, the filesystems get mounted fine, so the next thing to do is to make them automatically mount every time the Linux OS boots up. On GNU / Linux this is done through /etc/fstab; for my 2 ext4 partitions this is the content to add at the end of /etc/fstab:

 

/dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro       0       1
/dev/sdb6               /var/www        ext4    noatime,errors=remount-ro       0       1

 

The quickest way to add it without a text editor is to echo it to the end of the file:
 

linux:~# cp -rpf /etc/fstab /etc/fstab.bak_25_03_2016
linux:~# echo ' /dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro,discard       0       1' >> /etc/fstab
linux:~# echo ' /dev/sdb6               /var/www        ext4    noatime,errors=remount-ro,discard       0       1 ' >> /etc/fstab


Then mount again all the filesystems including the 2 new created SSD (100 and 20 GB) partitions:
 

linux:~# umount /var/www
linux:~# umount /var/lib/mysql
linux:~# mount -a


To assure they are properly mounted with the noatime and errors=remount-ro options:


linux:~# mount | grep -i sdb
/dev/sdb5 on /var/lib/mysql type ext4 (rw,noatime,errors=remount-ro)
/dev/sdb6 on /var/www type ext4 (rw,noatime,errors=remount-ro)

 

It is also a good idea to check the stats with the disk free command:
 

linux:~# df -h|grep -i sdb
/dev/sdb5         19G  0G    19G  0% /var/lib/mysql
/dev/sdb6         92G   0G    92G  0% /var/www


6. Copy all Webserver and SQL data from backupped directories to new SSD mounted

Last but not least is to copy all the original content files from /var/www-bak and /var/lib/mysql-bak to the newly created SSD partitions. Though the copying can be done with the normal linux copy command (cp),
I personally prefer rsync, because rsync is much quicker and more efficient in copying a large amount of files – in my case this was 48 Gigabytes.

To copy the files with rsync:

 

linux:~# rsync -av --log-file=/var/log/backup.log /var/www-bak/ /var/www/
linux:~# rsync -av --log-file=/var/log/backup.log /var/lib/mysql-bak/ /var/lib/mysql/

(Note the trailing slashes: without them rsync would create /var/www/www-bak instead of copying the directory contents.)


Then of course, finally, to restore my websites' normal operation I had to bring up the Apache Webserver and MySQL service:

 

linux:~# /etc/init.d/apache2 start
linux:~# /etc/init.d/mysql start


7. Optimizing SSD performance with periodic trim (discard of unused blocks on a mounted filesystem)

As I dug deeper into how to further optimize SSD drive performance, I learned about the TRIM cleaning action on the partitions, needed for proper long-term performance; to understand it better, think of trimming as something like the Windows defragment operation.
 

NAME
fstrim – discard unused blocks on a mounted filesystem

SYNOPSIS
fstrim [-o offset] [-l length] [-m minimum-free-extent] [-v] mountpoint

DESCRIPTION
fstrim is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem. This is useful for
solid-state drives (SSDs) and thinly-provisioned storage.

By default, fstrim will discard all unused blocks in the filesystem. Options may be used to modify this behavior based on range or
size, as explained below.


Trimming is really necessary, otherwise an SSD becomes very slow after some time. All modern SSDs support TRIM, but older SSDs from before 2010 usually don't.
Thus for an older SSD you'll want to check this on the website of the manufacturer.

As I mentioned earlier, TRIM is not supported by all SSD drives; to check whether TRIM is supported by your SSD:

linux:~# hdparm -I /dev/sdb|grep -i -E 'trim|discard'
                  *          Data set Management TRIM supported (limit 1 block)

It's easiest to let the system perform an automatic TRIM. That can be done in several ways.

The quickest way to enable trimming is to place the trim commands into /etc/rc.local; in my case these were the following:

 

fstrim -v /var/lib/mysql
fstrim -v /var/www

To add them I've used my favourite vim text editor.
Adding the commands to rc.local will make the SSD trimming execute at boot time, which will slow the boot down a bit, so perhaps for those like me who are running crucially important websites a better

alternative is to schedule a daily cron job – just place a new job in /etc/cron.daily/trim, e.g.:
 

linux:~# vim /etc/cron.daily/trim

 

#!/bin/sh
fstrim -v /var/lib/mysql
fstrim -v /var/www

linux:~# chmod +x /etc/cron.daily/trim

However, the best way to enable automatic trimming of the SSD is to just add the discard mount parameter in /etc/fstab; I've already done that earlier in this article.

Not really surprisingly, the websites' opening (page load) times decreased dramatically; web page loading waiting time fell 2 to 2.5 times, so the moral of the story for me is, from now on, always when possible to use an SSD in order to have superb website opening times.

To sum up what was achieved by moving my data to the SSD Drive: before moving the websites and SQL data to the SSD drive the websites were opening in 6 to 10 seconds, now the sites open in 2 to 4.5 seconds, which is below 5 seconds (the normal waiting time for a user to see your website).
By the way, it should not be news for people who are into Search Engine Optimization, but it might be for some inexperienced new Admins and Webmasters, that any page opening time that exceeds 5 secs is considered a slow website (and therefore perhaps not worthy to read).
High page load times (>5 secs) make the website less interesting not only for end users but also for search engines (Google / Yahoo / Bing / Baidu etc.), which are said to crawl it less frequently if the website is slow.
Search Engines are said to index much better and crawl more frequently into more responsive websites.
Hence implementing an SSD in a server and decreasing the page load time should bring up my visitor stats a bit too.

Well that's all for today, hope you enjoyed 🙂

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

sysstast-no-such-file-or-directory-fix-Debian-Ubuntu-Linux-howto
I really love sysstat, and as a console maniac I tend to install it on every server. However, by default there is some sysstat tuning needed once installed to make it work. For those unfamiliar with sysstat I warmly recommend to check it out; here is in short the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


, and you try to get some statistics with the sar command, but you get some ugly error output like:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


And you wonder how to resolve it and be able to have the server periodically log into text databases the nice sar stats – load averages, %idle, %iowait, %system, %nice, %user. To FIX that Cannot open /var/log/sysstat/sa20: No such file or directory:

You need to:

server:~# vim /etc/default/sysstat


By default you will find the sysstat stats collection disabled there, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart

However, for those who prefer to do things from menu Ncurses interfaces and are not familiar with Vi Improved, the easiest way is to run a dpkg reconfigure of sysstat:

server:~# dpkg-reconfigure sysstat


sysstat-reconfigure-on-gnu-linux
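Note that sysstat's cron job collects a sample only every 10 minutes, so the sar data files need some time to appear after enabling; if you don't want to wait for the first sample, on Debian you should be able to trigger one manually (a sketch assuming the standard Debian sysstat layout):

server:~# /usr/lib/sysstat/debian-sa1 1 1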

 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01 CPU %user %nice %system %iowait %steal %idle
00:15:01 all 24,32 0,54 3,10 0,62 0,00 71,42
01:15:01 all 18,69 0,53 2,10 0,48 0,00 78,20
10:05:01 all 22,13 0,54 2,81 0,51 0,00 74,01
10:15:01 all 17,14 0,53 2,44 0,40 0,00 79,49
10:25:01 all 24,03 0,63 2,93 0,45 0,00 71,97
10:35:01 all 18,88 0,54 2,44 1,08 0,00 77,07
10:45:01 all 25,60 0,54 3,33 0,74 0,00 69,79
10:55:01 all 36,78 0,78 4,44 0,89 0,00 57,10
16:05:01 all 27,10 0,54 3,43 1,14 0,00 67,79


Well, that's it – now the sysstat error is resolved and the text reporting of stats data works again. Hooray! 🙂

No space left on device with free disk space / Why no space left on device while there is plenty of disk space on drive – Running out of Inodes

Tuesday, November 17th, 2015

no_space_left-on-device-while-there-is-disk-space-running-out-of-file-inodes-unix_linux_file_system_diagram.gif

 

On one of the servers I'm administrating, the websites started showing some MySQL database table corruption errors like:
 

 

Table './database_name/site_news_list_com' is marked as crashed and last (automatic?) repair failed

The server is using Oracle MySQL server community stable edition on Debian GNU / Linux 6.0, so I first thought that during work the server crashed, either due to some bug issue in MySQL or due to some PHP cron job that did something messy. Thus, to solve the crashed tables, I tried using the mysqlcheck tool, which has helped pretty well many times before when there were database / table corruptions. I ran the following set of mysqlcheck commands as root (superuser) in a bash shell after logging in through SSH:


server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log


In order for the above commands to work, I created /root/.my.cnf containing my root mysql (CLI) username and password, e.g. the file has content like below:

 

[client]
user=root
password=MySecretPassword8821238

 

Btw, a good note here: it is generally a good idea (if you want to have consistent mysql databases) to automatically execute these via a cron job a few times a month; I have the following in root's crontab:

 

crontab -u root -l |grep -i mysqlcheck
04 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
07 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
12 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
17 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log


Strangely, I got a lot of errors that some .MYI / .MYD / .frm temp files, necessary for the mysql table recovery, can't be written inside /home/mysql/database_name.

That was pretty weird and I thought there might be some issue with permissions causing the inability to write, due to some bug or something, so I went straight and checked the /home/mysql/database_name permissions, e.g.:

 

server:/home/mysql/database_name# ls -ld soccerfame
drwx------ 2 mysql mysql 36864 Nov 17 12:00 soccerfame
server:/home/mysql/database_name# ls -al1|head -n 10
total 1979012
drwx------ 2 mysql mysql 36864 Nov 17 12:00 .
drwx------ 36 mysql mysql 4096 Nov 17 11:12 ..
-rw-rw---- 1 mysql mysql 8712 Nov 17 10:26 1_campaigns_diez.frm
-rw-rw---- 1 mysql mysql 14672 Jul 8 18:57 1_campaigns_diez.MYD
-rw-rw---- 1 mysql mysql 1024 Nov 17 11:38 1_campaigns_diez.MYI
-rw-rw---- 1 mysql mysql 8938 Nov 17 10:26 1_campaigns.frm
-rw-rw---- 1 mysql mysql 8738 Nov 17 10:26 1_campaigns_logs.frm
-rw-rw---- 1 mysql mysql 883404 Nov 16 22:01 1_campaigns_logs.MYD
-rw-rw---- 1 mysql mysql 330752 Nov 17 11:38 1_campaigns_logs.MYI


As seen from the above output, all was perfect with the permissions, so it should have been something else. I decided to try to create a random file with the touch command inside the /home/mysql/database_name directory:

 

server:~# touch /home/mysql/database_name/somefile-to-test-writtability.txt
touch: cannot touch ‘/home/mysql/database_name/somefile-to-test-writtability.txt‘: No space left on device


Then logically I thought that the ext4 partition mounted on /home got filled, because of the crashed SQL database or a bug, so I checked with the disk free command df whether there is enough space on the server:

server:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 20G 7.6G 11G 42% /
udev 10M 0 10M 0% /dev
tmpfs 13G 1.3G 12G 10% /run
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md2 256G 134G 110G 55% /home

Well, that's weird? Obviously only 55% of the available disk space was used, with 110G still available, which was more than enough, so I got totally puzzled why files can't be written.

Then, very logically, I thought it might be that the /home directory got remounted read-only because the SSD disk on the server is failing, and checked for errors in dmesg, i.e.:

 

server:~# dmesg|grep -i error


I also checked how exactly the partition was mounted, to check whether it is (RO) read-only:

 

server:~# mount -l|grep -i /home
/dev/md2 on /home type ext4 (rw,relatime,discard,data=ordered)


Now everything became even more weird, as obviously the disk continued claiming no space left on device, while in reality there was plenty of disk space.

Then, after running a quick research on the internet for no space left on device with free disk space, I came across this great superuser.com thread, which made me realize the partition ran out of inodes: no new file inodes could be assigned, and therefore the linux kernel was refusing to write the file on the ext4 partition.

For those who haven't heard of Linux Partition Inodes, here is a link to Wikipedia and a quick quote:

 

In a Unix-style file system, the inode is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.[1] Filesystem object attributes may include manipulation metadata (e.g. change,[2] access, modify time), as well as owner and permission data (e.g. group-id, user-id, permissions).[3]
Directories are lists of names assigned to inodes. The directory contains an entry for itself, its parent, and each of its children.


Once I understood it was the inodes, I checked how many of them were occupied with the cmd:

 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 17006592 0 100% /home


You see, there were 0 (zero) free file inodes on the server, and that was the reason for no space left on device while there was actually free disk space.

To clean up (free) some inodes on the partition, the first thing I did was to delete all the old logs inside /home and the files I positively knew were not necessary; then, to find which directories were allocating the most inodes, I used:

 

server:~# find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n


If you're on a regular old-fashioned IDE Hard Drive, and not an SSD, or you have too many files inside, this command will take really long …

Therefore a better solution might be to first:

a) Try to find root folders with a large inode count:

for i in /home/*; do echo $i; find $i |wc -l; done

You should get output like:

 

/home/new_website
606692
/home/common
73
/home/pcfreak
5661
/home/hipo
33
/home/blog
13570
/home/log
123
/home/lost+found
1

b) Then, once you know the directory allocating the most inodes, run the command again to see the sub-directories with the most files (eating) partition inodes:

 

for i in /home/webservice/*; do echo $i; find $i |wc -l; done

 

One usual large folder which could free you some inodes is the Linux kernel source headers directory, but in my case it was simply a lot of tiny old logs that had been logged on the system for a few years in the past without cleaning:

So I deleted the log dirs and cache folder, in my case /home/new_website/{log,cache}:

server:~# rm -rf /home/new_website/log/*
server:~# rm -rf /home/new_website/cache/*
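By the way, if the old Linux kernel headers / images mentioned above happen to be the inode hog on your system, on a deb based distro a sketch of the cleanup would be (always review the package list it prints before confirming):

server:~# apt-get autoremove --purge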

 

 

a) Then, stop the Apache webserver, to prevent Apache from using the MySQL databases while running the database repair, and restart MySQL:
 

server:~# /etc/init.d/apache2 stop
server:~# /etc/init.d/mysql restart


b) And re-issuing MySQL Check / Repair / Optimize database commands:
 

 

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

c) And finally starting the Apache Webserver again:
 

server:~# /etc/init.d/apache2 start


Some inodes got freed up:
 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 16797196 209396 99% /home


And hooray by God's Grace and with help of prayers of The most Holy Theotokos (Virgin) Mary, websites started again !

Install simscan on Qmail for better Mail server performance and get around the nonexistent suid perl on newer Debian / Ubuntu Linux servers

Tuesday, August 18th, 2015

qmail-fixing-clamdscan-errors-and-qq-errors-qmail-binary-migration-few-things-to-check-outclamav_logo-installing-clamav-antivirus-to-scan-periodically-debian-server-websites-for-viruses

I've been stuck with qmail-scanner-queue for a while on each and every new Qmail Mail server installation I've done. This time it was no different, but as time evolves, and Qmail and the Qmail Scanner Wrapper are not regularly updated, it is getting harder and harder to make a fully functional Qmail on newer Linux server distribution releases.

I know many would argue QMAIL is already obsolete, but I still have plenty of old servers running QMAIL whose migration might cause more trouble than just continuing to use QMAIL. Moreover, QMAIL once set up works like a charm.

I've recently been experiencing severe issues with clamdscan errors, and I tried to work around them by compiling and using a suid wrapper; however, the clamdscan errors continued, and as qmail-scanner is not actively developed and is much slower than simscan, I finally decided to switch to simscan as a means to fix the clamdscan errors, and thankfully this worked as a solution.

Here is what I did "rawly" to make simscan work on this install:
 

Make sure simscan is properly installed on Debian Linux 7 or Ubuntu servers, and probably (it should work) on other Deb based Linuxes, by following the below steps:
 

a) Configure simscan with following compile time options as root (superuser)

./configure \
--enable-user=qscand \
--enable-clamav \
--enable-clamdscan=/usr/local/bin/clamdscan \
--enable-custom-smtp-reject=y \
--enable-per-domain=y \
--enable-attach=y \
--enable-dropmsg=n \
--enable-spam=y \
--enable-spam-hits=5 \
--enable-spam-passthru=y \
--enable-qmail-queue=/var/qmail/bin/qmail-queue \
--enable-ripmime=/usr/local/bin/ripmime \
--enable-sigtool-path=/usr/local/bin/sigtool \
--enable-received=y


b) Compile it

 

 make && make install-strip

c) Fix any wrong permissions of simscan queue directory

 

chmod g+s /var/qmail/simscan/
chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/

d) Add some additional simscan options (telling simscan how to perform its scans)

To do so, write the scanning options into the simscan control file by running the below command (again as root):

echo ":clam=yes,spam=yes,spam_hits=8.5,attach=.vbs:.lnk:.scr:.wsh:.hta:.pif" > /var/qmail/control/simcontrol

 

e) Run /var/qmail/bin/simscanmk in order to convert /var/qmail/control/simcontrol into the /var/qmail/control/simcontrol.cdb database

/var/qmail/bin/simscanmk
/var/qmail/bin/simscanmk -g

f) Modify /service/qmail-smtpd/run to set simscan to be default Antivirus Wrapper Scanner

vim /service/qmail-smtpd/run

I'm using thibs's run script so I've uncommented the line there:

QMAILQUEUE="$VQ/bin/simscan"

The below two lines should stay commented out, as qmail-scanner is no longer used:

##QMAILQUEUE="$VQ/bin/qmail-scanner-queue"
##QMAILQUEUE="$VQ/bin/qmail-scanner-queue.pl"
export QMAILQUEUE

Then restart qmail so the mailserver starts using simscan instead of qmail-scanner:

qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.

g) Test whether simscan is properly sending / receiving emails:

echo "Testing Email" >> /tmp/mailtest.txt
env QMAILQUEUE=/var/qmail/bin/simscan SIMSCAN_DEBUG=3 /var/qmail/bin/qmail-inject hipo@my-mailserver.com < /tmp/mailtest.txt

Besides that, as I'm using qscand:qscand as the user for my overall Qmail Thibs install, I had to also do:

chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/

 

It might be a good idea to also place those lines in /etc/rc.local, to auto change the permissions on Linux boot, just in case something goes wrong with the permissions.
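A sketch of such an rc.local addition (assuming the stock /etc/rc.local – the lines must go above the final exit 0):

# re-apply simscan permissions on boot in case something resets them
chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/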

Yeah, I know 777 is insecure, but without these permissions I was still getting errors; plus the server doesn't have any accounts except the administrator's, so I do not worry that other system users might sniff on the email 🙂

h) Test whether the Qmail mail server sends / receives fine with simscan

After that I used another mail server with the mail command to test whether mail is received:
 

mail -s "testing email1234" hipo@new-configured-qmail-server.com
asdfadsf
.
Cc:

Then it is also necessary to install the latest clamav daemon from source – in my case that's on Debian GNU / Linux 7 – because somehow the Debian shipped binary version of clamav 0.98.5+dfsg-0+deb7u2 fails to scan any incoming or outgoing email, with the error:
 

clamdscan: corrupt or unknown clamd scanner error or memory/resource/perms problem – exit status -1/72057594037927935

So to fix it you will have to install clamav on Debian Linux from source.


Voilà, that's all, finally it worked!