Posts Tagged ‘www’

Mount remote Linux SSHFS Filesystem harddisk on Windows Explorer: SWISH SSHFS file mounter, and a short evaluation of what is available to copy files to SSHFS from a Windows PC

Monday, February 22nd, 2016

swish-mount-and-copy-files-from-windows-to-linux-via-sshfs-mount

I'm forced to use Windows on my work notebook and I find it really irritating that I can't easily share files without a DropBox, Google Drive, MS OneDrive, Amazon Storage or other free remote cloud-storage service.
I don't want to use DropBox-like non-self-hosted data storage because I want to keep my data private, and therefore the only and best option for me was to make it possible to share my Linux harddisk storage
dir remotely to the Windows notebook.

I didn't want to set up some complex environment such as a Samba Share Server (which used to be a common option to share files from a Linux server to Windows), neither did I want to bother with installing an FTP service and FTP clients, or with configuring some other complex stuff such as WebDav – which BTW is an accepted and heavily used solution across corporate clients to access read / write files on remote Linux servers.
Hence, I made a quick research on what else could be used to easily share files and data from a Windows PC / notebook to a home-brew or professionally hosted Linux server.

It turned out there are a few pieces of software that give a similar possibility for users of a small home LAN hybrid Linux / Windows network; here are a few of the many:

  • SyncThing – Syncthing is an open-source file synchronization client/server application, written in Go, implementing its own, equally free Block Exchange Protocol. The source code's content-management repository is hosted on GitHub

    syncthing-logo

  • OwnCloud – ownCloud provides universal access to your files via the web, your computer or your mobile devices

    owncloud-logo

  • Seafile – Seafile is a file hosting software system. Files are stored on a central server and can be synchronized with personal computers and mobile devices via the Seafile client. Files can also be accessed via the server's web interface


seafile-client-in-browser

I've checked all of them and gave a quick try to Syncthing, which is really easy to start: just download the binary, launch it and configure it from a browser on the Linux server under the http://localhost:8384 URL.
Syncthing seemed to be a nice and easy to configure solution to sync files between Server A (Windows) and Server B (Linux), and I guess many would enjoy it; if you want to give it a try you can follow this short install syncthing article.
However, what I disliked in SyncThing, OwnCloud, Seafile and all of the other file sync solutions was that they only supported synchronization via web and didn't seem to have Windows Explorer integration, and they required
the server to run more services, posing another security hole in the system, as such third party software is not easy to update and maintain.

Because of that, after rethinking some other ways to copy files to a locally mounted sync directory from the Linux server, I finally decided to give SSHFS a try. Mounting SSHFS between two Linux / UNIX hosts is
quite an easy task with the SSHFS tool, as illustrated below.
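
For reference, a Linux-to-Linux mount is a one-liner; a minimal sketch (hostname and paths are examples; the sshfs package must be installed on the client):

mkdir -p /mnt/remote-share
sshfs user@remote-linux-server:/home/user/share /mnt/remote-share

and later, to unmount:

fusermount -u /mnt/remote-share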

In Windows, however, the only way I knew to transfer files to Linux via SSHFS was with the WinSCP client and other SCP clients, as well as a few experimental tools
such as ExpanDrive, NetDrive and Dokan SSHFS (mirrored for download here).
I should say that I first tried copying a few dozen gigabytes of movies, texts, books etc. using a direct WinSCP connection, but after getting a couple of timeouts I got tired of WinSCP and decided to look for a better way to copy to remote Linux SSHFS.
The best solution I found, after a bit of extensive searching, turned out to be:

SWISH – Easy SFTP for Windows

Swish is very straightforward to configure compared to all of them: you download the .exe (which as of time of writing is at version 0.8.0), install it on the PC, and right in My Computer you will get a new device called Swish, next to your local and remote drives C:/ D:/, USBs etc.

swish-new-device-to-appear-in-my-computer-to-mount-sshfs

As you see in the screenshot below, two new non-standard buttons will appear in Windows Explorer that let you configure SWISH.

windows-mount-sshfs-swish-add-sftp-connection-button-screenshot

The next and final step, before you have the remote Linux SSHFS filesystem visible on Windows XP / 7 / 8 / 10, is to fill in the remote Linux hostname address (or even better fill in the IP to get rid of possible DNS issues), UserName (UserID) and Directory to mount.

swish-new-fill-in-dialog-to-make-new-linux-sshfs-mount-directory-possible-on-windows

Then you will see the SSHFS mounted:

swish-sshfs-mounted-on-windows

You will be asked to accept the SSH host key, as it is so far unknown:

swish-mount-sshfs-partition-on-windows-from-remote-linux-accept-key

That's it; now you will see the remote Linux SSHFS mounted straight in Windows Explorer:

remote-sshfs-linux-filesystem-mounted-in-windows-explorer-with-swish

Once you have set up a Swish connection, to copy files directly to it you can use the embedded Windows Send To dialog, as in the screenshot below.

swish-send-to-files-windows-screenshot

The only 3 problems with SWISH are:

1. It doesn't support saving the password, so on every Windows PC reboot, when you want to connect to the remote Linux SSHFS, you will have to retype the remote login user's password.
From a security standpoint this is not such a bad thing, but it is a bit irritating to type the password every time without an option to save it permanently.
The good thing here is you can use Launch Key Agent
(as visible in the above screenshot) and set your remote host's SSH key in the Putty Key Agent, so passwordless login will work without any authentication at all.
However, this might open a security hole: if your Win PC gets infected by a virus, it might delete something on the remotely mounted SSHFS filesystem, so I personally prefer to retype the password on every boot.

2. It is a bit slow, so if you're planning to transfer large amounts of data, as in hundreds of megabytes, expect a very slow transfer rate; even in a local 10Mbit network, transferring 20 – 30 GB of data took me about 2-3 hours or so.
SWISH is not actively supported and hasn't had a new release since the 20th of June 2013, but for the general work I need it is just perfect, as I don't tend to synchronize dozens of gigabytes all the time between my notebook PC and the Linux server.

3. If you don't use the established mounted connection for a while, or your computer goes to sleep mode, then after recovering, the connection to the remote Linux HDD opened in Windows File Explorer will probably be dead and you will have to re-enable it.

Mac OS X users who want to mount / attach a remote directory from a Linux partition should look at Fugu – A Mac OS X SFTP, SCP and SSH Frontend.

I'll be glad to hear from people about other good ways to achieve the same results as with SWISH but with a better copy speed while using SSHFS.

Change website .JS .PHP Python Perl CSS etc. file permissions recursively for Better Tightened Security on Linux Webhosting Servers

Friday, October 30th, 2015

change-permissions-recursively-on-linux-to-protect-website-against-security-breaches-hacks

It is a common security mistake of developers or web design studios working with dedicated or shared hosted websites to forget to set nice restrictive file permissions.

This is so because most people (and especially nowadays developers) are not security freaks, and the important thing for a programmer is to get a running result in the shortest time, without much caring how secure that is.
Permissions issues are common among sites written in PHP / Perl / Python with some CSS and JavaScript, but my observation is that JavaScript-heavy websites, especially those using frameworks such as Zend / Smarty etc. together with jQuery, are the most susceptible to permission security holes such as the classic 777 file permissions. This happens because developers who are overworked and pushed towards deadlines to include new functionality on websites often publish their experimental code on production systems without serious testing, by directly uploading the experimental code via FTP / WinSCP to the production system.

Such scenarios are very common for small and middle sized companies' websites, as well as for many hobbyist developers' websites running on ready CMS platforms such as Joomla and WordPress.
I know pretty well from experience that this is so. Often a lot of the servers where websites are hosted are just shared servers without a dedicated sysadmin, so no routine security audits are made on the server, and a permissions issue might lead to a serious website compromise by a cracker and quickly get your website banned from Google / Yahoo / Ask Jeeves / Yandex and virtually most search engines, for being marked as a spammer or hacked website in some of the multiple website blacklists available nowadays.

Thus it is always a good idea to keep your server files (especially if you're the sysadmin) with restrictive permissions, by making the files owned by the superuser (root), in order to prevent some XSS or a vulnerable PHP / Python / Perl script from allowing an attacker to easily inject and overwrite code on your website.

1. Checking whether you have any all-users read / write / execute permissions with the find command

The first thing to do on your server, to assure you don't have insecurely permissioned files, is:

find /home/user/website -type f -perm 777 -print

You will get some files as output, like:

./www/tpl/images/js/ajax-dynamic-list/js/ajax-dynamic-list.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax_admin.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax_teams.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax-dynamic-list_admin.js
./www/tpl/images/js/ajax-dynamic-list/lgpl.txt
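
Besides the classic 777 check, it can also be worth flagging anything writable by others at all; here is an extra check I'd suggest (the octal -perm -0002 predicate matches any file whose others-write bit is set):

find /home/user/website -type f -perm -0002 -print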

2. Recursively change permissions to read / write / exec for root and read-only for everybody else, and set all files to be owned by the (root) superuser

Then, to fix the messy permissions, a commonly recommended permission set is 744 (e.g. Read / Write / Execute permissions for the file owner and only Read permissions for the group and all other users).
Let's say you want to set permissions to 744 just for all JavaScript (jQuery) files of a website; here is how:

find . -iname '*.js' -type f -print -exec chown root:root '{}' \;
find . -iname '*.js' -type f -print -exec chmod 744 '{}' \;

The first find makes all JavaScript files owned by the root user / group, and the second one sets all their permissions to 744.

To make all files on the server 744 (including JPEG / PNG pictures etc.):

find /home/users/website -type f -print -exec chown root:root '{}' \;
find /home/users/website -type f -print -exec chmod 744 '{}' \;

Redirect www to non www with .htaccess Apache rewrite rule

Thursday, July 2nd, 2015

https://www.pc-freak.net/images/redirect_domain_name_without_changing_url_apache_rewrite_rule_preventing_host_in_ip_mod_rewrite_
Sometimes it happens that a website is indexed in search engines (Google, Yandex, Yahoo, Bing, Ask Jeeves etc.) as www.website-name.com and you want to get rid of the www in the hostname, in favour of just the plain hostname, by means of an Apache .htaccess redirect. I know redirecting www to non-www might seem a bit weird, as usually people want to redirect their non-www domain to point to www, but there is a good reason for that weirdness: if you're a Christian you might dislike the fact that WWW can be read as Waw Waw Waw or Vav Vav Vav letters in Hebrew, which represent 666, the mark of the beast prophesied in the last book of the Holy Bible (Revelation, often also called the Apocalypse), written by Saint John.

Using Apache mod_rewrite via .htaccess is a good way to do the redirect, especially if you're on shared hosting where you don't have direct access to edit the Apache VirtualHost httpd.conf file, but only have access to your user's home public_html directory via, let's say, FTP or SFTP.

To achieve the www to non-www domain URL redirect, just edit .htaccess with the available hosting editor (in case shell SSH access is available) or web interface, or download the .htaccess via FTP / SFTP, modify it and upload it back to the server.

You need to include the following mod_rewrite RewriteCond rules in .htaccess (preferably somewhere near the beginning of the file):
 

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.Your-Website.org [NC]
RewriteRule ^(.*)$ http://Your-Website.org/$1 [L,R=301]
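
A domain-agnostic variant of the same rule is also possible (a sketch using mod_rewrite's %1 backreference to the hostname captured by RewriteCond, so the domain doesn't need to be hardcoded):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]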


As .htaccess is read dynamically by Apache's mod_rewrite module, no Apache webserver restart is required and you should see the effect immediately – hopefully, provided the webhosting doesn't apply some caching with mod_cache and there is no cache expiry setting preventing the new .htaccess from being properly read by the webserver.
Also, in case of troubles, make sure the newly uploaded .htaccess file is properly readable, e.g. has permissions such as 755. If it doesn't work immediately, make sure to clean up your browser cache and check that your browser is not configured to use some caching proxy host (be it visible or transparent).
Besides this, while your search engines will hopefully stop indexing your site with WWW. in front of the domain name, there is a downside to using .htaccess instead of including the rules straight in Apache's VirtualHost configuration: it causes a bit of degraded performance and adds some milliseconds of slowness to serving requests for your domain. Thus, if you're on your own dedicated server and have access to the Apache configuration, implement the www to non-www hostname redirect directly in the VirtualHost, as explained in my prior article here.

 

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

fix-solve-mysql-ibdata-file-size-ibdata1-file-growing-too-large-and-preventing-ibdata1-from-eating-all-your-disk-space-innodb-vs-myisam

If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk-space-low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The excessive ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limitation for ibdata1 except the maximum file size limitation set for the filesystem (and there is no limitation option set in my.cnf), meaning it is quite possible that under certain conditions ibdata1, growing over time, can happily fill up your server's LVM (Storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (I found) is to set the innodb_file_per_table option in my.cnf, to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.
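
To quickly check whether the option is already active on your SQL server (a simple sanity check; on the affected default setups it will report OFF):

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';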
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb based Linux servers the datadir is /var/lib/mysql, so the file is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and the InnoDB storage engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


The next step is to get some understanding of how many InnoDB tables are present within the database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to the InnoDB variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use the InnoDB engine:

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |
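
If you also want to see which InnoDB tables occupy the most space, a helper query can be built on the same information_schema data (sizes are statistics, so treat them as approximate):

mysql> SELECT table_schema, table_name,
    -> ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb'
    -> ORDER BY size_mb DESC LIMIT 10;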


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

Stop any such services first; once they are stopped, the above cmd should return empty output (apart from the grep process itself), i.e. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server.
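
For example, on a Debian server stopping them might look like this (a sketch; adjust to whichever of the services your setup actually runs):

server:~# /etc/init.d/apache2 stop
server:~# /etc/init.d/postfix stop
server:~# /etc/init.d/dovecot stop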

4. Create a backup dump of all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total

 

If you have free space on an external backup server or a remotely mounted attached storage (NFS or SAN), it is a good idea to also make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump): copy the respective directory, which depends on the Linux distro and the install location of the SQL binary files set (in my.cnf).
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM based, substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep cmd line.

Then stop MySQL and make the binary copy; if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the MySQL existing users / passwords, access grants and host permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! – this might damage your user permissions to the databases.
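
To avoid typing each DROP DATABASE statement by hand, the list of statements can also be generated (a sketch; review the generated output carefully before executing any of it):

mysql> SELECT CONCAT('DROP DATABASE `', schema_name, '`;')
    -> FROM information_schema.schemata
    -> WHERE schema_name NOT IN ('mysql','information_schema','performance_schema');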

6. Stop the MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Then delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

MySQL should now be running again (with a freshly recreated, small ibdata1), so the above ps command should show the running mysqld processes; the server needs to be up for the re-import in the next step.
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import went fine; if no errors were experienced, the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate the SQL downtime, import the database instead with:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy the SQL binary files back from /root/mysql-data-backup/ to /var/lib/mysql/.
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, log in with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p
SHOW DATABASES;

Next, let's see the current size of ibdata1, ib_logfile0 and ib_logfile1:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but it will only contain table metadata. Each InnoDB table's data will live outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files
representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

Now this construction will be used for each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer SQL releases.


Now, to make sure your websites are working, take a few of the hosted websites' URLs that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

Apache Webserver disable file extension – Forbid / Deny access in Apache config to certain file extensions for a Virtualhost

Monday, March 30th, 2015


If you're a webhosting company sysadmin like me, you may have already configured directory listing for certain websites / Vhosts whose files are mirrored from another development webserver location. Some of the uploaded developer file extensions which would normally be interpreted or served – such as PHP include files (.inc), .htaccess mod_rewrite rules, .phps, .html or .txt – need to keep working on the dev / test server, but need to be disabled (excluded) from delivery or interpretation for some directory on the prod server.

Open the separate per-host VirtualHost file – or the main Apache config (httpd.conf / apache2.conf) if this should apply to all Vhosts – for the host on which you want to disable certain file extensions, and add inside:

<Directory "/var/www/sploits">
        AllowOverride All

</Directory>

 

Then add extension deny rules such as the following.

For disabling .inc files from being included by other PHP sources:
 

<Files  ~ "\.inc$">
  Order allow,deny
  Deny from all
</Files>


To disable access to the single .htaccess file only:

 

<Files ~ "^\.htaccess">
  Order allow,deny
  Deny from all
</Files>


To disable .txt files from being served by Apache and delivered to the requesting browser:

 

<Files  ~ "\.txt$">
  Order allow,deny
  Deny from all
</Files>

 


To disable any left-intact .html files from being delivered to the client:

 

<Files  ~ "\.html$">
  Order allow,deny
  Deny from all
</Files>
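
Note that the Order / Deny directives above are Apache 2.2 syntax; on Apache 2.4+ they are only provided by mod_access_compat, and the native equivalent uses Require, e.g.:

<Files ~ "\.inc$">
  Require all denied
</Files>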

 

Do it for as many extensions as you need.
Finally, to make the changes take effect, restart Apache as usual:

If on Deb based Linux issue:

/etc/init.d/apache2 restart


On CentOS / RHEL and other Redhats / RedHacks 🙂

/etc/init.d/httpd restart

WordPress Security: Fix WordPress wp-config.php improper permissions to protect your sites from Database password steal / Website deface

Thursday, March 12th, 2015

wordpress-security-Fix-wordpress-wp-config-improper-permissions-to-protect-your-sites-from-Database-pass-steal
Keeping a WordPress site / blog and its installed plugins up-to-date
is essential to prevent an attacker from hacking into your site / database and defacing your site. However, if you're a company providing shell access from a Cpanel / Plesk / Kloxo panel to customers, customers often mess up permissions, leaving important security credential files such as wp-config.php (which stores the user / pass credentials for the connection to MySQL / PostgreSQL) with improper, world-readable permissions such as 666 or 777, while the WordPress recommended permission for wp-config.php is 600. I will skip explaining the difference between file permissions on Linux in detail here, as this is already well described in any Linux book; however I will just recommend extra caution to any shared hosting admin whose WordPress sites are hosted on a Lighttpd / Apache webserver + some kind of backend database.

Hence it is very useful to list all your WordPress sites' wp-config.php files on the server with find, like this:

 

find / -iname 'wp-config.php' -print

 

I find it a generally good practice to also automatically set all wp-config.php permissions to 600 (6 = Read / Write permissions for the file owner only, 0 = no permissions for the owning group, 0 = no permissions for all other users).

If the find command output shows files with permissions such as:
 

ls -al /var/www/wordpress-bak/wp-config.php
-rw-rw-rw- 1 www-data www-data 2654 jul 28  2009 wp-config.php

 

E.g. if a file has 666 permissions (readable and writable by all users), then it is wise to fix this with:
 

chmod 600 /var/www/wordpress-bak/wp-config.php


It is generally a very good practice to run chmod 600 on each and every found wp-config.php file on the server:
 

find / -iname 'wp-config.php' -print -exec chmod 600 '{}' \;


The above command will also print each file whose permissions get set to Read / Write for the owner (this is done with the -print option).

It is a good practice on a shared hosting server to always configure a root cronjob running the above find + chmod command at least once daily (whenever the server hosts 50 – 100 or more wordpress sites).
 

crontab -u root -l | { cat; echo "05 03 * * * find / -iname 'wp-config.php' -print -exec chmod 600 '{}' \;"; } | crontab -


If you don't have the 600 permissions set for all wp-config.php files, this security "backdoor" can be used by any existing non-root user to read the file and break (crack) into your database, and, when deface bot-nets are involved, even to deface all the wordpress sites hosted on your server.

One of my servers with wordpress recently suffered from this small but very important security hole, due to a WordPress site directory backup with improper permissions which allowed anyone to enter the MySQL database, so I guess there are plenty of servers with this hidden vulnerability silently living.

Many thanks to my dear friend Dimitar Paskalev (Nomen) for sharing this vulnerability with me! A very important note to make here is that admins using security enhancement modules such as SuPHP (which makes the Apache webserver run separate website instances under different users) should be careful when changing wp-config.php ownership, as the owner change could make customer WP based websites inaccessible.

Another good security measure to protect your server's WordPress based sites from malicious theme template injections (both for your own hosted wordpress based blog / sites and for a WordPress hosting company) is to install and activate the WordPress Antivirus plugin.

How to SSH client Login to server with password provided from command line as a script argument – Running same commands to many Linux servers

Friday, March 6th, 2015

ssh-how-to-login-with-password-provided-from-command-line-use-sshpass-to-run-same-command-to-forest-of-linux-servers

Admins like me who casually need to administer "forests" (thousands of identically configured Linux servers) usually generate and use RSA / DSA key authentication for passwordless login. However, this is not always possible, as some client environments prohibit RSA / DSA passwordless authentication, so in such environments, to make routine basic rpm / deb package upgrades or do other maintenance patching, it is necessary to use a normal ssh user / pass login. The ssh client, however, doesn't allow the password to be provided non-interactively for security reasons, therefore using some custom bash loop to issue a single command on many servers (such as explained in my previous article) requires you to copy / paste the password at the password prompt multiple times. This is pretty annoying, so if you want to run a single command on all your 500 servers without retyping the password at each prompt, use the sshpass tool (for non-interactive ssh password auth).

SSHPASS official site description:
 

sshpass is a utility designed for running ssh using the mode referred to as "keyboard-interactive" password authentication, but in non-interactive mode.

 

Install sshpass on Debian / Ubuntu (deb based) Linux

sshpass is installable right out of regular repositories so to install run:
 

apt-get install --yes sshpass


Install sshpass on CentOS / Fedora (RPM based) Linux


sshpass is also available across most RPM based distros, so just use the yum package manager:

 

yum -y install sshpass


If it's not available in the standard distro-provided repositories, there should be an RPM on the net for your distro; just download the latest one and use wget and rpm to install it:

 wget -q http://dl.fedoraproject.org/pub/epel/6/x86_64/sshpass-1.05-1.el6.x86_64.rpm

 rpm -ivh sshpass-1.05-1.el6.x86_64.rpm

 

How Does SshPass Work?

 

Normally the openssh (ssh) client binary uses a direct TTY (/dev/tty; TTY is an abbreviation for TeleTYpewriter, or what admin jargon calls physical console access) instead of the standard remotely defined /dev/pts virtual PTY.
To get around this, sshpass runs ssh in a dedicated TTY to emulate that the password is indeed issued by an interactive keyboard user, thus fooling the remote sshd server into thinking the password
is provided by an interactive user.


SSHPass use

The very basic standard use, which allows you to pass the password from the command line, is like this:
 

sshpass -p 'Your_Password_Goes_here123' ssh username@server.your-server.com


Note that if the server you're working on is shared with other developers, they might be able to steal your username / password by using a simple process list command such as:
 

 ps auxwwef


In my case security is not a hot issue, as I'm the only user on the server (the only concern might be if someone hacks into the server 🙂 ).
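
If the process-list leak is a concern, sshpass can also read the password from a file (-f) or from the SSHPASS environment variable (-e), so it never appears as a command-line argument; a short sketch (the file path is just an example):

# password stored in a file (keep the file chmod 600)
sshpass -f /root/.sshpass_file ssh username@server.your-server.com

# or via the SSHPASS environment variable
export SSHPASS='Your_Password_Goes_here123'
sshpass -e ssh username@server.your-server.com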

 

Then, assuming that you have a plain text file with all your administered servers, you can easily use sshpass in a bash script loop in order to run, let's say, a package upgrade across all identical Linux version machines:
 

while read line; do
sshpass -p 'Your_Password_Goes_here123' ssh username@$line "apt-get update && apt-get upgrade && apt-get dist-upgrade" < /dev/null;
done < all_servers_list.txt

Change the command you would like to issue across all machines by editing the "apt-get …" string.
The above command can be used to keep all Debian stable server packages up2date. What you do on the servers is up to your imagination; a very common use of the above line would be to check the uptime / netstat command output across all your network servers:

 

while read line; do
sshpass -p 'Your_Password_Goes_here123' ssh username@$line "uptime; who; netstat -tunlp; " < /dev/null;
done < all_servers_list.txt

 


As you can guess, SshPass is a Swiss army knife for admins who need to automate things with scripts simultaneously across a number of servers.
 

Happy SSH-ing 🙂


Speed up WordPress / Joomla CMS and MySQL server on Linux with tmpfs ram file system / Decrease Website pageload times with RAM caching

Wednesday, March 4th, 2015

speed-up-accelerate-wordpress-joomla-drupal-cms-and-mysql-server-with-tmpfs_ramfs_decrease-pageload-times-with-ram-caching
As a WordPress blog owner and a sysadmin who has to deal with servers running a lot of WordPress / Joomla / Drupal and other custom CMS installations, some performing slowly or big enough to put a significant load on the servers,
and because I love efficiency and hardware cost saving is essential for my daily job, I'm constantly trying to find new ways to optimize customer websites (WordPress) and the rest of the sites, in order to better utilize our servers and improve our clients' site speed (and hence their satisfaction).

There are plenty of little things to do on servers, but probably among the most crucial ones we use nowadays, which saves us a lot of money, is tmpfs (and earlier ramfs, previously known as shmfs).
TMPFS (Temporary File Storage Facility) is a Linux kernel technology based on ramfs (used by the Linux kernel initrd / initramfs at boot time, in order to load and store the Linux kernel in memory before the system hard disk partition file systems are mounted), which is heavily used by virtually all modern popular Linux distributions.

ramfs and its compressed variation cramfs (Compressed ROM filesystem) have been used to store various system environment kernel and desktop components of many Linux environments / applications, and are used by a lot of Linux BootCDs, such as the most famous (Klaus Knopper's) KNOPPIX LiveCD and Trinity Rescue Kit Linux (TRK uses /dev/shm, which btw can be seen on most modern Linux distros and is actually just another mounted tmpfs).
If you haven't tried a Live Linux yet, try it out, as me and a lot of sysadmins out there use some kind of Live Linux at least a few times on a yearly basis to recover unbootable Linux servers after some applied remote updates, as well as for rescuing (saving) data from a Linux server failing to properly boot because of hard disk (bad block) failures. As I said earlier, TMPFS is also used on almost any distribution for the /dev/ filesystem, which is kept in memory.

You can see which tmpfs partitions are used on your Linux server with:

 

debian-server:~# mount |grep -i tmpfs
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)

 

Above is an output from a standard Debian Linux server. On CentOS 7 standard mounted tmpfs are as follows:

 

[root@centos ~]# mount |grep -i tmpfs
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1016332k,nr_inodes=254083,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)

 

[root@centos ~]# df -h|grep -i tmpfs
devtmpfs                 993M     0  993M   0% /dev
tmpfs                   1002M   92K 1002M   1% /dev/shm
tmpfs                   1002M  8.8M  993M   1% /run
tmpfs                   1002M     0 1002M   0% /sys/fs/cgroup

The /run tmpfs mounted directory is also to be seen on the latest Ubuntus and Fedoras, and is actually the good old /var/run (where applications keep their pids and some small app related files), stored in a tmpfs filesystem kept in memory.

If you're wondering what /dev/shm is and why it appears mounted on every single Linux server / desktop you've ever used: it is a special shared memory filesystem which running programs (processes) can use to transfer data quickly and efficiently between each other, to prevent slow disk swapping. People using Linux for the last 15 years should remember /dev/shm has been a target of a lot of kernel exploits, as historically it had a lot of security issues.

While writing this article I've just checked on KNOPPIX's development; just for info, as of the time of writing this distro already has 1000+ programs on the CD version and 2600+ packages / applications on the DVD version.
Nowadays Knoppix is used mostly as a USB Live Flash drive, as a lot of people are dropping CD / DVD use (many servers don't have a CD / DVD drive), and for USB Live Flash Linux distros tmpfs is also a key technology used, as this gives the end user an amazingly fast experience (desktop applications run much faster on Live USBs when tmpfs is used than when slow 7200 RPM HDDs are used).

Loading big parts of the distribution into RAM (with tmpfs, from Linux kernel 2.4+ onwards) is also heavily used by a lot of cluster vendors in most clustered (cloud) Linux based environments, because TMPFS often gives speed improvements of up to 30x and greatly decreases HDD I/O. FreeBSD users will be happy to know that TMPFS has already been ported and can be used from FreeBSD 7.0+ onward.

In this small article I will give example uses of how I employ tmpfs to speed up our WordPress websites, which use WP caching plugins such as W3 Total Cache (W3TC), WP Super Cache and Hyper Cache disk caching, with a MySQL server as a database backend.
The example below is WordPress specific, but it can easily be applied to Joomla, Drupal or any other CMS out there that uses a MySQL server and makes a lot of CPU-expensive, memory-hungry (LEFT JOIN) queries which end up hitting a slow 7200 RPM hard disk.


 

1. Preparing tmpfs partitions for WordPress File Cache directory
 

If you want to give tmpfs a test drive, I recommend you try to create / mount a small partition, say 256 Megabytes (as in the examples below). To create a tmpfs partition you don't need a tool like mkfs.ext3 / mkfs.ext4, as TMPFS is in reality a virtual filesystem mapped into the server's physical RAM (volatile memory). TMPFS is very nice because if you run out of free RAM, the system uses a combination of RAM plus some hard disk SWAP.
The great thing about TMPFS is that it never grows beyond its configured size, so a filled-up TMPFS partition will not halt your server; instead you will start getting the usual "Insufficient Disk Space" errors, just like on a physical HDD partition. RAMFS treats the server's memory much less carefully than TMPFS, as RAMFS is the historically older implementation.

ramfs file systems cannot be limited in size like a disk based file system, which is limited by its capacity; thus a ramfs will continue using memory storage until the system runs out of RAM and likely crashes or becomes unresponsive. This is a problem if the application writing to the file system cannot be limited in total size, so in my opinion you'd better stay away from RAMFS unless you have a good idea of what you're doing. Another disadvantage of RAMFS compared to TMPFS is that you cannot see the size of the file system in df; it can only be estimated by looking at the cached entry in free.

Note that before proceeding to use TMPFS or RAMFS you should know that, besides the advantages, there is a serious disadvantage: if the server using tmpfs (in RAM) to store files crashes, the customer might lose data. Therefore, on production servers, RAM filesystems are best used just for caching folders which are regularly synchronized (with rsync) to some on-disk folder, to assure no data will be lost on a server reboot or crash.

Memory-backed fast storage areas are ideally suited for applications which repetitively need small data areas for caching or temporary space, such as Jira (Issue and Project Tracking Software) indexing. As the data is lost when the machine reboots, the tmpfs-stored data must not be data of high importance, as even scheduled backups cannot guarantee that all the data will be replicated in the event of a system crash.

To test-mount a tmpfs virtual (memory stored) filesystem, issue (creating the /mnt/tmpfs mountpoint first if it doesn't exist):
 

mount -t tmpfs tmpfs -o size=256m /mnt/tmpfs


If you want to test mount a ramfs instead:

 

 mount -t ramfs -o size=256m ramfs /mnt/ramfs

 

debian-server:~#  mount |grep -i -E "ramfs|tmpfs"
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /mnt/tmpfs type tmpfs (rw,size=256m)
ramfs on /mnt/ramfs type tmpfs (rw,size=256m)

 

Once mounted, tmpfs can be used in the same way as any ext4 / reiserfs filesystem. In the same way, to make mounts permanent it's necessary to add a line to /etc/fstab.
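
For instance, to make the earlier test partition survive reboots, an example /etc/fstab line (assuming the /mnt/tmpfs mountpoint from the mount command above) might look like:

tmpfs /mnt/tmpfs tmpfs defaults,size=256m 0 0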

To better illustrate a tmpfs use case: my blog runs WordPress with the W3TotalCache (W3TC) plugin, whose cache folder is /var/www/blog/wp-content/w3tc, and I want to take advantage of tmpfs to store the w3tc files.

a) Stop Apache

On Debian
 

debian-server:~# /etc/init.d/apache2 stop


On CentOS 
 

[root@centos ~]# /etc/init.d/httpd stop


b) Move w3tc dir to w3tc-bak

 

debian-server:~# cd /var/www/blog/wp-content/
debian-server:~# mv w3tc w3tc-bak

 

c) Create w3tc directory
 

debian-server:/var/www/blog/wp-content# mkdir w3tc
debian-server:/var/www/blog/wp-content# chown -R www-data:www-data w3tc


d) Add tmpfs record to /etc/fstab

My W3TC cache didn't grow bigger than 2 Gigabytes, so I create a 2 Gig partition for it by adding the following to /etc/fstab:
 

debian-server:~# vim /etc/fstab

 

tmpfs /var/www/blog/wp-content/w3tc tmpfs defaults,size=2g,noexec,nosuid,uid=33,gid=33,mode=1755 0 0


You might also want to add the nr_inodes option to the tmpfs mount. nr_inodes is the maximum number of inodes for the instance; the default is half the number of your physical RAM pages (or, on a machine with highmem, the number of lowmem RAM pages). A common option that should work is nr_inodes=5k; if you're unsure what this option does you can safely skip it 🙂

e) Mount new added tmpfs folder

Then to mount the newly added filesystem issue:
 

mount -a


If you're on a CentOS / RHEL server, use the httpd Apache user instead, wherever you have the docroot and WordPress installed:

 

[root@centos ~]# chown -R apache:apache w3tc


If you're using Apache SuPHP, use whatever UID / GID is proper.

On CentOS you will need to set the proper UID and GID (UserID / GroupID); to find out which ones to use, check in /etc/passwd:
 

[root@centos ~]# grep -i apache /etc/passwd
apache:x:48:48:Apache:/var/www:/sbin/nologin


f) Move old w3tc cache from w3tc-bak to w3tc

 

debian-server:/var/www/blog/wp-content# mv w3tc-bak/* w3tc/

 

g) Start again Apache

On Debian:

 

debian-server:~# /etc/init.d/apache2 start

 


On CentOS:
 

[root@centos~]# /etc/init.d/httpd start

h) Keeping w3tc cache site folder synced

As I said earlier, the biggest problem with caching (the reason why many hosting providers and site admins refuse to use it) is that you might lose some data; to prevent data loss, or at least mitigate it to a few minutes' interval, it is a good idea to synchronize tmpfs-kept folders to disk with rsync.

To achieve that use a cronjob like this:
 

debian-server:~# crontab -u root -e
*/5 * * * * /usr/bin/ionice -c3 -n7 /usr/bin/nice -n 19 /usr/bin/rsync -ah --stats --delete /var/www/blog/wp-content/w3tc/ /backups/tmpfs/cache/ 1>/dev/null


Note that you will need the /backups/tmpfs/cache folder to exist; create it with:

 

debian-server:~# mkdir -p /backups/tmpfs/cache


You will also need to add a rsync synchronization from the backed-up folder back to the tmpfs (in case the server gets accidentally rebooted because it hanged, or after a power outage); place in

/etc/rc.local

 

ionice -c3 -n7 nice -n 19 rsync -ahv --stats --delete /backups/tmpfs/cache/ /var/www/blog/wp-content/w3tc/ 1>/dev/null


(somewhere before the exit 0 line)
 


 

 

2. Preparing tmpfs partitions for MySQL server temp File Cache directory


It is common for MySQL servers to have to serve a lot of long and heavy SQL JOIN queries, mostly issued by related-posts WP plugins such as Zemanta Related Posts and Contextual Related Posts. Though MySQL can be well optimized with mysqltuner (tuning primer), SQL servers still often create a lot of temporary tables on disk – about 25% to 30% of all SQL queries somehow use the HDD to be served – and as this is very slow and creates file locks, the overall MySQL performance becomes sluggish at times. To fix (resolve) that, without playing with the SQL code to optimize the slow queries, the best way I found is to use TMPFS as the MySQL temp folder.

To do so I create a TMPFS, usually of size 256 MB, because this is usually enough for us; other hosting companies might want to add a bigger virtual temp disk:

a) Add tmpfs new dir to /etc/fstab

In /etc/fstab, add the record below with the vim editor:
 

debian-server:~# vim /etc/fstab

 

tmpfs /var/mysqltmp tmpfs rw,gid=111,uid=108,size=256M,nr_inodes=10k,mode=0700 0 0

 

Note that the uid 108 and gid 111 are taken again from /etc/passwd

On Debian

debian-server:~# grep -i mysql /etc/passwd
mysql:x:108:111:MySQL Server,,,:/var/lib/mysql:/bin/false


On CentOS
 

[root@centos ~]# grep -i mysql /etc/passwd
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash


b) Create the folder /var/mysqltmp, or wherever you want to place the tmpfs memory-kept SQL temp folder

 

debian-server:~# mkdir /var/mysqltmp
debian-server:~# chown mysql:mysql /var/mysqltmp

 

debian-server:~# mount|grep -i tmpfs
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /var/www/blog/wp-content/w3tc type tmpfs (rw,noexec,nosuid,size=2g,uid=33,gid=33,mode=1755)
tmpfs on /var/mysqltmp type tmpfs (rw,gid=111,uid=108,size=256M,nr_inodes=10k,mode=0700)


c) Add the new path of the tmpfs kept folder to my.cnf

Then edit /etc/mysql/my.cnf:

 

debian-server:~# vim /etc/mysql/my.cnf

[mysqld]
#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /var/mysqltmp

 

On CentOS, edit and change tmpdir the same way within /etc/my.cnf.


d) Finally restart Apache and MySQL to make mysql start using the newly set tmpfs memory-kept folder

On Debian:
 

debian-server:~# /etc/init.d/apache2 stop; /etc/init.d/mysql restart; /etc/init.d/apache2 start

On CentOS:
 

[root@centos ~]# /etc/init.d/httpd stop; /etc/init.d/mysqld restart; /etc/init.d/httpd start


Now monitor your server and check your pagespeed increase. For me such an optimization usually improves site performance so much that the site becomes around 50% faster; to see the difference, test your website before and after applying the tmpfs caching using the Google PageSpeed Insights online test.

Though this example is for MySQL and WordPress, you can easily adopt the same for Joomla (if you have Joomla caching enabled to some folder), and the same goes for any other CMS, such as Drupal, that can make use of disk caching. Actually, it's a small secret of many hosting providers that allow clients to create sites via CPanel and Kloxo that these tmpfs optimizations are already applied to the sites, and thereby the provider is able to offer a better website service at lower prices. VPS hosting providers also use heavy caching. A lot of people also use TMPFS to accelerate sites that have Google PageSpeed enabled as a cacher and accelerator, as the PageSpeed module puts a heavy HDD I/O load that can easily stone the server. Many admins also choose to use TMPFS for the /tmp, /var/run and /var/lock directories, as this often leads to a significant improvement in overall server service operation.
Once you have tmpfs enabled, it is a good idea to periodically monitor your tmpfs and SWAP usage (with df -h and free), because if you allocate tmpfs partitions bigger than your physical memory and tmpfs's full size starts to be used, your machine will start swapping heavily and this could have a very negative performance effect.
 

debian-server:~# df -h|grep -i tmpfs
tmpfs            3,9G     0   3,9G   0% /lib/init/rw
tmpfs            3,9G     0   3,9G   0% /dev/shm
tmpfs            2,0G  1,4G   712M  66% /var/www/blog/wp-content/w3tc
tmpfs            256M     0   256M   0% /mnt/tmpfs
tmpfs            256M  236K   256M   1% /var/mysqltmp

The applications of tmpfs to accelerate services are up to your imagination, so I will be glad to hear from other admins about any interesting applications or problems faced while using TMPFS.

 Enjoy! 🙂