Archive for April, 2015

How to copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes

Monday, April 27th, 2015


In a Web application data migration project, I came across a situation where I had to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, hence I couldn't use sshfs to mount the remote host's directory over SSH and copy the files as if they were local ones.

As this is a data migration project, it was however necessary to find some way to migrate the data. The normal way companies do it is to copy the data to an external hard disk and ship it via some country's postal service, or to send an employee to the data center to attach the SAN to the new server where the data is being migrated. In my case neither was possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully, there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host, from which you can reach via the SSH protocol both Server1 (from where data is migrated) and Server2 (to where data will be migrated).

If the hopping / jump server from which you're allowed to access Linux servers Server1 and Server2 is not Linux, and you're missing an SSH client and don't have rights on the Windows host to install anything, just use portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, where tar is used to archive the directory / directories containing my hundreds of gigabytes; the output is passed via the UNIX pipe mechanism to another SSH session opened to server2, where tar is used a second time to unarchive the streamed content (GNU tar strips the leading '/', so the archive extracts as a relative somedir/ path under the directory you cd into). The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%


P.S. If you don't have pv installed, install it either with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or with yum on CentOS / Fedora / RHEL etc.:

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of the pv manual to give you a better idea of what it does:

NAME
       pv - monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that with very big file transfers, using pv will delay the data transfer somewhat, because everything has to pass through two more pipes; however, for file transfers of up to a few gigabytes it's really nice to include it.
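If the transfer is really huge and you'd rather trade the progress bar for throughput, a hedged variant is to drop pv and use a light gzip compression level so the CPU doesn't become the bottleneck (a sketch reusing the hypothetical /somedir/ path from above):

jump-host:~$ ssh server1 "tar cf - /somedir/ | gzip -1" | ssh server2 "gzip -d | tar xf - -C /"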

If you only need to transfer a huge .tar.gz archive and you don't care about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, don't want to put the overhead of encrypting the data on both systems, and you have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and forward the .tar.gz content via netcat's port like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345


Another way to transfer large data without a direct connection between server1 and server2, but with a connection to a third host PC, is to use rsync and good old SSH Tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"
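Whichever method you use, it's a good idea to sanity-check the transfer afterwards; a quick hedged check (again with the hypothetical /somedir/ path) is to compare file counts and overall size on both ends from the jump host:

jump-host:~$ ssh server1 "find /somedir/ -type f | wc -l; du -sh /somedir/"
jump-host:~$ ssh server2 "find /somedir/ -type f | wc -l; du -sh /somedir/"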

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

I recently had a task to transfer some huge, long-stored Application server data (about 70GB after being archived) between an old Linux host server and a new one, where a new Tomcat Application (Linux) server will be installed to cope with the increased sites accessibility (the old server hardware being overloaded).

The two systems are in a paranoid DMZ network and have no access to each other via SSH / FTP / FTPS, and not even Web access on port 80 (or SSL – 443) between the two hosts, so in order to move the data I had to use a third HOP station, a Windows (server) which has a huge SAN network attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I'm using MobaXterm, so I have the basic UNIX commands such as sftp / scp available on the Windows system through it.
Thus, to transfer the Chronos Tomcat application's archived .tar.gz files, I sftp-ed into the Linux host and used the get command to retrieve them, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network seemed to have a network shaper limiting my get / secured SCP download to about 2.5 MBytes/sec, so the overall file transfer seemed to require about 08:30 hours to complete. As it was the middle of the day, about 13:00, and my work day ends at 18:00 (meaning I could keep the file retrieval session open for a maximum of 5 hrs), the file transfer would get cancelled when I logged out of the HOP station (after 18:00). However, I had already let the file transfer run for 2 hrs and about 23% of the file had been retrieved, so I wondered whether SCP / SFTP protocol file downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself, however none of them has a resume option.

Then I thought for a while about what I could use to continue the interrupted download, and I remembered that good old rsync (versatile remote and local file copying tool), which I often use to build customer backup strategies, has the ability to resume partially downloaded files. I wondered whether this partial file resume would only work if the transfer had been initiated through rsync itself, and luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the file retrieval (rsync can continue a transfer that was cancelled / failed due to network problems or user activity). It turned out to be pretty easy to continue the failed download from where it was interrupted; I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


the --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode (equals -rlptgoD (no -H,-A,-X)), and the -v option shows a file transfer percentage status line and an average estimated time for the transfer to complete. An easier to remember rsync resume is like so:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work the remote Linux system must have the rsync (package) installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is an rsync port for Windows too, so to resume large Giga or Terabyte file archive downloads easily between two Windows hosts use cwRsync.
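Since -P keeps the partial file around, on a flaky link you can also simply re-run rsync until it succeeds; below is a minimal hedged retry-loop sketch (reusing the hypothetical host and path from the examples above):

while ! rsync -avP username@Linux-server.net:/path/to/file . ; do
    echo "transfer interrupted, retrying in 60 seconds ..."
    sleep 60
done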

Windows add / delete services command sc.exe create/delete – Create or Delete services

Thursday, April 9th, 2015

If you end up in a situation where you need to delete or create services from currently existing ones with a batch script, because the service exists on multiple servers and you want to automate / deploy it via a batch (.bat) script, you will have to use the sc.exe tool – the Service Controller Command Tool.

SC.EXE is the standard Windows console tool to create and delete services. If you want to create a new service, you can use the "create" parameter.
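For example, a tiny hedged .bat sketch for mass re-deployment could first delete any existing instance of the service and then recreate and start it (ApacheService and the httpd.exe path are the same example values used in the sections below):

@echo off
rem remove any old service definition, then recreate and start it
sc.exe delete ApacheService
sc.exe create ApacheService binPath= "C:\local\apache\bin\httpd.exe -k runservice" DisplayName= "Apache Server"
sc.exe start ApacheService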

1. Deleting Service with Windows Service Controller Tool

Deleting a service is very easy to do; you just have to know the name of the service you would like to no longer be listed or automatically started:

sc delete SERVICE_NAME_TO_BE_REMOVED


Below is a real example of how to remove an Apache webserver service configured to auto-start:

 

sc.exe delete ApacheService

[SC] DeleteService SUCCESS

2. Create / Add a service to auto start on Windows

Before running the sc.exe create command with arguments for the service, you will need to know the following info:

  • What will be the Service Name: a single word to name the new service.
  • Full Binary Path: the path where the executable program for the new service is currently located.
  • Display Name for the service: a short name for the new service.

 

C:\Windows>sc.exe create ApacheService binPath= "C:\local\apache\bin\httpd.exe -k runservice" DisplayName= "Apache Server"

 

[SC] CreateService SUCCESS


Once the service is added and showing in the services.msc GUI, you can start the newly added Apache service with:

 

C:\Windows>sc.exe start ApacheService

 

SERVICE_NAME: ApacheService
   TYPE               : 10  WIN32_OWN_PROCESS
   STATE              : 2  START_PENDING
                      (NOT_STOPPABLE,NOT_PAUSABLE,IGNORES_SHUTDOWN)
   WIN32_EXIT_CODE    : 0  (0x0)
   SERVICE_EXIT_CODE  : 0  (0x0)
   CHECKPOINT         : 0x2
   WAIT_HINT          : 0x7530
   PID                : 2552
   FLAGS              :


To stop the ApacheService or any other service:

 

C:\Windows>sc.exe stop ApacheService

SERVICE_NAME: ApacheService
   TYPE               : 10  WIN32_OWN_PROCESS
   STATE              : 3  STOP_PENDING
                      (NOT_STOPPABLE,NOT_PAUSABLE,IGNORES_SHUTDOWN)
   WIN32_EXIT_CODE    : 0  (0x0)
   SERVICE_EXIT_CODE  : 0  (0x0)
   CHECKPOINT         : 0x4
   WAIT_HINT          : 0x7530


To get the status of a running service, let's say the recently started ApacheService:

C:\Windows>sc.exe query ApacheService

 

SERVICE_NAME: ApacheService
   TYPE               : 10  WIN32_OWN_PROCESS
   STATE              : 4  RUNNING
                      (STOPPABLE,NOT_PAUSABLE,IGNORES_SHUTDOWN)
   WIN32_EXIT_CODE    : 0  (0x0)
   SERVICE_EXIT_CODE  : 0  (0x0)
   CHECKPOINT         : 0x0
   WAIT_HINT          : 0x0


As you can see from the above output, the STATE of the service is RUNNING. If the service did not exist as an installed service, you would get an error like:

[SC] EnumQueryServicesStatus:OpenService FAILED 1060:
The specified service does not exist as an installed service.
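In a batch deployment script this case can be caught via the exit code, as sc.exe sets a non-zero ERRORLEVEL on failure (a hedged cmd sketch with the same example service name):

sc.exe query ApacheService >nul 2>&1
if errorlevel 1 echo ApacheService is not installed on this host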

 

Enjoy sc-ing 🙂

Apache Webserver: No space left on device: Couldn’t create accept lock /var/lock/apache2/accept.lock – Fix

Wednesday, April 8th, 2015

Apache-http-server-no-space-left-on-device-semaphores-quotes-hard-disk-space-resolve-fix-howto
If all of a sudden your Apache webserver crashes and refuses to start up when you manually try to restart it through its init script (on Debian Linux servers – /etc/init.d/apache2, and on RPM based ones: /etc/init.d/httpd), the logs are the first place to check.

Checking in php_error.log there were no errors shown related to loading PHP modules, however apache's error.log showed the following errors:

[Wed Apr 08 14:20:14 2015] [error] [client 180.76.5.149] client denied by server configuration: /var/www/sploits/info/trojans_info/tr_data/y3190.html
[Wed Apr 08 14:20:39 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:20:39 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.15974) (5)
[Wed Apr 08 14:25:39 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:25:39 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16790) (5)
[Wed Apr 08 14:27:03 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:27:03 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16826) (5)
[Wed Apr 08 14:27:53 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:27:53 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16852) (5)
[Wed Apr 08 14:30:48 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:30:48 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17710) (5)
[Wed Apr 08 14:31:21 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:31:21 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17727) (5)
[Wed Apr 08 14:32:40 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 08 14:32:40 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17780) (5)
[Wed Apr 08 14:38:32 2015] [warn] pid file /var/run/apache2.pid overwritten -- Unclean shutdown of previous Apache run?

As you can read, the most likely reason behind the above errors preventing Apache from starting is that /var/run/apache2.pid and the accept lock under /var/lock/apache2/ cannot be properly written, either due to lack of disk space or due to a disk quota set for users, including the userID with which Apache is running.

The first thing I did was, of course, to see how much free space there is on the server:

df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol       4.0G  1.7G  2.2G  44% /
udev                           7.8G  204K  7.8G   1% /dev
tmpfs                           24G     0   24G   0% /dev/shm
/dev/sda1                      486M   40M  422M   9% /boot
/dev/mapper/vg00-lv_crashdump 1008M   34M  924M   4% /crashdump
/dev/mapper/vg00-homevol       496M   26M  445M   6% /home
/dev/mapper/vg00-lv_opt         12G  1.4G  9.9G  13% /opt
/dev/mapper/vg00-tmpvol        2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-varvol        7.9G  609M  6.9G   8% /var
/dev/mapper/vg00-crashvol      1.9G   35M  1.8G   2% /var/crash
/dev/mapper/vg00-auditvol      124M  5.6M  113M   5% /var/log/audit
/dev/mapper/vg00-webdienste     60G   12G   48G  19% /webservice

 

As visible from the above df command output, there is enough free disk space, so this is definitely not the issue.

Then I checked whether there are quotas enabled on the Linux server; the repquota command shows there are no quotas enabled:

# repquota / /var/
repquota: Mountpoint (or device) / not found or has no quota enabled.
repquota: Mountpoint (or device) /var not found or has no quota enabled.
repquota: Not all specified mountpoints are using quota.

 

So obviously the only remaining possible reasons for Apache failing to start when invoked via its init script are either leftover tainted semaphores, or some server hardware RAM problem / a dying hard disk with bad blocks.

So what are Semaphores? Generally speaking, semaphores are an apparatus for conveying information by means of signals between applications (something like sockets). They're used for communication between the active processes of a certain application. In the case of Apache, they're used to communicate between the parent and child processes; hence if Apache can't properly write down and coordinate these things, it can't communicate properly with all of the processes it starts, the main HTTPD process can't properly spawn its children, and the webserver is prevented from entering "started mode" and writing its PID file.

To check general information about system semaphore arrays there is the ipcs -s command; however, my experience is that ipcs -a is more useful (because it generally lists all kinds of IPC facilities), including Semaphore Arrays and Shared Memory Segments, which are the most likely to be causing the problem.

ipcs -a

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
0x00000000 22970368   www-data   600        1

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

As you can see, in my case there is a Semaphore Array which had to be cleaned to make Apache2 able to start again.
 

To clean all leftover semaphores (arrays) preventing Apache from starting properly, use the below one-liner bash loop:
 

for i in `ipcs -s | awk '/www-data/ {print $2}'`; do (ipcrm -s $i); done


Note that the above for loop is specific to Debian; on CentOS / Fedora / RHEL and other Linux distros the username under which stuck semaphores might remain will be apache or httpd.

Depending on the user with which the Apache Webserver is running, run the above loop like so:

For RPM based distros (CentOS / RHEL):

 

for i in `ipcs -s | awk '/apache/ {print $2}'`; do (ipcrm -s $i); done


For other distros such as Slackware or FreeBSD or any custom compiled Apache webserver:

for i in `ipcs -s | awk '/httpd/ {print $2}'`; do (ipcrm -s $i); done


If there are also leftover Shared Memory Segments, you can remove them with ipcrm, i.e.:

ipcrm -m 0x63637069


An alternative way to get rid of leftover uncleaned semaphores is with xargs:
 

ipcs -s | grep nobody | awk '{ print $2 }' | xargs -n 1 ipcrm -s


Even though this fixed the issue, I wanted to understand whether my problems were due to exceeding the semaphore limits; to check the default number of semaphores set at the Linux kernel level, as well as a few semaphore related values, run the below sysctl:

sysctl -a | egrep 'kernel.sem|kernel.msgmni'
kernel.msgmni = 15904

kernel.sem = 250        32000   32      128


As you can see, the maximum number of semaphores is quite large, so in my case the failure because of leftover semaphores was most likely due to some kind of cracker / automated bot scanner attack, someone trying something malicious against the webserver, some kind of Apache bug, or an enormously high load the server faced.
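If you ever do find a system genuinely exhausting these limits, the semaphore limits can be raised on the fly with sysctl; a hedged sketch (the doubled values below are purely illustrative, not tuning advice):

sysctl -w kernel.sem="500 64000 64 256"
echo 'kernel.sem = 500 64000 64 256' >> /etc/sysctl.conf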

Adding another level of security to your shared Debian Linux webhosting server with SuPHP

Tuesday, April 7th, 2015


There are plenty of security schemes and strategies you can implement if you're a shared web hosting company sysadmin; however, probably the most vital one is to install the SuPHP module on the Apache + PHP webserver.

# apt-cache show suphp-common|grep -i descrip -A 4

Description: Common files for mod suphp Suphp consists of an Apache module (mod_suphp for either Apache 1.3.x or Apache 2.x) and a setuid root binary (suphp) that is called by the Apache module to change the uid of the process executing the PHP interpreter to the owner of the php script.

So what SuPHP actually does is run the PHP scripts of the separate CPanel / Kloxo etc. users with the separate username and group id permissions of the corresponding user present in the /etc/passwd, /etc/shadow files; thus in case someone hacks one of the many customer sites, he would only be able to write files and directories under the user with which the security breach occurred.

On servers where SuPHP is not installed, all system users are using the same UserID / GID to run PHP executable scripts under their separate domain VirtualHosts, coinciding with the Apache one (on Debian / Ubuntu – uid, gid www-data; on CentOS / RHEL / Fedora etc. – user apache), so once one site is defaced / exploited by a worm, all or most of the server's websites might end up infected with a Web Virus / Worm silently running in the background and trying to exploit even more sites of the same type. This is a very common scenario, as currently there are dozens of PHP / CSS / JavaScript / XSS vulnerabilities exploited on VPS and shared hosting servers, due to customers failing to update their own CMS scripts / websites (Joomla, WordPress, Drupal etc.) and the lack of resources to regularly monitor all customer activities / websites.

Therefore installing the SuPHP Apache module is an essential one to install on the new servers of large hosting providers, as it saves the admin a lot of headaches from malware spreading across all of the server's hosted sites.
Some VPS admins that are security freaks tend to also install the SuPHP module together with many chrooted Apache / LiteSpeed / Nginx webservers, each of them running in a separate jailed environment.

Of course, using SuPHP, besides giving an improved security layer to the webserver, has its downsides, such as increased load on the server and PHP scripts being interpreted a little bit slower than with plain Apache + PHP; but the performance difference while running a site on top of SuPHP is often not so drastic, so you can live with it.

Installing SuPHP on Debian / Ubuntu servers is a piece of cake; just run, as the root superuser, the usual:
 

# apt-get install libapache2-mod-suphp


Once installed, the only thing left to do is to turn off the default installed Apache PHP module (compiled without SuPHP support) and restart the Apache webserver:
 

# a2dismod php5 …

# /etc/init.d/apache2 restart


To test that SuPHP is properly working on the Apache webserver, go into the DocumentRoot of one of the many websites hosted on the server

and create a new file called test_suphp.php with the below content:

# vim test_suphp.php
<?php
system('id');
?>

Then open http://whatever-website/test_suphp.php in a browser; assuming the system() function is not disabled for security reasons in php.ini, you should get a User ID / Group ID bigger than the reserved system IDs on GNU / Linux, e.g. UID / GID > 99.

It's also a good idea to take a look into the SuPHP configuration file /etc/suphp/suphp.conf and tailor the options to your liking.

If the different hosted client users' home directories are under the /home directory, set in suphp.conf:

;Path all scripts have to be in

docroot=/home/


Also, it is usually a good idea to set:

umask=0022 
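Put together, the relevant part of /etc/suphp/suphp.conf might then look like the hedged sketch below (min_uid / min_gid are standard suphp.conf options; the exact values are illustrative assumptions):

[global]
;Path all scripts have to be in
docroot=/home/
;Umask to set, specify in octal notation
umask=0022
;Minimum UID allowed to execute scripts
min_uid=100
;Minimum GID allowed to execute scripts
min_gid=100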

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015


If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro shipped my.cnf). The excessive ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limitation on ibdata1 except the maximum file size limitation set by the filesystem (and there is no limitation option set in my.cnf), meaning it is quite possible that under certain conditions ibdata1 can, over time, happily fill up your server's LVM (Storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (I found) is to set the innodb_file_per_table option in my.cnf to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.
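To quickly check whether a running server already has the option enabled, query the variable from the mysql cli:

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';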
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb based Linux servers the datadir is /var/lib/mysql, so the file to check is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


Next step is to get some understanding about how many existing InnoDB tables are present within Database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to the InnoDB variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use the InnoDB engine:

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above cmd should return empty output (i.e. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).

4. Create Backup dump all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total

 

If you have free space on an external backup server or remotely mounted attached storage (NFS or SAN Storage), it is a good idea to also make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump); copy the respective directory depending on the Linux distro and the install location of the SQL binary files set (in my.cnf).
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM based, substitute /etc/mysql/my.cnf in the above grep cmd line with /etc/my.cnf.

Then, if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the existing MySQL users / passwords, access grants and host permissions):

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! – because this might damage your user permissions to databases.
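If there are many databases to drop, a hedged helper is to let MySQL generate the DROP statements for you, to be reviewed carefully before pasting them back in (performance_schema, where present, is excluded too):

mysql> SELECT CONCAT('DROP DATABASE `', schema_name, '`;') AS drop_stmt
    -> FROM information_schema.schemata
    -> WHERE schema_name NOT IN ('mysql','information_schema','performance_schema');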

6. Stop MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in the future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

MySQL should now be running again with a freshly created (small) ibdata1 and new ib_logfile0 / ib_logfile1, so the above ps command should show the running mysqld processes.
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import will go fine, and if no errors are experienced the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate the SQL downtime, import the database instead with:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy the SQL binary files back over from /root/mysql-data-backup/ to /var/lib/mysql/.
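A hedged sketch of that restore path (assuming the backup location from step 4; the mysql-failed-import directory name is just an example):

server:~# /etc/init.d/mysql stop
server:~# mv /var/lib/mysql /var/lib/mysql-failed-import
server:~# cp -rpfv /root/mysql-data-backup /var/lib/mysql
server:~# chown -R mysql:mysql /var/lib/mysql
server:~# /etc/init.d/mysql start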
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, login with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p
SHOW DATABASES;

Next, let's see what the current size of ibdata1, ib_logfile0 and ib_logfile1 is:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but it will only contain table metadata; each InnoDB table's data will live outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files
representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

From now on, the layout will be like that for each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in the newer SQL releases.


Now, to make sure your websites are working, take a few of the hosted websites' URLs that use any of the imported databases and just browse.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂