Posts Tagged ‘cnf’

No space left on device with free disk space / Why no space left on device while there is plenty of disk space on drive – Running out of Inodes

Tuesday, November 17th, 2015

[Image: Unix / Linux file system inode diagram]

 

On one of the servers I administer, the websites started showing MySQL database table corruption errors like:
 

 

Table './database_name/site_news_list_com' is marked as crashed and last (automatic?) repair failed

The server runs Oracle MySQL Server Community Edition (stable) on Debian GNU / Linux 6.0, so I first thought the server crashed either due to some bug in MySQL or because some PHP cron job did something messy. To fix the crashed tables I tried the mysqlcheck tool, which has helped me many times before with database / table corruption. I've run the following set of mysqlcheck commands as root (superuser) in a bash shell after logging in through SSH:


server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log


For the above commands to work, I've created /root/.my.cnf containing my root (mysql CLI) username and password, e.g. the file has content like below:

 

[client]
user=root
password=MySecretPassword8821238

 

Btw, a good note here: it is generally a good idea (if you want consistent mysql databases) to execute these checks automatically via a cron job several times a month. I have the following in root's crontab:

 

crontab -u root -l |grep -i mysqlcheck
04 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
07 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
12 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log
17 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log


Strangely, I got a lot of errors that some .MYI / .MYD / .frm temp files, necessary for the mysql table recovery, can't be written inside /home/mysql/database_name.

That was pretty weird and I thought there might be some issue with permissions causing the inability to write, due to some bug or something, so I went straight and checked the /home/mysql/database_name permissions, e.g.:

 

server:/home/mysql/database_name# ls -ld soccerfame
drwx------ 2 mysql mysql 36864 Nov 17 12:00 soccerfame
server:/home/mysql/database_name# ls -al1|head -n 10
total 1979012
drwx------ 2 mysql mysql 36864 Nov 17 12:00 .
drwx------ 36 mysql mysql 4096 Nov 17 11:12 ..
-rw-rw---- 1 mysql mysql 8712 Nov 17 10:26 1_campaigns_diez.frm
-rw-rw---- 1 mysql mysql 14672 Jul 8 18:57 1_campaigns_diez.MYD
-rw-rw---- 1 mysql mysql 1024 Nov 17 11:38 1_campaigns_diez.MYI
-rw-rw---- 1 mysql mysql 8938 Nov 17 10:26 1_campaigns.frm
-rw-rw---- 1 mysql mysql 8738 Nov 17 10:26 1_campaigns_logs.frm
-rw-rw---- 1 mysql mysql 883404 Nov 16 22:01 1_campaigns_logs.MYD
-rw-rw---- 1 mysql mysql 330752 Nov 17 11:38 1_campaigns_logs.MYI


As seen from the above output, the permissions were perfect, so it had to be something else. I decided to try to create a random file with the touch command inside the /home/mysql/database_name directory:

 

touch /home/mysql/database_name/somefile-to-test-writtability.txt
touch: cannot touch '/scr1/data/somefile-to-test-writtability.txt': No space left on device


Then, logically, I thought the ext4 partition mounted on /home/mysql/ got filled because of the crashed SQL database or a bug, so I checked with the disk free command df whether there is enough space on the server:

server:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 20G 7.6G 11G 42% /
udev 10M 0 10M 0% /dev
tmpfs 13G 1.3G 12G 10% /run
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md2 256G 134G 110G 55% /home

Well, that's weird? Obviously only 55% of the available disk space is used and 110G are still free, which was more than enough, so I was totally puzzled why files can't be written.

Then, very logically, I thought it might be that the /home directory has been remounted read-only because the SSD disk on the server is failing, and checked for errors in dmesg, i.e.:

 

server:~# dmesg|grep -i error


I also checked how exactly the partition was mounted, to see whether it is (RO) read-only:

 

server:~# mount -l|grep -i /home
/dev/md2 on /home type ext4 (rw,relatime,discard,data=ordered)


Now everything became even weirder, as obviously the disk kept claiming no space left on device, while in reality there was plenty of disk space.

Then, after a quick research on the internet for no space left on device with free disk space, I came across this great superuser.com thread which made me realize the partition had run out of inodes: no new file inodes could be assigned and therefore the Linux kernel was refusing to write the file on the ext4 partition.

For those who haven't heard of filesystem inodes, here is a quick quote from Wikipedia:

 

In a Unix-style file system, the inode is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.[1] Filesystem object attributes may include manipulation metadata (e.g. change,[2] access, modify time), as well as owner and permission data (e.g. group-id, user-id, permissions).[3]
Directories are lists of names assigned to inodes. The directory contains an entry for itself, its parent, and each of its children.


Once I understood it was the inodes, I checked how many of them were occupied with:

 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 17006592 0 100% /home


You see, there were 0 (zero) free file inodes on the server, and that was the reason for no space left on device while there was actually free disk space.

To clean up (free) some inodes on the partition, the first thing I did was to delete all old logs inside /home and files I positively knew were not necessary, and then to find which directories were allocating most of the used inodes:

 

server:~# find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n


If you're on a regular old-fashioned IDE hard drive and not an SSD, or you have too many files inside, this command will take really long…

Therefore a better solution might be to first:

a) Try to find the top-level folders with a large inode count:

for i in /home/*; do echo $i; find $i |wc -l; done


You should get output like:

 

/home/new_website
606692
/home/common
73
/home/pcfreak
5661
/home/hipo
33
/home/blog
13570
/home/log
123
/home/lost+found
1

b) Then, once you know the directory allocating most inodes, run the command again to see which sub-directories hold most of the files eating the partition's inodes:

 

for i in /home/webservice/*; do echo $i; find $i |wc -l; done
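On newer systems a possibly quicker alternative (assuming GNU coreutils 8.22 or later, which ships the --inodes flag for du) is to let du count the inodes per directory directly:

server:~# du --inodes -x /home | sort -n | tail -20    # --inodes needs GNU coreutils 8.22+

The -x flag keeps du on the /home filesystem, and the last lines of the sorted output are the directories holding the most inodes.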

 

One usual large folder which could free you some inodes is the Linux kernel source / headers tree, but in my case it was simply a lot of tiny old logs written on the system for a few years without cleaning.
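If old kernel header packages are indeed the culprit on a Debian / Ubuntu box, something along these lines can list and then purge the unused ones – a hedged example only; review the list first and keep the headers matching your running kernel (uname -r):

server:~# dpkg -l 'linux-headers-*' | grep ^ii    # see which header packages are installed
server:~# apt-get autoremove --purge              # removes packages no longer needed, review the prompt before confirming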

After deleting the log dirs and cache folder in my case /home/new_website/{log,cache}:

server:~# rm -rf /home/new_website/log/*
server:~# rm -rf /home/new_website/cache/*

 

 

a) Then stop the Apache webserver, to prevent Apache from using the MySQL databases while the database repair is running, and restart MySQL:
 

server:~# /etc/init.d/apache2 stop
..
server:~# /etc/init.d/mysql restart
..


b) And re-issuing MySQL Check / Repair / Optimize database commands:
 

 

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'`>> /var/log/cronwork.log

c) And finally starting the Apache Webserver again:
 

server:~# /etc/init.d/apache2 start


Some inodes got freed up:
 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 16797196 209396 99% /home


And hooray, by God's Grace and with the help of the prayers of the Most Holy Theotokos (Virgin) Mary, the websites started working again!

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

[Image: Fix MySQL ibdata1 file growing too large – InnoDB vs MyISAM]

If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The incremental ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limit for ibdata1 except the maximum file size limitation of the filesystem (and there is no limiting option set in my.cnf), meaning it is quite possible that under certain conditions ibdata1 can grow over time and happily fill up your server LVM (storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (I found) is to set the innodb_file_per_table option in my.cnf to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.
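A quick way to check whether the option is already active on a running server is a small sketch using the mysql cli – adjust the credentials to your own setup:

server:~# mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"

If the value comes back OFF, the steps below apply.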
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb based Linux servers the datadir is /var/lib/mysql, so the file is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


The next step is to get an idea how many InnoDB tables exist within the database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to the InnoDB variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use the InnoDB engine:

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above command should return empty output (i.e. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).
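To stop them in one go, a hedged helper loop like the one below can be used – the service names are just examples, adjust the list to whatever init scripts are actually installed on the host:

server:~# for srv in apache2 nginx postfix dovecot; do /etc/init.d/$srv stop; done    # adjust service names to your setup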

4. Create Backup dump all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total

 

If you have free space on an external backup server or a remotely mounted filesystem (NFS or SAN storage), it is a good idea to also make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump) by copying the respective data directory, whose location depends on the Linux distro and the datadir set in my.cnf.
To check where the MySQL database files are stored (see datadir in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM based, substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep command line.

If you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the MySQL users / passwords, access grants and host permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! – this would damage your user permissions / grants to the databases.
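If there are many databases, the DROP statements can be generated instead of typed by hand – a sketch only; review the generated file carefully before feeding it back to mysql (performance_schema, where present, is also excluded):

server:~# mysql -u root -p -Nse "SHOW DATABASES;" | \
  grep -vE '^(mysql|information_schema|performance_schema)$' | \
  while read db; do echo "DROP DATABASE \`$db\`;"; done > /root/drop_databases.sql    # inspect before running

Only after inspecting /root/drop_databases.sql run it with mysql -u root -p < /root/drop_databases.sql.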

6. Stop MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

If MySQL started cleanly, the above ps command should now show the mysqld process running again, and fresh (much smaller) ibdata1 / ib_logfile files will have been recreated under the datadir.
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import went fine; if no errors were experienced, the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate SQL downtime, import the database with:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy back the binary database files from /root/mysql-data-backup/ to /var/lib/mysql/.
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, login with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p
SHOW DATABASES;

Next let's see the current size of ibdata1, ib_logfile0 and ib_logfile1:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but it will only contain table metadata. Each InnoDB table will exist outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

From now on this layout applies to each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer SQL releases.
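To quickly confirm that the per-table files are really being created after the re-import, listing the .ibd files under the datadir should show one per InnoDB table – a small check assuming the default Debian datadir /var/lib/mysql:

server:~# find /var/lib/mysql -maxdepth 2 -name '*.ibd' | head    # one .ibd per InnoDB table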


Now, to make sure your websites are working, take a few of the hosted website URLs that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

How to completely disable Replication in MySQL server 5.1.61 on Debian GNU / Linux

Monday, July 16th, 2012

[Image: MySQL – disable replication]

Some time ago, on one of the MySQL database servers, I configured replication as it was required for some tests. Eventually it turned out replication would not be used (for some reason it was too slow and not fitting our company needs), hence we needed to disable it.

It seemed logical to me that simply removing any replication related directives from my.cnf and restarting the SQL server would be enough to turn replication off on the Debian Linux host. Therefore I proceeded to remove all replication configs from /etc/mysql/my.cnf and issued a MySQL restart, i.e.:

sql-server:~# /etc/init.d/mysql restart
....

This, however, didn't turn off replication as I thought, and in the phpMyAdmin web frontend interface replication still appeared to be active in the replication tab.

Something was still making the SQL server act as a Replication Slave host, so after a bit of pondering and trying to remember the exact steps I took to make replication work on the host, I remembered that I had actually issued:

mysql> START SLAVE;

Afterwards I ran:

mysql> SHOW SLAVE STATUS;
....

and found the server was still running in Slave replication mode.

Hence, to stop the DB host from running as a Slave, I had to issue in the mysql cli:

mysql> STOP SLAVE;
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> RESET SLAVE;
Query OK, 0 rows affected, 1 warning (0.01 sec)

Then, after a reload of the SQL server, the host finally stopped acting as a Slave replication host, e.g.:

sql-server:~# /etc/init.d/mysql restart
....

After the restart, to reassure myself the SQL server is no longer set to run as a MySQL replication Slave host:

mysql> SHOW SLAVE STATUS;
Empty set (0.00 sec)
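For reference, newer MySQL releases (5.5.16 and later) also offer RESET SLAVE ALL, which additionally clears the stored master connection parameters; on 5.1 the plain RESET SLAVE shown above is what's available:

mysql> RESET SLAVE ALL; -- MySQL 5.5.16+ only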

Cheers 😉

‘host-name’ is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Sunday, May 20th, 2012

[Image: MySQL – host-name blocked because of many connection errors]
My home machine's MySQL server was suddenly down as I tried to check my blog and other sites today; the error I saw while trying to open this blog, as well as other hosted sites using MySQL, was:

Error establishing a database connection

The topology where this error occurred is simple, I have two hosts:

1. Apache version 2.0.64, compiled with support for external PHP script interpretation via libphp – the host runs FreeBSD

2. A Debian GNU / Linux squeeze running MySQL server version 5.1.61

The Apache host is assigned a local IP address 192.168.0.1 and the SQL server is running on a host with IP 192.168.0.2

To diagnose the error I logged in to 192.168.0.2, and weirdly the mysql server appeared to be running just fine:
 

debian:~# ps ax |grep -i mysql
31781 pts/0 S 0:00 /bin/sh /usr/bin/mysqld_safe
31940 pts/0 Sl 12:08 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
31941 pts/0 S 0:00 logger -t mysqld -p daemon.error
32292 pts/0 S+ 0:00 grep -i mysql

Moreover, I could connect to the SQL server on localhost with mysql -u root -p and it seemed to run fine. The error Error establishing a database connection meant that either something was messed up with the database or MySQL port 3306 on 192.168.0.2 was not properly accessible.

My first guess was something is wrong due to some firewall rules, so I tried to connect from 192.168.0.1 to 192.168.0.2 with telnet:
 

freebsd# telnet 192.168.0.2 3306
Trying 192.168.0.2…
Connected to jericho.
Escape character is '^]'.
Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

Right after the telnet was initiated, as shown in the above output, the connection was immediately closed with the error:

Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

In the error, 'webserver' is the hostname set on my Apache machine. The error clearly states the problems with the 'webserver' Apache host being unable to connect to the SQL database are due to 'many connection errors', and a fix is suggested with mysqladmin flush-hosts.

To temporarily solve the error and restore normal connectivity between the Apache and the SQL servers, I had to issue on the SQL host:

mysqladmin -u root -p flush-hosts
Enter password:

Though this temporary fix restored accessibility to the databases and hence resolved the website errors, it doesn't guarantee that I wouldn't end up in the same situation in the future, therefore I looked for a permanent fix to the issue once and for all.

The permanent fix consists in changing the default value set for max_connect_errors in /etc/mysql/my.cnf, which by default is quite low (just 10 on MySQL 5.1). Therefore, to raise the variable value, I added in my.cnf under the [mysqld] section:

debian:~# vim /etc/mysql/my.cnf
...
max_connect_errors=4294967295

and afterwards restarted MySQL:

debian:~# /etc/init.d/mysql restart
Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld.
Checking for corrupt, not cleanly closed and upgrade needing tables..

To make sure the assigned max_connect_errors=4294967295 is never reached due to Apache-to-SQL connection errors, I've also added a cronjob:

debian:~# crontab -u root -e
00 03 * * * mysqladmin flush-hosts

In the cron entry I have omitted the mysqladmin -u root -p (user/pass) input options because, for convenience, I have already stored the mysql root password in /root/.my.cnf.

Here is how /root/.my.cnf looks:

debian:~# cat /root/.my.cnf
[client]
user=root
password=a_secret_sql_password

Now hopefully this will permanently solve SQL's 'failure to accept connections' due to too many connection errors for the future.

Create SSH Tunnel to MySQL server to access remote filtered MySQL port 3306 host through localhost port 3308

Friday, February 27th, 2015

[Image: Create an SSH tunnel to a MySQL server to access remote filtered port 3306 over secure SSH traffic]
On our Debian / CentOS / Ubuntu Linux and Windows servers we're running multiple MySQL servers and our customers sometimes need to access these servers.
This is usually a problem because the MySQL DB servers are running in a DMZ zone behind a strong firewall, and besides that, for security reasons the SQL servers are configured to only listen for connections coming from localhost; I mean, in the config files across our Debian Linux servers and CentOS / RHEL Linux machines (/etc/mysql/my.cnf and /etc/my.cnf) the setting for bind-address is 127.0.0.1:
 

[root@centos ~]# grep -i bind-address /etc/my.cnf 
bind-address            = 127.0.0.1
##bind-address  = 0.0.0.0


For source code developers who are accessing development SQL servers only through a VPN-secured DMZ network, there are a few MySQL servers with remote access allowed from all hosts, e.g. on those I have configured:
 

[root@ubuntu-dev ~]# grep -i bind-address /etc/my.cnf 

bind-address  = 0.0.0.0


However, though clients insisted on having remote access to their MySQL databases, since this is pretty insecure we decided not to configure the MySQL servers to listen on all available IP addresses / network interfaces.
MySQL access is allowed only through phpMyAdmin, accessible via the client's web interface, which on some servers is cPanel and on others Kloxo (an open source cPanel-like, very nice webhosting platform).

For some stubborn clients who wanted mysql CLI and MySQL desktop client access, to be able to easily analyze their databases with desktop clients such as MySQL Workbench, there is a "hackers"-like workaround: create and use an SSH tunnel to the SQL server from their local Windows PCs, using the standard OpenSSH Linux client from Cygwin, MobaXterm (which already comes with the SSH client pre-installed and has an easy GUI interface to create SSH tunnels), or eventually PuTTY's Plink (command line interface) to create the tunnel.

Anyways, the preferred and recommended (easiest) way to achieve a tunnel between MySQL and the local PC (no matter whether a Windows or Linux client system) is to use the standard ssh client and the below command:
 

ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com


By default the SSH tunnel will stay open for about 3 minutes and will automatically close if unused. To get around this issue you might want to raise it (let's say to 15 minutes). To do so, the user has to add in:
 

~/.ssh/config

ServerAliveInterval 15
ServerAliveCountMax 4


Note that sometimes, even though the ssh tunnel timeout value is raised, it might not take effect if there is some NAT (Network Address Translation) with a low timeout setting at the firewall level. If you face constant SSH tunnel timeouts you can use the below few lines of bash code to auto-respawn the SSH tunnel connection (Windows users can use MobaXterm or install the cygwin bash shell package in advance):
 

while true
do
  ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com
  sleep 15
done
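An arguably cleaner alternative to the respawn loop, assuming the autossh package is installed on the client side, is to let autossh keep the tunnel alive:

autossh -M 0 -o ServerAliveInterval=10 -o ServerAliveCountMax=3 -N -L 3308:localhost:3306 your-server.your-domain.com    # requires autossh installed on the client

With -M 0 autossh relies on the ServerAlive options to detect a dead tunnel and re-establish it automatically.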


Below is a MySQL Workbench screenshot connected through the server where this blog is located, after establishing the ssh tunnel to the remote mysql server via port 3308 on localhost:

[Screenshot: MySQL Workbench – database analysis and management GUI tool, convenient for data migration and queries]

There is also another alternative way to access remote firewall-filtered mysql servers without running complex commands to set up a tunnel, which we recommend for clients (SQL developers / SQL designers): using HeidiSQL (a useful tool for webdevelopers who have to deal with MySQL and MSSQL hosted DBs).

[Screenshot: HeidiSQL showing the host processlist]

To connect to remote MySQL server through a Tunnel using Heidi:

[Screenshot: HeidiSQL MySQL connection configuration]

 

In the 'Settings' tab:

1. In the dropdown list of 'Network type', select SSH tunnel

2. Hostname/IP: localhost (even though you are connecting remotely)

3. Username & Password: your mysql user and password

Next, in the 'SSH Tunnel' tab:

1. Specify the path to plink.exe (download it first if you don't already have it)

2. Host + port: the remote IP of your SSH server (which should be the MySQL server as well), port 22 if you haven't changed anything

3. Username & password: your SSH username (not the MySQL user)

 

[Screenshot: HeidiSQL SSH tunnel connection configuration]
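For those preferring to drive PuTTY's plink directly instead of HeidiSQL's built-in tunnel dialog, roughly the following command line should open the same kind of tunnel – a sketch only, substitute your own SSH user and server name:

REM substitute sshuser and your-server.your-domain.com with your own SSH login and host
plink.exe -ssh -N -L 3308:127.0.0.1:3306 sshuser@your-server.your-domain.com

Once the tunnel is up, point the MySQL client at localhost port 3308 just like with the ssh examples above.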
 

Linux: List last 10 (newest) and 10 oldest modified files in a directory with ls

Tuesday, April 8th, 2014

A useful thing on GNU / Linux sometimes is to list the most recently or oldest modified files in a directory.

Let's say you want to list the last 10 modified files with ls from today / yesterday. Here is how:
 

ls -1t | head -10
my.cnf
wordperss_enabled_plugins.txt
newcerts/
mysql-hipo_pcfreakbiz.dump
NewArchive-Jan-10-15.zip
hipo_pcfreakbiz-mysqldb-any-out-1389361234.tgz
Tisho_Snimki/
wordpress/
wp-cron.php?doing_wp_cron=1.1
wp-cron.php

 

To list the 10 oldest modified files on Linux:

 

ls -1t | tail -10
    my.cnf
    pcfreak_sql-15_10_05_2012
    mysql-tuning-primer*
    tuning-primer.sh*
    system-administration-services.html
    blog_backup_15_07_2012.tar.gz
    www-files/
    dump.sql
    courier-imap*
    djbdns-1.05.tar.gz


Cheers 😉

Fix MySQL connection error – Host ” is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Wednesday, July 2nd, 2014

[Image: Fix MySQL too many connection errors explained]

If you get a MySQL error like:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

This most likely means that your PHP / Java (or whatever programming language) application connecting to MySQL is failing to authenticate with the created (existing) application user, or that the application is opening connections to MySQL at a rate the MySQL server can't serve.

Some common causes of too many connection errors are:
 

  • Networking Problem
  • Server itself could be down
  • Authentication Problems
  • Maximum Connection Errors allowed.

The value of the max_connect_errors system variable determines how many successive interrupted connection requests to the mysql server are permitted.
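On MySQL 5.6 and later you can also inspect the per-host error counters directly via the performance_schema.host_cache table – treat this as an optional extra check, it is not available on the older 5.x releases discussed here:

mysql> SELECT IP, HOST, SUM_CONNECT_ERRORS FROM performance_schema.host_cache; -- available on MySQL 5.6+ only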
 

Well anyways if you get the:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

You can consider this a sure sign that application connections to MySQL are logging a lot of connection errors, for some reason.
This error could also appear on very busy websites where a high number of separate connections are used – I've seen the error occur on PHP websites where mysql_pconnect(); is selected in favour of the proven-working mysql_connect();.

The first thing to do, before changing / increasing the default number of allowed connection errors, is to check what max_connect_errors is currently set to in MySQL.

For that connect with MySQL CLI and issue:
 

mysql> SHOW VARIABLES LIKE '%error%';


+--------------------+----------------------------+
| Variable_name      | Value                      |
+--------------------+----------------------------+
| error_count        | 0                          |
| log_error          | /var/log/mysql//mysqld.log |
| max_connect_errors | 10000                      |
| max_error_count    | 64                         |
| slave_skip_errors  | OFF                        |
+--------------------+----------------------------+


A very useful mysql cli command when debugging the max connection errors reached problem is:

mysql> SHOW PROCESSLIST;

 

To solve the error, try to tune the max_connect_errors and wait_timeout variables in the [mysqld] section of /etc/my.cnf, /etc/mysql/my.cnf or wherever my.cnf is located. Some reasonable values would be:

[mysqld]
max_connect_errors = 100000
wait_timeout = 60

If such an (anyway) high value is still not high enough, you can raise max_connect_errors even further, e.g. to:

max_connect_errors = 100000000

Also, if you want to raise the max_connect_errors variable without making it permanent (i.e. the setting will not survive a MySQL service restart), set it from the MySQL cli with:

SET GLOBAL max_connect_errors = 100000;


If you want to keep the default max_connect_errors and just fix the problem temporarily, you can follow the error

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

suggestion and issue in root console:

mysqladmin flush-hosts

The same can also be done from the MySQL cli with:
 

FLUSH HOSTS;

MySQL SSL Configure Howto – How to Make MySQL communication secure

Wednesday, January 15th, 2014

[Image: MySQL over SSL – how to configure SSL on a MySQL server]

Recently I've been asked how to make communication to a MySQL database encrypted. The question was raised by a fellow developer who works on a standalone desktop application in the Delphi programming language with DevArt (an SQL connection component capable of connecting Delphi applications to multiple databases like MySQL, Oracle, PostgreSQL, Interbase, Firebird etc.).

Communicating with the MySQL database in a secured form is not a common task, as MySQL usually talks to applications hosted on the same server, the applications communicating with MySQL sit in a secured DMZ, or the database is administered via the phpMyAdmin web interface.

MySQL supports encrypted connections using Secure Socket Layer (SSL) encryption. Setting up the MySQL db to communicate encrypted is a must for standalone desktop applications which have to extract / insert data via remote SQL.
Configuring SQL to support encrypted queries is supported by default and easily configured on most standard Linux distributions (Debian, RHEL, Fedora) with no need to recompile it.
1. Generate SSL Certificates

$ mkdir /etc/mysql-ssl && cd /etc/mysql-ssl

# Create CA certificate
$ openssl genrsa 2048 > ca-key.pem
$ openssl req -new -x509 -nodes -days 3600 \
         -key ca-key.pem -out ca-cert.pem

# Create server certificate, remove passphrase, and sign it
# server-cert.pem is the public key, server-key.pem is the private key
$ openssl req -newkey rsa:2048 -days 3600 \
         -nodes -keyout server-key.pem -out server-req.pem

$ openssl rsa -in server-key.pem -out server-key.pem
$ openssl x509 -req -in server-req.pem -days 3600 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# Create client certificate, remove passphrase, and sign it
# client-cert.pem is the public key and client-key.pem is the private key
$ openssl req -newkey rsa:2048 -days 3600 \
         -nodes -keyout client-key.pem -out client-req.pem

$ openssl rsa -in client-key.pem -out client-key.pem
$ openssl x509 -req -in client-req.pem -days 3600 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

After generating the certificates, verify them:

$ openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
 

2. Add SSL support variables to my.cnf

Once the SSL key pair files are generated, in order to activate SSL encryption support in the MySQL server, add to the [mysqld] section of /etc/my.cnf, /etc/mysql/my.cnf, /usr/local/etc/my.cnf … or wherever the config is, depending on the distro:

# SSL
ssl-ca=/etc/mysql-ssl/ca-cert.pem
ssl-cert=/etc/mysql-ssl/server-cert.pem
ssl-key=/etc/mysql-ssl/server-key.pem

3. Restart MySQL server

/etc/init.d/mysqld restart
...
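After the restart it's worth verifying the server actually picked the certificates up – have_ssl should report YES (if it says DISABLED, the certificate paths or permissions are usually to blame):

mysql> SHOW VARIABLES LIKE 'have_ssl';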

4. Create SQL user to require SSL login

Create a new user with access to a database:

GRANT ALL ON Sql_User_DB.* TO Sql_User@localhost;
FLUSH PRIVILEGES;

To create a user with administrator privileges that is required to connect over SSL:

GRANT ALL PRIVILEGES ON *.* TO 'ssluser'@'%' IDENTIFIED BY 'pass' REQUIRE SSL;
FLUSH PRIVILEGES;

5. Test SSL Connection with MySQL CLI client or with few lines of PHP

To use mysql cli for testing whether SSL connection works:

$ mysql -u ssluser -p'pass' --ssl-ca=/etc/mysql-ssl/ca-cert.pem --ssl-cert=/etc/mysql-ssl/client-cert.pem --ssl-key=/etc/mysql-ssl/client-key.pem

Once connected to MySQL, verify the SSL connection works fine:

mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

If you get this output this means MySQL SSL Connection is working as should.

An alternative way is to use a short test-mysqli-ssl.php script to test connectivity to mysql over SSL:

<?php
$conn=mysqli_init();
mysqli_ssl_set($conn, '/etc/mysql-ssl/client-key.pem', '/etc/mysql-ssl/client-cert.pem', NULL, NULL, NULL);
if (!mysqli_real_connect($conn, '127.0.0.1', 'ssluser', 'pass')) { die(); }
$res = mysqli_query($conn, 'SHOW STATUS like "Ssl_cipher"');
print_r(mysqli_fetch_row($res));
mysqli_close($conn);

Note: change the username and password according to your own user / pass before using the script.

That's all – now you have MySQL communicating query data over SSL.

 

swap_pager_getswapspace: failed, MySQL troubles on FreeBSD 7.2 cause and solution

Tuesday, May 3rd, 2011

Every now and then my FreeBSD router's dmesg ( /var/log/dmesg.today ) log gets filled with error messages like:

pid 86369 (httpd), uid 80, was killed: out of swap space
swap_pager_getswapspace(14): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(11): failed
swap_pager_getswapspace(12): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(14): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(8): failed

Using swapinfo while the swap_pager_getswapspace(16): failed messages were being logged, I figured out that swap memory over-use was definitely the bottleneck causing the trouble. To find this, I used the command:

freebsd# swapinfo
Device 1K-blocks Used Avail Capacity Type
/dev/ad0s1b 49712 45920 3792 92% Interleaved

After some investigation, I figured out that the MySQL server was causing the kernel swap exhaustion troubles.

My current MySQL server version is installed from the ports tree, where I'm using the bsd port /usr/ports/databases/mysql51-server/, and it appears to work just fine.

However, I have noticed that the mysql-server is missing a my.cnf file!, which means the mysql server is running with some kind of default configuration.

Strangely, in the system process list it appeared to be using a my.cnf file located in /var/db/mysql/my.cnf.

Below you see the paste from the ps command:

freebsd# ps axuww | grep -i my.cnf | grep -v grep
mysql 7557 0.0 0.1 3464 1268 p1 I 12:03PM 0:00.01 /bin/sh /usr/local/bin/mysqld_safe --defaults-extra-file=/var/db/mysql/my.cnf --user=mysql --datadir=/var/db/mysql --pid-file=/var/db/mysql/pcfreak.pid
mysql 7589 0.0 5.1 93284 52852 p1 I 12:03PM 0:59.01 /usr/local/libexec/mysqld --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local --datadir=/var/db/mysql --user=mysql --pid-file=/var/db/mysql/pcfreak.pid --port=3306 --socket=/tmp/mysql.sock

Nevertheless, it appeared the SQL server was referencing /var/db/mysql/my.cnf while the file did not exist! This was really weird for me, as I'm used to having a default my.cnf from my previous experience with Linux servers!

Thus the next logical thing I did was to create a my.cnf config file in order to have a proper, limiting configuration for the SQL server.

The FreeBSD my.cnf skeleton files are found in /usr/local/share/mysql/; here are the files one can use as a starting basis for further configuration of the mysql-server:

freebsd# ls -al /usr/local/share/mysql/my-*.cnf
-r--r--r-- 1 root wheel 4948 Aug 12 2009 /usr/local/share/mysql/my-huge.cnf
-r--r--r-- 1 root wheel 20949 Aug 12 2009 /usr/local/share/mysql/my-innodb-heavy-4G.cnf
-r--r--r-- 1 root wheel 4924 Aug 12 2009 /usr/local/share/mysql/my-large.cnf
-r--r--r-- 1 root wheel 4931 Aug 12 2009 /usr/local/share/mysql/my-medium.cnf
-r--r--r-- 1 root wheel 2502 Aug 12 2009 /usr/local/share/mysql/my-small.cnf

I have chosen to use my-medium.cnf as a skeleton to tune up, as my server is not a high-iron one, e.g. the host I run mysql on is a simple dual core 1.2Ghz system.

Further on I copied the /usr/local/share/mysql/my-medium.cnf to /var/db/mysql/my.cnf e.g.:

freebsd# cp -rpf /usr/local/share/mysql/my-medium.cnf /var/db/mysql/my.cnf

As a next step, to properly tune up the default values of the newly copied my.cnf for my specific server, I used the Tuning-Primer MySQL tuning script.

Using tuning-primer.sh is really easy, as all I did was download it, launch it and follow the script's suggestions to correct some of the values already in my.cnf.
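Getting and running it boils down to something like the following – the download location has moved around over the years (day32.com was the historic home), so treat the URL as an assumption and adjust if it has moved:

freebsd# fetch http://day32.com/MySQL/tuning-primer.sh    # historic URL – adjust if the script has moved
freebsd# chmod +x tuning-primer.sh
freebsd# ./tuning-primer.sh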

Following the tuning-primer.sh recommendations, I finally ended up with a my.cnf optimized for the mysql server on my BSD host.

Now I really hope the shitty swap_pager_getswapspace: failed errors won't haunt me once again by crashing my server and causing memory overheads.

Still, I wonder why the port developer Alex Dupre – ale@FreeBSD.org – chose not to provide the default mysql51-server port with some kind of my.cnf file? I hope he had a good reason.

How to make the mysql root user login passwordless and interactive with the mysql cli

Wednesday, June 29th, 2011

[Image: MySQL logo – passwordless root login with .my.cnf]

I'm accessing the mysql servers via localhost with the mysql cli on a daily basis.
With time I've figured out that it's pretty unhandy to always login with my root mysql password, I mean to enter it each time, e.g.:

root@mysql-server:~# mysql -u root
Enter password:
...

Thus, to make my life way easier, I decided to store my mysql root password in order to allow my root admin user to login to the mysql server without being asked for a password. This saves time and nerves, as I don't have to look up the password file where I store my server's mysql root pass.

To allow the mysql cli interface to login passwordless to the SQL server, I had to create the file /root/.my.cnf, readable only by my root user, and store my MySQL username and password there.

Here is a sample /root/.my.cnf file:

root@mysql-server:~# cat /root/.my.cnf
[client]
user="root"
password="mysecretMySQLPasswordgoeshere"

Now, the next time I use the mysql console interface to access my mysql server I don't have to supply the password; here is how much easier the mysql login is afterwards:

root@mysql-server:~# mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3520
Server version: 5.0.77 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

The only downside of using .my.cnf to permanently store the mysql server root user and password is the security standpoint.
If, for instance, somebody roots my servers where I have stored my root user / passwords in .my.cnf, he will immediately be able to get access to the MySQL server.

Another possible security flaw with using the mysql passwordless login "trick" is if somebody forgets to set proper file permissions on .my.cnf.

Once again, the file should possess permissions of:

root@mysql-server:~# ls -al /root/.my.cnf
-rw------- 1 root root 90 Apr 2 00:05 /root/.my.cnf
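If the permissions are any looser than that, tightening them is a single command:

root@mysql-server:~# chmod 600 /root/.my.cnf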

Any other permissions might allow non-privileged users to read the file and gain unauthorized admin access to the SQL server.