Posts Tagged ‘SQL’

Creating multi-part ZIP archives in Linux with the 7zip command to transfer large files in parts

Monday, December 1st, 2014

Recently, I've blogged on how to move large files from a source to a destination server in parts over slow / restricted networks, or whenever the transfer medium is limited in size. This is not a common scenario, but it happens, so if you're an admin sooner or later you will need it. In that post I gave an example with UNIX's split and unrar. However, split's file naming can drive you insane (in case you don't want to use the csplit command, which splits a file into sections determined by context lines instead), plus the normal split Linux / *nix command supports neither compression nor encryption. On the other hand, on many company internal networks with Windows server hosts (2003, 2008, 2012), WinRar is often not installed for security purposes, so you might need to transfer the file in parts between the GNU / Linux server and the Windows server in the ZIP format that Windows supports out of the box. Assuming you have root (admin) access to the Linux host, you can archive your file in parts (with optional ZIP password protection) using 7zip.

1. Installing 7zip on CentOS / Fedora / RHEL and other Red Hat based Linux distributions

If the Linux server is running Fedora / CentOS / RHEL and you don't have 7zip installed yet, install it with:

yum -y install p7zip

Depending on the distro version the package name could be slightly different; if p7zip is not found, search for the right package with:

yum search p7zip

and install whatever you need
 

2. Installing 7zip on Debian / Ubuntu and other Debian based servers

apt-get install --yes p7zip-full

Just like with Fedora, depending on the Deb based distro the p7zip-full package might not be available under that name; check which 7zip package your distro version ships with:

apt-cache search p7zip

 

3. Archiving ZIP file in multiple (sized) parts on GNU / Linux

7z a -v512m Large-file-separated-in-multi-parts.zip Large-Many-Gigabytes-File.SQL

This would output multiple files:

Large-file-separated-in-multi-parts.zip.001, Large-file-separated-in-multi-parts.zip.002, Large-file-separated-in-multi-parts.zip.003, Large-file-separated-in-multi-parts.zip.004 etc.

If you want to add some security to the transferred file and protect the newly created ZIP archive with a password, use the following command:

7z a -v512m -p Large-file-separated-in-multi-parts.zip Large-Many-Gigabytes-File.SQL

 

7-Zip [64] 9.20  Copyright (c) 1999-2010 Igor Pavlov  2010-11-18
p7zip Version 9.20 (locale=bg_BG.UTF-8,Utf16=on,HugeFiles=on,2 CPUs)
Scanning

Creating archive Large-file-separated-in-multi-parts.zip


Enter password (will not be echoed) :

Once you have transferred all the parts (via SSH / FTPS, or less preferably plain HTTP / HTTPS / FTP), place them in the same folder and use Windows' standard ZIP support to unarchive them.

If the parted archive is to be unarchived on another Linux host (i.e. the multi-part ZIP transfer is between Linux -> Linux hosts), extract the parts with:

7z x Large-file-separated-in-multi-parts.zip.*
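To illustrate the whole workflow end to end, below is a small sketch of how the create / transfer / extract steps could be scripted between two Linux hosts over SSH; the remote host name, destination directory and the 512 MB part size are example values, not something from the original post:

#!/bin/sh
# sketch: create a password protected multi-part ZIP and push the parts to a remote Linux host
SRC_FILE='Large-Many-Gigabytes-File.SQL'
ARCHIVE='Large-file-separated-in-multi-parts.zip'
REMOTE='user@remote-host.example.com'
REMOTE_DIR='/home/user/incoming'

# 1. create 512MB parts; -p prompts for the ZIP password
7z a -v512m -p "$ARCHIVE" "$SRC_FILE"

# 2. copy all the parts over SSH
scp "$ARCHIVE".* "$REMOTE":"$REMOTE_DIR"/

# 3. on the remote host extraction starts from the first volume:
#    7z x Large-file-separated-in-multi-parts.zip.001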

Manually deleting spam comments from WordPress blogs and websites to free disk space and optimize MySQL

Monday, November 24th, 2014

If you're a web-hosting company or a web-development shop using WordPress to build multitudes of customer blogs, or just an independent blogger or sys-admin tasked to optimize a server's MySQL storage / performance across dozens of WordPress installs, a good tip that helps is removing the wp_comments records marked as spam.

Even though sites might be protected from the thousands of spam messages caught daily by the WP anti-spam plugin Akismet, the caught messages are moved by Akismet to WP's Spam queue and kept in the wp_comments table with the comment_approved column set to 'spam'.

Therefore you will certainly gain from freeing the disk space uselessly allocated by spam messages in the current MySQL server storage dir (/var/lib/mysql or /usr/local/mysql/data, the directory where my.cnf tells the server to keep its .MYI, .MYD and .frm data files), and you will also save a lot of disk space by excluding the useless spam messages from the daily SQL backup archives.

Here is how to manually remove spam comments from a WordPress blog stored in the database wp_blog1:

mysql> use wp_blog1;
mysql> describe wp_comments;
+----------------------+---------------------+------+-----+---------------------+----------------+
| Field                | Type                | Null | Key | Default             | Extra          |
+----------------------+---------------------+------+-----+---------------------+----------------+
| comment_ID           | bigint(20) unsigned | NO   | PRI | NULL                | auto_increment |
| comment_post_ID      | bigint(20) unsigned | NO   | MUL | 0                   |                |
| comment_author       | tinytext            | NO   |     | NULL                |                |
| comment_author_email | varchar(100)        | NO   |     |                     |                |
| comment_author_url   | varchar(200)        | NO   |     |                     |                |
| comment_author_IP    | varchar(100)        | NO   |     |                     |                |
| comment_date         | datetime            | NO   |     | 0000-00-00 00:00:00 |                |
| comment_date_gmt     | datetime            | NO   | MUL | 0000-00-00 00:00:00 |                |
| comment_content      | text                | NO   |     | NULL                |                |
| comment_karma        | int(11)             | NO   |     | 0                   |                |
| comment_approved     | varchar(20)         | NO   | MUL | 1                   |                |
| comment_agent        | varchar(255)        | NO   |     |                     |                |
| comment_type         | varchar(20)         | NO   |     |                     |                |
| comment_parent       | bigint(20) unsigned | NO   | MUL | 0                   |                |
| user_id              | bigint(20) unsigned | NO   |     | 0                   |                |
+----------------------+---------------------+------+-----+---------------------+----------------+


The most common and quick way, also useful for scripting (in case you have to do it for multiple blogs with separate DBs), is to delete all comments flagged as 'spam'.

To delete all messages which Akismet's spam filter flagged as most probably spam, issue from the mysql CLI interface:

DELETE FROM wp_comments WHERE comment_approved = 'spam';


For pending (unapproved) messages the value of the comment_approved field is 0 or 1: 0 if the comment is still unapproved (waiting for moderation) and 1 if it has been approved (and is not spam).
If a WordPress install gets hammered mostly by spam, the probability that an unapproved message is anything other than spam is low, so if you want to delete every message still waiting for approval, use the following SQL query:

DELETE FROM wp_comments WHERE comment_approved = 0;

Another, not very common, thing you might want to do is delete only the approved comments:

DELETE FROM wp_comments WHERE comment_approved = 1;

For old, long unmaintained blogs (with garbage content) it is very likely that 99% of the messages are spam; if there are already >= 100 000 spam messages and you don't have the time to inspect all of them just to salvage some 1000 legitimate ones, you can delete all WordPress comments for a blog with one SQL query:

TRUNCATE wp_comments;

Another scenario: if you know a blog was maintained (and its comments inspected) until a certain date and was then left unmaintained for a few years without a spam detection plugin like Akismet, it is worth deleting all comments posted after the date the WordPress site stopped being maintained:

DELETE FROM wp_comments WHERE comment_date > '2008-11-20 05:00:10' AND comment_date <= '2014-11-24 00:30:00';
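If the same cleanup has to be repeated over many blogs with separate databases, the DELETE can be wrapped in a small shell loop. The database names below are just examples (adjust them to your setup), and running OPTIMIZE TABLE right after the DELETE is what actually shrinks the MyISAM .MYD / .MYI files on disk:

for db in wp_blog1 wp_blog2 wp_blog3; do
    # -p prompts for the MySQL root password on each iteration
    mysql -u root -p "$db" -e "DELETE FROM wp_comments WHERE comment_approved = 'spam'; OPTIMIZE TABLE wp_comments;"
done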

Script to Automatically change current MySQL server in wp-config.php to another MySQL host to minimize WordPress and Joomla downtimes

Friday, July 20th, 2012

I'm running two servers for a couple of home hosted websites. One of the servers serves as the Apache host (host1) and also has MySQL configured and running on it, and the second one (host2) is used just as the database host; it runs another MySQL instance.
The MySQL servers are not configured to run as MySQL MASTER and MySQL SLAVE (no MySQL replication); however, a tiny shell script of mine periodically (daily) syncs the data from the active SQL server host2 over to host1.

Sometimes, due to electricity problems or CPU overheating, the active MySQL host host2 freezes and stops working, making the two WordPress based websites and the one Joomla site inaccessible.
Until I manually get to the machine and restart host2, the 3 sites are down from the net, and as you can imagine this has a very negative impact on the sites' indexing (PageRank) in Google.

When I'm at home this is not a problem, as I have physical access to the servers and if something gets messy I fix it quickly. The problem comes when I'm travelling or in another city far from home and there is no one at home to give the hung host a hard reboot ….

Lately the hang-ups of host2 happened 3 times or so within 2 weeks; as a result the websites were inaccessible for hours, since there was nobody around to reboot the server, and the sites keep hanging until the DB host is restarted.

To work around this I came up with the idea to write a tiny shell script that checks whether host2 is ping-able (to make sure the database host is not down); if the script determines the host2 (MySQL) host is down, it swaps the wp-config.php set to use host2 with a wp-config.php which I have configured beforehand to use host1.

Using the script is a temporary solution, since I still have to find the real cause of the hang-ups, but at least it saves me long downtimes. Here is a download link to the script, which I called change_blog_db.sh.
I've configured the script to run on the Apache node (host1) via a crontab entry calling it every 10 minutes; here is the crontab:
 

*/10 * * * * /usr/sbin/change_blog_db.sh > /dev/null 2>&1

The script is written so that if it determines host2 is reachable, copies of wp-config.php and Joomla's configuration.php tuned to use host2 are copied over the original config files. In order to use the script one has to configure the variables in the script's head section, e.g.:

host_to_ping='192.168.0.2';
blog_dir='/var/www/blog';
blog_dir2='/var/www/blog1';blog_dir3='/var/www/joomla';
notify_mail='hipo@pc-freak.net';
wp_config_orig='wp-config.php';
wp_config_localhost='wp-config-localhost.php';
wp_config_other_host='wp-config-192.168.0.2.php';
joomla_config_orig='configuration.php';
joomla_config_other_host='configuration-192.168.0.2.php';

You will have to manually prepare:

wp-config-localhost.php, wp-config-192.168.0.2.php and configuration-192.168.0.2.php as existing files configured with the proper host1 and host2 IP addresses.
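For reference, here is a minimal sketch of the logic change_blog_db.sh implements (a simplified illustration, not the original script); the ping options, the mail notification and the single blog plus Joomla handling are my own simplifications:

#!/bin/sh
# simplified sketch of change_blog_db.sh: swap DB host configs depending on host2 availability
host_to_ping='192.168.0.2'
blog_dir='/var/www/blog'
joomla_dir='/var/www/joomla'
notify_mail='hipo@pc-freak.net'

if ping -c 3 "$host_to_ping" > /dev/null 2>&1; then
    # host2 (the dedicated MySQL box) answers, so make sure the sites use it
    cp -f "$blog_dir/wp-config-192.168.0.2.php" "$blog_dir/wp-config.php"
    cp -f "$joomla_dir/configuration-192.168.0.2.php" "$joomla_dir/configuration.php"
else
    # host2 is down, fall back to the local MySQL on host1 and send a notification
    cp -f "$blog_dir/wp-config-localhost.php" "$blog_dir/wp-config.php"
    # a configuration-localhost.php for Joomla would be handled the same way
    echo "$host_to_ping is down, switched blog DB config to localhost" | mail -s "DB failover" "$notify_mail"
fi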
Hope the script will be useful to others experiencing database downtimes with WordPress or Joomla installs.
 

How to completely disable Replication in MySQL server 5.1.61 on Debian GNU / Linux

Monday, July 16th, 2012


Some time ago, on one of the MySQL database servers, I configured replication as it was required to test some things. Eventually it turned out replication would not be used (for one reason or another); it was too slow and did not fit our company needs, hence we needed to disable it.

It seemed logical to me that simply removing any replication related directives from my.cnf and restarting the SQL server would be enough to turn replication off on the Debian Linux host. Therefore I proceeded to remove all replication configs from /etc/mysql/my.cnf and issued a MySQL restart, i.e.:

sql-server:~# /etc/init.d/mysql restart
....

This however didn't turn off replication as I thought it would, and in the phpMyAdmin web frontend interface replication still appeared to be active in the Replication tab.

Something was still making the SQL server act as a replication Slave host, so after a bit of pondering and trying to remember the exact steps I took to make replication work on the host, I remembered that I had actually issued:

mysql> START SLAVE;

Next I ran:

mysql> SHOW SLAVE STATUS;
....

and found that the server was indeed still running in Slave replication mode.

Hence, to stop the DB host from running as a Slave, I had to issue in the mysql CLI:

mysql> STOP SLAVE;
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> RESET SLAVE;
Query OK, 0 rows affected, 1 warning (0.01 sec)

Then, after a restart of the SQL server, the host finally stopped acting as a Slave replication host, e.g.:

sql-server:~# /etc/init.d/mysql restart
....

After the restart, to reassure myself the SQL server was no longer set to run as a MySQL replication Slave host, I checked:

mysql> SHOW SLAVE STATUS;
Empty set (0.00 sec)
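Since the replication directives were also removed from my.cnf, an extra sanity check (my own addition, not strictly required) is to confirm binary logging is gone as well, because a host with log-bin enabled can still act as a replication MASTER:

mysql> SHOW VARIABLES LIKE 'log_bin';
mysql> SHOW MASTER STATUS;

If log_bin reports OFF and SHOW MASTER STATUS returns an empty set, the host is neither a Slave nor a Master anymore.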

Cheers 😉

‘host-name’ is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Sunday, May 20th, 2012

The MySQL server on my home machine was suddenly down as I tried to check my blog and other sites today; the error I saw while trying to open this blog, as well as the other hosted sites using MySQL, was:

Error establishing a database connection

The topology where this error occurred is simple, I have two hosts:

1. Apache version 2.0.64 compiled with external PHP script interpretation support via libphp; this host runs FreeBSD

2. A Debian GNU / Linux squeeze running MySQL server version 5.1.61

The Apache host is assigned a local IP address 192.168.0.1 and the SQL server is running on a host with IP 192.168.0.2

To diagnose the error I logged in to 192.168.0.2, and weirdly the mysql-server appeared to be running just fine:
 

debian:~# ps ax |grep -i mysql
31781 pts/0 S 0:00 /bin/sh /usr/bin/mysqld_safe
31940 pts/0 Sl 12:08 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
31941 pts/0 S 0:00 logger -t mysqld -p daemon.error
32292 pts/0 S+ 0:00 grep -i mysql

Moreover I could connect to the SQL server locally with mysql -u root -p and it seemed to run fine. The Error establishing a database connection error meant that either something was messed up with the database or MySQL port 3306 on 192.168.0.2 was not properly accessible.

My first guess was that something was wrong with some firewall rules, so I tried to connect from 192.168.0.1 to 192.168.0.2 with telnet:
 

freebsd# telnet 192.168.0.2 3306
Trying 192.168.0.2…
Connected to jericho.
Escape character is '^]'.
Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

Right after the telnet session was initiated, as shown in the above output, the connection was immediately closed with the error:

Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

In the error, 'webserver' is the hostname set on my Apache machine. The error clearly states that the problems with the 'webserver' Apache host being unable to connect to the SQL database are due to 'many connection errors', and a fix is suggested with mysqladmin flush-hosts.

To temporarily solve the error and restore normal connectivity between the Apache and the SQL servers, I had to issue on the SQL host:

mysqladmin -u root -p flush-hosts
Enter password:

Though this temporary fix restored accessibility to the databases, and hence the website errors were resolved, it doesn't guarantee I won't end up in the same situation again, therefore I looked for a permanent fix to the issue once and for all.

The permanent fix consists of changing the default value set for max_connect_errors in /etc/mysql/my.cnf, which by default is quite low (10). Therefore, to raise the variable's value, I added to my.cnf in the [mysqld] config section:

debian:~# vim /etc/mysql/my.cnf
...
max_connect_errors=4294967295

and afterwards restarted MySQL:

debian:~# /etc/init.d/mysql restart
Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld.
Checking for corrupt, not cleanly closed and upgrade needing tables..
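To double-check that the raised value was actually picked up after the restart, a quick query like the following (my own verification step, not part of the original fix) should show the new setting:

debian:~# mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors';"
+--------------------+------------+
| Variable_name      | Value      |
+--------------------+------------+
| max_connect_errors | 4294967295 |
+--------------------+------------+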

To make sure the assigned max_connect_errors=4294967295 is never reached due to Apache-to-SQL connection errors, I've also added a cronjob:

debian:~# crontab -u root -e
00 03 * * * mysqladmin flush-hosts

In the cron entry I have omitted the mysqladmin -u root -p (user / pass) options, because for convenience I have already stored the MySQL root password in /root/.my.cnf

Here is how /root/.my.cnf looks like:

debian:~# cat /root/.my.cnf
[client]
user=root
password=a_secret_sql_password

Now hopefully this will permanently solve MySQL's refusal to accept connections due to too many connection errors in the future.

How to fix “ERROR 1577 (HY000) at line 1: Cannot proceed because system tables used by Event Scheduler were found damaged at server start”

Saturday, May 12th, 2012

After migrating database data from a FreeBSD MySQL 5.0.83 server to a Debian Squeeze Linux MySQL version 5.1.61 I hit the error from the title. Below is mysql --version issued on both the FreeBSD and the Debian servers:

freebsd# mysql --version
mysql Ver 14.12 Distrib 5.0.83, for portbld-freebsd7.2 (i386) using 5.2

debian:~# mysql --version
mysql Ver 14.14 Distrib 5.1.61, for debian-linux-gnu (i486) using readline 6.1

The SQL data dump on the FreeBSD server was created with the following command arguments:

freebsd# mysqldump --opt --allow-keywords --add-drop-table --all-databases -u root -p > complete_db_dump.sql

Then I used sftp to transfer the complete_db_dump.sql dump to a brand new install of the latest Debian Squeeze 6.0.2. The Debian server was installed as a "clean Debian install" without a graphical environment, from a CD downloaded from debian.org.

On the Debian machine I imported the dump with command:

debian:~# mysql -u root -p < complete_db_dump.sql

Right after the dump was imported I restarted the SQL server, which had previously been installed with:

debian:~# apt-get install mysql-server
The error I got after restarting the mysql server:

debian:~# /etc/init.d/mysql restart

was:

ERROR 1577 (HY000) at line 1: Cannot proceed because system tables used by Event Scheduler were found damaged at server start
ERROR 1547 (HY000) at line 1: Column count of mysql.proc is wrong. Expected 20, found 16. The table is probably corrupted

This error cost me a lot of nerves and searching in Google to solve. It took me like half an hour of serious googling until I finally found the FIX:

debian:~# mysql_upgrade -u root -h localhost -p --verbose --force
Enter password:
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost'
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost'
bible.holy_bible OK
bible.holybible OK
bible.quotes_meta OK

Afterwards I had to restart the MySQL server once again in order to finally get rid of the annoying:

ERROR 1547 (HY000) at line 1: Column count of mysql.proc is wrong. Expected 20, found 16. The table is probably corrupted

debian:~# /etc/init.d/mysql restart
Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld.
Checking for corrupt, not cleanly closed and upgrade needing tables..

This solved the insane Column count of mysql.proc is wrong. Expected 20, found 16 once and for all!
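As an extra sanity check (my own addition, not something the fix requires), you can also verify that mysql.proc now has the column count MySQL 5.1 expects:

debian:~# mysql -u root -p -e "SELECT COUNT(*) AS proc_columns FROM information_schema.COLUMNS WHERE TABLE_SCHEMA='mysql' AND TABLE_NAME='proc';"

The query should report 20 columns, matching the number from the original error message.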

Before I came upon this fix I tried all kinds of forum-suggested fixes like:

debian:~# mysql_upgrade -u root -p
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
This installation of MySQL is already upgraded to 5.1.61, use --force if you still need to run mysql_upgrade

debian:~# mysql_upgrade -p
Looking for 'mysql' as: mysql
Looking for 'mysqlcheck' as: mysqlcheck
This installation of MySQL is already upgraded to 5.1.61, use --force if you still need to run mysql_upgrade

And a few more; none of them worked, the only one that did was:

debian:~# mysql_upgrade -u root -h localhost -p --verbose --force

I have to say big thanks to Mats Lindth's wonderful blog post, which provided me with the solution.

It seems that since Oracle bought MySQL, things with this database server are getting more and more messy and backwards incompatible day by day.
Lately I'm experiencing too many hassles with MySQL version incompatibilities. Maybe I should think about migrating permanently to PostgreSQL …

By the way, the ERROR 1547 (HY000) at line 1: Column count of mysql.proc is wrong error appears because the --all-databases dump overwrote the mysql system database on the Debian host with the older 5.0 table layout (mysql.proc has 16 columns in MySQL 5.0 and 20 in 5.1), which is exactly the kind of damage mysql_upgrade repairs. Note also that importing such a dump with mysql -u root -p < dump.sql overrides the stored user passwords in the mysql database tables, so watch out for authentication surprises; similar privilege table incompatibilities were common in the MySQL 4 to MySQL 5 migrations I've done in the past.
 

How to check MASTER / SLAVE MySQL nodes status – Check MySQL Replication Status

Thursday, April 19th, 2012

I'm setting up replication for one server. It's not the first time I've configured replication between two MySQL database nodes, however since I haven't done it for a few years my "know how" had mostly vanished, so I had some troubles setting it up. Once I followed the steps to configure replication, I had to check whether the two MASTER / SLAVE MySQL DB nodes communicate properly. Hence I decided to drop a short post on that, just in case someone has to do the same or in case I myself forget how I did it and need to check later on:

1. Check if MASTER MySQL server node is configured properly

The standard way to check a MySQL master node status info is with:
 

mysql> show master status;
+------------------+----------+-------------------------------+------------------+
| File             | Position | Binlog_Do_DB                  | Binlog_Ignore_DB |
+------------------+----------+-------------------------------+------------------+
| mysql-bin.000007 |      106 | database1,database2,database3 |                  |
+------------------+----------+-------------------------------+------------------+
1 row in set (0.00 sec)

By appending \G the same status info is shown in a more readable vertical format:
 

mysql> show master status\G;
*************************** 1. row ***************************
File: mysql-bin.000007
Position: 106
Binlog_Do_DB: database1,database2,database3
Binlog_Ignore_DB:
1 row in set (0.00 sec)

ERROR:
No query specified

2. Check if Slave MySQL node is configured properly

To check the status of the slave, the command is:
 

mysql> show slave status;

The command returns an output like:
 

mysql> show slave status;
| Slave_IO_State | Master_Host | Master_User | Master_Port | Connect_Retry | Master_Log_File | Read_Master_Log_Pos | Relay_Log_File | Relay_Log_Pos | Relay_Master_Log_File | Slave_IO_Running | Slave_SQL_Running | Replicate_Do_DB | Replicate_Ignore_DB | Replicate_Do_Table | Replicate_Ignore_Table | Replicate_Wild_Do_Table | Replicate_Wild_Ignore_Table | Last_Errno | Last_Error | Skip_Counter | Exec_Master_Log_Pos | Relay_Log_Space | Until_Condition | Until_Log_File | Until_Log_Pos | Master_SSL_Allowed | Master_SSL_CA_File | Master_SSL_CA_Path | Master_SSL_Cert | Master_SSL_Cipher | Master_SSL_Key | Seconds_Behind_Master | Master_SSL_Verify_Server_Cert | Last_IO_Errno | Last_IO_Error | Last_SQL_Errno | Last_SQL_Error |
| Waiting for master to send event | HOST_NAME.COM | slave_user | 3306 | 10 | mysql-bin.000007 | 106 | mysqld-relay-bin.000002 | 251 | mysql-bin.000007 | Yes | Yes | database1,database2,database3 | | | | | | 0 | | 0 | 106 | 407 | None | | 0 | No | | | | | | 0 | No | 0 | | 0 | |

As you can see the output is not very readable, as there are too many columns to display and the row does not fit either a text console or a graphical terminal emulator.

To get a more readable (vertical) status for the SQL SLAVE, it is better to use the command:
 

mysql> show slave status\G;

Here is a sample returned output:
 

mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: HOST_NAME.COM
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 10
Master_Log_File: mysql-bin.000007
Read_Master_Log_Pos: 106
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000007
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: database1,database2,database3
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 106
Relay_Log_Space: 407
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
1 row in set (0.00 sec)

ERROR:
No query specified

If the show master status or show slave status commands don't reveal the replication issue, one needs to stare at the MySQL error log for more info.
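Another handy way to avoid the unreadable horizontal output altogether is to filter only the fields that matter straight from the shell (assuming the MySQL credentials are stored in ~/.my.cnf or passed with -u / -p):

linux:~# mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_.*Error'

If both Slave_IO_Running and Slave_SQL_Running show Yes and the Last_*_Error fields are empty, the slave is replicating fine.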

How to upgrade a single package with its dependencies on Debian and Ubuntu Linux

Friday, March 16th, 2012


Are you a Debian system administrator who recently ran apt-get update && apt-get upgrade and found out there are plenty of new packages pending upgrade? Do you need to upgrade only a pre-selected number of packages with apt?
I ran apt-get update && apt-get upgrade on one of our company Debian servers, just to see there were a number of packages to be upgraded, among which some I did not want to upgrade. Here is a little paste of the output from apt-get upgrade:

debian:~# apt-get update && apt-get upgrade
Hit http://security.debian.org squeeze/updates Release.gpg
...
Hit http://security.debian.org squeeze/updates/main amd64 Packages
Fetched 128 kB in 0s (441 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
Do you want to continue [Y/n]
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

At first sight it seems logical to issue apt-get upgrade packagename to upgrade only a single package with its package dependencies, instead of the whole group of packages above. However, running
apt-get upgrade imagemagick will still try to upgrade all the packages instead of just imagemagick and its dependency package libmagickcore3:

debian:~# apt-get upgrade imagemagick
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Do you want to continue [Y/n]

Upgrading all of the packages is not a good idea in my case, since upgrading mysql-server will require a MySQL server restart (something we cannot afford to do right now) on this production server.
A MySQL server restart during an upgrade is never a good idea, especially on busy (heavily loaded) production SQL servers.
Restarting a MySQL server that is serving thousands of requests per second can often lead to crashed tables and hence temporary server downtime etc.

Still, it is a good idea to upgrade the rest of the packages to their newer versions, for example imagemagick, at, libfreetype6 and so on.

In order to upgrade only these 3 and their respective package dependencies, issue:

debian:~# apt-get --yes install imagemagick at libfreetype6

Repeat the apt-get install command, passing every individual package name you want upgraded, and voila, you're done :).
Just be sure the apt-get install packagename upgrade doesn't also require an upgrade of mysql-server, mysql-client, mysql-common or mysql-server-core-5.1, or of any other package you want to keep from being upgraded.
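As an extra precaution (optional, and beyond what the selective upgrade above requires), the MySQL packages can also be put on hold with dpkg, so that a later blanket apt-get upgrade will not touch them until the hold is removed:

debian:~# for pkg in mysql-server mysql-server-5.1 mysql-server-core-5.1 mysql-client mysql-client-5.1 mysql-common libmysqlclient16; do echo "$pkg hold" | dpkg --set-selections; done
debian:~# dpkg --get-selections | grep hold

To allow the upgrade later, set the selection back with echo "mysql-server install" | dpkg --set-selections for each held package.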

How to resolve (fix) WordPress wp-cron.php errors like "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404 / What is wp-cron.php and what it does

Monday, March 12th, 2012


One of the WordPress websites hosted on our dedicated server constantly produces wp-cron.php 404 error messages like:

xxx.xxx.xxx.xxx - - [15/Apr/2010:06:32:12 -0600] "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404

I did not know until recently what wp-cron.php does, so I checked in Google and read a bit. Many of the places I read are a bit unclear and don't give a good explanation of what exactly wp-cron.php does. I wrote this post in the hope it will shed some more light on wp-cron.php and on how this major 404 issue gets solved.

So what is wp-cron.php doing?

 

  • wp-cron.php acts like a cron scheduler for WordPress.
  • wp-cron.php is a WP file that controls routine actions for a particular WordPress install.
  • It updates data in the SQL database on every request, every day or every hour etc. (depending on how it's set up).
  • wp-cron.php executes automatically by default after EVERY PAGE LOAD!
  • It checks all pending comments for spam with Akismet (if Akismet or a similar anti-spam plugin is installed).
  • It sends all scheduled emails (e.g. notifying a commenter when someone replies to his comment, sending emails to newsletter subscribers, etc.).
  • It publishes articles scheduled for a particular day and time.

Suppose you're writing a new post and you want to take advantage of the WordPress functionality to schedule the post to appear online at a specific time:

(Screenshot: scheduling a WordPress post to be published at a specific date and time)

The "Publish immediately" field's scheduled publication happens at the chosen time thanks to the periodic invocation of wp-cron.php.

Another example of wp-cron.php's operation is flushing the old HTML caches generated by a WordPress caching plugin like W3 Total Cache.
wp-cron.php takes care of dozens of other things silently in the background. That's why many WordPress plugins depend heavily on wp-cron.php's proper periodic execution. Therefore, if something is wrong with wp-cron.php, the WordPress based blog or website ends up partially working or not working at all.
 

Our company wp-cron.php errors case

In our case the:
212.235.185.131 - - [15/Apr/2010:06:32:12 -0600] "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404
entry keeps occurring in Apache's access.log after each unique visitor request to WordPress, because wp-cron.php is invoked on every new visitor request to the site.
This puts a useless load on the Apache server, which constantly attempts to invoke the script … always returning a 404 Not Found error.

As a consequence, the WP website experiences "weird" problems all the time. An illustration of a problem caused by the improper wp-cron.php execution is when we add new plugins to WP.

Let's say a new WordPress extension is downloaded, installed and enabled in order to add new useful functionality to the site.

Most of the time this new plugin would malfunction if, for example, it is supposed to add some kind of new HTML form or change something on some or all of the WordPress generated HTML pages.
These troubles are the result of wp-cron.php's inability to update settings in the WP SQL database after each new user request to our site.
So the newly added plugin functionality does not show up at all, until the WP cache directory is manually deleted with rm -rf /var/www/blog/wp-content/cache/

I don't know how this whole wp-cron.php mess occurred, but my guess is that whoever installed this WordPress messed something up in the install procedure.

Anyway, as I researched thoroughly, I read about many people complaining of having experienced the same wp-cron.php 404 errors. Most of those people's troubles were caused by their shared hosting prohibiting wp-cron.php execution.
It appears many shared hosting providers choose to disable WordPress's default wp-cron.php execution, probably because the script puts a heavy load on shared hosting servers and causes trouble with server overloads.

Anyhow, since our company server is a dedicated server, I can tell for sure that in our case WordPress had no restrictions on how and when wp-cron.php is invoked.
I've also seen some posts online claiming the wp-cron.php issues are caused by improper localhost records in /etc/hosts, but after a thorough examination I did not find any hosts problems:

hipo@debian:~$ grep -i 127.0.0.1 /etc/hosts
127.0.0.1 localhost.localdomain localhost

As you can see from the paste above, our server's /etc/hosts has perfectly correct 127.0.0.1 records.

Changing default way wp-cron.php is executed

As I've learned, it is generally a good idea for WordPress based websites receiving tens of thousands of visitors to alter the default way wp-cron.php is handled. Doing so gains some efficiency and improves server hardware utilization.
Invoking the script after each visitor request can put a heavy, useless burden on the server CPU. On most WordPress based websites the script does not need to make frequent changes in the DB, as new comments on posts do not happen that often, and on most WordPress installs out there big changes are simply not common.

Therefore, a good frequency to exec wp-cron.php for WordPress blogs getting only a couple of user comments per hour is a half-hour cron routine.

To disable the automatic invocation of wp-cron.php after each visitor request, open /var/www/blog/wp-config.php and somewhere around line 30 or 40 put:

define('DISABLE_WP_CRON', true);

An important note to make here is that the position in wp-config.php where define('DISABLE_WP_CRON', true); is placed matters. If, for instance, you put it at the end of the file or near the end of the file, the setting will not take effect.
With that said, be sure to put the define somewhere among the file's initial defines or it will not work.

Next, as the non-root Apache user (let's say www-data, httpd or www, depending on the Linux distribution or BSD Unix type), add a PHP CLI crontab line to invoke wp-cron.php every half an hour:

linux:~# crontab -u www-data -e

0,30 * * * * cd /var/www/blog; /usr/bin/php /var/www/blog/wp-cron.php >/dev/null 2>&1

To make sure the PHP CLI (Command Line Interface) interpreter is capable of properly interpreting wp-cron.php, check the file for syntax errors with the command:

linux:~# php -l /var/www/blog/wp-cron.php
No syntax errors detected in /var/www/blog/wp-cron.php

That's all, 404 wp-cron.php error messages will not appear anymore in access.log! 🙂

Just for those who can't find the root cause of the /wp-cron.php?doing_wp_cron HTTP/1.0" 404 and fix the issue in some other way (I'd be glad to hear how), there is also an external way to invoke wp-cron.php with a request directly to the webserver, via a short cron invocation using wget or the lynx text browser.

– Here is how to call wp-cron.php every half an hour with lynx. Put inside any non-privileged user's crontab something like:

01,30 * * * * /usr/bin/lynx -dump "http://www.your-domain-url.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1

– Call wp-cron.php every 30 mins with wget:

01,30 * * * * /usr/bin/wget -q "http://www.your-domain-url.com/wp-cron.php?doing_wp_cron"
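– If neither wget nor lynx is around, curl (assuming it is installed on the host) does the same job:

01,30 * * * * /usr/bin/curl -s -o /dev/null "http://www.your-domain-url.com/wp-cron.php?doing_wp_cron"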

Invoking wp-cron.php less frequently saves the server from processing the script thousands of useless times.

The effect of altering the way wp-cron.php works should be visible immediately, as the server load should drop a bit.
Consider that you might need to play with the script's exec frequency until you find the best fitting cron timing. In my company's case there are only up to 3 new articles posted a week, hence a very high frequency of wp-cron.php invocations is useless.

For a blog where new posts appear once a day, a schedule frequency of 6 up to 12 hours should be OK.

 

How to change users quota to NO QUOTA on Qmail with Vpopmail Mail server install / Qmail mail over quota issue

Monday, February 20th, 2012

 


On a couple of mailboxes located on one of the qmail powered mail servers I administer, an over-QUOTA problem has already been encountered.

Filling up the mailbox quota is not nice, as mails start getting bounced back to the sender with a QUOTA FULL or QUOTA EXCEEDED message; if a crucial mail carrying some important data is expected, that data is never received.
Below is a copy of the mail quota warning notification message:

Delivered-To: email_use@my-mail-domain.net
Date: Wed, 15 Feb 2012 17:40:36 +0000
X-Comment: Rename/Copy this file to ~vpopmail/domains/.quotawarn.msg, and make appropriate changes
X-Comment: See README.quotas for more information
From: Mail Delivery System <Mailer-Daemon@different.bg>
Reply-To: email@pc-freak.net
To: Valued Customer:;
Subject: Mail quota warning
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit
>
Your mailbox on the server is now more than 90% full. So that you can continue
to receive mail you need to remove some messages from your mailbox.

As you can see from the copy of the mail message above, the message content sent to the mailbox owner whose quota is getting full is read from /var/vpopmail/domains/.quotawarn.msg

The mail-over-quota problem most often appears when a low mailbox quota is set, but sometimes it also occurs due to bugs in vpopmail's quota handling.

Various interesting configuration settings for mail quotas etc. are in the /home/vpopmail/etc/vlimits.default file (assuming vpopmail is installed under /home).

In my specific case, the default vpopmail mailbox quota size was set to only 40 megabytes.
40MB is too low compared to today's mailbox size standards, which in the Gmail and Yahoo mail services are already a couple of gigabytes.
Hence, to get around the quota troubles, I removed the quota for the mailbox.
To remove the quota set in vpopmail for the address email_user@my-mail-domain.net I used the command:

qmail-server:~# vmoduser -q NOQUOTA email_user@my-mail-domain.net

To save myself from future quota issues, I decided to apply a permanent fix to all those over-quota VPOPMAIL mailbox problems by completely removing the quota restriction for all mailboxes in my existing vpopmail mail domain.

To do so, I wrote a quick simple bash loop one-liner script:

qmail-server:~# cd /home/vpopmail/domains
qmail-server:~/vpopmail/domains# cd my-mail-domain.net
qmail-server:~/vpopmail/domains/my-mail-domain.net# for i in *; do \
[ -d "$i" ] && vmoduser -q NOQUOTA "$i@my-mail-domain.net"; \
done

This works only on vpopmail installations configured to store the mail messages directly on the filesystem (each mailbox is a directory under the domain dir, while files like vpasswd are skipped by the directory test). Therefore this approach will not work for people who, during the vpopmail install, configured it to store mailboxes in MySQL or another SQL DB engine.

Anyway, for a vpopmail install using an SQL backend, the script can be changed to read a list of all the mailboxes directly from the database (via an SQL query) and then loop over each of the mail addresses, applying vmoduser -q NOQUOTA mail@samplemaildomain.net.

I've also written a short shell script (remove_vpopmail_emails_domain_quota.sh); it accepts one argument, the vpopmail domain for which the admin would like to reset all applied mailbox quotas. The script is useful if you often have to remove all quotas for vpopmail domains, or have to wipe out quotas simultaneously for multiple email domain names located on different servers.
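For the curious, here is a rough sketch of what such a helper script could look like; the /home/vpopmail paths are assumptions based on the install location mentioned above, and it only covers the filesystem (non-SQL) vpopmail backend:

#!/bin/sh
# remove_vpopmail_emails_domain_quota.sh (sketch): reset quota to NOQUOTA for every mailbox in a domain
# usage: ./remove_vpopmail_emails_domain_quota.sh my-mail-domain.net
domain="$1"
domains_dir='/home/vpopmail/domains'
vmoduser='/home/vpopmail/bin/vmoduser'

if [ -z "$domain" ] || [ ! -d "$domains_dir/$domain" ]; then
    echo "Usage: $0 existing-vpopmail-domain" >&2
    exit 1
fi

cd "$domains_dir/$domain" || exit 1
for mailbox in */; do
    # every directory under the domain dir is a mailbox; vpasswd and other plain files are skipped
    "$vmoduser" -q NOQUOTA "${mailbox%/}@$domain"
done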