Posts Tagged ‘logs’

No space left on device with free disk space / Why no space left on device while there is plenty of disk space on drive – Running out of Inodes

Tuesday, November 17th, 2015

[Image: Unix / Linux file system inode diagram]

 

On one of the servers I'm administrating, the websites started showing MySQL database table corruption errors like:
 

 

Table './database_name/site_news_list_com' is marked as crashed and last (automatic?) repair failed

The server is running Oracle MySQL Server Community Edition (stable) on Debian GNU / Linux 6.0, so I first thought the tables got corrupted either due to some bug in MySQL or due to some PHP cron job that did something messy. To repair the crashed tables I used the mysqlcheck tool, which has helped me pretty well many times before whenever there were database / table corruptions. I've run the following set of mysqlcheck commands as root (superuser) in a bash shell after logging in through SSH:


server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
server:~# /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log


In order for the above commands to work, I've created /root/.my.cnf containing my MySQL root (CLI) username and password; the file has content like below:

 

[client]
user=root
password=MySecretPassword8821238

 

By the way, a good note here: if you want to keep your MySQL databases consistent, it is generally a good idea to execute these checks automatically via a cron job a few times a month. I have the following in root's crontab:

 

crontab -u root -l |grep -i mysqlcheck
04 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
07 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
12 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log
17 06 5,10,15,20,25,1 * * /usr/bin/mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases --silent -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log


Strangely, I got a lot of errors that some .MYI / .MYD / .frm temp files, necessary for the MySQL table recovery, couldn't be written inside /home/mysql/database_name.

That was pretty weird and I thought there might be some permission issue causing the inability to write, due to some bug or something, so I went straight on and checked the /home/mysql/database_name permissions, e.g.:

 

server:/home/mysql/database_name# ls -ld soccerfame
drwx------ 2 mysql mysql 36864 Nov 17 12:00 soccerfame
server:/home/mysql/database_name# ls -al1|head -n 10
total 1979012
drwx------ 2 mysql mysql 36864 Nov 17 12:00 .
drwx------ 36 mysql mysql 4096 Nov 17 11:12 ..
-rw-rw---- 1 mysql mysql 8712 Nov 17 10:26 1_campaigns_diez.frm
-rw-rw---- 1 mysql mysql 14672 Jul 8 18:57 1_campaigns_diez.MYD
-rw-rw---- 1 mysql mysql 1024 Nov 17 11:38 1_campaigns_diez.MYI
-rw-rw---- 1 mysql mysql 8938 Nov 17 10:26 1_campaigns.frm
-rw-rw---- 1 mysql mysql 8738 Nov 17 10:26 1_campaigns_logs.frm
-rw-rw---- 1 mysql mysql 883404 Nov 16 22:01 1_campaigns_logs.MYD
-rw-rw---- 1 mysql mysql 330752 Nov 17 11:38 1_campaigns_logs.MYI


As seen from the above output, the permissions were perfectly fine, so it had to be something else. I decided to try creating a random file with the touch command inside the /home/mysql/database_name directory:

 

touch /home/mysql/database_name/somefile-to-test-writtability.txt
touch: cannot touch '/scr1/data/somefile-to-test-writtability.txt': No space left on device


Then, logically, I thought the ext4 partition mounted on /home/mysql/ had gotten filled up, because of the crashed SQL database or some bug, so I checked with the disk free command df whether there was enough space on the server:

server:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md1 20G 7.6G 11G 42% /
udev 10M 0 10M 0% /dev
tmpfs 13G 1.3G 12G 10% /run
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md2 256G 134G 110G 55% /home

Well, that's weird. Obviously only 55% of the available disk space was used and 110G was still free, which was more than enough, so I was totally puzzled why files couldn't be written.

Then, very logically, I thought /home might have been remounted read-only because the server's SSD disk was failing, and checked for errors in dmesg, i.e.:

 

server:~# dmesg|grep -i error


I also checked how exactly the partition was mounted, to see whether it was mounted read-only (ro):

 

server:~# mount -l|grep -i /home
/dev/md2 on /home type ext4 (rw,relatime,discard,data=ordered)


Now everything became even weirder, as the kernel kept claiming there was no space left on the device, while in reality there was plenty of free disk space.

Then, after a quick internet research on "no space left on device" with free disk space, I came across this great superuser.com thread, which made me realize the partition had run out of inodes: no new file inodes could be assigned, and therefore the Linux kernel was refusing to write the file on the ext4 partition.

For those who haven't heard of Linux partition inodes, here is a link to Wikipedia and a quick quote:

 

In a Unix-style file system, the inode is a data structure used to represent a filesystem object, which can be one of various things including a file or a directory. Each inode stores the attributes and disk block location(s) of the filesystem object's data.[1] Filesystem object attributes may include manipulation metadata (e.g. change,[2] access, modify time), as well as owner and permission data (e.g. group-id, user-id, permissions).[3]
Directories are lists of names assigned to inodes. The directory contains an entry for itself, its parent, and each of its children.


Once I understood it was the inodes, I checked how many of them were occupied:

 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 17006592 0 100% /home


You see, there were 0 (zero) free file inodes on the server, and that was the reason for "no space left on device" while there was actually free disk space.
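
If you want to cross-check the same figures straight from the ext4 superblock, tune2fs can report them too. A quick sketch, assuming the same /dev/md2 device as above:

server:~# tune2fs -l /dev/md2 | grep -i inode

The "Inode count" and "Free inodes" fields should match what df -i reports.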

To clean up (free) some inodes on the partition, the first thing I did was to delete all old logs inside /home and files I positively knew were not necessary; then, to find which directories were eating up most of the inodes, I used:

 

server:~# find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n


If you're on a regular old-fashioned IDE hard drive (not an SSD), or you have too many files inside, this command will take really long to complete.

Therefore a better solution might be to first:

a) Try to find the root folders with the largest inode counts:

for i in /home/*; do echo $i; find $i |wc -l; done


You should get output like:

 

/home/new_website
606692
/home/common
73
/home/pcfreak
5661
/home/hipo
33
/home/blog
13570
/home/log
123
/home/lost+found
1

b) Then, once you know which directory is allocating most of the inodes, run the command again to see the sub-directories with the most files (eating up the partition's inodes):

 

for i in /home/webservice/*; do echo $i; find $i |wc -l; done
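
As a possible shortcut: if the server has a reasonably recent GNU coreutils (8.22 or newer), du can count inodes directly, which saves writing the loop. A sketch, assuming GNU du is installed:

du --inodes -x /home | sort -n | tail -n 20

This prints the 20 directories under /home holding the most inodes, without crossing into other filesystems (-x).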

 

One usually large folder which could free you some inodes is the Linux kernel source headers, but in my case it was simply a lot of tiny old log files that had been written on the system for a few years without any cleaning.

So I deleted the log dirs and the cache folder, in my case /home/new_website/{log,cache}:

server:~# rm -rf /home/new_website/log/*
server:~# rm -rf /home/new_website/cache/*

 

 

a) Then I stopped the Apache webserver, to prevent Apache from using the MySQL databases while the database repair was running, and restarted MySQL:
 

server:~# /etc/init.d/apache2 stop
server:~# /etc/init.d/mysql restart
Restarting MySQL server
..


b) And re-issuing MySQL Check / Repair / Optimize database commands:
 

 

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --check --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --analyze --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --auto-repair --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

mysqlcheck --defaults-extra-file=/etc/mysql/debian.cnf --optimize --all-databases -u root -p`grep -i password /root/.my.cnf |sed -e 's#password=##g'` >> /var/log/cronwork.log

c) And finally starting the Apache Webserver again:
 

server:~# /etc/init.d/apache2 start


Some inodes got freed up:
 

server:~# df -i /home
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/md2 17006592 16797196 209396 99% /home


And hooray, by God's Grace and with the help of the prayers of the Most Holy Theotokos (Virgin) Mary, the websites started working again!

Fix “FAIL – Application at context path /application-name could not be started” deployment error in Tomcat manager

Thursday, October 1st, 2015

[Screenshot: Tomcat Manager "FAIL - Application at context path /application-name could not be started" error]

While deploying an environment called "Interim" (pretty much a testing environment), a Java application deployed from a Java EAR (Enterprise ARchive) file via the Tomcat Manager GUI web interface, the developers came across the following error after stopping the application and trying to start it again:

 

FAIL – Application at context path /application-name could not be started


The error puzzled me for a while, until I checked catalina.out and saw a number of thrown Java exceptions like:

Okt 01, 2015 10:48:46 AM org.springframework.web.context.ContextLoader initWebApplicationContext

Schwerwiegend: Context initialization failed

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.sun.xml.ws.transport.http.servlet.SpringBinding#2' defined in ServletContext resource [/WEB-INF/pp-server-beans.xml]: Cannot create inner bean '(inner bean)' of type [org.jvnet.jax_ws_commons.spring.SpringService] while setting bean property 'service'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#33': FactoryBean threw exception on object creation; nested exception is java.lang.OutOfMemoryError: PermGen space

I've googled a bit about the error:

"FAIL – Application at context path /application-name could not be started"

and came across this Stackoverflow thread and followed the suggested solution to fix a web.xml tag-closing error, but it seems there was no such error in my case. I then also tried the solution suggested by this thread (e.g. adding to the logging.properties file):
 

org.apache.catalina.core.ContainerBase.[Catalina].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

Unfortunately neither of these helped solve the error when the application was started from Tomcat Manager.

After asking a colleague, Kostadin, for help, he pointed me to take a closer look at the error, which is a clear indication that the reserved space is not enough (see the error below):
 

java.lang.OutOfMemoryError: PermGen space

He then pointed me to the solution, which is to modify the existing Tomcat setenv.sh settings, which looked like this:

# Heap size settings

export JAVA_OPTS="-Xms2048M -Xmx2048M"

 

# SSCO test page parameter

export JAVA_OPTS="$JAVA_OPTS -DTS1A_TESTSEITE_CONFIG_PATH=test-myapplication.com"

# Default garbage collector settings

export JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=128M"

 

# Aggressive garbage collector settings.

# Do not use! For testing purposes only!

#export JAVA_OPTS="$JAVA_OPTS -Xss128k -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts -XX:MaxPermSize=128M"

 

####### DO NOT CHANGE BELOW HERE #######

# Disable X11 usage

unset DISPLAY

export JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true"

 

# Garbage collection log settings

export JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:/web/tomcat/current/logs/gc.log -XX:-TraceClassUnloading"

 

# Enable JMX console

export JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote"

 

 

 

 

 

 

The solution is to add the following two options to the exported JAVA_OPTS (the options above):

-XX:PermSize=512m -XX:MaxPermSize=512m


After modifications, my new line within setenv.sh looked like so:

 

JAVA_OPTS="-Xms2048M -Xmx2048M -XX:PermSize=512m -XX:MaxPermSize=512m"


Finally, to make the new JAVA_OPTS settings take effect, I've restarted Tomcat with:

 

cd /web/tomcat/bin/
./shutdown.sh
sleep 5; ./startup.sh
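
To double-check that the freshly started JVM actually picked up the new heap / PermGen options, the running process arguments can be inspected. A quick sketch (the grep patterns are just examples):

ps -ef | grep '[j]ava' | tr ' ' '\n' | grep -E 'Xmx|PermSize'

If the new -XX:PermSize / -XX:MaxPermSize values show up in the output, the setenv.sh change is active.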


And hooray it works fine thanks God! 🙂

StatusNet – Start your own hosted microblogging twitter like social network on Debian GNU / Linux

Monday, July 14th, 2014

[Image: Status.net logo – build your own microblogging service like Twitter on Debian Linux]
I like experimenting with free and open source projects providing social networking capabilities like Twitter and Facebook. Historically I have run my own social network with Elgg – an open source social network engine. I had a very positive impression of Elgg as a social engine, as there are plenty of plugins and one can use Elgg to run a free, very basic equivalent of Facebook. The problem I had with Elgg, however, is that if it is not monitored all the time it quickly fills up with spam, and besides that I found it to be still buggy and not easy to update.
The other free social network software I had heard of is BuddyPress, which I recently installed with Multisite (MuSite) enabled.

Since BuddyPress is WordPress based and supports all the nice WordPress plugins, my impression is that social networking based on WordPress behaves much more stably, and since there is Akismet for WordPress, the number of spammer registrations is much lower than with Elgg.

Recently I started blogging much more actively, and I realized that every day I learn and read too many interesting articles which I don't log anywhere, so I thought I need a way besides Twitter to keep flashy, short notes of what I'm doing, reading and learning. I don't want to use Twitter on purpose, because I don't want to improve Twitter's SEO by adding my own content to their website – I want to keep my notes on my own locally hosted server.

As I didn't want to lose time with the complexity of Elgg anymore and wanted to try something new, and as I knew the open source microblogging social network (Twitter equivalent) identi.ca runs StatusNet – free and open source social software – I went for that. StatusNet is well known under the motto of "Decentralized Twitter".

[Screenshot: StatusNet microblogging (Twitter-like) free software network]

I took the time to grab it and install it on my home-brew machine pc-freak.net. If you haven't seen StatusNet so far, you can check out a demo of my installation here – registration is not freely open because I don't want spammers to break in, but if you want to give it a try drop me a mail or comment below and I will open access for you.

There is no native StatusNet package for Debian Linux (and I'm running Debian), so to install it I've grabbed the statusnet tarball.

Installing StatusNet was pretty straightforward – literally following the instructions from the INSTALL file, i.e.:

# status.example.com maps to /var/www/status/
cd /var/www/status/
wget http://status.net/statusnet-0.9.6.tar.gz
tar -xzf statusnet-0.9.6.tar.gz --strip-components=1
rm statusnet-0.9.6.tar.gz
cd ..
chgrp www-data status/
chmod g+w status/
cd status/
chmod a+w avatar/ background/ file/

mysqladmin -u "root" -p "sql-root-password" create statusnet
mysql -u root -p
GRANT ALL on statusnet.* TO 'statusnetuser'@'localhost' IDENTIFIED BY 'statusnet-secret-password';

To change the default behaviour of the URLs to be more SEO friendly and not show .php in the URL (i.e. enable so-called fancy URLs, described in INSTALL):

cp htaccess.sample .htaccess


Then I had to configure a VirtualHost under a subdomain, e.g. http://statusnet.yourdomain.com/install.php, or you can alternatively install and access it in a browser via http://your-domain.com/status/install.php.

An important note here is that you have to set the URLs via which StatusNet will be accessed before proceeding with the install; if you will be using HTTPS, now is the time to configure and test it before proceeding. Just be warned that if you don't set the URLs properly now and try to modify them later, you will get a lot of hard-to-solve issues which will cost you a lot of time and nerves.

If you want to enable Twitter bridging in StatusNet you will need to get Twitter consumer and secret keys; to get those you have to create a new application on https://dev.twitter.com/apps, after which you will be taken to a page containing the Consumer Key & Consumer Secret strings.
The StatusNet installation will auto-generate config.php; you can further edit it manually with a text editor. The content of my current StatusNet config.php is here.

Most important options to change are:

$config['daemon']['user'] = 'www-data';
$config['daemon']['group'] = 'www-data';

www-data is the user under which Apache runs by default on Debian Linux.

$config['site']['profile'] = 'private';

By default StatusNet will be set to run as private – i.e. it will be fitted for private use and posted messages will not be publicly visible. The possible options to choose between are:

$config['site']['profile'] = 'private';
$config['site']['profile'] = 'community';
$config['site']['profile'] = 'singleuser';
$config['site']['profile'] = 'public';

singleuser is pretty self-explanatory; setting the public option will open registration for any user on the internet – your network will probably quickly be filled with spam, so beware with this option. community will make StatusNet publicly visible, but registration will only be possible via user creation / invitation to join the network by the admin.

vi /var/www/status/config.php
$config['site']['fancy'] = true;

Then to enable twitter to statusnet bridge add to config.php

vi /var/www/status/config.php

addPlugin('TwitterBridge');
$config['twitter']['enabled'] = true;
$config['twitterimport']['enabled'] = true;
$config['avatar']['path'] = '/avatar';
$config['twitter']['consumer_key'] = 'XXXXXXXX';
$config['twitter']['consumer_secret'] = 'XXXXXXXX';
# disable sharing location by default
$config['location']['sharedefault'] = 'false';

Notice that I decided to disable the StatusNet sharedefault setting, because I don't have a lot of free space to provide to users. If you want to allow users to share files (and you have the space for that), you might want to set a maximum quota for uploaded files (to prevent your webserver from being DoSed – for example by too many huge uploads); here are some reasonable settings:
 

$config['attachments']['file_quota'] = 7000000;
$config['attachments']['thumb_width'] = 400;
$config['attachments']['thumb_height'] = 300;

 

If you want to get the best out of performance of your new statusnet microblogging service, after each modification of config.php be sure to run:

 

php scripts/checkschema.php

Running checkschema.php is also useful after adding new plugins, to check whether a plugin will throw an error.

Here are some extra useful config.php plugins to enable:
 

addPlugin('Gravatar', array());
#addPlugin('Textile');
addPlugin('InfiniteScroll');
addPlugin('Realtime');
addPlugin('Blog');
addPlugin('SiteNoticeInSidebar');
addPlugin('WikiHashtags');
addPlugin('SubMirror');
addPlugin('LinkPreview');
addPlugin('Blacklist');


If you expect to have a quickly growing user base, it is recommended to also check whether your MySQL is tuned with mysqltuner and optimize it for performance.

Another useful thing you might want to do is to increase the number of StatusNet avatars shown in the 'following' / 'followers' / 'groups' sections on your profile page, by editing

lib/groupminilist.php

and

lib/profileminilist.php

line 36 in both files.
To get the full list of possible variables that can be set in config.php run in terminal:

 php scripts/setconfig.php -a

If you happen to encounter some oddities and issues with StatusNet after installation, this is most likely due to some PHP hardening at compile time, some php.ini functions disabled for security, or some missing component described as a prerequisite in the StatusNet INSTALL file.

To debug such issues, enable StatusNet logging by adding to config.php:

$config['site']['logdebug'] = true;
$config['site']['logfile'] = '/var/log/statusnet.log';

By default the logs produced will be quite verbose and will contain plenty of information you probably don't need, leading to a lot of "noise". To get around this, there is the LogFilter plugin; for some reasonable logging, use in config.php:

addPlugin('LogFilter', array( 'priority' => array(LOG_ERR => true,
LOG_INFO => true,
LOG_DEBUG => false),
'regex' => array('/About to push/i' => false,
'/twitter/i' => false,
'/Successfully handled item/i' => false)
));

If you want to keep StatusNet logs, it is a good idea to rotate them periodically to keep them from growing too big. To do this, create a new file /etc/logrotate.d/statusnet with the following content:

/var/log/statusnet/*.log {
daily
missingok
rotate 7
compress
delaycompress
notifempty
create 770 www-data www-data
sharedscripts
postrotate
/path/to/statusnet/scripts/stopdaemons.sh > /dev/null
/path/to/statusnet/scripts/startdaemons.sh > /dev/null
endscript
}
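
To verify the new logrotate rules parse correctly without actually rotating anything, a dry run can be done first – a quick sketch, assuming the stock logrotate binary:

logrotate -d /etc/logrotate.d/statusnet

The -d (debug) switch only prints what logrotate would do, so it is safe to run on a production box.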

You will probably want to add new links next to the StatusNet main navigation links for logged-in users; this is possible by adding new lines to

lib/primarynav.php

and

lib/secondarynav.php

You will have to add a PHP snippet like:

 $this->action->menuItem('http://www.pc-freak.net/blog/',
                              _m('MENU','Pc-Freak.Net Blog'),
                              _('A pC Freak Blog'),
                              false,
                              'nav_pcfreak');

Once you're done with the installation, make sure you change the permissions of, or move, install.php from /var/www/status, otherwise someone might overwrite your config.php and mess up your installation:

chmod 000 /var/www/status/install.php

There are plenty of other things to do with StatusNet (support for communication via the Jabber XMPP protocol, notification via SMS etc.). There are also some plugins to add more StatusNet functionality.


Enjoy micro blogging ! 🙂

Apache increase loglevel – Increasing Apache logged data for better statistic analysis

Tuesday, July 1st, 2014

On development (QA) systems, where developers deploy new untested code exposing Apache or related Apache modules to unexpected bugs, it is often necessary to increase the Apache loglevel to log everything. This is done with:

 

LogLevel debug

LogLevel warn is the common logging option for Apache production webservers.


LogLevel warn


in httpd.conf is the default Apache log setting. For some servers that produce too many logs, this could be changed to LogLevel crit, which will make the webserver log only errors of critical importance. The LogLevel debug setting is very useful when you have to debug issues with non-working (failing) SSL certificates – it will give you the whole SSL handshake dump and the reason for its failure.

You should be careful before deciding to increase the server log level, especially on production servers.
An increased logging level puts a higher load on the Apache webserver, and produces gigabytes of mostly useless logs that could quickly fill up all free disk space.
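
If the webserver is Apache 2.4 or newer, a possible compromise is to raise verbosity only for the module being debugged while keeping the global level low – a sketch (ssl:debug here is just an example module):

LogLevel warn ssl:debug

This keeps general logging at warn but logs mod_ssl at debug level, which limits the extra log volume.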

If you would like to increase the data logged in access.log / error.log because you want to perform versatile statistical analysis of daily hits, unique visits, top landing pages etc. with Webalizer, Analog or AWStats, change the LogFormat and CustomLog directives from common to combined.

By default Apache logs with the following LogFormat and CustomLog:
 

LogFormat "%h %l %u %t "%r" %>s %b" common
CustomLog logs/access_log common


Which will be logging in access.log format:

 

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326


Change it to something like:

 

LogFormat "%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"" combined CustomLog log/access_log combined


This would produce logs like:

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

 

Using the Combined Log Format produces all the information logged by CustomLog … common, and additionally logs the Referer and User-Agent headers, which indicate where users were before visiting your website page and which browsers they used. You can read more on tailoring custom Apache logging on Apache's website.

Finding spam sending php scripts on multiple sites servers – Tracing and stopping spammer PHP scripts

Monday, April 14th, 2014

Spam has become a severe issue for administrators – not only for mail server admins but also for web hosting admins. Even the most secure, spam-protected mail server can get affected by spam, due to the fact it is configured to relay mail from other servers acting as web hosting sites.

Webhosting companies almost always suffer seriously from spam issues, and often their mail servers get blocked (entered into spam blacklists) because of irresponsible clients uploading, let's say, an old vulnerable Joomla, a WordPress without Akismet or a proper spam-handling plugin, a CMS which is not frequently supported / updated, or insecure custom client PHP code.

What I mean is: shared server A is often configured to send mail via (mail) server B. And often some of the many websites / scripts hosted on server A get hacked, a spam form is uploaded, and tons of spam start being shipped via mail server B.

Of course, at the mail server level it is possible to configure a delay between sent mails and adopt a couple of policies to reduce spam, but the spam protection issue can't be completely solved there, so the admin of such a server is forced to periodically keep an eye on what mail is sent from the hosting server to the mail server.
 


If you happen to be one of those Linux (Unix) webhosting admins who find a few thousand spammer emails in the mail server logs or the e-mail server queue and can't seem to find what is causing it – because there are multiple shared-hosting websites using mainly PHP + SQL and you can't identify which PHP script is spamming just by reviewing the Apache logs / PHP files – what you can do is make use of:

PHP mail.log directive

A precious tool in tracking spam issues is the PHP mail.log directive; the mail log parameter is available since PHP version 5.3.0.
The PHP mail.log parameter records all calls to the PHP mail() function, including the exact PHP headers, line numbers and path to the script initiating the mail.

Here is how it is used:
 

1. Create empty PHP Mail.log file

touch /var/log/phpmail.log

The file has to be writable by the same user Apache is running as; in case of Apache with SuPHP the file has to be writable by all users.

On Debian, Ubuntu Linux:

chown www-data:www-data /var/log/phpmail.log

On CentOS, RHEL, SuSE phpmail.log has to be owned by httpd:

chown httpd:httpd /var/log/phpmail.log

On some other distros it might be chown nobody:nobody etc. depending on the user with which Apache server is running.

 

2. Add the following lines to the php.ini configuration

mail.add_x_header = On
mail.log = /var/log/phpmail.log

This PHP directive instructs PHP to log the complete outbound mail header sent by the mail() function, containing the UID of the web server or PHP process and the name of the script that sent the email;
 

(X-PHP-Originating-Script: 33:mailer.php)


i.e. it will make php start logging to phpmail.log stuff like:
 

 

mail() on [/var/www/pomoriemonasteryorg/components/com_xmap/2ktdz2.php:1]: To: info@globalremarketing.com.au -- Headers: From: "Priority Mail" <status_93@pomoriemonastery.org> X-Mailer: MailMagic2.0 Reply-To: "Priority Mail" <status_93@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-13972215105347E886BADB5"
mail() on [/var/www/pomoriemonasteryorg/components/com_xmap/2ktdz2.php:1]: To: demil7167@yahoo.com -- Headers: From: "One Day Shipping" <status_44@pomoriemonastery.org> X-Mailer: CSMTPConnectionv1.3 Reply-To: "One Day Shipping" <status_44@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-13972215105347E886BD344"
mail() on [/var/www/pomoriemonasteryorg/components/com_xmap/2ktdz2.php:1]: To: domainmanager@nadenranshepovser.biz -- Headers: From: "Logistics Services" <customer.id86@pomoriemonastery.org> X-Mailer: TheBat!(v3.99.27)UNREG Reply-To: "Logistics Services" <customer.id86@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-13972215105347E886BF43E"
mail() on [/var/www/pomoriemonasteryorg/components/com_xmap/2ktdz2.php:1]: To: bluesapphire89@yahoo.com -- Headers: From: "Priority Mail" <status_73@pomoriemonastery.org> X-Mailer: FastMailer/Webmail(versionSM/1.2.6) Reply-To: "Priority Mail" <status_73@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-13972215105347E886C13F2"

 

On Debian / Ubuntu Linux to enable this logging, exec:

echo 'mail.add_x_header = On' >> /etc/php5/apache2/php.ini
echo 'mail.log = /var/log/phpmail.log' >> /etc/php5/apache2/php.ini


I find it useful to symlink /etc/php5/apache2/php.ini to /etc/php.ini – it is much easier to remember the php.ini location, plus it is the standard location on many RPM-based distros.

ln -sf /etc/php5/apache2/php.ini /etc/php.ini

Or another "Debian recommended way" to enable mail.add_x_header logging on Debian is via:

echo 'mail.add_x_header = On' >> /etc/php5/conf.d/mail.ini
echo 'mail.log = /var/log/phpmail.log' >> /etc/php5/conf.d/mail.ini

On RPM-based Linux (RHEL, CentOS, SuSE), issue:

echo 'mail.add_x_header = On' >> /etc/php.ini
echo 'mail.log = /var/log/phpmail.log' >> /etc/php.ini

3. Restart Apache

On Debian / Ubuntu based linuces:

/etc/init.d/apache2 restart

P.S. Normally, to restart Apache without interrupting client connections, the graceful option can be used, i.e. instead of restarting do:

/etc/init.d/apache2 graceful

On RPM-based CentOS, Fedora etc.:

/sbin/service httpd restart

or

apachectl graceful
 

4. Reading the log

To review in real time exactly which PHP scripts are sending tons of spam, tail it:

tail -f /var/log/phpmail.log

 

mail() on [/var/www/remote-admin/wp-includes/class-phpmailer.php:489]: To: theosfp813@hotmail.com — Headers: Date: Mon, 14 Apr 2014 03:27:23 +0000 Return-Path: wordpress@remotesystemadministration.com From: WordPress Message-ID: X-Priority: 3 X-Mailer: PHPMailer (phpmailer.sourceforge.net) [version 2.0.4] MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain; charset="UTF-8"
mail() on [/var/www/pomoriemonasteryorg/media/rsinstall_4de38d919da01/admin/js/tiny_mce/plugins/inlinepopups/skins/.3a1a1c.php:1]: To: 2070ccrabb@kiakom.net — Headers: From: "Manager Elijah Castillo" <elijah_castillo32@pomoriemonastery.org> X-Mailer: Mozilla/5.0 (Windows; U; Windows NT 5.0; es-ES; rv:1.9.1.7) Gecko/20100111 Thunderbird/3.0.1 Reply-To: "Manager Elijah Castillo" <elijah_castillo32@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-1397463670534B9A76017CC"
mail() on [/var/www/pomoriemonasteryorg/media/rsinstall_4de38d919da01/admin/js/tiny_mce/plugins/inlinepopups/skins/.3a1a1c.php:1]: To: 20wmwebinfo@schools.bedfordshire.gov.uk — Headers: From: "Manager Justin Murphy" <justin_murphy16@pomoriemonastery.org> X-Mailer: Opera Mail/10.62 (Win32) Reply-To: "Manager Justin Murphy" <justin_murphy16@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-1397463670534B9A7603ED6"
mail() on [/var/www/pomoriemonasteryorg/media/rsinstall_4de38d919da01/admin/js/tiny_mce/plugins/inlinepopups/skins/.3a1a1c.php:1]: To: tynyrilak@yahoo.com — Headers: From: "Manager Elijah Castillo" <elijah_castillo83@pomoriemonastery.org> X-Mailer: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; pl; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4 Reply-To: "Manager Elijah Castillo" <elijah_castillo83@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-1397463670534B9A7606308"
mail() on [/var/www/pomoriemonasteryorg/media/rsinstall_4de38d919da01/admin/js/tiny_mce/plugins/inlinepopups/skins/.3a1a1c.php:1]: To: 2112macdo1@armymail.mod.uk — Headers: From: "Manager Justin Murphy" <justin_murphy41@pomoriemonastery.org> X-Mailer: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; pl; rv:1.9.1.9) Gecko/20100317 Thunderbird/3.0.4 Reply-To: "Manager Justin Murphy" <justin_murphy41@pomoriemonastery.org> Mime-Version: 1.0 Content-Type: multipart/alternative;boundary="———-1397463670534B9A76086D1"

 

As you can see, there are junky spam mails sent via some spammer script uploaded under the name .3a1a1c.php, so to stop the dirty bastard I deleted the script:

rm -f /var/www/pomoriemonasteryorg/media/rsinstall_4de38d919da01/admin/js/tiny_mce/plugins/inlinepopups/skins/.3a1a1c.php
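
If more than one script is abusing mail(), it helps to aggregate the log and rank the offending scripts by the number of calls. A rough one-liner sketch, assuming the phpmail.log format shown above:

grep -o 'on \[[^]]*\]' /var/log/phpmail.log | sort | uniq -c | sort -rn | head

The scripts at the top of the list are the first candidates to inspect or remove.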

It is generally useful to also search for all hidden .php files inside directories storing multiple virtual host websites, as a weirdly named hidden .php file is often a sure indicator of either a PHP shell script-kiddie tool or a spammer form.

Here is how to Find all Hidden Perl / PHP scripts inside /var/www:

find . -iname '.*.php'
./blog/wp-content/plugins/fckeditor-for-wordpress-plugin/ckeditor/plugins/selection/.0b1910.php
./blog/wp-content/plugins/fckeditor-for-wordpress-plugin/filemanager/browser/default/.497a0c.php
./blog/wp-content/plugins/__MACOSX/feedburner_feedsmith_plugin_2.3/._FeedBurner_FeedSmith_Plugin.php

find . -iname '.*.pl*'

….

Reviewing the complete list of all hidden files is also often useful for spotting other cracker leftovers:

 find . -iname ".*"

Debugging via /var/log/phpmail.log is useful, but is recommended mostly on development and staging (QA) environments. Having it enabled on a production server with high amounts of mail sent via PHP scripts, or on a dedicated shared-site server, could cause performance issues, the hard disk could quickly fill up, and most importantly it could be a severe security hole, as information from PHP scripts could potentially be exposed to external parties.

Tracking multiple log files in real time in Linux console / terminal (MultiTail)

Monday, July 29th, 2013

[Screenshot: MultiTail on Debian GNU/Linux viewing Apache access and error logs in a shared screen]
Whether you have to administer Apache, Nginx, Lighttpd, or any other kind of daemon which interactively logs user requests or errors, you probably already know the tail command well – tail -f /var/log/apache2/access.log is something a webserver Linux admin can't live without. Sometimes, however, you have a number of VirtualHosts (domains), each configured to log site activity in a separate log file. One solution to the problem is to use GNU Screen (a terminal emulator) to launch multiple screen sessions and run separate tail -f /var/log/apache2/domain1/access.log, tail -f /var/log/apache2/domain2/access.log, etc. This however is a bit of a hack, and unless you configure screen to show multiple windows in one virtual terminal (tty, or vty in GNOME), you can't really see the output simultaneously in one split window.

Here is where multitail comes in handy. MultiTail is a tool to visualize in real time the output of multiple logs (tails) in one shared terminal window. MultiTail uses the ncurses library, used by a bunch of other useful tools like Midnight Commander, so the output is colorful and very nice looking.

Here is MultiTail package description on Debian Linux:

linux:~# apt-cache show multitail|grep -i description -A 1
Description-en: view multiple logfiles windowed on console
 multitail lets you view one or multiple files like the original tail

Description-md5: 5e2f688efb214b063bdc418a705860a1
Tag: interface::text-mode, role::program, scope::utility, uitoolkit::ncurses,
 

MultiTail is available across most Linux distributions; to install on Debian / Ubuntu / Mint etc. Linux:

debian:~# apt-get install --yes multitail
...

On recent Fedora / RHEL / CentOS and other RPM-based Linuxes, to install:

[root@centos ~]# yum -y install multitail
...

On FreeBSD multitail is available to install from ports:

freebsd# cd /usr/ports/sysutils/multitail
freebsd# make install clean
...

Once installed, to display records from multiple files – let's say an Apache domain's access.log and error.log:

debian:~# multitail -f /var/log/apache2/access.log /var/log/apache2/error.log

It has very extensive help, invoked by simply pressing h while it is running.

[Screenshot: MultiTail viewing two log files in a shared screen on Debian]

Even better, multitail ships with integrated color schemes for the log files of most popular Linux services.

[Screenshot: MultiTail color schemes for different log formats on Debian GNU/Linux]
The list of supported MultiTail log color schemes as of the time of writing this article is:

acctail, acpitail, apache, apache_error, argus, asterisk, audit, bind, boinc, boinctail, checkpoint, clamav, cscriptexample, dhcpd, errrpt, exim, httping, ii, inn, kerberos, lambamoo, liniptfw, log4j, mailscanner, motion, mpstat, mysql, nagtail, netscapeldap, netstat, nttpcache, ntpd, oracle, p0f, portsentry, postfix, pptpd, procmail, qmt-clamd, qmt-send, qmt-smtpd, qmt-sophie, qmt-spamassassin, rsstail, samba, sendmail, smartd, snort, spamassassin, squid, ssh, strace, syslog, tcpdump, vmstat, vnetbr, websphere, wtmptail

To tell it which log color scheme to use from the command line, use:

debian:~# multitail -Csapache /var/log/apache2/access.log /var/log/apache2/error.log

[Screenshot: MultiTail with the Apache color scheme on Debian Linux]

A useful feature is to run a command whose output is displayed in a separate window while still following the log output, i.e.:

[root@centos:~]# multitail /var/log/httpd.log -l "netstat -nat"
...

Multitail can also merge the output from several files into one window, while some other log or command output is displayed in a second window. To merge the output of the Apache access.log and error.log:

debian:~# multitail /var/log/apache2/access.log -I /var/log/apache2/error.log

When merging the output of two log files into one window, it is useful to display each file's output in a different color for the sake of readability.

For example:
 

debian:~# multitail -ci green /var/log/apache/access.log -ci red -I /var/log/apache/error.log

[Screenshot: MultiTail merged Apache access and error logs on Debian Linux]

To display the output of 3 log files in 3 separate shared windows in the console, use:

linux:~# multitail -s 2 /var/log/syslog /var/log/apache2/access.log /var/log/apache2/error.log

For some more useful examples, check out MultiTail's official examples page.
There are plenty of other useful things to do with multitail – for more, RTFM 🙂

Linux PHP Disable chmod() and chown() functions for better Apache server security

Monday, July 15th, 2013

I have to administer a few inherited Linux servers with Ubuntu and Debian Linux. The servers host mainly websites with regularly un-updated Joomlas and some custom-developed websites that were written pretty insecurely. To mitigate hacked websites I have already disabled some of the most insecure functions like system(), eval() etc. – I literally followed my previous tutorial PHP Webhosting security disable exec();, system();, open(); and eval();
Still, in the logs I keep seeing stuff like:
 

[error] [client 66.249.72.100] PHP Warning:  mkdir(): No such file or directory in /var/www/site/plugins/system/jfdatabase/intercept.jdatabasemysql.php on line 161

Hence, to prevent the PHP mkdir() and chown() functions from being active, I had to turn on safe_mode in /etc/php5/apache2/php.ini. For some reason whoever configured Apache had left it off:

safe_mode = on

Hopefully disabling these functions will keep cracker bot scripts from creating weird directory structures on the HDD or using them as a means to DoS the server by overflowing its filesystem.
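
As a side note: safe_mode is deprecated in PHP 5.3 and removed in 5.4, so on newer PHP versions a similar effect can be achieved with the disable_functions directive instead. A minimal sketch – the exact function list is an assumption and should be adjusted to what the hosted sites really need:

; in php.ini - block the filesystem-permission related functions (example list)
disable_functions = chmod,chown,chgrp,mkdir

After changing php.ini, reload Apache for the setting to take effect.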

Hope this helps others stabilize their servers too. Enjoy! 🙂

Fixing Clamav error: “WARNING: Can’t download daily.cvd from database.clamav.net”

Thursday, June 6th, 2013

On one of the Debian Squeeze servers where I have a running Qmail server, I hadn't checked the logs for a long time, because Qmail is configured and everything runs smoothly. Just today while checking the logs, I noticed in /var/log/clamav/clamav.log that the ClamAV database fails to be updated with an error, e.g.:

qmail:~# tail -n 28 /var/log/clamav/clamav.log

ClamAV update process started at Thu Jun 6 20:47:14 2013
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
WARNING: getpatch: Can't download daily-16682.cdiff from db.local.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from db.local.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from db.local.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from db.local.clamav.net
ERROR: getpatch: Can't download daily-16682.cdiff from db.local.clamav.net
WARNING: Incremental update failed, trying to download daily.cvd
ERROR: Can't download daily.cvd from db.local.clamav.net
Giving up on db.local.clamav.net…
ClamAV update process started at Thu Jun 6 20:47:15 2013
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: Incremental update failed, trying to download daily.cvd
WARNING: Can't download daily.cvd from database.clamav.net
Trying again in 5 secs…
ClamAV update process started at Thu Jun 6 20:47:20 2013
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
WARNING: Incremental update failed, trying to download daily.cvd
WARNING: Can't download daily.cvd from database.clamav.net

On the host, freshclam is configured to run in the background as a service, i.e.:

qmail:~# ps ax|grep -i fresh|grep -v grep
13615 ? Ss 0:02 /usr/bin/freshclam -d --quiet
 

I stopped ClamAV and tried running it again through its init script:

qmail:~# /etc/init.d/clamav-freshclam restart

The error kept recurring, so I decided to kill the daemon and try running freshclam manually:

qmail:~# killall -9 freshclam

qmail:~# freshclam

I got the same error again:
 

Thu Jun 6 16:46:20 2013 -> ClamAV update process started at Thu Jun 6 16:46:20 2013
Thu Jun 6 16:46:20 2013 -> main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
Thu Jun 6 16:46:20 2013 -> WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
Thu Jun 6 16:46:20 2013 -> WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
Thu Jun 6 16:46:20 2013 -> WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
Thu Jun 6 16:46:20 2013 -> WARNING: getpatch: Can't download daily-16682.cdiff from database.clamav.net
Thu Jun 6 16:46:20 2013 -> ERROR: getpatch: Can't download daily-16682.cdiff from database.clamav.net
Thu Jun 6 16:46:20 2013 -> WARNING: Incremental update failed, trying to download daily.cvd
Thu Jun 6 16:46:20 2013 -> ERROR: Can't download daily.cvd from database.clamav.net
Thu Jun 6 16:46:20 2013 -> Giving up on database.clamav.net…
Thu Jun 6 16:46:20 2013 -> Update failed. Your network may be down or none of the mirrors listed in /etc/clamav/freshclam.conf is working. Check http://www.clamav.net/support/mirror-problem for possible reasons.

The solution was to delete the ClamAV database file daily.cvd and then run another freshclam ClamAV virus DB update:

qmail:~# rm -f /var/lib/clamav/daily.cvd
qmail:~# freshclam
ClamAV update process started at Thu Jun 6 22:07:21 2013
main.cvd is up to date (version: 54, sigs: 1044387, f-level: 60, builder: sven)
Downloading daily.cvd [100%]
daily.cvd updated (version: 17309, sigs: 1302714, f-level: 63, builder: neo)
bytecode.cld is up to date (version: 214, sigs: 41, f-level: 63, builder: neo)
Database updated (2347142 signatures) from db.local.clamav.net (IP: 195.222.33.229)

Finally, to make freshclam run as a daemon again, I restarted the init script:

qmail:~# /etc/init.d/clamav-freshclam restart
[ ok ] Stopping ClamAV virus database updater: freshclam.
[ ok ] Starting ClamAV virus database updater: freshclam.
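
To confirm the updater daemon is back in the background, the same process check from earlier can be repeated – a quick sketch:

qmail:~# ps ax|grep -i fresh|grep -v grep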

 

Fixing Qmail 451 qq temporary problem (#4.3.0) / @4000000050587780174c60dc status: qmail-todo stop processing asap / status: exiting

Wednesday, September 19th, 2012

I'm in the process of installing a brand new Qmail mail (SMTP) server, following the updated QmailRocks guide: Thibs' QmailRocks install guide for Debian 6.0 Squeeze.
The install went smoothly so far, and I've already been doing this installation for about 5 hours or so. I'm done with the basic install and am following Thibs' instructions to implement the validrcptto feature in Qmail.

Anyone who works with Qmail should already know that the lack of validrcptto causes tons of SPAM problems and useless Qmail load, because Qmail attempts to deliver mail to non-existing mailboxes on the local mail server….


Fixing this whole mess is done with validrcptto. I myself have installed validrcptto numerous times, and almost every time I ended up in some kind of mess before fixing it once and for all; this time, of course (quite traditionally), the "story" repeated itself to piss me off for a while 🙂

After following the steps literally as described in Thibs' great Qmail install tutorial, I ended up with a Qmail mail server unable to deliver e-mails properly.

To debug why mails are not properly delivered by the mail server I used telnet:


root@qmail-host:/var/qmail/control# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP
HELO localhost
250 This is Mail Pc-Freak.NET
MAIL FROM:<hipo@pc-freak.net>
250 ok
RCPT TO:<hipo@pc-freak.net>
250 ok
DATA
354 go ahead
asdfdsfafsd
.
451 qq temporary problem (#4.3.0)

Some time back, while configuring another fresh Qmail install, I ended up with exactly the same delivery error – I took the time to document how I fixed that weird qq temporary issue here.

As I thought one error in "normal" software should correspond to one cause, I read my previous post and checked closely everything that had been wrong back then. Guess what – this time it wasn't due to a non-running (missing) clamav-daemon. Still, though this was not the issue, it partially pointed me to the cause (a problem with qmail-scanner.pl / spamd / pyzor / razor / dcc or whatever part of this overall complexity…).

The first logical thing was to check the logs. In /var/log/qmail/qmail-smtpd/current everything was looking good; my log looked like so:


root@qmail-host:/# tail -n 10 /var/log/qmail/qmail-smtpd/current
@40000000505877b91ab3aba4 tcpserver: end 23727 status 0
@40000000505877b91ab3af8c tcpserver: status: 0/30
@40000000505877f6273acefc tcpserver: status: 1/30
@40000000505877f6273ba9bc tcpserver: pid 23882 from 127.0.0.1
@40000000505877f6273f8dd4 tcpserver: ok 23882 mail.pc-freak.net:127.0.0.1:25 localhost:127.0.0.1::46769
@40000000505877fd1a3c647c qmail-smtpd[23882]: MFCHECK pass [127.0.0.1] pc-freak.net
@40000000505877fd1a3c935c qmail-smtpd[23882]: MAIL FROM:
@400000005058780123ba5eb4 qmail-smtpd[23882]: RCPT TO:

@4000000050587ccd179210b4 tcpserver: end 23882 status 256
@4000000050587ccd1792149c tcpserver: status: 0/30
root@qmail-host:/# tail -n 5 /var/log/qmail/qmail-smtpd/current
@40000000505877fd1a3c647c qmail-smtpd[23882]: MFCHECK pass [127.0.0.1] pc-freak.net

The second guess was to check /var/log/qmail/qmail-send/current; there I found errors like:


root@qmail-host:/# tail -n 10 /var/log/qmail/qmail-send/current
@4000000050584f8e0b799194 status: local 0/10 remote 0/120
@4000000050584f8e0b79957c end msg 9610091
@4000000050584fde2f5ebf44 status: qmail-todo stop processing asap
@4000000050584fde2f5ec32c status: exiting
@4000000050584fde32d2a884 status: local 0/10 remote 0/120
@4000000050584fe8136a44ac status: qmail-todo stop processing asap
@4000000050584fe8136a4894 status: exiting
@4000000050584fe8138b884c status: local 0/10 remote 0/120
@4000000050585014232903c4 status: qmail-todo stop processing asap
@4000000050585014232907ac status: exiting
@40000000505850142363e5fc status: local 0/10 remote 0/120
@40000000505851030773efa4 status: qmail-todo stop processing asap
@40000000505851030774320c status: exiting
@400000005058510307b5f214 status: local 0/10 remote 0/120

As you can see yourself, the errors don't give any insight into what the reason could be, so I checked /var/log/mail.log, just to find more errors there:


Sep 18 16:22:04 qmail-host qmail-scanner-queue.pl: X-Qmail-Scanner-2.10st:[pcfreak134797452279623171]

d_m: output spotted from /usr/bin/reformime -x/var/spool/qscan/tmp/qmail-host/I134797452279623171/ (sh: /usr/bin/reformime: not found#012) - that shouldn't happen!

As the error points out, the whole issue is caused by a missing binary – /usr/bin/reformime. Logically, I had to install reformime, so I did a quick apt-cache search reformime and saw reformime is part of the maildrop deb package. I thought it was installed, but after checking with:


dpkg -l |grep -i maildrop

I realized it was missing and installed it:


qmail-host:/# apt-get --yes install maildrop
....
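
To double-check that the missing binary is now in place and really comes from the maildrop package, something like the following can be used (a quick sketch):

qmail-host:/# dpkg -L maildrop | grep reformime
qmail-host:/# which reformime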

That's all; after a Qmail restart, i.e.:


qmail-host:/# qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.

The qq temporary error was solved, and from there on Qmail received and sent mails normally with validrcptto enabled. Cheers 😉

How to disable tidy HTML corrector and validator to output error and warning messages

Sunday, March 18th, 2012

I've noticed in /var/log/apache2/error.log, on one of the Debian servers I manage, a lot of warnings and errors produced by tidy – the HTML syntax checker and reformatter program.

There were actually plenty of frequently appearing messages in the log, like:

...
To learn more about HTML Tidy see http://tidy.sourceforge.net
Please fill bug reports and queries using the "tracker" on the Tidy web site.
Additionally, questions can be sent to html-tidy@w3.org
HTML and CSS specifications are available from http://www.w3.org/
Lobby your company to join W3C, see http://www.w3.org/Consortium
line 1 column 1 - Warning: missing <!DOCTYPE> declaration
line 1 column 1 - Warning: plain text isn't allowed in <head> elements
line 1 column 1 - Info: <head> previously mentioned
line 1 column 1 - Warning: inserting implicit <body>
line 1 column 1 - Warning: inserting missing 'title' element
Info: Document content looks like HTML 3.2
4 warnings, 0 errors were found!
...

I did a quick investigation into where these messages were being logged from in error.log, and discovered a few .php scripts in one of the websites containing the tidy string.
I used the Linux find + grep commands to find the "tidy" string in all PHP files, like so:

server:~# find . -iname '*.php' -exec grep -rli 'tidy' '{}' \;
./new_design/modules/index.mod.php
./modules/index.mod.php
./modules/index_1.mod.php
./modules/index1.mod.php

Opening the files with vim to check how tidy is invoked revealed tidy calls like:

exec('/usr/bin/tidy -e -ashtml -utf8 '.$tmp_name,$rett);

As you see, the PHP programmers who wrote this website made a big tidy mess. Instead of using PHP 5's tidy module, they hard-coded the external tidy command to be invoked via PHP's exec().
This is extremely bad practice, since it spawns the command via a pseudo, limited Apache shell.
I've notified them about the issue, but I don't know when the external tidy calls will be rewritten.
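
When they are eventually rewritten, the call could look roughly like the sketch below, using PHP's tidy extension instead of exec() (this assumes the php5-tidy extension is installed and that $html holds the markup to clean – the names are illustrative):

// Minimal sketch with the PHP tidy extension instead of exec()
$config = array('output-xhtml' => true, 'show-warnings' => false);
$tidy = new tidy();
$tidy->parseString($html, $config, 'utf8');
$tidy->cleanRepair();
$clean_html = tidy_get_output($tidy);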

Until the external tidy invocations are rewritten to use the PHP tidy module, I decided to at least silence the tidy warnings and errors output.

To remove the warning and error messages I've changed:

exec('/usr/bin/tidy -e -ashtml -utf8 '.$tmp_name,$rett);

to:

exec('/usr/bin/tidy --show-warnings no --show-errors no -q -e -ashtml -utf8 '.$tmp_name,$rett);

The meaning of the extra switches is as follows:

-q – instructs tidy to produce quiet output
-e – show only errors and warnings
--show-warnings no and --show-errors no – completely disable warning and error output

From then on, tidy no longer logs junk messages in error.log. Not logging all these useless warnings and errors has a positive effect on overall server performance, especially when the scripts running /usr/bin/tidy are called as frequently as 1000 times per second or more.