Posts Tagged ‘Cannot’

Fix “tar: Error exit delayed from previous errors” and its cause and solution

Monday, August 18th, 2014


tar: Error exit delayed from previous errors

This is a very common error encountered when creating archives (or backing up server configurations / websites / SQL binary data). The error message is not very descriptive, and even when creating the archive in verbose mode in order to see the files added to the archive in "real time", with let's say:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


it is pretty hard to track down exactly which file in the backup is producing the Error exit delayed from previous errors; this is especially the case when adding to the archive directories containing millions of tiny, few-kilobyte-sized files. Many novice or incautious Linux admins might simply ignore the warning if they are in a hurry or have excessive work to be done, as a .tar.gz backup is still produced, and when uncompressed most of the files are there, so the backup error seems like no big issue.

However, as backing up files is vital, especially when moving files off a server about to be decommissioned, you have to be extra careful and make the backup properly, e.g. figure out the cause of the error. To do so, log the full output of the tar operation with the tee command (note the 2>&1 – tar prints its errors to stderr, so without it they would not end up in the log), like so:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites/ /home/sql 2>&1 | tee /tmp/backup_tar_full_output.log
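
If the backup touches millions of files, that log can get big quickly; rather than paging through it by hand, you can also grep it for the usual failure keywords. A minimal sketch, assuming the log path from the command above:

# show only tar's complaints from the full verbose log
grep -iE 'error|cannot|permission denied' /tmp/backup_tar_full_output.log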

You can also open the log and look for errors with less's search (press / and search for the "error" and "permission den" keywords), which should point you to what is causing the failure. In cases when millions of files are archived, the log grows really big and is hard to process, therefore a much quicker way to understand what's happening is to redirect the verbose file listing to a log with > (shell redirect), so that only the errors (which tar prints to stderr) remain visible in the shell:
 

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql > /tmp/backup_failure-cause.log

 

tar: www.ur-website.com-http/2.0.63/conf/tnsnames.ora.20080918: Cannot open: Permission denied
tar: Removing leading `/' from member names

The error clearly indicates the cause: lack of permissions to read the file tnsnames.ora.20080918. The solution is either to grant permissions to the non-root user with chmod / chown (in my case, grant read permissions to the user hipo with which tar is run), or to run the website backup again as the superuser. I usually just run it as root to avoid tampering with the original permissions, e.g. to solve the error, either:

$ su root
# tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql

Or even better, if sudo is installed and the user is added to the /etc/sudoers file:

$ sudo tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql
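
If running the backup as root is not an option, granting the backup user read access works too; a hedged sketch (the username hipo comes from this example, and the directory path is only an assumed location of the unreadable file from the error above):

# give the backup user ownership of the problematic tree
chown -R hipo /home/websites/www.ur-website.com-http
# or, without changing ownership, add read permission for everyone
chmod -R a+r /home/websites/www.ur-website.com-http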


Though permission errors are the most frequent reason for:

tar: Error exit delayed from previous errors

you should keep in mind that in some cases the error might also be caused by a failing member disk of a RAID array, or by a single failing hard drive on systems that are not in any RAID array.
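
If you suspect a dying disk rather than a permissions problem, a quick hedged check (assuming smartmontools is installed and the drive is /dev/sda) is:

# look for kernel-level I/O errors and query the drive's SMART health status
dmesg | grep -i 'i/o error'
smartctl -H /dev/sda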

 

How to solve “[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/” – Error Cause and Solution

Thursday, January 5th, 2012

While configuring a JWChat domain, I came across an error:

pcfg_openfile: unable to check htaccess file, ensure it is readable

The exact error I got in /var/log/apache2/error.log looked like so:

[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/

The error message suggests /var/lib/ejabberd/.htaccess is missing or not readable, however after checking I saw that .htaccess existed and was readable:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess

At first glimpse the message seems misleading and untrue, however when I switched to the www-data user (the user Apache runs as on Debian), I figured out that the complaint about unreadability is exactly correct:

www-data@debian:$ ls -al /var/lib/ejabberd/.htaccess
ls: cannot access /var/lib/ejabberd/.htaccess: Permission denied

This permission denied error was quite strange, especially considering the .htaccess is readable by all users:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess
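
When a world-readable file still returns Permission denied, it is usually a directory somewhere along the path that is missing the execute (search) bit. A handy hedged check is namei from util-linux, which lists the permissions of every component of the path:

debian:~# namei -l /var/lib/ejabberd/.htaccess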

After a thorough look at what might be going wrong, thankfully I figured it out. The whole issue was caused by wrong permissions on /var/lib/ejabberd/ . As you can see below, the executable (directory search) flag for all users (including Apache's www-data) is missing:

debian:/var/lib# ls -ld /var/lib/ejabberd/
drw-r--r-- 3 ejabberd ejabberd 4096 2012-01-05 07:45 /var/lib/ejabberd/

Solving the error is hence as easy as adding the +x flag to /var/lib/ejabberd :

debian:/var/lib# chmod +x /var/lib/ejabberd

Another way to fix the error is to chmod the directory which holds the .htaccess to 755:
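
debian:/var/lib# chmod 755 /var/lib/ejabberd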

From now onwards the pcfg_openfile: unable to check htaccess file, ensure it is readable error is no more 😉

How to disable IPv6 on Debian / Ubuntu / CentOS and RHEL Linux

Friday, December 9th, 2011

I have a few servers on which the IPv6 protocol got automatically enabled (IPv6 is enabled by default on Debian, as well as on most recent Linux distributions nowadays).

Disabling the IPv6 network protocol on Linux, if it is not used, has 2 reasons:

1. Security (it's a well known security practice to disable anything not used on a server).
Besides that, IPv6 has been known for a few critical security vulnerabilities which have historically affected the Linux kernel.
2. Performance (sometimes disabling IPv6 can have a positive impact on IPv4, especially on heavy-traffic network servers).
I've read people claiming that disabling IPv6 improves DNS performance, however since these are just claims and I did not check it personally, I cannot positively confirm this.

Disabling IPv6 on all GNU / Linuxes can be achieved by changing the kernel sysctl setting net.ipv6.conf.all.disable_ipv6. By default net.ipv6.conf.all.disable_ipv6 equals 0, which means IPv6 is enabled, hence to disable IPv6 I issued:

server:~# sysctl -w net.ipv6.conf.all.disable_ipv6=1

To set it permanently on system boot I put the setting also in /etc/sysctl.conf :

server:~# echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf
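
To apply the settings from /etc/sysctl.conf immediately without a reboot, you can also reload them:

server:~# sysctl -p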

The above-described method should work on most Linux kernels with version > 2.6.27; in particular, it should work 100% on recent versions of Fedora, CentOS, Debian and Ubuntu.

To disable the IPv6 protocol on Debian Lenny it is necessary to blacklist the ipv6 kernel module in /etc/modprobe.d/blacklist by issuing:

echo 'blacklist ipv6' >> /etc/modprobe.d/blacklist

On Fedora / CentOS there is another universal "Red Hat" way to disable IPv6.

On them, disabling IPv6 is done by editing /etc/sysconfig/network and adding:

NETWORKING_IPV6=no
IPV6INIT=no
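
For the change to take effect on these RHEL-style systems, restart the networking service (or reboot); afterwards a quick check that no IPv6 addresses remain assigned could look like:

# service network restart
# ip a | grep inet6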

I would be happy to hear how other people have achieved disabling IPv6, since on earlier (and per-distribution varying) Linuxes the way to disable IPv6 is probably different.
 

Also, to stop the IPv6 iptables firewall (ip6tables) on CentOS / Fedora and RHEL, issue:

# service ip6tables stop

# chkconfig ip6tables off

How to solve qmail /usr/local/bin/tcpserver: libc.so.6: failed to map segment from shared object: Cannot allocate memory

Saturday, April 30th, 2011

If you're building (compiling) a new qmail server on some Linux host, and after properly installing the qmail binaries and daemontools you suddenly notice in the readproctitle service errors, or somewhere in the qmail logs (for instance in /var/log/qmail/current), the error:

/usr/local/bin/tcpserver: error while loading shared libraries:
libc.so.6: failed to map segment from shared object: Cannot allocate memory

then you have hit a problem caused by an insufficient memory limit assigned to tcpserver in your /var/qmail/supervise/qmail-smtpd/run daemontools qmail-smtpd init script.

This kind of issue is quite common, especially on hardware architectures that are 64-bit and on Linux installations that are amd64 (x86_64), i.e. run a 64-bit version of Linux.

It relates to the 64-bit architecture's different memory layout and thus, as I said, solving it requires increasing the memory softlimit specified in the run script. An example of a good qmail-smtpd run script configuration which fixed the libc.so.6: failed to map segment from shared object: Cannot allocate memory error, and which I currently use, is as follows:

#!/bin/sh
QMAILDUID=`id -u vpopmail`
NOFILESGID=`id -g vpopmail`
MAXSMTPD=`cat /var/qmail/control/concurrencyincoming`
# softlimit changed from 8000000
exec /usr/local/bin/softlimit -m 32000000 /usr/local/bin/tcpserver -v -H -R -l 0 \
-x /home/vpopmail/etc/tcp.smtp.cdb -c "$MAXSMTPD" \
-u "$QMAILDUID" -g "$NOFILESGID" 0 smtp \
/var/qmail/bin/qmail-smtpd \
/home/vpopmail/bin/vchkpw /bin/true 2>&1

The default value for softlimit was:

exec /usr/local/bin/softlimit -m 8000000

Good raised softlimit values which in most cases solved the issue for me are:

exec /usr/local/bin/softlimit -m 30000000

or

exec /usr/local/bin/softlimit -m 40000000

The above example run configuration fixed the issue on an amd64 Debian 5.0 Lenny install; the server hardware was:

CPU: Intel(R) Core(TM)2 Duo CPU @ 2.93GHz
System Memory: 4GB
HDD Disk space: 240GB

The softlimit configuration which I had to setup on another server with system parameters:

Intel(R) Core(TM) i7 CPU (8 CPUS) @ 2.80GHz
System Memory: 8GB
HDD Disk Space: 1.4Terabytes

is as follows:

#!/bin/sh
QMAILDUID=`id -u vpopmail`
NOFILESGID=`id -g vpopmail`
MAXSMTPD=`cat /var/qmail/control/concurrencyincoming`
exec /usr/bin/softlimit -m 64000000 \
/usr/local/bin/tcpserver -v -H -R -l 0 \
-x /home/vpopmail/etc/tcp.smtp.cdb -c "$MAXSMTPD" \
-u "$QMAILDUID" -g "$NOFILESGID" 0 smtp \
/var/qmail/bin/qmail-smtpd \
/home/vpopmail/bin/vchkpw /bin/true 2>&1

If none of the two configurations pointed out in this post works for you, just try manually setting the exec /usr/bin/softlimit -m value to some higher number.

To make sure the newly set value no longer produces the same error, you will have to completely reload the daemontools process monitoring system.
To do so, open /etc/inittab and comment out the line:

SV:123456:respawn:/command/svscanboot
to
#SV:123456:respawn:/command/svscanboot

Save /etc/inittab again and issue the command:

linux:~# init q

Now again open /etc/inittab and uncomment the commented line:

#SV:123456:respawn:/command/svscanboot
to
SV:123456:respawn:/command/svscanboot

Lastly reload the inittab script once again with command:

linux:~# init q

To check if the error has disappeared check the readproctitle process, like so:

linux:~# ps ax|grep -i readproctitle

The command output should produce something like:

3070 ? S 0:00 readproctitle service errors: .......................................
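
Another quick hedged check is daemontools' svstat, which shows whether the supervised qmail-smtpd service is up and for how long (the service directory path may differ on your setup, e.g. /service or /var/qmail/supervise):

linux:~# svstat /service/qmail-smtpd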

Hope that helps.

How to solve “eAccelerator requires Zend Engine API version 220060519 , the Zend Engine API version 220090626 which is installed, is newer. Contact eAccelerator at http://eaccelerator.net for a later version of eAccelerator.” on FreeBSD

Monday, April 4th, 2011

I've recently upgraded my FreeBSD Apache server from the www/apache20 port. I had some issues before I tuned up and recompiled the php5 port as well, but eventually it worked out; however the eAccelerator content caching module failed to load, as it was outdated.

That's a common inconvenience with eAccelerator that every system administrator out there has faced once or twice, especially on systems that have a custom-compiled Apache server and do not use a specific precompiled version of eAccelerator.

To solve the situation, as you can expect, I jumped into /usr/ports/www/eaccelerator and removed the currently installed version of eAccelerator in order to compile and install the latest port version.
To do that I first attempted to upgrade the eaccelerator port with portmaster, but as there were some problems caused by autoconf initialization etc., I finally decided to abandon the idea of using portmaster and did it manually with the good old well known trivial commands:

freebsd# cd /usr/ports/www/eaccelerator
freebsd# make deinstall
freebsd# make install && make clean

I continued further and restarted my Apache server to load the new eAccelerator version, and made a small phpinfo() script to test whether eAccelerator was properly loaded, yet with zero success.
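
For reference, a minimal way to create such a test page and double-check from the command line might look like the sketch below (the DocumentRoot path /usr/local/www/data is an assumption, adjust it to yours; also note the CLI may read a different php.ini than the Apache module):

freebsd# echo '<?php phpinfo(); ?>' > /usr/local/www/data/info.php
freebsd# php -m | grep -i eaccelerator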

After checking in my /var/log/httpd-error.log , I found the following error:

Failed loading /usr/local/lib/php/20060613/eaccelerator.so: Cannot open "/usr/local/lib/php/20060613/eaccelerator.so"

The error is quite obvious; to solve it I opened my php configuration file /usr/local/etc/php.ini and substituted the line:

zend_extension="/usr/local/lib/php/20060613/eaccelerator.so"

with:

zend_extension="/usr/local/lib/php/20090626/eaccelerator.so"

Further on I gave Apache another restart with:

freebsd# /usr/local/etc/rc.d/apache2 restart
Performing sanity check on apache2 configuration:
Syntax OK
Stopping apache2.
Waiting for PIDS: 71140.
Performing sanity check on apache2 configuration:
Syntax OK
Starting apache2.

followed by another test if the eaccelerator is loaded with the phpinfo(); script.

Now, even though the Failed loading /usr/local/lib/php/20060613/eaccelerator.so: Cannot open "/usr/local/lib/php/20060613/eaccelerator.so" error was gone, eAccelerator was still not loaded.

Another consultation of /var/log/httpd-error.log now revealed another eAccelerator error, which you can read below:

eAccelerator requires Zend Engine API version 220060519.
The Zend Engine API version 220090626 which is installed, is newer.
Contact eAccelerator at http://eaccelerator.net for a later version of eAccelerator.

I did about 20 minutes of investigation on the internet looking for a possible fix, which gave me some idea of what might be causing the error message, though it was finally my trial-and-error methodology that helped me solve the issue.

The solution to the issue appeared to be easy, thank God; to solve the error all you need to do is one more make clean right before installing the eaccelerator port.
Here are the commands necessary to issue to solve the error and make eAccelerator load properly:

freebsd# cd /usr/ports/www/eaccelerator
freebsd# make clean && make install clean

Now, after restarting the Apache server, eAccelerator has once again been loaded properly.

phpMyAdmin – Error “Cannot start session without errors, please check errors given in your PHP and/or webserver log file and configure your PHP installation properly.”

Monday, February 8th, 2010

I’ve encountered a shitty problem while trying to access my phpmyadmin.
Here is the error:
phpMyAdmin - Error "Cannot start session without errors, please check errors given
in your PHP and/or webserver log file and configure your PHP installation properly.

After some time spent in investigation I figured out that something wrong was happening with my
PHP sessions, therefore I had to spend some time assuring myself PHP sessions were working correctly.
To achieve that I used a php code taken from the Internet.

Here is a link to the PHP code which checks if sessions are correctly configured on a server.
Executing the code proved my sessions were working okay, however the problem still remained.
Every time I tried accessing phpMyAdmin I was unpleasantly surprised by the same session error.

After reconsidering the whole situation I remembered that for some time now I have been using varnishd,
therefore the problem could have something to do with the Varnish cache.
After checking my default.vcl file and recognizing a problem there, I had to remove the following piece
of code from the default.vcl file:

#sub vcl_fetch {
# if( req.request != "POST" )
# {
# unset obj.http.set-cookie;
# }

# set obj.ttl = 600s;
# set obj.prefetch = -30s;
# deliver;
#}

Now after restarting varnishd with:

/usr/local/etc/rc.d/varnishd restart
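
The removed vcl_fetch block above was unsetting the Set-Cookie header for non-POST requests, which is exactly what PHP sessions (and thus the phpMyAdmin login) depend on. To verify the header now passes through Varnish, a quick hedged check (the URL is only an example, use your own phpMyAdmin address) is:

curl -sI http://your-domain/phpmyadmin/index.php | grep -i set-cookie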

All is back to normal, I can log in to phpMyAdmin and everything is fine!
Thank God.