Posts Tagged ‘error message’

Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

Wednesday, March 28th, 2012

On many busy servers, you might encounter in /var/log/syslog or dmesg kernel log messages like

nf_conntrack: table full, dropping packet

appearing repeatedly:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen / OpenVZ VPS (Virtual Private Servers)
2. ISPs – Internet Service Providers with heavily loaded NAT network routers
 

I. What the nf_conntrack: table full, dropping packet error message means

In short, the message appears because the maximum number of connections the nf_conntrack kernel module is allowed to track has been reached.
The common reason for that is heavy traffic passing through the server, or very often a DoS or DDoS (Distributed Denial of Service) attack. Sometimes the error is the result of bad server planning (incorrect assumptions about the expected traffic load) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on the host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– An alternative way to check the current kernel values for nf_conntrack is:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current number of active nf_conntrack connections

To check the connection tracking entries currently open on the system:

linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742
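
Since this counter changes constantly, one handy way to keep an eye on how close it gets to the configured maximum is to print both values side by side. A minimal sketch, using the same /proc paths shown above (on older kernels the ipv4 ip_conntrack variants apply instead):

linux:~# watch -n 5 'echo "conntrack: $(cat /proc/sys/net/netfilter/nf_conntrack_count) / $(cat /proc/sys/net/netfilter/nf_conntrack_max)"'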

The tracked connections shown above are assigned dynamically with each new successful TCP / IP NAT-ted connection. By the way, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers which are encountering the nf_conntrack: table full, dropping packet error, issuing lsmod shows extra modules related to nf_conntrack loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables

 

II. Completely remove nf_conntrack support if it is not really necessary

It is good practice to limit, or omit completely, the use of any iptables NAT rules, both to keep the kernel log from being flooded with these messages and, respectively, to stop the system from dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptable_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation, use:

/sbin/rmmod iptable_nat
/sbin/rmmod ipt_MASQUERADE
/sbin/rmmod nf_nat
/sbin/rmmod nf_conntrack_ipv4
/sbin/rmmod nf_conntrack
/sbin/rmmod nf_defrag_ipv4

Once the modules are removed, be sure not to use any iptables -t nat .. rules. Even an attempt to list NAT related rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.
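
A quick sanity check that the modules really are gone (assuming the same module names as above; no output means none of them is loaded):

linux:~# /sbin/lsmod | egrep 'nf_conntrack|nf_nat|iptable_nat'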

By the way, the nf_conntrack: table full, dropping packet. message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or Linux kernel (distro) customization.
 

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many servers that are not so heavily loaded, just raising net.netfilter.nf_conntrack_max to a high enough value will be enough to resolve the hassle.

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores the lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

The rule to calculate the right value to set is:

hashsize = nf_conntrack_max / 4
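
Following that rule, here is a minimal sketch that derives hashsize from the currently configured maximum instead of hard-coding the number (the sysctl key is the one shown earlier in this post):

linux:~# CONNTRACK_MAX=$(/sbin/sysctl -n net.netfilter.nf_conntrack_max)
linux:~# echo $((CONNTRACK_MAX / 4)) > /sys/module/nf_conntrack/parameters/hashsize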

– To permanently store the changes: a) put into /etc/sysctl.conf:

linux:~# echo 'net.netfilter.nf_conntrack_max = 131072' >> /etc/sysctl.conf
linux:~# /sbin/sysctl -p

b) put in /etc/rc.local (before the exit 0 line):

echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

Note: Be careful with this variable; in my experience, raising it to too high a value (especially on XEN patched kernels) can freeze the system.
Raising the value too high can also freeze a regular Linux server running on old hardware.

– For diagnosing nf_conntrack there is the /proc/sys/net/netfilter directory, kept in kernel memory. There you can find values, stored dynamically, which give information about nf_conntrack operations in "real time":

linux:~# cd /proc/sys/net/netfilter
linux:/proc/sys/net/netfilter# ls -al nf_log/

total 0
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ./
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ../
-rw-r--r-- 1 root root 0 Mar 23 23:02 0
-rw-r--r-- 1 root root 0 Mar 23 23:02 1
-rw-r--r-- 1 root root 0 Mar 23 23:02 10
-rw-r--r-- 1 root root 0 Mar 23 23:02 11
-rw-r--r-- 1 root root 0 Mar 23 23:02 12
-rw-r--r-- 1 root root 0 Mar 23 23:02 2
-rw-r--r-- 1 root root 0 Mar 23 23:02 3
-rw-r--r-- 1 root root 0 Mar 23 23:02 4
-rw-r--r-- 1 root root 0 Mar 23 23:02 5
-rw-r--r-- 1 root root 0 Mar 23 23:02 6
-rw-r--r-- 1 root root 0 Mar 23 23:02 7
-rw-r--r-- 1 root root 0 Mar 23 23:02 8
-rw-r--r-- 1 root root 0 Mar 23 23:02 9

 

IV. Decreasing other nf_conntrack NAT time-out values to protect the server against DoS attacks

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, with large flows of traffic, even if you increase nf_conntrack_max, the conntrack table can still overflow shortly afterwards, resulting in dropped server connections. To prevent this from happening, check and decrease the other nf_conntrack connection tracking timeout values:

linux:~# sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.ipv4.netfilter.ip_conntrack_generic_timeout = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent2 = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_udp_timeout = 30
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 180
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 30

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you can see, is quite high – 600 secs (10 minutes).
A value like this means any NAT-ted connection that stops responding can stay hanging around for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!).
If these values are not lowered, the server is an easy target for anyone who would like to flood it with excessive connections; once this happens, the server will quickly reach even the raised net.nf_conntrack_max value and the initial connection dropping will re-occur…

With all that said, to protect the server from malicious users behind the NAT plaguing it with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000:

linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_generic_timeout=120
linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf:

linux:~# echo 'net.ipv4.netfilter.ip_conntrack_generic_timeout = 120' >> /etc/sysctl.conf
linux:~# echo 'net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000' >> /etc/sysctl.conf
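
For reference, after all of the above, the conntrack related part of /etc/sysctl.conf might look something like this (the values are the example ones used throughout this post; size them to your own traffic):

net.netfilter.nf_conntrack_max = 131072
net.ipv4.netfilter.ip_conntrack_generic_timeout = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000

linux:~# /sbin/sysctl -p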



How to solve “[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/” – Error Cause and Solution

Thursday, January 5th, 2012

While configuring a JWchat domain, I came across an error:

pcfg_openfile: unable to check htaccess file, ensure it is readable

The exact error I got in /var/log/apache2/error.log looked like so:

[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/

The error message suggested /var/lib/ejabberd/.htaccess is missing or not readable; however, after checking, I saw that .htaccess existed and appeared readable:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess

At first glimpse the message seemed misleading and untrue; however, when I switched to the www-data user (the user Apache runs with on Debian), I found the error's complaint about unreadability to be exactly correct:

www-data@debian:$ ls -al /var/lib/ejabberd/.htaccess
ls: cannot access /var/lib/ejabberd/.htaccess: Permission denied

This permission denied error was quite strange, especially considering the .htaccess is readable by all users:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess

After a thorough look at what might be wrong, thankfully I figured it out. All the issues were caused by wrong permissions on the /var/lib/ejabberd/ directory. As you can see below, the executable flag, which is what allows a user to enter (traverse) a directory and reach the files inside it, is missing for all users (including Apache's www-data):

debian:/var/lib# ls -ld /var/lib/ejabberd/
drw-r--r-- 3 ejabberd ejabberd 4096 2012-01-05 07:45 /var/lib/ejabberd/

Solving the error, hence, is as easy as adding the +x flag to /var/lib/ejabberd:

debian:/var/lib# chmod +x /var/lib/ejabberd

Another way to fix the error is to chmod the directory which holds .htaccess to 755:
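
debian:/var/lib# chmod 755 /var/lib/ejabberd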

From now onwards the pcfg_openfile: unable to check htaccess file, ensure it is readable error is no more 😉



Fix vnstat error “eth0: Not enough data available yet.” on Debian GNU / Linux

Monday, November 21st, 2011


After installing vnstat to keep an eye on server IN and OUT traffic on a Debian Squeeze server, I used the usual:

debian:~# vnstat -u -i eth0

This generates the initial database for the ethernet interface, which vnstat uses to build its statistics.

However, even though /var/lib/vnstat/eth0 got generated with the above command, statistics were not collected further, and trying to check them with the command:

debian:~# vnstat --days

returned the error message:

eth0: Not enough data available yet.

To solve the eth0: Not enough data available yet. message, I tried completely removing vnstat by purging the package, e.g.:

debian:~# apt-get --yes remove vnstat
...
debian:~# dpkg --purge vnstat
...

Even though dpkg --purge was invoked, /var/lib/vnstat/ refused to be removed, since it contained vnstat's db file eth0.

Therefore I deleted it by hand before installing vnstat again:

debian:~# rm -rf /var/lib/vnstat/

Then I tried installing vnstat "from scratch" once again:

debian:~# apt-get install vnstat
...

After that I tried regenerating the vnstat db file eth0 once again with vnstat -u -i eth0, hoping this would fix the error, but it was no go, and the error:

debian:~# vnstat --hours
eth0: Not enough data available yet.

persisted.

I checked the Debian bugs mailing lists and found some people complaining about the same issue, with some suggestions on how the error could be worked around; anyway, none of the suggestions worked for me.

Being irritated, I once more removed / purged vnstat and decided to give installing vnstat from source a try.
As of the time of writing this article, the latest stable vnstat version is 1.11.
Therefore, to install vnstat from source I issued:

debian:~# cd /usr/local/src
debian:/usr/local/src# wget http://humdi.net/vnstat/vnstat-1.11.tar.gz
...
debian:/usr/local/src# tar -zxvvf vnstat-1.11.tar.gz
debian:/usr/local/src# cd vnstat-1.11
debian:/usr/local/src/vnstat-1.11# make && make all && make install
debian:/usr/local/src/vnstat-1.11# cp examples/vnstat.cron /etc/cron.d/vnstat
debian:/usr/local/src/vnstat-1.11# vnstat -u -i eth0
Error: Unable to read database "/var/lib/vnstat/eth0".
Info: -> A new database has been created.

As a last step, I added to root's crontab:

debian:~# crontab -u root -e

*/5 * * * * /usr/bin/vnstat -u >/dev/null 2>&1

This line updates the vnstat eth0 database every 5 minutes. After the manual source install, vnstat works just fine 😉
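
If you want to double-check the cron job is really feeding the database, you can verify that the modification time of the db file (the path created earlier) advances every 5 minutes:

debian:~# ls -l /var/lib/vnstat/eth0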



How to fix “sslserver: fatal: unable to load certificate” Qmail error on GNU / Linux

Friday, October 14th, 2011

After setting up a brand new Qmail installation following the QmailRocks Thibs Qmail Debian install guide, I came across an unexpected, recurring error message in /var/log/qmail/qmail-smtpdssl/; here is the message:

@400000004e9807b10d8bdb7c command-line: exec sslserver -e -vR -l my-mailserver-domain.com -c 30 -u 89 -g 89
-x /etc/tcp.smtp.cdb 0 465 rblsmtpd -r zen.spamhaus.org -r dnsbl.njabl.org -r dnsbl.sorbs.net -r bl.spamcop.net qmail-smtpd
my-mailserver-domain.com /home/vpopmail/bin/vchkpw /bin/true 2>&1
@400000004e9807b10dae2ca4 sslserver: fatal: unable to load certificate

I was completely puzzled by the error at first, as the certificate file /var/qmail/control/servercert.pem existed and was a properly self-generated one. Besides that, the qmail daemontools init script /service/qmail-smtpd/run was loading the file just fine, while the same file failed to load when the sslserver command with the cert argument was invoked via /service/qmail-smtpdssl/run.

It took me quite a while to thoroughly investigate what was wrong with the new qmail install. Thankfully, after almost an hour of puzzling I found it, feeling like a complete moron upon discovering that all the issues were caused by incorrect permissions on the /var/qmail/control/servercert.pem file.
Here are the incorrect permissions the file possessed:

linux:~# ls -al /var/qmail/control/servercert.pem
-rw------- 1 qmaild qmail 2311 2011-10-12 13:21 /var/qmail/control/servercert.pem

To fix the error, I had to give all users read permissions over servercert.pem, e.g.:

linux:~# chmod a+r /var/qmail/control/servercert.pem

After adding the all-users readable bit on servercert.pem, the file permissions are like so:

linux:~# ls -al /var/qmail/control/servercert.pem
-rw-r--r-- 1 qmaild qmail 2311 2011-10-12 13:21 /var/qmail/control/servercert.pem

Consequently, I did a qmail restart to make sure the now readable servercert.pem gets loaded by the respective init script:

linux:~# qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.

Now the annoying sslserver: fatal: unable to load certificate message is no more and all works fine, Hooray! 😉



Error from park wrapper: mydomain.com is already configured. Sorry, that domain is already setup (remove it from httpd.conf) – How to solve

Monday, July 4th, 2011

If you're administrating a Cpanel server and come across this error message while trying to use Cpanel's domain addon menu, you can fix it by doing the following, logged in as root over an ssh connection:

1. Remove DNS related records in /var/named and /var/named/cache

cpanel:~# rm -f /var/named/mydomain.com.db
cpanel:~# rm -f /var/named/cache/mydomain.com.db

2. Edit the current used httpd.conf on the server and remove all virtualhost domain definitions

cpanel:~# vim /etc/httpd/conf/httpd.conf
# find the mydomain.com Virtualhost definitions and completely remove them

3. Remove any domain occurrence in /var/cpanel/users

cpanel:~# cd /var/cpanel/users/
cpanel:/var/cpanel/users# grep -rli 'mydomain.com' *
/var/cpanel/users/hipo
cpanel:~# vim /var/cpanel/users/hipo
# remove in above file any domain related entries

4. Remove anything related to mydomain.com in /etc/userdomains and /etc/localdomains

cpanel:~# vim /etc/userdomains
cpanel:~# vim /etc/localdomains
# again, look inside the two files and remove the matching entries

5. Edit /etc/named.conf and remove any definitions of mydomain.com

cpanel:~# vim /etc/named.conf
# in above file remove DNS configuration for mydomain.com

6. Run /scripts/updateuserdomains

cpanel:~# /scripts/updateuserdomains

7. Delete any valias configurations

cpanel:~# rm -f /etc/valiases/mydomain.com
cpanel:~# rm -f /etc/vdomainaliases/mydomain.com
cpanel:~# rm -f /etc/vfilters/mydomain.com

8. Remove any occurrence of mydomain.com in the home directory of the user experiencing the Error from park wrapper: error

Let’s say the user testuser is experiencing the error, in that case you will have to remove:

cpanel:~# rm -rf /home/testuser/public_html/mydomain.com
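
Before restarting Cpanel, it doesn't hurt to do one final sweep for leftovers; a quick check over the files touched in the steps above (no output means the cleanup is complete):

cpanel:~# grep -rl 'mydomain.com' /etc/userdomains /etc/localdomains /etc/named.conf /var/cpanel/users/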

9. Restart Cpanel

This step is optional, though I think it's also good practice, as it will at least restart the Cpanel webserver (Apache or Litespeed, depending on your conf):

cpanel:~# /etc/init.d/cpanel restart

Now try to add the domain via the Cpanel domain addon interface; hopefully the issue is fixed by now. If not, you might also check whether some record about mydomain.com remains in the MySQL server.
Cheers 😉



How to fix “Could not verify this certificate for unknown reasons” SSL certificate lighttpd troubles

Tuesday, June 28th, 2011


I've been issuing new wildcard multi-domain SSL certificates to renew expiring ones. After I completed the new certificate setup manually on the server (a CentOS 5.5 Final running SolusVM Pro – Virtual Private Manager), I launched Firefox to check whether the certificate was properly configured.

Contrary to my expectation that the browser would just accept the certificate without spitting out any error messages and all would be fine, I instead got an error with the just-installed certificate, and thus the browser failed to report the SSL cert as properly authenticated.

The company used to issue the SSL certificate is GlobeSSL – http://globessl.com . It was quite a "hassle" with their tech support, as the first certificate generated by GlobeSSL was issued against an SSL key file with 4096-bit encryption.

As the first Authenticated certificate issued by GlobeSSL was no good, about a week's time was further necessary to complete the required certificate reissue…

It wasn't just GlobeSSL's failure, as there were some spam filters on my side that were preventing some of GlobeSSL's emails from arriving normally; though it was partially their fault, as they haven't made their notification and confirmation emails able to pass through a mid-level strong anti-spam filter…

Anyway, my overall experience with the GlobeSSL certificate reissue, and especially their technical support, was terrible.
To make a parallel, issuing certificates with GoDaddy is way easier and more straightforward.

Now let me come back to the main certificate error I got in Firefox …

A bit of further investigation into the cert failure led me to the error message that tracked back to the newly installed SSL certificate.
To find the exact cause of the SSL certificate failure in Firefox, I followed the menus:

Tools -> Page Info -> Security -> View Certificate

Doing so, in the General browser tab there was the following error:

Could not verify this certificate for unknown reasons

The information on the internet about Could not verify this certificate for unknown reasons was very mixed, and people online suggested many possible causes of the issue, so I was about to lose myself.

Everything with the certificate seemed to be configured just fine in lighttpd; the GlobeSSL issued .cer and .key files, as well as the CA bundle, were all configured to be read and used by lighttpd in its configuration file /etc/lighttpd/lighttpd.conf.

Here is the section from lighttpd.conf which did the SSL certificate cert and key file configuration:

$SERVER["socket"] == "0.0.0.0:443" {
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/wildcard.mydomain.bundle"
}

The file /etc/lighttpd/ssl/wildcard.mydomain.bundle contained the content of both the .key file (generated on my server with openssl) and the .cer file (issued by GlobeSSL), as well as the CA bundle (by GlobeSSL).
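
For anyone wondering how such a bundle file is put together: it is a plain concatenation of the PEM files. A sketch with hypothetical file names (substitute your own .key, .cer and CA bundle files):

linux:~# cat wildcard.mydomain.key wildcard.mydomain.cer globessl-ca-bundle.crt > /etc/lighttpd/ssl/wildcard.mydomain.bundle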

Even though all seemed to be configured well, the SSL error Could not verify this certificate for unknown reasons was still present in the browser.

GlobeSSL tech support suggested that I try their web key matcher interface https://confirm.globessl.com/key-matcher.html to verify that everything was fine with my certificate and the cert key. Thanks to this interface, I figured out that all seemed fine with the issued certificate itself and something else must be causing the SSL oddities.
I was further referred by GlobeSSL tech support to another web interface for debugging errors with newly installed SSL certificates.
This interface is called Verify and Validate Installed SSL Certificate and is found here.

Even though this SSL domain installation error report and debug tool made some helpful suggestions, it wasn't what helped me solve the issue.

What helped was, first, the suggestion made by one of the many tech support guys at GlobeSSL that something was wrong with the CA bundle, and above all the documentation on SolusVM's wiki – http://wiki.solusvm.com/index.php/Installing_an_SSL_Certificate .
According to SolusVM's documentation, lighttpd.conf had to have one extra line pointing to a separate file containing the issued CA bundle (the certificate chain of the issuing SSL authority).
The line I was missing in lighttpd.conf (described in the docs) looked like so:

ssl.ca-file = "/usr/local/solusvm/ssl/gd_bundle.crt"

Thus to include the directive I changed my previous lighttpd.conf to look like so:

$SERVER["socket"] == "0.0.0.0:443" {
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/wildcard.mydomain.bundle"
ssl.ca-file = "/etc/lighttpd/ssl/server.bundle.crt"
}

Here, server.bundle.crt contains an exact paste of the certificate chain (CA bundle) mailed by GlobeSSL.

There were a couple of other ports on which SSL was configured, so I had to include this configuration directive everywhere in my conf where anything related to SSL appeared.

Finally, to make the new settings take effect, I did a lighttpd server restart:

[root@centos ssl]# /etc/init.d/lighttpd restart
Stopping lighttpd: [ OK ]
Starting lighttpd: [ OK ]
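
Besides checking in the browser, the certificate chain the server now sends can also be inspected from the command line with openssl (mydomain.com below is an example hostname). If your local openssl trust store knows the root CA, the output should end with Verify return code: 0 (ok):

linux:~# openssl s_client -connect mydomain.com:443 -showcerts < /dev/null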

After lighttpd was reinitiated, the error was gone! Cheers! 😉



How to fix php “Fatal error: Class ‘SimpleXMLElement’ not found” and “Fatal error: Class ‘JLoader’ not found” on FreeBSD

Tuesday, June 21st, 2011

One of the contact forms running on a FreeBSD server, configured to work on top of Apache + MySQL, suddenly stopped working.

The errors that appeared on the webpage during a request to the form URL were:

Fatal error: Class 'SimpleXMLElement' not found in /var/www/joomla/plugins/system/plugin_googlemap2_helper.php on line 2176
Fatal error: Class 'JLoader' not found in /var/www/joomla/plugins/libraries/loader.php on line 161

As you can see from the output, the website causing the issues was running Joomla version 1.5.23 Stable, configured with RSForm! ver 1.5.x (as a contact form solution) and the Google Maps plugin version 2.13b.

The Google Map from the Google Maps plugin and the RSForm were configured to appear in one physical Joomla article and seemed to work fine until now. However, yesterday the error messages:

Fatal error: Class 'SimpleXMLElement' not found
Fatal error: Class 'JLoader' not found

came out of nowhere. It's really strange, as I don't remember making any changes to either Joomla or the PHP installation on this server.
There is one more guy who has access to the Joomla installation, whom I suspect might have changed something, but this scenario is not very likely.

Anyway, as the problem was there, I had to fix it. Obviously, as the error message Fatal error: Class 'SimpleXMLElement' not found reported, the server's PHP simplexml extension was missing!

Just to assure myself the PHP simplexml extension was not present on the server, I used the classical method of setting up a php file containing phpinfo(); to check all the installed PHP extensions on the server.
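
If the PHP command line interpreter is installed, a quicker check than a phpinfo() page is listing the loaded modules directly (note the CLI may be configured with a different extension set than the Apache PHP, so phpinfo() remains the authoritative test):

freebsd# php -m | grep -i simplexml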

Finally, to solve the issue, I had to install the php5-simplexml module from ports, e.g.:

freebsd# cd /usr/ports/textproc/php5-simplexml
freebsd# make install clean

Afterwards, to make the new settings take effect, I restarted my Apache server:

freebsd# /usr/local/etc/rc.d/apache2 restart
Syntax OK
Stopping apache2.
Waiting for PIDS: 63883.
Performing sanity check on apache2 configuration:
Syntax OK
Starting apache2.

Now my Joomla contact form is back to normal 😉

If someone has any idea why this error occurred without any PHP or server modifications, and how come everything worked fine beforehand even though I did not have the simplexml module installed on the server o_O, I would be enormously grateful.



How to fix “The function split() is deprecated in PHP 5.3” on FreeBSD

Saturday, May 28th, 2011

If you're installing some PHP based CMS/blog (like Joomla or WordPress) or some kind of template and you suddenly stumble on an error:

Deprecated: Function split() is deprecated in /usr/local/www/websitedomain/templates/youbizz/html/modules.php on line 78

To fix that, the file which spits out the error message, in my case modules.php, needs to be modified, and the split() PHP function replaced everywhere it occurs. Note that split() treated its pattern argument as a POSIX regular expression, so explode() is a drop-in replacement only where the delimiter is a plain string; for real regex patterns, preg_split() is the proper substitute.

I experienced this error on FreeBSD 7.2 with PHP version 5.3.5 installed from ports.
This simple fix works fine.



How to fix “vbAccelerator SGrid II Control Runtime Error” popup window in Windows XP

Tuesday, May 24th, 2011

I was at a friend's place and he asked me to take a look at his Windows XP PC.
When Windows boots up, a weird and annoying error message appears that reads:

vBAccelerator SGrid II Control Runtime Error

I figured out the SGrid II Control Runtime Error was caused by a misbehaving old Malwarebytes portable installation.

I found online the following Malwarebytes tool which fixes the stupid vbAccelerator SGrid II error:

Simply download and start the mbam-clean.exe binary; after a computer restart the error gets fixed.

