Posts Tagged ‘CentOS’
Thursday, August 25th, 2011 Sysctl is a great way to tune Linux: it exposes dozens of values which can drastically improve server networking and overall performance.
One of the many helpful variables for optimizing how the Linux kernel behaves on busy servers is net.ipv4.ip_local_port_range .
The default sysctl setting for net.ipv4.ip_local_port_range on Debian, Ubuntu, Fedora, RHEL and CentOS is:
net.ipv4.ip_local_port_range = 32768 65536
This means that the kernel, and any service instructing it to open new sockets, can only bind local ports in the range 32768 – 65536 .
On a regular desktop GNU/Linux machine, or a server that isn't heavy iron, this setting is perfectly fine; however, on high-scale servers the 32768 – 65536 local port range might at times be insufficient, especially if there are programs which need to bind many local ports.
Therefore on heavily loaded servers it's generally a good idea to widen the port range the kernel assigns to 8192 – 65536. To do so, change the setting as shown below:
linux:~# sysctl -w net.ipv4.ip_local_port_range="8192 65536"
...
If after a few hours or a day this change shows no negative impact on performance, or even better decreases the server's average load, it's a good idea to add it to sysctl.conf so the setting is applied again on next boot.
linux:~# echo 'net.ipv4.ip_local_port_range = 8192 65536' >> /etc/sysctl.conf
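As a quick sanity check of what the widened range buys you, simple shell arithmetic shows how many local ports each interval offers:

```shell
# Number of local ports available in each range
# (using the 32768-65536 and 8192-65536 figures quoted above).
echo "default range: $((65536 - 32768)) ports"
echo "widened range: $((65536 - 8192)) ports"
```

The widened range nearly doubles the pool of local ports the kernel can hand out.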
Enjoy 😉
Tags: boot linux, CentOS, conf, confEnjoy, dozens, fedora, gnu linux, good, idea, impact, interval, ip port, ipv, iron, kernel works, Linux, linux machine, negative impact, net, performance, port, ports, range, scale, scale servers, server networking, setting, sockets, sysctl, thoroughput, time, Ubuntu, variables, way
Posted in Linux, Linux and FreeBSD Desktop, System Administration | 1 Comment »
Friday, August 12th, 2011 
I’m responsible for some GNU/Linux shared hosting servers, which therefore contain plenty of user accounts.
Every now and then our company servers get suspended because of phishing websites, spammer script kiddies and every other kind of abuser one can think of.
To mitigate the impact of unwanted user activities on the servers, I decided to use the ClamAV antivirus – an open source virus scanner – to look for potentially dangerous files: stored viruses, spammer mailer scripts, kernel exploits etc.
The hosting servers run the latest CentOS 5.5 Linux, and fortunately CentOS ships a pre-packaged RPM of the latest ClamAV release, which at the time of writing is version 0.97.2.
Installing ClamAV on CentOS is a piece of cake and comes down to issuing:
[root@centos:/root]# yum -y install clamav
...
After the install completed, I used freshclam to update the ClamAV virus definitions:
[root@centos:/root]# freshclam
ClamAV update process started at Fri Aug 12 13:19:32 2011
main.cvd is up to date (version: 53, sigs: 846214, f-level: 53, builder: sven)
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 81.91.100.173)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 163.1.3.8)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: getfile: daily-13357.cdiff not found on remote server (IP: 193.1.193.64)
WARNING: getpatch: Can't download daily-13357.cdiff from db.gb.clamav.net
WARNING: Incremental update failed, trying to download daily.cvd
Downloading daily.cvd [100%]
daily.cvd updated (version: 13431, sigs: 173670, f-level: 60, builder: arnaud)
Downloading bytecode.cvd [100%]
bytecode.cvd updated (version: 144, sigs: 41, f-level: 60, builder: edwin)
Database updated (1019925 signatures) from db.gb.clamav.net (IP: 217.135.32.99)
In my case the shared hosting websites and FTP user files are stored in the /home directory, so I used clamscan as follows to check, report and log to a file the scan results for our hosted user content:
[root@centos:/root]# screen clamscan -r -i --heuristic-scan-precedence=yes --phishing-scan-urls=yes --phishing-cloak=yes --phishing-ssl=yes --scan-archive=no /home/ -l /var/log/clamscan.log
/home/user1/mail/new/1313103706.H805502P12513.hosting,S=14295: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1313111001.H714629P29084.hosting,S=14260: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1305115464.H192447P14802.hosting,S=22663: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/new/1311076363.H967421P17372.hosting,S=13114: Heuristics.Phishing.Email.SpoofedDomain FOUND
/home/user1/mail/domain.com/infos/cur/859.hosting,S=8283:2,S: Heuristics.Phishing.Email.SSL-Spoof FOUND
/home/user1/mail/domain.com/infos/cur/131.hosting,S=6935:2,S: Heuristics.Phishing.Email.SSL-Spoof FOUND
I prefer running clamscan inside a screen session because it's handier: if, for example, my SSH connection dies, the screen session preserves the running clamscan command and I can re-attach later to see how the scan went.
clamscan is of course slower, as it does not use the ClamAV antivirus daemon clamd ; however, I prefer running it without the daemon, as a permanently running clamd sometimes creates problems or hangs on the servers, and it's not really worth keeping it running since I intend to run clamscan no more than once per month to spot users who might need to be suspended.
Also, after it finishes, all detected problems are logged to /var/log/clamscan.log , so I can read the report any time.
A good idea might also be to run the above clamscan once per month via a cron job, though I'm still in doubt whether it's better to run it manually once per month to search for malicious user content, or to schedule it via cron.
One possible pitfall with automating the above clamscan check of /home is the extra load it puts on the system. In some cases the webserver and SQL server might be under heavy load at exactly the time the clamscan cron job runs, which could create severe issues for users' websites if left unmonitored.
Thus I would probably go with running the above clamscan manually each month and monitoring the server performance.
However for people with "iron" server hardware, where a clamscan file scan is less likely to cause any issues, a cron job would probably be fine. Here is a sample cron job to run clamscan:
10 05 01 * * clamscan -r -i --heuristic-scan-precedence=yes --phishing-scan-urls=yes --phishing-cloak=yes --phishing-ssl=yes --scan-archive=no /home/ -l /var/log/clamscan.log >/dev/null 2>&1
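If the load concern above worries you, one way to soften it (a sketch; it assumes nice and ionice are available, as they are on a stock CentOS install) is to run the cron'd clamscan at the lowest CPU and I/O priority:

```
# m h dom mon dow  command -- run clamscan at idle CPU/IO priority
10 05 01 * * nice -n 19 ionice -c3 clamscan -r -i --heuristic-scan-precedence=yes --phishing-scan-urls=yes --phishing-cloak=yes --phishing-ssl=yes --scan-archive=no /home/ -l /var/log/clamscan.log >/dev/null 2>&1
```

The scan will take longer, but the web and SQL daemons keep priority over it.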
I'm interested to hear whether somebody is already running clamscan from cron without issues; once I'm sure that running it from cron won't lead to server downtime, I'll implement it as a cron job.
Anyone having experience with running clamscan directory scan through crond? 🙂
Tags: antivirus, cake, center, CentOS, Clamav, clamav antivirus, company servers, dangerous files, exploits, gnu linux, hosting servers, impact, Installing, kernel, kind, linux servers, m center, mailer, open source, Phishing, piece, piece of cake, plenty, root, rpm, scanner, script kiddies, spammer, Spammers, time, unwanted files, unwanted users, ver, virus, virus scanner, Viruses, writting, yum
Posted in Linux, System Administration, Web and CMS | 2 Comments »
Wednesday, August 10th, 2011 
I’m in charge of a dozen Linux servers these days and therefore often have to request server restarts via a support ticket (because many of the data centers where the servers are co-located have no web interface or IP KVM connected to the server for that purpose). Server restart requests after a crash therefore sometimes take a few hours to process, or at best at least half an hour.
I’m aware of the existence of hardware watchdog devices, which are capable of detecting that a server has hung and auto-restarting it; however, the servers I administer have no hardware support for a watchdog timer.
Thankfully there is a free software project called Watchdog, which is easily configured and mitigates the terrible downtimes caused every now and then by a server crash and the respective delays by tech support in the data centers.
I’ve recently blogged on the topic of Debian Linux auto-restart in case of kernel panic ; now, however, I had to configure watchdog on some dozen CentOS Linux servers.
It appeared installation & configuration of Watchdog on CentOS is a piece of cake and comes down to following a few easy steps, which I’ll explain quickly in this post:
1. Install watchdog on CentOS with yum
[root@centos:/etc/init.d ]# yum install watchdog
...
2. Add a log file for watchdog activities and the location of the watchdog device to the configuration
The quickest way to add these two is to append them to /etc/watchdog.conf with echo:
[root@centos:/etc/init.d ]# echo 'file = /var/log/messages' >> /etc/watchdog.conf
echo 'watchdog-device = /dev/watchdog' >> /etc/watchdog.conf
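For reference, a resulting /etc/watchdog.conf might then look roughly like the fragment below; the interval and max-load-1 values are illustrative assumptions, not defaults, so check the watchdog.conf man page for your version before copying them:

```
file = /var/log/messages
watchdog-device = /dev/watchdog
interval = 10          # seconds between keep-alive writes; keep it below the 60s soft_margin
max-load-1 = 24        # reboot if the 1-minute load average exceeds 24
```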
3. Load the softdog kernel module to initialize the software watchdog via /dev/watchdog
[root@centos:/etc/init.d ]# /sbin/modprobe softdog
Successful initialization of softdog is indicated by a line in the dmesg kernel log like the one below:
[root@centos:/etc/init.d ]# dmesg |grep -i watchdog
Software Watchdog Timer: 0.07 initialized. soft_noboot=0 soft_margin=60 sec (nowayout= 0)
4. Include the softdog kernel module to load on CentOS boot up
This is necessary, because otherwise after a reboot softdog would not be auto-initialized, and without it being initialized the watchdog daemon service cannot function: it automatically reboots the server if /dev/watchdog disappears.
It's better not to load the softdog module via /etc/rc.local, but to use the default CentOS methodology of loading modules from /etc/rc.modules :
[root@centos:/etc/init.d ]# echo modprobe softdog >> /etc/rc.modules
[root@centos:/etc/init.d ]# chmod +x /etc/rc.modules
5. Start the watchdog daemon service
The successful initialization of softdog in step 3 should have provided the system with /dev/watchdog . Before starting the watchdog daemon, it's wise to first check that /dev/watchdog exists on the system. Here is how:
[root@centos:/etc/init.d ]# ls -al /dev/watchdog
crw------- 1 root root 10, 130 Aug 10 14:03 /dev/watchdog
Being sure, that /dev/watchdog is there, I’ll start the watchdog service.
[root@centos:/etc/init.d ]# service watchdog restart
...
A very important note to make here is that you should never, ever configure the watchdog service to start at boot time with chkconfig. In other words, the chkconfig status for watchdog should be off on all runlevels, like so:
[root@centos:/etc/init.d ]# chkconfig --list |grep -i watchdog
watchdog 0:off 1:off 2:off 3:off 4:off 5:off 6:off
Enabling watchdog via chkconfig will cause it to automatically restart the system, as it will probably start the watchdog daemon before the softdog module is initialized. Unable to read /dev/watchdog , watchdog will think the system has hung even though it might simply still be booting. It will therefore end up in an endless loop of reboots which can only be fixed from Linux single user mode!!! Once again BEWARE, never ever activate watchdog via chkconfig!
The next step, to be absolutely sure the watchdog daemon is running, is to check with the normal ps command:
[root@centos:/etc/init.d ]# ps aux|grep -i watchdog|grep -v grep
root 18692 0.0 0.0 1816 1812 ? SNLs 14:03 0:00 /usr/sbin/watchdog
root 25225 0.0 0.0 0 0 ? ZN 17:25 0:00 [watchdog] <defunct>
You have probably noticed the defunct state of the second watchdog process; consider that absolutely normal. The above output indicates that watchdog is now properly running on the host and waiting to auto-reboot it in case /dev/watchdog suddenly disappears.
As a last step, after being sure it is initialized properly, it's necessary to add watchdog to start at boot time via the /etc/rc.local post-init script, like so:
[root@centos:/etc/init.d ]# echo '/sbin/service watchdog start' >> /etc/rc.local
Now enjoy, watchdog is up and running and will automatically restart the CentOS host 😉
Tags: CentOS, crash, data, dmesg, existence, file, free software project, half an hour, hardware support, host, init, installation, installation configuration, kernel panic, Linux, linux server, linux servers, log, log messages, modprobe, necessery, piece of cake, root, server crash, server downtime, software, support, support ticket, tech support, ticket, time, topic, Watchdog, watchdog timer, web interface, yum
Posted in Linux, System Administration | 1 Comment »
Tuesday, August 2nd, 2011 I needed to install support for mbstring, as it was required by a client hosted on one of the servers running CentOS 5.5.
Installation is quite straightforward, as a php-mbstring rpm package is available; here is how to install mbstring:
[root@centos [~]#yum install php-mbstring
...
Further on, after a restart of Apache or LiteSpeed, the mbstring support is loaded in PHP.
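A quick way to confirm the module is actually loaded after the restart is to ask the PHP CLI for its module list (a sketch; it assumes the php binary is in PATH):

```shell
# Print whether the mbstring extension is visible to PHP.
if php -m 2>/dev/null | grep -qi '^mbstring$'; then
    echo "mbstring enabled"
else
    echo "mbstring missing (or php not in PATH)"
fi
```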
On some OpenVZ CentOS virtual servers, enabling php-mbstring might also require a complete PHP recompile if PHP was not built with --enable-mbstring .
If mbstring thus has to be enabled on an OpenVZ server with a custom-compiled PHP, this can easily be done with cpeasyapache, like so:
server: ~# cd /home/cpeasyapache/src/php-5.2.9
server: php-5.2.9# cat config.nice |head -n $(($(cat config.nice |wc -l) - 1)) >> config.nice.new;
server: php-5.2.9# echo "'--enable-mbstring' \\" >> config.nice.new; echo '"$@"' >> config.nice.new
server: php-5.2.9# mv config.nice config.nice.orig; mv config.nice.new config.nice
After that follow the normal way with make, make install and make install modules , e.g.:
server: php-5.2.9# make && make install && make install modules
That's it, php-mbstring is now enabled, enjoy 😉
Tags: amp, apache, CentOS, client, config, installation, litespeed, Module, modulesNext, mv, newserver, openvz, orig, package, php, root, rpm, server, server php, support, virtual servers, way, yum
Posted in Linux, System Administration | 4 Comments »
Friday, July 22nd, 2011 
Lately, I'm basically using htop's nice colourful advanced Linux top command frontend on almost every server I manage; I've therefore almost abandoned plain top these days, and for that reason I wanted to have htop installed on a few of the OpenVZ CentOS 5.5 Linux servers at work.
I looked online but unfortunately couldn't find any pre-built binary rpm packages. The source rpm package I tried to build from the DAG Wieers repository failed as well, so finally I went further and decided to install htop from source.
Here is how I did it:
1. Install the gcc, glibc-devel and ncurses-devel prerequisite rpm packages (htop's configure script needs the ncurses headers)
[root@centos ~]# yum install gcc glibc-devel ncurses-devel
2. Download htop and compile from source
[root@centos src]# cd /usr/local/src
[root@centos src]# wget "http://sourceforge.net/projects/htop/files/htop/0.9/htop-0.9.tar.gz/download"
Connecting to heanet.dl.sourceforge.net|193.1.193.66|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 418767 (409K) [application/x-gzip]
Saving to: "download"
100%[======================================>] 418,767 417K/s in 1.0s
2011-07-22 13:30:28 (417 KB/s) – “download” saved [418767/418767]
[root@centos src]# mv download htop.tar.gz
[root@centos src]# tar -zxf htop.tar.gz
[root@centos src]# cd htop-0.9
[root@centos htop-0.9]# ./configure && make && make install
make install should install htop to /usr/local/bin/htop
That's all folks! Now my OpenVZ CentOS server is equipped with the nifty htop tool 😉
Tags: amp, CentOS, colourful, command, dag wieers, devel, frontend, heanet, htop, HTTP, Install, Installing, Linux, linux servers, mv, OKLength, online, openvz, package, reason, repository, request, response, root, rpm, src, tar gz, tar zxf, tool, wget, yum, zxf
Posted in Linux, System Administration | No Comments »
Monday, July 18th, 2011 Recently I had the task of adding a range of a few IP addresses to a server as virtual interface IPs.
The normal way to do that is of course the well known ifconfig eth0:0 , ifconfig eth0:1 approach, or a tiny shell script which does it, set up to run through /etc/rc.local .
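For reference, that tiny shell script variant might look like the sketch below; the interface, IPs and netmask are made up for illustration, and the echo makes it a dry run that only prints the commands (drop the echo to actually apply them):

```shell
# Dry run: print the ifconfig alias commands for a small, hypothetical IP range.
base="192.168.1"
n=0
for last in 120 121 122; do
    echo ifconfig "eth0:$n" "$base.$last" netmask 255.255.255.0 up
    n=$((n + 1))
done
```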
However the Redhat guys let you omit all this mumbo jumbo and do it The Redhat Way TM 😉 by using a standard method documented in the CentOS and RHEL documentation.
Here is how:
# go to the network-scripts directory
[root@centos ~]# cd /etc/sysconfig/network-scripts
# create ifcfg-eth0-range (if the virtual IPs are to be assigned on the eth0 LAN interface)
[root@centos network-scripts]# touch ifcfg-eth0-range
Now open ifcfg-eth0-range with a text editor, or use the echo command, and put inside:
IPADDR_START=192.168.1.120
IPADDR_END=192.168.1.250
NETMASK=255.255.255.0
CLONENUM_START=0
Now save the /etc/sysconfig/network-scripts/ifcfg-eth0-range file and finally restart centos networking via the network script:
[root@centos network-scripts]# service network restart
That's all; now after the network gets reinitialized, all the IPs starting with 192.168.1.120 and ending at 192.168.1.250 will get assigned as virtual IPs on the eth0 interface.
Cheers 😉
Tags: CentOS, CLONENUM, command, course, directory root, echo, echo command, eth, fedora linux, file, ifconfig eth0, ip addresses, ips, jambo, lan, lan interface, Linux, mambo jambo, Netmask, network, network scripts, Networking, range, rangeNow, Redhat, root, script directory, Shell, shell script, sysconfig, task, text, tiny shell, virtual interface, way
Posted in Linux and FreeBSD Desktop, System Administration | No Comments »
Thursday, July 14th, 2011 
Just recently it was necessary to load the tun kernel module on a few CentOS Linux servers.
I'm using Debian on a daily basis, and everybody with even a little Debian experience should already be aware of the existence of the handy:
/etc/modules file.
On Debian, to enable a certain kernel module to load on Linux boot, all that is necessary is to place the kernel module name in /etc/modules.
For example loading the tun tunneling kernel module I issue the command:
debian:~# echo tun >> /etc/modules
I wondered whether CentOS also supports /etc/modules , as it was now necessary to make this tun module load on CentOS's boot.
After a bit of research I figured out that CentOS does not support adding module names to /etc/modules . After consulting the CentOS documentation on http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-kernel-modules-persistant.html , I found that CentOS and RHEL use /etc/rc.modules instead of Debian's /etc/modules to load any custom kernel modules not loaded by default during system boot.
Therefore instructing the RHEL Linux to load up my desired tun module in kernel on next boot was as easy as executing:
[root@centos ~]# echo 'modprobe tun' >> /etc/rc.modules
[root@centos ~]# chmod +x /etc/rc.modules
Now on next boot CentOS will load the tun module into the kernel. Achieving the same module load-up is also possible through /etc/rc.local , but that is not the recommended way: /etc/rc.local runs after all the rest of the init boot scripts complete, and would therefore load the module slightly later, at the final boot stage.
Tags: basis, boot, boot scripts, boot stage, CentOS, command, custom, custom kernel, daily basis, Debian, deployment guide, everybody, existence, experience, final boot, kernel, kernel module, kernel modules, Linux, linux servers, modprobe, Module, modulesNow, name, necessery, rhel, root, stage, support, system boot, use, way
Posted in Linux, System Administration | No Comments »
Saturday, July 9th, 2011 
These days, I'm managing many, many servers, ordered in a few groups. The servers in each group have identical hardware, an identical Linux distribution and identical configuration.
Since managing multiple servers one by one takes a lot of time, applying changes to every single host while losing time looking for its password is not a good idea.
Thus I was forced to start managing the servers in a cluster-like fashion, executing commands on a whole server group using a simple bash for loop etc.
To be able to use this mass execution, of course, I needed a way either to pass the server group password just once and issue a command on the whole server group, or to use a passwordless SSH authentication key pair.
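The simple bash for loop mentioned above can be as small as the sketch below; the host names are hypothetical and the echo makes it a dry run printing the command per host (remove the echo to really execute over SSH):

```shell
# Dry run: print the ssh command that would be issued on each host in the group.
for host in server1 server2 server3; do
    echo ssh root@"$host" uptime
done
```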
Before I switched to using SSH keys for passwordless authentication, I first tried a few tools which were claimed to be helpful in the task of executing the same commands on a group of servers. I tested the tools pssh, sudossh and dsh, but none of them was capable of logging in and executing a bunch of commands on the group of remote servers.
I gave my best to make pssh work on the Debian and CentOS distributions, but despite all my experiments and efforts, the much-talked-about pssh simply wouldn't work for me!
I’ve seen also another tool called Cluster SSH which was said to be capable of issuing commands simultaneously on a number of hosts.
Cluster SSH looked promising; the only problem was that it's supposed to run under xterm or some kind of X graphics based terminal, and therefore it did not match my needs.
Finally I got pissed off trying these mostly useless mass-command Linux server administration tools and decided to COME BACK TO THE PRIMITIVE 😉 and use the good old, well-established method of passwordless SSH server login with public/private DSA key authentication.
Therefore the problem came down to generating one single DSA SSH authentication key and replicating/copying it to the whole group of 50 servers.
This task initially seemed quite complex, but with the help of a one-liner bash shell script it turned out to be a piece of cake 😉
To achieve this task, all I had to do is:
a. Generate an SSH key with ssh-keygen command
and
b. Use a one liner shell script to copy the generated id_dsa.pub file to each server.
and
c. Create a file containing all server IP addresses to pass to the shell script.
Here are the two lines of code you will have to use to achieve these tasks:
1. Generate a DSA ssh key
linux:~# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hipo/.ssh/id_dsa): y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in y.
Your public key has been saved in y.pub.
The key fingerprint is:
b0:28:48:a2:60:65:5a:ed:1b:9d:6c:ff:5f:37:03:e3 hipo@www.pc-freak.net
Here just press Enter a few times to accept the default key location, and be sure not to fill in any passphrase when asked about it. (In the transcript above 'y' was typed at the file prompt, so the key got saved to y and y.pub instead of the default ~/.ssh/id_dsa.)
2. Create a file containing all server IPs
Just create a file let’s say server-list.txt and make sure you include all the server IPs, one per line.
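If the servers happen to sit in a contiguous address range, the list can be generated instead of typed by hand; a sketch with made-up 10.0.0.x addresses:

```shell
# Generate server-list.txt with 50 consecutive (hypothetical) IPs, one per line.
for i in $(seq 1 50); do
    echo "10.0.0.$i"
done > server-list.txt
wc -l < server-list.txt
```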
3. Use the following little script to upload the newly generated id_dsa.pub to the server list
linux:~# while read line; do ssh-copy-id -i ~/.ssh/id_dsa.pub root@"$line"; done < server-list.txt
Now you will have to paste the server password about 50 times (if you have a file with 50 servers); the good news is it's just wait-and-paste 50 times, provided the servers are all configured to have the same root administrator password (which was the case with me).
So until you do the boring pasting part, you can start up a nice music and enjoy 😉
Cheers 😉
Tags: administration tools, authentication, BACK, CentOS, course, distributions, DSA, dsh, fashion, file, hipo, host, hosts, Linux, linux distribution, mass, mass execution, none, passphrase, password, PRIMITIVE, pssh, root, server administration, server group, server groups, server login, servers, Shell, ssh server, terminal, time, tool, work, xterm
Posted in Linux, Linux and FreeBSD Desktop, Programming, System Administration | 2 Comments »
Tuesday, June 28th, 2011 
I've been issuing a new wildcard multi-domain SSL certificate to renew an expiring one. After I completed the new certificate setup manually on the server (a CentOS 5.5 Final running SolusVM Pro – Virtual Private Manager), I launched Firefox to check whether the certificate was properly configured.
Contrary to my expectation that the browser would just accept the certificate without spitting out any error messages and all would be fine, I instead got an error with the just-installed certificate, and thus the browser failed to report the SSL cert as properly authenticated.
The company used to issue the SSL certificate is GlobeSSL – http://globessl.com . It was quite a "hassle" with their tech support, as the first certificate generated by GlobeSSL was based on an SSL key file with 4096-bit key encryption.
As the first issued authenticated certificate generated by GlobeSSL was no good, a further week or so was necessary to complete the required certificate reissuing ….
It wasn't just GlobeSSL's failure, as there were some spam filters on my side preventing some of GlobeSSL's emails from arriving normally; however it was partially their fault, as they haven't made their notification and confirmation emails able to pass a mid-level strong anti-spam filter…
Anyways my overall experience with GlobeSSL certificate reissue and especially their technical support is terrible.
To make a parallel, issuing certificates with GoDaddy is way easier and more straightforward.
Now let me come back to the main certificate error I got in Firefox …
A bit of further investigation into the cert failure led me to the error message, which traced back to the newly installed SSL certificate.
In order to find the exact cause of the SSL certificate failure in Firefox I followed to the menus:
Tools -> Page Info -> Security -> View Certificate
Doing so, in the General tab of the certificate viewer there was the following error:
Could not verify this certificate for unknown reasons
The information online about Could not verify this certificate for unknown reasons was very mixed, and many people suggested many possible causes of the issue, so I was about to lose myself.
Everything about the certificate seemed to be configured just fine in lighttpd; the GlobeSSL-issued .cer and .key files, as well as the CA bundle, were configured to be read and used in lighttpd's configuration file:
/etc/lighttpd/lighttpd.conf
Here is a section taken from lighttpd.conf file which did the SSL certificate cert and key file configuration:
$SERVER["socket"] == "0.0.0.0:443" {
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/wildcard.mydomain.bundle"
}
The file /etc/lighttpd/ssl/wildcard.mydomain.bundle contained the content of both the .key file (generated on my server with openssl) and the .cer file (issued by GlobeSSL), as well as the CA bundle (by GlobeSSL).
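To illustrate how such a combined pemfile is put together, the sketch below builds one with a throwaway self-signed key and certificate (real deployments concatenate the CA-issued .cer instead; the demo.example.com name and file names are made up):

```shell
# Generate a disposable key + self-signed cert, then combine them into the
# single file that lighttpd's ssl.pemfile directive reads.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo.key -out demo.cer -subj "/CN=demo.example.com" 2>/dev/null
cat demo.key demo.cer > wildcard.demo.bundle
# Sanity check: the certificate inside the bundle is readable.
openssl x509 -in wildcard.demo.bundle -noout -subject
```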
Even though all seemed to be configured well the SSL error Could not verify this certificate for unknown reasons was still present in the browser.
GlobeSSL tech support suggested that I try their web key matcher interface – https://confirm.globessl.com/key-matcher.html – to verify that everything was fine with my certificate and the cert key. Thanks to this interface I figured out that the issued certificate itself was fine and something else had to be causing the SSL oddities.
I was further referred by GlobeSSL tech support for another web interface to debug errors with newly installed SSL certificates.
This interface is called Verify and Validate Installed SSL Certificate and is found here
Even though this SSL domain installation error report and debug tool did some helpful suggestions, it wasn’t it that helped me solve the issues.
What helped was, first, the suggestion made by one of the many tech support guys at GlobeSSL that something was wrong with the CA bundle, and, in the first place, the documentation on SolusVM's wiki – http://wiki.solusvm.com/index.php/Installing_an_SSL_Certificate .
According to SolusVM's documentation, lighttpd.conf had to have one extra line pointing to a separate file containing the issued CA bundle (the issuing SSL authority's combined intermediate and root certificates).
The line I was missing in lighttpd.conf (described in dox), looked like so:
ssl.ca-file = "/usr/local/solusvm/ssl/gd_bundle.crt"
Thus to include the directive I changed my previous lighttpd.conf to look like so:
$SERVER["socket"] == "0.0.0.0:443" {
ssl.engine = "enable"
ssl.pemfile = "/etc/lighttpd/ssl/wildcard.mydomain.bundle"
ssl.ca-file = "/etc/lighttpd/ssl/server.bundle.crt"
}
Where server.bundle.crt contains an exact paste of the certificate (CA bundle) mailed by GlobeSSL.
There were a couple of other ports on which SSL was configured, so I had to include this configuration directive everywhere in my conf where anything SSL-related appeared.
Finally, to make the new settings take effect, I did a lighttpd server restart.
[root@centos ssl]# /etc/init.d/lighttpd restart
Stopping lighttpd: [ OK ]
Starting lighttpd: [ OK ]
After lighttpd reinitialized, the error was gone! Cheers! 😉
Tags: anti spam filter, bundle, CentOS, cert, certficate, certificate, certificate error, Certificates, completethe, confirmation, confirmation emails, directive, encryption, Engine, error message, error messages, everything, exact cause, failure, file, Firefox, generation, godaddy, hassle, key file, menus, mid level, necessery, pemfile, place, private manager, Socket, something, spam filters, ssl certificate, support, tech support, time, Virtual
Posted in System Administration, Web and CMS | No Comments »
Tuesday, June 21st, 2011 
I was just recommended a nifty tool by a friend, one that is absolutely handy for system administrators.
The tool is called sshsudo and the project is hosted on http://code.google.com/p/sshsudo/.
Let's say you're responsible for 10 servers with the same operating system, say CentOS 4, and you want to install tcpdump and vnstat on all of them without logging in one by one to each of the nodes.
This task is really simple using sshsudo.
A typical use of sshsudo is:
[root@centos root]# sshsudo -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 yum install tcpdump vnstat
Consequently a password prompt will appear on the screen:
Please enter your password:
If all the servers are configured to have the same administrator root password, then typing the root password just once will be enough and the command will be issued on all the servers.
The program can also be used to run a custom admin script by automatically uploading the script to all the servers and then executing it.
One typical use, running a custom bash shell script on ten servers, would be:
[root@centos root]# sshsudo -r -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 /pathtoscript/script.sh
I’m glad I found this handy tool 😉
Tags: admin script, Auto, bash shell script, CentOS, command, comp, comp3, comp6, custom, Draft, google, handy tool, nifty tool, operating system, password, project, root, Runing, screen, script, script upload, servers, Shell, SSHSUDO, sudo, system administrators, task, tcpdump, tool, upload, use, vnstat, yum
Posted in System Administration | No Comments »