Posts Tagged ‘gz’

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

I recently had a task to transfer some huge, long-stored Application server data (about 70GB) in archived form between an old Linux host and a new one, where a new Tomcat Application (Linux) server will be installed to cope with the increased site load (the old server hardware was overloaded).

The two systems are in a paranoid DMZ network and have no access to each other via SSH / FTP / FTPS, and not even Web access on port 80 (or SSL – 443) between the two hosts, so in order to move the data I had to use a third HOP station, a Windows server with a huge 150 TB SAN network-attached storage (a mapped drive I:/).

On the Windows HOP station, which gives me access to the DMZ-ed network via Citrix Receiver, I'm using MobaXterm, so the basic UNIX commands such as sftp / scp are already available on the Windows system.
Thus, to transfer the archived .tar.gz files of the Chronos Tomcat application, I sftp-ed into the Linux host and used the get command to retrieve the archive, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network appeared to have a traffic shaper limiting my get / secure SCP download to 2.5 MBytes/sec, so the overall file transfer was going to take about 08:30 hours to complete. It was the middle of the day, about 13:00, and my work day ends at 18:00, meaning I could keep the file retrieval session alive for a maximum of 5 hours, and the transfer would be cancelled when I logged out of the HOP station (after 18:00). By then I had already let the transfer run for 2 hours and about 23% of the file had been retrieved, so I wondered whether SCP / SFTP protocol downloads could be resumed.

I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself, however neither of them has a resume option. Then I thought for a while about what I could use to continue the interrupted download, and remembered that good old rsync (a versatile remote and local file copying tool), which I often use to create customer backup strategies, has the ability to resume partially downloaded files. I wondered whether this partial-download resume would work only if the transfer had been initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the retrieval (rsync can continue transfers cancelled / failed due to network problems or user activity). It turned out to be pretty easy to continue the failed download from where it was interrupted. I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue the command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


The --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode (equal to -rlptgoD, no -H, -A, -X), while -v lists the files being transferred. An easier to remember rsync resume command uses -P, which equals --partial --progress and also prints a percentage / transfer-rate status line with an estimated time for the transfer to complete:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work, the remote Linux system must have the rsync package installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to easily resume large gigabyte or terabyte archive downloads between two Windows hosts, use cwRsync.
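A quick way to verify that prerequisite before starting is a minimal sketch like the following (the hostname is the placeholder used in the examples above):

ssh username@Linux-server.net 'command -v rsync' && echo "remote rsync found" || echo "no rsync on remote host - install it first"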

Fixing Shellshock new critical remote bash shell exploitable vulnerability on Debian / Ubuntu / CentOS / RHEL / Fedora / OpenSuSE and Slackware

Friday, October 10th, 2014

If you still haven’t heard about the ShellShock Bash (Bourne Again shell) remote exploit vulnerability and you admin some Linux server, you should definitely read up on it seriously. The ShellShock Bash vulnerability became public on Sept 24 and is described in detail here.

The vulnerability allows a remote malicious attacker to execute arbitrary code under certain conditions, by passing strings of code following environment variable assignments. Affected are most bash versions, starting with bash 1.14 up to bash 4.3.
Even if you have patched, there are reports of other bash flaws in the way bash handles shell variables, so probably in the coming month there will be even more patches to follow.

OSes affected by the bash flaw are Linux, Mac OS and the BSDs; vulnerable vectors include:

• Some DHCP clients

• OpenSSH servers that use the ForceCommand capability (in sshd_config)

• Apache webservers that use CGI scripts through mod_cgi and mod_cgid, as well as CGIs written in bash or launching bash subshells

• Network exposed services that use bash somehow

Even though there is a patch, there are further reports from both Google developers and RedHat devs claiming the patch is ineffective; they say there are other flaws in how bash handles variables which lead to the same remote code execution.

There are already a couple of online testing tools to check whether your website or a certain script on a website is vulnerable to bash remote code execution; a few online remote bash vulnerability scanners are here and here. Also a good usable resource to test whether your webserver is vulnerable to a ShellShock remote attack is found on ShellShocker.Net.
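You can also test a shell locally: the widely circulated one-liner for the original CVE-2014-6271 flaw prints "vulnerable" on an unpatched bash and only the trailing echo on a fixed one:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"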

As there are plenty of non-standard custom-written scripts online, not much publicity about the problem, and many lazy admins, the vulnerability will stay unpatched for a really long time, and we’re about to see more and more exploit tools circulating in the script kiddies’ IRC botnets.

Fixing the bash ShellShock remote vulnerability on Debian 5.0 Lenny.

Follow the article suggesting how to fix the remotely exploitable bash in a few steps on older unsupported Debian 4.0 / 3.0 etc. – here.

Fixing the bash ShellShock vulnerability on Debian 6.0 Squeeze. For those who haven’t heard: since April 2014 there is a Debian LTS (Long Term Support) repository. To fix bash in Debian 6.0, use the LTS package repository, as described in the following article.

If you have issues patching bash on your Debian Squeeze 6.0 Linux, it might be because you already have a newer version of bash installed and apt-get refuses to overwrite it with an older version provided by the Debian LTS repos. The quickest and surest way to fix it is literally the following:


vim /etc/apt/sources.list

Paste inside to use the following LTS repositories:

deb http://http.debian.net/debian/ squeeze main contrib non-free
deb-src http://http.debian.net/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free
deb http://http.debian.net/debian squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian squeeze-lts main contrib non-free

Further on, to check the available installable deb package versions, issue:



apt-cache showpkg bash
...
...
Provides:
4.1-3+deb6u2 -
4.1-3 -
Reverse Provides:

As you see, there are two installable versions of bash: one from the default Debian 6.0 repos (4.1-3) and a second one, 4.1-3+deb6u2. Another way to check the possible alternative installable versions, when more than one version of a package is available, is with:



apt-cache policy bash
...
*** 4.1-3+deb6u2 0
500 http://http.debian.net/debian/ squeeze-lts/main amd64 Packages
100 /var/lib/dpkg/status
4.1-3 0
500 http://http.debian.net/debian/ squeeze/main amd64 Packages

Then to install the LTS bash version on Debian 6.0 run:



apt-get install bash=4.1-3+deb6u2
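Once the install completes, it is worth double-checking that the LTS build is what actually ended up on the system and that the hole is really closed; a small sanity check, reusing the CVE-2014-6271 one-liner from above:

dpkg -l bash
env x='() { :;}; echo vulnerable' bash -c "echo test complete"

The dpkg version column should now show 4.1-3+deb6u2, and the one-liner must no longer print "vulnerable".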

Patching supported Ubuntu Linux versions against the ShellShock bash vulnerability:
A security notice addressing the Bash vulnerability in Ubuntu is in the Ubuntu Security Notices (USN) here.
USNs are the way Ubuntu discloses packages affected by security issues, thus Ubuntu users should keep a frequent eye on the Ubuntu Security Notices:

apt-get update
apt-get install bash

Patching the Bash ShellShock vulnerability on EOL (End of Life) versions of Ubuntu:

mkdir -p /usr/local/src/dist && cd /usr/local/src/dist
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz.sig
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz
wget http://tiswww.case.edu/php/chet/gpgkey.asc
gpg --import gpgkey.asc
gpg --verify bash-4.3.tar.gz.sig
cd ..
tar xzvf dist/bash-4.3.tar.gz
cd bash-4.3
mkdir patches && cd patches
wget -r --no-parent --accept "bash43-*" -nH -nd \
  ftp.heanet.ie/mirrors/gnu/bash/bash-4.3-patches/ # Use a local mirror
echo *sig | xargs -n 1 gpg --verify --quiet # verify each patch signature

cd ..
echo patches/bash43-0?? | xargs -n 1 patch -p0 -i # apply all downloaded patches

./configure --prefix=/usr --bindir=/bin \
    --docdir=/usr/share/doc/bash-4.3 \
    --without-bash-malloc \
    --with-installed-readline

make
make test && make install

To solve the bash vulnerability on recent Slackware Linux:

slackpkg update
slackpkg upgrade bash

For old Slackware releases, either download a patched version of bash, or download the source for the currently installed package and apply the respective patch for the ShellShock vulnerability.
There is also a GitHub project collecting “ShellShock” Proof of Concept code demonstrations – https://github.com/mubix/shellshocker-pocs
There are also non-confirmed speculations that the bash vulnerability bug impacts:

Speculations (non-confirmed, possibly vulnerable common server services):

• XMPP(ejabberd)

• Mailman

• MySQL

• NFS

• Bind9

• Procmail

• Exim

• Juniper Google Search

• Cisco Gear

• CUPS

• Postfix

• Qmail

Fixing ShellShock bash vulnerability on supported versions of CentOS, Redhat, Fedora

In supported versions of CentOS that have not reached EOL:

yum -y install bash

On recent RedHat and Fedora releases, to patch:

yum update bash

To patch the bash vulnerability in OpenSUSE:

zypper patch --cve=CVE-2014-7187

ShellShock is a worse vulnerability than the recent severe SSL vulnerability Heartbleed. According to RedHat and other sources, this new bash vulnerability is already actively exploited in the wild, and probably worms are already crawling the net stealing passwords and data and building IRC botnets for remote control and UDP flooding.

Fix “tar: Error exit delayed from previous errors” and its cause and solution

Monday, August 18th, 2014


tar: Error exit delayed from previous errors

is a very common error encountered when creating archives (or backing up server configurations / websites / SQL binary data). The error message is quite uninformative, and even when running tar verbosely in order to see the files added to the archive in "real time", with let's say:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


it is pretty hard to track exactly on which file the backup produces the Error exit delayed from previous errors; this is especially the case when adding to the archive directories containing millions of tiny, few-kilobyte files. Many novice or incautious Linux admins might simply ignore the warning if they're in a hurry / have excessive work to do, as a .tar.gz backup is still produced and when uncompressed most of the files are there, so the backup error may seem like no big issue.

However, as backing up files is vital, especially when moving files off a server about to be decommissioned, you have to be extra careful and make the backup properly, i.e. figure out the cause of the error. To do so, log the full output of the tar operation (including stderr, where tar prints its errors) with the tee command, like so:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites/ /home/sql 2>&1 | tee /tmp/backup_tar_full_output.log
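Once the run finishes, a quick grep sketch (assuming the log path used above) can pull the offending lines out of the log directly:

grep -iE 'error|permission den' /tmp/backup_tar_full_output.log | tail -10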

Alternatively, review the file with less and search (the / slash key) for "error" and "permission den" keywords; this should point you to what is causing the error. In cases when millions of files are archived, the log might grow really big and be hard to process, therefore a much quicker way is to log only the verbose file list and leave the errors printed on the shell's standard error, with > (shell redirect):
 

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql > /tmp/backup_failure-cause.log

 

tar: www.ur-website.com-http/2.0.63/conf/tnsnames.ora.20080918: Cannot open: Permission denied
tar: Removing leading `/' from member names

The error clearly indicates the cause: lack of permission to read the file tnsnames.ora.20080918. The solution is either to grant permissions to the non-root user with the chmod / chown commands (in my case, grant permissions to the user hipo with which tar is run), or to run the website backup again as the superuser. I usually just run it as root to prevent tampering with the original permissions; e.g., to solve the error, either:

$ su root
# tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql

Or even better, if sudo is installed and the user is added to the /etc/sudoers file:

$ sudo tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


Though permission errors are the most frequent reason for:

tar: Error exit delayed from previous errors

you should keep in mind that in some cases the error might be caused by a failing disk that is a RAID member, or by a single HDD failure on systems that are not in any RAID array.
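If you suspect failing hardware rather than permissions, a quick health check with smartmontools' smartctl is a reasonable first step (assuming the package is installed and /dev/sda is the suspect drive):

smartctl -H /dev/sda

Anything other than an overall health self-assessment of PASSED means the drive should be replaced before trusting further backups to it.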

 

Fun with Apache / Nginx Webserver log – Visualize webserver access log in real time

Friday, July 18th, 2014

If you're working in a hosting company and looking for a graphical way to visualize access to your Linux webservers (Apache, Nginx, Lighttpd), you will be happy to learn about Logstalgia's existence. Logstalgia is very useful if you need to convince your boss / company clients that the webservers are exceeding the CPU / memory limits the physical servers can handle. Even if you don't have to convince anyone of anything, Logstalgia is cool to run if you want to impress a friend and show off your 1337 4Dm!N Sk!11Z 🙂 Logstalgia is a much more pleasant way to keep an eye on your webserver log files in real time than tail -f.

The graphical output of Logstalgia is a pong-like battle game between the webserver and a never ending chain of web requests.

This is the official website description of Logstalgia:
 

Logstalgia is a website traffic visualization that replays web-server access logs as a pong-like battle between the web server and a never ending torrent of requests. Requests appear as colored balls (the same color as the host) which travel across the screen to arrive at the requested location. Successful requests are hit by the paddle while unsuccessful ones (eg 404 – File Not Found) are missed and pass through. The paths of requests are summarized within the available space by identifying common path prefixes. Related paths are grouped together under headings. For instance, by default paths ending in png, gif or jpg are grouped under the heading Images. Paths that don’t match any of the specified groups are lumped together under a Miscellaneous section.


To install Logstalgia on Debian / Ubuntu Linux there is a native package, so to install it run the usual:

apt-get --yes install logstalgia

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  logstalgia
0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
Need to get 161 kB of archives.
After this operation, 1,102 kB of additional disk space will be used.
Get:1 http://mirrors.kernel.org/debian/ stable/main logstalgia amd64 1.0.0-1+b1 [161 kB]
Fetched 161 kB in 2s (73.9 kB/s)
Selecting previously deselected package logstalgia.
(Reading database ... 338532 files and directories currently installed.)
Unpacking logstalgia (from .../logstalgia_1.0.0-1+b1_amd64.deb) ...
Processing triggers for man-db ...
Setting up logstalgia (1.0.0-1+b1) ...


Logstalgia is easily installable from source code on non-Debian Linux distributions too; to install it on any non-Debian Linux distribution do:

cd /usr/local/src/
wget https://logstalgia.googlecode.com/files/logstalgia-1.0.5.tar.gz
 

--2014-07-18 13:53:23--  https://logstalgia.googlecode.com/files/logstalgia-1.0.3.tar.gz
Resolving logstalgia.googlecode.com... 74.125.206.82, 2a00:1450:400c:c04::52
Connecting to logstalgia.googlecode.com|74.125.206.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 841822 (822K) [application/x-gzip]
Saving to: `logstalgia-1.0.3.tar.gz'

100%[=================================>] 841,822     1.25M/s   in 0.6s

2014-07-18 13:53:24 (1.25 MB/s) - `logstalgia-1.0.3.tar.gz' saved [841822/841822]

Untar the archive with:
 

tar -zxvf logstalgia-1.0.5.tar.gz

Compile and install it:

cd logstalgia
./configure
make
make install

 

How to use Logstalgia?

Syntax is pretty straightforward: just pass it the Nginx / Apache access log as an argument.

Process Debian Linux Apache logs:

logstalgia /var/log/apache2/access.log


Process CentOS, RedHat and other RPM-based distros' logs:

logstalgia /var/log/httpd/access.log

To process a webserver log in real time with logstalgia:

tail -f /var/log/httpd/access_log | logstalgia -

To watch the logstalgia visualization you would normally need access to the server's physical console screen, and physical access is not possible on most dedicated servers already colocated in some datacenter. You can instead use a local Linux PC / notebook with Logstalgia installed to process webserver access logs remotely, like so:


ssh hipo@pc-freak.net tail -f /var/log/apache2/access.log | logstalgia --sync

Note! If you get empty output from logstalgia, this is because of permission issues; in this example my user hipo is added to the www-data Apache group. If you want to add your user to have access like me, issue on the remote ssh server:
 

addgroup hipo www-data
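Equivalently, on distributions without Debian's addgroup helper, the same can be done with usermod (the group change takes effect on next login):

usermod -aG www-data hipo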


Alternatively you can login over ssh as root, e.g. ssh root@pc-freak.net

If you have a GNOME / KDE X environment on the Linux machine from which you're ssh-ing, Logstalgia will visualize the webserver access.log requests inside a new X window; otherwise, on a Linux with just a console and no X server graphics, it will visualize the web log statistics graphically on the console using svgalib.

 

If you're planning to save the output of the Logstalgia visualization screen for later use – let's say you have to present statistics about all your webservers' logs to your CEO – you can save the Logstalgia-produced video in .ppm (netpbm) format.

If you have physical console access to the server:

logstalgia -1280x720 --output-ppm-stream output.ppm /var/log/httpd/access.log

Or if you just have a PC with Linux and you want to save the visualized content of access.log remotely:

ssh hipo@pc-freak.net tail -f /var/log/nginx/pc-freak-access.log | logstalgia -1280x720 --output-ppm-stream --sync output.ppm

 


To make the produced .ppm usable later, you can use ffmpeg to convert it to .mp4:

ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i output.ppm -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -crf 1 -threads 0 -bf 0 nginx.server.log.mp4

Then to play the video use any video player; I usually use vlc or mplayer.

For complete info on Logstalgia – the website access log visualizer – check its home page on Google Code.

If you're too lazy to install Logstalgia, here is a YouTube video made from its output:

Enjoy 🙂

Windows equivalent of Linux which, whereis command – Windows WHERE command

Friday, June 6th, 2014

In Linux there are the which and whereis commands, showing you the location of binaries included in $PATH:

# which lsof
/usr/bin/lsof

# whereis lsof
lsof: /usr/bin/lsof /usr/share/man/man8/lsof.8.gz

so the question arises: what is the Windows equivalent of the Linux which / whereis commands?

In older Windows Home / Server editions – e.g. Windows XP, 2000, 2003 – there is no standard installed tool to show you the location of executables defined in the Windows %PATH%. However, it is possible to add the WHERE command binary by installing the Windows Resource Kit Tools for administrative tasks.


 

In Windows Vista / 7 / 8 (and presumably in future Windows releases), the WHERE command is available by default:

C:\Users\Georgi>WHERE SQLPLUS
D:\webdienste\application\oracle\11.2.0\client_1\BIN\sqlplus.exe
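WHERE can also search outside %PATH%: a small sketch using its /R switch to hunt recursively below a directory (the path is just the one from the example above):

C:\Users\Georgi>WHERE /R D:\webdienste sqlplus.exe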

Cheers! 🙂

Linux: List last 10 (newest) and 10 oldest modified files in a directory with ls

Tuesday, April 8th, 2014

A useful thing on GNU / Linux is sometimes to list the last (newest) or oldest modified files in a directory.

Let's say you want to list the last 10 modified files (e.g. from today / yesterday) with ls. Here is how:
 

ls -1t | head -10
my.cnf
wordperss_enabled_plugins.txt
newcerts/
mysql-hipo_pcfreakbiz.dump
NewArchive-Jan-10-15.zip
hipo_pcfreakbiz-mysqldb-any-out-1389361234.tgz
Tisho_Snimki/
wordpress/
wp-cron.php?doing_wp_cron=1.1
wp-cron.php

 

To list the 10 oldest modified files on Linux:

 

ls -1t | tail -10
    my.cnf
    pcfreak_sql-15_10_05_2012
    mysql-tuning-primer*
    tuning-primer.sh*
    system-administration-services.html
    blog_backup_15_07_2012.tar.gz
    www-files/
    dump.sql
    courier-imap*
    djbdns-1.05.tar.gz
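Equivalently, you can reverse the time sort with -r, so the oldest entries come first and head can be used instead of tail:

ls -1tr | head -10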


Cheers 😉

Make QMAIL with vpopmail vchkpw, courier-authlib and courier-imap auth work without MySQL on Debian Linux qmailrocks Thibs install

Friday, September 28th, 2012


I recently installed a new QMAIL, mostly following Thibs' Qmailrocks install guide. I didn't follow Thibs' good guide literally, because in a few of the sections, like Install Vpopmail, he recommends using MySQL as a backend to store Vpopmail email data and passwords; I prefer storing all vpopmail data on the file system, as I believe it is much better, especially for tiny QMAIL mail servers with fewer than 500 mailbox accounts.

In this little article I will explain how I made Vpopmail, courier-authlib and courier-imap play nicely together without storing data in an SQL backend.

1. Compile vpopmail with file system data storage support

So here is how I managed to make vpopmail + courier-authlib + courier-imap work well together:

First it is necessary to compile Vpopmail to store all its user data and mail data on the file system. For this, in Thibs' Vpopmail Install step, I compiled Vpopmail without support for MySQL, i.e. instead of his suggested compile-time ./configure arguments I used:


# cd /downloads/vpopmail-5.4.33
# ./configure \
--enable-qmaildir=/var/qmail/ \
--enable-qmail-newu=/var/qmail/bin/qmail-newu \
--enable-qmail-inject=/var/qmail/bin/qmail-inject \
--enable-qmail-newmrh=/var/qmail/bin/qmail-newmrh \
--enable-tcprules-prog=/usr/bin/tcprules \
--enable-tcpserver-file=/etc/tcp.smtp \
--enable-clear-passwd \
--enable-many-domains \
--enable-qmail-ext \
--enable-logging=y \
--enable-auth-logging \
--enable-libdir=/usr/lib/ \
--disable-roaming-users \
--disable-passwd \
--enable-domainquotas \
--enable-roaming-users
....
....
# make && make install-strip
# cat > ~vpopmail/etc/vusagec.conf << __EOF__
Server:
Disable = True;
__EOF__
echo 'export PATH=$PATH:/var/qmail/bin/:/home/vpopmail/bin/' > /etc/profile.d/extrapath.sh
chmod +x /etc/profile.d/extrapath.sh
source /etc/profile

A tiny shell script with all above options to compile (qmail) vpopmail without MySQL / PostgreSQL support is here

For the other steps concerning creation of the vpopmail/vchkpw user/group, just follow what Thibs suggests.

2. Compile and install courier-authlib-0.59.1

I’ve made a mirror of courier-authlib.0.59.1.tar.gz because this version includes support for vchkpw without MySQL; it’s a pity newer versions of courier-authlib no longer support vpopmail storing its data directly on the hard disk.

Then download, compile && install courier-authlib.

Download courier-authlib.0.59.1.tar.gz (I made a mirror of courier-authlib.0.59.1.tar.gz; you can use my mirror or download it somewhere else from the net):


# cd /usr/local/src
# wget -q http://www.pc-freak.net/files/courier-authlib.0.59.1.tar.gz
# tar -zxvvf courier-authlib.0.59.1.tar.gz

Compile courier-authlib

# ./configure --prefix=/usr/local --exec-prefix=/usr/local \
    --with-authvchkpw --without-authldap --without-authmysql \
    --disable-root-check --with-ssl \
    --with-authchangepwdir=/usr/local/libexec/authlib
....
# make && make install && make install-strip && make install-configure
....

On Debian Squeeze, this version of courier-authlib compiles fine, on Debian Lenny I use it too and there it is okay.

Unless the above commands return a compile error, authlib will be installed inside /usr/local/libexec. If you get any errors, it is most likely due to missing header files. The error should be self-explanatory enough, but just in case you have trouble finding which deb is necessary to install, please check here the complete list of installed packages I have on the host. In case of problems, the quickest way (if on Debian Squeeze) is to install the same packages; type:


# wget -q http://www.pc-freak.net/files/list_of_all_deb_necessery_installed_packages_for_authlib.txt
# for i in $(cat list_of_all_deb_necessery_installed_packages_for_authlib.txt |awk '{ print $2 }'); do
apt-get install --yes $i;
done

This is for the lazy ones, though it might install some packages you don't want on your host, so only do it if you know what you're doing 🙂

Next step is to set proper configuration for courier-authdaemon.

3. Configure courier-authlib in /usr/local/etc/authlib

Again, for the lazy ones, I have prepared a good config which works 100% with vpopmail configured to store mails on the file system. To install the “good” configs, fetch mine and put them in the proper location, e.g.:


# cd /usr/local/etc
# wget -q http://www.pc-freak.net/files/authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
# tar -zxvvf authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
....

For those who prefer not to use my configuration as pointed above, here is what you will need to change manually in configs:

Edit /usr/local/etc/authlib/authdaemonrc and make sure the variables authmodulelist, authmodulelistorig and daemons are set to:


authmodulelist="authvchkpw"


authmodulelistorig="authuserdb authpgsql authldap authmysql authcustom authvchkpw authpipe"


daemons=10

Bear in mind that the daemons setting defines the maximum number of parallel connections possible to authdaemond on new IMAP mail fetch requests. Setting it to 10 will allow your mail server to support up to 10 users checking mail in parallel; for a tiny mail server this setting is okay, but if you expect a higher number of parallel mail users, raise the setting to a value fitting your needs.

P.S. On some qmail installations this value has created weird problems that took me hours to debug, the whole mess being caused by this one setting; make sure you plan it now unless you want to lose time in the future.

4. Stop debian courier-authdaemon and start custom compiled one

Now all is ready and authdaemond can be started. But before that, if you have installed courier-authlib as a Debian package, you need to stop it via its init script, and only when you are completely sure the old default Debian courier-authdaemon is stopped, launch the newly installed one:


# /etc/init.d/courier-authdaemon stop
# ps ax |grep -i authdaemond |grep -v grep
#
# /usr/local/sbin/authdaemond start
#
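At this point it is worth a quick sanity check that authdaemond really resolves vpopmail accounts. courier-authlib ships an authtest utility for exactly that; a minimal check, assuming the default sbin install path for this prefix and using a placeholder mailbox / password:

# /usr/local/sbin/authtest someuser@your-mail-domain.com 'secret-password'

On success authtest prints the account's uid / gid, home directory and maildir; on failure it reports an authentication error.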

To make the newly custom-compiled courier-authdaemon load itself on system boot instead of the Debian installed package:


# dpkg -l |grep -i courier-authdaemon
ii courier-authdaemon 0.63.0-3 Courier authentication daemon

open /etc/init.d/courier-authdaemon, and after the line:


. /lib/lsb/init-functions

add


/usr/local/sbin/authdaemond start
exit 0

This will make the script exit right after it launches the command /usr/local/sbin/authdaemond start.

5. Compile and Install courier-imap

You will also have to install courier-imap from a source archive. I have tested it and know that Qmail + Vpopmail + Courier-IMAP works for sure with version courier-imap-4.1.2.tar.bz2.

As of the time of writing this post, courier-imap-4.11.0.tar.bz2 is the latest available for download from the Courier-IMAP download site; unfortunately that version requires a higher version of courier-authlib (>= courier-authlib-0.63).

To install courier-imap-4.1.2.tar.bz2:


# cd /usr/local/src
# wget -q http://www.pc-freak.net/files/courier-imap-4.1.2.tar.bz2
# tar -jxvvf courier-imap-4.1.2.tar.bz2
...
# chown -R hipo:hipo courier-imap-4.1.2
# su hipo
$ cd courier-imap-4.1.2/
$ export CFLAGS="-DHAVE_OPEN_SMTP_RELAY -DHAVE_VLOGAUTH"
$ export COURIERAUTHCONFIG=/usr/local/bin/courierauthconfig
$ export CPPFLAGS=-I/usr/local/courier-authlib/include
$ ./configure --prefix=/usr/local/courier-imap --disable-root-check
...
$ exit
# make
...
# make install
...
# make install-configure

It is recommended that courier-imap be compiled as a non-root user. In the above code I use my username hipo; other people should use any non-root user.

6. Set proper configuration and new init script for courier-imap

In /usr/lib/courier-imap, download the following working configs (for convenience I've made a tar with my configs):


# cd /usr/lib/courier-imap
# rm -rf etc
# wget -q http://www.pc-freak.net/files/courier-imap-config-etc.tar.gz
# tar -zxvvf courier-imap-config-etc.tar.gz

Then you will have to overwrite the default courier-imap init script in /etc/init.d/courier-imap with another one that starts the custom compiled courier-imap instead of the Debian default:


# mv /etc/init.d/courier-imap /root
# cd /etc/init.d
# wget -q http://www.pc-freak.net/files/debian-courier-imap
# mv debian-courier-imap courier-imap
# chmod +x courier-imap

This init script is written to use /var/lock/subsys/courier-imap, so you will also have to create /var/lock/subsys/:


# mkdir -p /var/lock/subsys

7. Start custom installed courier-imap

The start/stop init script of the newly installed courier-imap is /usr/lib/courier-imap/libexec/imapd.rc:


/usr/lib/courier-imap/libexec/imapd.rc start

Since a new /etc/init.d/courier-imap is installed too, it can also be used to control courier-imap start/stop.

Well, that should be enough for Courier-IMAP and Courier-Authlib to communicate fine with each other and to connect and fetch e-mail stored on the file system by vpopmail.

8. Test if Qmail IMAP proto finally works


# telnet localhost 143
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE AUTH=CRAM-MD5 ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
a login username@mail-domain.com my-username-password
a OK LOGIN Ok.
a LIST "" "*"
* LIST (\HasNoChildren) "." "INBOX.Sent"
* LIST (\Marked \HasChildren) "." "INBOX"
* LIST (\HasNoChildren) "." "INBOX.Drafts"
* LIST (\HasNoChildren) "." "INBOX.Trash"
a OK LIST completed
a EXAMINE Inbox
* FLAGS (\Draft \Answered \Flagged \Deleted \Seen \Recent)
* OK [PERMANENTFLAGS ()] No permanent flags permitted
* 6683 EXISTS
* 471 RECENT
* OK [UIDVALIDITY 1272460837] Ok
* OK [MYRIGHTS "acdilrsw"] ACL
a OK [READ-ONLY] Ok
* 1 FETCH (BODY[] {2619}
Return-Path:
Delivered-To: hipo@my-domain-name.com
Received: (qmail 22304 invoked by uid 1048); 24 Apr 2012 14:49:49 -0000
Received: from unknown (HELO localhost) (127.0.0.1)
by mail.my-domain-name.com with SMTP; 24 Apr 2012 14:49:49 -0000
Delivered-To: hipo@my-domain-name.com
Received: from localhost [127.0.0.1]
......
......

That’s all, it works. Enjoy 🙂

How to build website sitemap (sitemap.xml) in Joomla

Monday, December 20th, 2010

A Joomla installation I’m managing needed a sitemap.xml created.

From a SEO perspective, sitemap.xml and sitemap.xml.gz are absolutely necessary for a website to be well indexed in Google.

After some investigation I found a perfect program that makes generating sitemap.xml in Joomla a piece of cake.

All that’s necessary for the sitemap.xml to be created comes down to downloading and installing JCrawler.

What makes JCrawler even better is that it’s an open source program.

To install JCrawler, go in the Joomla administrator panel to:

Extensions -> Install/Uninstall

You will see the Install from URL field on the bottom; there, place the link to the com_jcrawler.zip installation archive. For instance, you can place the link to the copy of com_jcrawler.zip I’ve mirrored on pc-freak.net: http://www.pc-freak.net/files/com_jcrawler.zip

The installation will be done in a few seconds and you will hopefully be greeted with an Installation Success message.

Last thing to do in order to have the sitemap.xml in your joomla based website generated is to navigate to:

Components -> JCrawler

There a screen will appear where you can customize certain things related to the sitemap.xml generation, I myself used the default options and continued straight to the Start button.

Further on, a screen will appear asking you to submit the newly generated sitemap to Google, MSN, Ask.com, Moreover and Yahoo, so press the Submit button.
That’s all, now your Joomla website is equipped with sitemap.xml, enjoy!

Convert png files to ico on Linux – Create “icon” files from pictures

Tuesday, November 2nd, 2010

You will need png2ico

First you will have to download the png2ico source

Now you will have to download, compile and install the program by issuing:

debian:~# wget http://www.winterdrache.de/freeware/png2ico/data/png2ico-src-2002-12-08.tar.gz
debian:~# tar -zxvf png2ico-src-2002-12-08.tar.gz
...
debian:~# cd png2ico/
debian:/root/png2ico# make
debian:/root/png2ico# cp -rpf png2ico /usr/local/bin/

Conversion is pretty easy and comes down to simply executing:

debian:/home/hipo$ png2ico favicon.ico png_picture_to_convert.png

Note that your png_picture_to_convert.png has to have dimensions of 16×16 pixels; if it is larger, you can scale it down first, as sketched below.
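A minimal scaling sketch using ImageMagick's convert (assuming ImageMagick is installed; the file names are the ones from the example above):

convert png_picture_to_convert.png -resize 16x16 png_16x16.png
png2ico favicon.ico png_16x16.png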
That’s all, now you should have your favicon.ico created on your Linux.