Posts Tagged ‘client’

How to check the IP address of Skype (user / Contacts) on GNU / Linux with netstat and whois

Thursday, May 3rd, 2012

Checking a Skype contact's IP info with netstat in an xterm on Debian Linux

Before I explain how the netstat and whois commands can be used to check information about a remote Skype user (e.g. the one a Skype message is sent to or received from), I will say in a few words, on an abstract level, how the Skype P2P protocol is designed.
Many hard-core hackers certainly know how Skype operates, so if this is the case just skip the boring few lines of explanation on how the Skype proto works.

In short, Skype transfers its message data, as most people know, in Peer-to-Peer "mode" (P2P) – p2p is unique in that it doesn't require a server to transfer data from one peer to another. The most classical use of p2p networks in the free software realm are the bittorrents.

Skype's way of connecting one peer client to another peer client goes via so called "transport points". To make a P-to-P connection Skype goes through a number of middle point destinations. These transport points (peers) are actually other users logged in Skype, and the data between point A and point B is transferred via these other logged-in users in encrypted form. If a Skype message has to be transferred from Peer A (point A) to Peer B (Point B), or the other way around, the data flows in a way similar to:

 A -> D -> F -> B

or

B -> F -> D -> A

(where D and F are simply other people running skype on their PCs).
The communication of a person A to person B chat in Skype hence always passes through at least a few other IP addresses, owned by some Skype users who happen to be located geographically between the real location of A (the Skype peer sender) and B (the Skype peer receiver).

The exact way Skype clients communicate is way more complex; these basics however should be enough to grasp the basic Skype proto concept for most people …

In order to find the IP address of a certain Skype contact, one needs to check all ESTABLISHED connections of the skype process with netstat within the kernel network stack (connection) queue.

netstat displays a few IPs when the established skype connections are grepped:

noah:~# netstat -tupan|grep -i skype | grep -i established| grep -v '0.0.0.0'
tcp 0 0 192.168.2.134:59677 212.72.192.8:58401 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:57896 87.120.255.10:57063 ESTABLISHED 3606/skype

Now, as a few IPs are displayed, one needs to find out which exactly from the list of ESTABLISHED IPs is the Skype contact from whom the messages in question are received or to whom they are sent.

The first IP address:port in each line (192.168.2.134:…) is the local address of my host running the Skype client. The second one is the IP address of the remote Skype host (Skype Name) to which messages were being transferred at the exact time the netstat command was run.

The easiest way to find exactly which of all the listed IPs is the IP address of the remote person is to send multiple messages in a short time interval (let's say 10 messages over 10 seconds) to the remote Skype contact.

It is a hard task to write 10 msgs in 10 seconds and run netstat 10 times in a separate terminal (simultaneously). Therefore, instead of testing your reflexes, it is a good practice to run a tiny loop which delays its execution by 1 second and runs the prior netstat cmd.

To do so open a new terminal window and type:

noah:~# for i in $(seq 1 10); do \
sleep 1; echo '-------'; \
netstat -tupan|grep -i skype | grep -i established| grep -v '0.0.0.0'; \
done

-------
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
-------
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
-------
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
...

You see, on the first netstat (sequence) exec there is only 1 IP address to which a skype connection is established; once I sent some new messages to my remote Skype friend, another IP immediately appeared. This other IP is actually the IP of the person to whom I'm sending the "probe" skype messages.
Hence, it's most likely that the Skype chat at hand is with a person who has the newly appeared IP address 213.199.179.161
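If you prefer not to eyeball the loop output, a small snapshot-and-diff sketch like the one below can isolate the newly appeared remote IP automatically. The /tmp file names and the 10 second probe window are just my assumptions for illustration:

# 1) snapshot the established skype connections before probing
netstat -tupan | grep -i skype | grep -i established | grep -v '0.0.0.0' | awk '{print $5}' | sort > /tmp/skype_before.txt
# 2) send a few probe messages to the contact and give it ~10 seconds
sleep 10
# 3) snapshot again and diff; lines prefixed with '>' are the remote IP:port pairs that appeared after probing
netstat -tupan | grep -i skype | grep -i established | grep -v '0.0.0.0' | awk '{print $5}' | sort > /tmp/skype_after.txt
diff /tmp/skype_before.txt /tmp/skype_after.txt | grep '>'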

Later, to get exact information on who owns 213.199.179.161, along with administrative contact info and the address of the ISP or person owning the IP, do a RIPE whois:

noah:~# whois 213.199.179.161
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf

% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '87.126.0.0 - 87.126.127.255'
inetnum: 87.126.0.0 - 87.126.127.255
netname: BTC-BROADBAND-NET-2
descr: BTC Broadband Service
country: BG
admin-c: LG700-RIPE
tech-c: LG700-RIPE
tech-c: SS4127-RIPE
status: ASSIGNED PA
mnt-by: BT95-ADM
mnt-domains: BT95-ADM
mnt-lower: BT95-ADM
source: RIPE # Filtered
person: Lyubomir Georgiev
.....

Note that this method of finding out the IP of the remote Skype Name a chat is running with is not always precise.

If for instance you tend to chat with many people simultaneously in Skype, finding the exact IPs of each of the multiple Skype contacts will be a very hard, not to say impossible, task.
Also, when using netstat to capture a Skype Name you're in chat with, there might be plenty of "false positive" IPs.
For instance, Skype might show a remote Skype contact IP correctly, but this might still not be the IP from which the remote Skype user is chatting, as the remote Skype side might not have a uniquely assigned internet IP address but might use its NET connection over a NAT or DMZ.

The remote Skype user might also be hard or impossible to track if the Skype client is run over a tor proxy for the sake of anonymity.
Though it can't be taken for granted that the IP address obtained with the netstat + whois method will be 100% correct, in most cases it is enough to give (at least approximate) info on the Country and City of origin of the person you're skyping with.
 

How to quickly check unread Gmail emails on GNU / Linux – one liner script

Monday, April 2nd, 2012

I've hit an interesting article explaining how to check unread Gmail email messages in the Linux terminal. The original article is here

Being able to read your latest Gmail emails in a terminal/console is a great thing, especially for console geeks like me.
Here is the one liner script:

curl -u GMAIL-USERNAME@gmail.com:SECRET-PASSWORD \
--silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' \
| awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' \
| sed -n "s/<title>\(.*\)<\/title.*<name>\(.*\)<\/name>.*/\2 - \1/p"

Linux Users Group M. – [7] discussions, [10] comments and [2] jobs on LinkedIn
Twitter – Lynn Serafinn (@LynnSerafinn) has sent you a direct message on Twitter!
Facebook – Sys, you have notifications pending
Twitter – Email Marketing (@optinlists) is now following you on Twitter!
Twitter – Lynn Serafinn (@LynnSerafinn) is now following you on Twitter!
NutshellMail – 32 New Messages for Sat 3/31 12:00 PM
Linux Users Group M. – [10] discussions, [5] comments and [8] jobs on LinkedIn
eBay – Deals up to 60% OFF + A Sweepstakes!
LinkedIn Today – Top news today: The Magic of Doing One Thing at a Time
NutshellMail – 29 New Messages for Fri 3/30 12:00 PM
Linux Users Group M. – [16] discussions, [8] comments and [8] jobs on LinkedIn
Ervan Faizal Rizki . – Join my network on LinkedIn
Twitter – LEXO (@LEXOmx) retweeted one of your Tweets!
NutshellMail – 24 New Messages for Thu 3/29 12:00 PM
Facebook – Your Weekly Facebook Page Update
Linux Users Group M. – [11] discussions, [9] comments and [16] jobs on LinkedIn

As you see, this one liner uses curl to fetch the information from mail.google.com's atom feed and then uses awk and sed to parse the returned content and make it suitable for display.

If you want to use the script every now and then on a Linux server or your Linux desktop, you can download the above code in a script file – quick_gmail_new_mail_check.sh – here

Here is a screenshot of script's returned output:

Quick Gmail New Mail Check bash script screenshot

A good use of a modified version of the script is in conjunction with a 15 minute cron job, to check for new Gmail mails and launch your favourite desktop mail client.
This method is useful if you don't want a constantly hanging Thunderbird or Evolution pop3 / imap client on your system just taking up memory or dangling in the window list.
I've done a little modification to the script to simply launch a predefined email reader program if the Gmail atom feed reports that new unread mails are available; check or download my check_gmail_unread_mail.sh here.
Bear in mind that in case of an incorrect username or password, the script will not return any error. The script is missing proper error handling. Therefore, before you use the script make sure:

gmail_username='YOUR-USERNAME';
gmail_password='YOUR-PASSWORD';

are 100% correct.
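For reference, here is a minimal sketch of what such a check-and-launch wrapper could look like; my actual script may differ, and the parsing of the feed's <fullcount> tag as well as the thunderbird launch below are just assumptions for illustration:

#!/bin/bash
# sketch: launch a mail client if the Gmail atom feed reports unread mails
gmail_username='YOUR-USERNAME';
gmail_password='YOUR-PASSWORD';
mail_client='thunderbird'; # assumed reader program, change to your favourite one
# the atom feed exposes the number of unread mails in a <fullcount> tag
unread=$(curl --silent -u "$gmail_username@gmail.com:$gmail_password" \
"https://mail.google.com/mail/feed/atom" | grep -o '<fullcount>[0-9]*' | grep -o '[0-9]*');
if [ -n "$unread" ] && [ "$unread" -gt 0 ]; then
$mail_client &
fi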

To launch the script as a 15 minute cronjob, put it somewhere and add a crontab entry for your (non-root) user:

$ crontab -e
...
*/15 * * * * /path/to/check_gmail_unread_mail.sh

Once you read your new emails in, let's say, Thunderbird, close it, and on the next batch of delivered unread Gmail mails your mail client will pop up by itself again. Once the mail client is closed, the script execution terminates.
Consider that if you get Gmail emails too frequently, using the script might be annoying, as every 15 minutes your mail client will be re-opened.

If you use any of the shell scripts, make sure they are well secured (make them owned and readable only by you). The Gmail username and password are in plain text, so someone could steal your password very easily. For a single user Linux desktop system, as in my case, security is not such a big concern; setting user-only readable script permissions (e.g. chmod 0700) is enough.

How to delete millions of files on busy Linux servers (Work around "Argument list too long")

Tuesday, March 20th, 2012

How to Delete millions or many thousands of files in the same directory on GNU / Linux and FreeBSD

If you try to delete more than 131072 files on Linux with rm -f *, where the files are all stored in the same directory, you will get the error:

/bin/rm: Argument list too long.

I've earlier blogged on deleting multiple files on Linux and FreeBSD and this is not my first time facing this error.
Anyways, as time passed, I've found a few other new ways to delete large multitudes of files from a server.

In this article, I will shortly explain a few approaches to delete a few million obsolete files to clean some space on your server.
Here are the methods to use to clean your tons of junk files.

1. Using Linux find command to wipe out millions of files

a.) Finding and deleting files using find's -exec switch:

# find . -type f -exec rm -fv {} \;

This method works fine but it has one downside: file deletion is too slow, as for each found file an external rm command is invoked.

For half a million files or more, using this method will take "long". However, from the point of view of stressing the server hard disk it is not so bad, as the file deletion does not put too much strain on the server hard disk.
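As a side note, GNU find can also batch many file names into a single rm invocation with the '+' terminator of -exec, which removes most of the per-file fork overhead. I haven't benchmarked it in this article, but it is worth trying:

# find . -type f -exec rm -f {} +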
b.) Finding and deleting a big number of files with find's -delete argument:

Luckily, there is a better way to delete the files, by using find's built-in -delete argument:

# find . -type f -delete

c.) Deleting and printing out deleted files with find's -print arg

If you would like to output on your terminal what files find is deleting in "real time", add -print:

# find . -type f -print -delete

To prevent your server hard disk from being stressed, and hence save yourself from "outages" of normal server operation, it is good to combine the find command with ionice, e.g.:

# ionice -c 3 find . -type f -print -delete

Just note that ionice cannot guarantee find's operations will not severely affect hard disk i/o requests. On heavily busy servers with high amounts of disk i/o writes, applying ionice will still not prevent the server from hanging! Be sure to always keep an eye on the server while deleting the files, no matter whether with or without ionice. If throughout find's execution the server gets lagged in serving its ordinary client requests or whatever, stop the execution of the cmd immediately by killing it from another ssh session or tty (if physically on the server).
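If besides the disk i/o you also want to lower the CPU priority of the deletion, combining ionice with nice is an option (just a variant I sometimes use, not benchmarked in this article):

# ionice -c 3 nice -n 19 find . -type f -print -delete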

2. Using a simple bash loop with rm command to delete "tons" of files

An alternative way is to use a bash loop, to iterate over each of the files in the directory and issue /bin/rm on each of the loop elements (files), like so:

for i in *; do
rm -f "$i";
done

If you'd like to print what you will be deleting, add an echo to the loop:

# for i in *; do \
echo "Deleting : $i"; rm -f "$i"; \
done

The bash loop worked like a charm in my case, so I really warmly recommend this method whenever you need to delete more than 500 000+ files in a directory.

3. Deleting multiple files with perl

Deleting multiple files with perl is not a bad idea at all.
Here is a perl one liner, to delete all files contained within a directory:

# perl -e 'for(<*>){((stat)[9]<(unlink))}'

If you prefer to use a more human readable perl script to delete a multitude of files, use delete_multple_files_in_dir_perl.pl

Using the perl interpreter to delete thousands of files is quick, really, really quick.
I did not benchmark on the server exactly how quick it is, but I guess the delete rate should be similar to the find command. It's possible that in some cases the perl loop is even quicker …

4. Using a PHP script to delete multiple files

Using a short php script to delete files one by one in a loop, similar to the above bash script, is another option.
To do the deletion with PHP, use this little PHP script:

<?php
// directory whose files will be deleted one by one
$dir = "/path/to/dir/with/files";
$dh = opendir($dir);
$i = 0;
while (($file = readdir($dh)) !== false) {
    $file = "$dir/$file";
    if (is_file($file)) {
        unlink($file);
        // report progress on every 1000 deleted files
        if (!(++$i % 1000)) {
            echo "$i files removed\n";
        }
    }
}
closedir($dh);
?>

As you see, the script reads the directory defined in $dir and loops through it, deleting file by file on each of its loop iterations.
You should already know PHP is slow, so this method is only useful if you have to delete many thousands of files on a shared hosting server with no (ssh) shell access.

This php script is taken from Steve Kamerman's blog. I would also like to express my big gratitude to Steve for writing such a wonderful post. His post actually became the inspiration for this article.

You can also download the sample php delete-millions-of-files script here

To use it, rename delete_millioon_of_files_in_a_dir.php.txt to delete_millioon_of_files_in_a_dir.php and run it through a browser.

Note that you might need to run it multiple times, because many shared hosting servers are configured to kill a php script which keeps running for too long.
Alternatively the script can be run through the shell with the PHP cli:

php delete_millioon_of_files_in_a_dir.php.txt

5. So what is the "best" way to delete millions of files on Linux?

In order to find out which method is quicker in terms of execution time, I did a home-brewed benchmark on my ThinkPad notebook.

a) Creating 509072 sample files.

Again, I used a bash loop to create many thousands of files in order to benchmark.
I didn't want to put this load on a production server and hence I used my own notebook to conduct the benchmarks. As my notebook is not a server, the benchmarks might be partially incorrect; however I believe they're still a pretty good indicator of which deletion method is better.

hipo@noah:~$ mkdir /tmp/test
hipo@noah:~$ cd /tmp/test;
hipo@noah:/tmp/test$ for i in $(seq 1 509072); do echo aaaa >> $i.txt; done

I had to wait a few minutes until I had 509072 files created. Each of the files, as you can read, contains the sample "aaaa" string.

b) Calculating the number of files in the directory

Once the command completed, to make sure all the 509072 files existed, I used a (timed) find + wc cmd to count the number of files contained in the directory:

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f |wc -l
509072

real 0m1.886s
user 0m0.440s
sys 0m1.332s

It's interesting that using an ls command to count the files is less efficient than using find:

hipo@noah:/tmp/test$ time ls -1 |wc -l
509072

real 0m3.355s
user 0m2.696s
sys 0m0.528s

c) benchmarking the different file deleting methods with time

– Testing delete speed of find

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f -delete
real 15m40.853s
user 0m0.908s
sys 0m22.357s

You see, using find to delete the files is neither too slow nor lightning quick.

– How fast is perl loop in multitude file deletion ?

hipo@noah:/tmp/test$ time perl -e 'for(<*>){((stat)[9]<(unlink))}'

real 6m24.669s
user 0m2.980s
sys 0m22.673s

Deleting my sample 509072 files took 6 mins and 24 secs. This is roughly 2.5 times faster than find! GO-GO perl 🙂
As you can see from the results, perl is a great and time saving way to delete 500 000 files.

– The approximate deletion rate of the for + rm bash loop

hipo@noah:/tmp/test$ time for i in *; do rm -f $i; done

real 206m15.081s
user 2m38.954s
sys 195m38.182s

You see, the execution took 206 minutes and 15 seconds, i.e. about 3 HOURS and 26 MINUTES!!!! This is extremely slow! But it works like a charm, as the deletion run didn't impact my normal laptop browsing. While the script was running I was mostly browsing through a few not so heavy (non flash) websites and doing some other stuff in gnome-terminal 🙂

As you can imagine, running a bash loop is a bit CPU intensive, but it puts less stress on the hard disk read/write operations. Therefore it's clearly a good practice to use it whenever deletion of many files on a dedicated server is required.

d) My production server file deleting experience

On a production server I only tested two of all the listed methods to delete my files. The production server where I tested is running Debian GNU / Linux Squeeze 6.0.3. There I had the task to delete a few million files.
The tested methods tried on the server were:

– The find . -type f -delete method.

– for i in *; do rm -f $i; done

The results from using the find -delete method were quite sad, as the server almost hung under the heavy hard disk load the command produced.

With the for loop all went smoothly. The files were deleted over a long, long time (like a few hours), but while it was running the server continued serving with no interruptions.

While the bash loop was running, the server load average kept at a steady 4.
Taking my experience in mind, if you're running a production server and you're still wondering which delete method to use to wipe some multitude of files, I would recommend you go the bash for loop + /bin/rm way. Yes, it is extremely slow (expect it to run for hours), but it puts not too much extra load on the server.

Using the PHP script will probably be slow and inefficient compared to both find and the bash loop. I didn't give it a try yet, but I suppose it will be either equal in time or at least a few times slower than bash.

If you have tried the php script and you have some observations, please drop some comment to tell me how it performs.

To sum it up:

Even though there are "hacks" to clean up a messy directory full of a few million junk files, such a directory should never exist in the first place.

Frankly, keeping millions of files within the same directory is a very stupid idea.
Doing so will have a severe negative impact on the directory listing performance of your filesystem in the long term.
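If the files really have to stay around, one way to avoid a single flat directory with millions of entries is to spread them into hashed subdirectories. A rough sketch of the idea (the two-character md5 prefix giving 256 buckets is just an illustrative choice):

for f in *; do
[ -f "$f" ] || continue; # skip anything that is not a regular file
d=$(printf '%s' "$f" | md5sum | cut -c1-2);
mkdir -p "$d" && mv -- "$f" "$d/";
done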

If you know better (more efficient) ways to delete a multitude of files in a dir please share in comments.

Check and Restart Apache if it is malfunctioning (not returning HTML content) shell script

Monday, March 19th, 2012

Check and Restart Apache Webserver on Malfunction, Apache feather logo

One of the company's Debian Lenny 5.0 webservers, where I'm working as sys admin, sometimes stops properly serving HTTP requests.
Whenever this oddity happens, the Apache server seems to be running okay but it is failing to return the requested content.

I can see the webserver listens on port 80 and is establishing connections to remote hosts – the apache processes show up normally, as I can see in netstat …:

apache:~# netstat -enp 80
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 xxx.xxx.xxx.xx:80 46.253.9.36:5665 SYN_RECV 0 0 -
tcp 0 0 xxx.xxx.xxx.xx:80 78.157.26.24:5933 SYN_RECV 0 0 -
...

Also the apache forked child processes show normally in process list:

apache:~# ps axuwwf|grep -i apache
root 46748 0.0 0.0 112300 872 pts/1 S+ 18:07 0:00 \_ grep -i apache
root 42530 0.0 0.1 217392 6636 ? Ss Mar14 0:39 /usr/sbin/apache2 -k start
www-data 42535 0.0 0.0 147876 1488 ? S Mar14 0:01 \_ /usr/sbin/apache2 -k start
root 28747 0.0 0.1 218180 4792 ? Sl Mar14 0:00 \_ /usr/sbin/apache2 -k start
www-data 31787 0.0 0.1 219156 5832 ? S Mar14 0:00 | \_ /usr/sbin/apache2 -k start

In spite of that, no HTML content is returned to any client browser for any of the Apache (Virtual host) websites…
This weird problem continues until the Apache webserver is restarted.
Once the webserver is restarted, everything is back to normal.
I use an Apache check shell script set up on a few remote hosts to regularly check with nmap if port 80 (www) of my server is open and responding; anyways, that script just checks if the port is open and reachable, and thus using it I was unable to detect that Apache wasn't returning HTML content.
To work around the malfunctions I wrote a tiny script – restart_apache_if_empty_content.sh

The script's idea is very simple:
A request is made to a defined remote host with the lynx text browser, then the number of output lines is counted; if the output returned by lynx -dump http://someurl.com has fewer lines than the number normally returned, the script triggers an apache init script restart.
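For illustration, here is a rough sketch of that check logic; the URL, the line threshold and the log file below are assumptions of mine, and the downloadable script may differ in detail:

#!/bin/bash
# sketch: restart Apache if a monitored page suddenly returns (almost) no content
url='http://someurl.com';
min_lines=10; # a healthy page is assumed to normally return more lines than this
lines=$(lynx -dump "$url" 2>/dev/null | wc -l);
if [ "$lines" -lt "$min_lines" ]; then
echo "$(date): $url returned only $lines lines, restarting apache" >> /var/log/apache-autorestart.log;
/etc/init.d/apache2 restart;
fi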

I've set the script to periodically run from a cron job, every 5 minutes:
# check if apache returns empty content with lynx and if yes restart and log it
*/5 * * * * /usr/sbin/restart_apache_if_empty_content.sh >/dev/null 2>&1

This is not perfect, as sometimes there will still be a few minutes of downtime, but at least the downtime will not be a few hours until I am informed, ssh to the server and restart Apache manually.

For a quick way to download my script and set it up for cron execution every 5 minutes, use:

apache:~# cd /usr/sbin
apache:/usr/sbin# wget -q https://www.pc-freak.net/bscscr/restart_apache_if_empty_content.sh
apache:/usr/sbin# chmod +x restart_apache_if_empty_content.sh
apache:/usr/sbin# crontab -l > /tmp/file; echo '*/5 * * * * /usr/sbin/restart_apache_if_empty_content.sh >/dev/null 2>&1' >> /tmp/file; crontab /tmp/file

 

Recommended logrotate practices on heavily loaded (busy) Apache Linux servers

Wednesday, March 7th, 2012

Apache logrotate Debian good configuration for heavy loaded servers

If you are the sys admin of an Apache webserver running on Debian Linux and relying on logrotate to rotate logs, you might want to change the default way log rotation is done.

Little changes in the way Apache log files are rotated on busy servers can have positive outcomes on the overall CPU burden of the server. A good log rotation strategy can also prevent your server from occasional extra overheads or downtimes.

The way Debian GNU / Linux processes logs is well planned for small servers, however the default Apache log rotation routine doesn't fit well for servers which process millions of client requests each day.

I happen to administrate a few servers which are constantly under a heavy load and occasionally have overload troubles because of Debian's default logrotate mechanism.

To cope with the situation I have made a few modifications to /etc/logrotate.d/apache2 and decided to share them here, hoping this might help you too.

1. Rotate the Apache access.log log file daily instead of weekly

On Debian, Apache's logrotate config is in /etc/logrotate.d/apache2

The default file content will be like so:

debian:~# cat /etc/logrotate.d/apache2
/var/log/apache2/*.log {
        weekly
        missingok
        rotate 52
        size 1G
        compress
        delaycompress
        notifempty
        create 640 root adm
        sharedscripts
        postrotate
                if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
                        /etc/init.d/apache2 reload > /dev/null
                fi
        endscript
}

To change the rotation from weekly to daily, change:

weekly

to

daily

2. Disable access.log log file gzip compression

By default the apache2 logrotate script is tuned to make a compressed copy of each rotated file (e.g. copy access.log to access.log.1 and gzip it, copy access.log to access.log.2 and gzip it, etc.). On servers where logs are many gigabytes, once logrotate initiates its scheduled work it will have to compress an enormous log record of apache requests. On very busy Apache servers, from my experience, the log could grow up to approximately 8 / 10 gigabytes in just a day.
I'm sure there are busier servers out there, whose log files grow to over 100GB for just a single day.
Gzipping a 100GB file puts an enormous load on the CPU and often takes a long time as well. When this logrotate gzipping occurs at a moment when the server's CPU cores are already heavily loaded from Apache serving HTTP requests, the Apache server becomes inaccessible to most of the clients.
Then various oddities are experienced by the end clients, for example Apache dropped connection errors, the webserver returning empty pages, or simple inability to respond to the client browser.
Sometimes, as a result of the overload, even a secure shell connection to SSHD on the server is impossible …

To prevent your server from these rotation overloads, disable logrotate's default access.log gzipping by commenting out:

compress

to

#compress

3. Change the maximum number of rotated logs kept by logrotate to 30

By default logrotate is configured to create and keep up to 52 rotated and gzipped access.log files. Changing this to a lower number is a good practice (in my view) in cases where log files grow daily to 10 or more GBs. Doing so will save a lot of disk space and reduce the chance that the hard disk gets filled up because of the multiple rotated, ungzipped, enormous access.log files.

To tune the default max number of kept rotated logs to 30, change:

rotate 52

to

rotate 30
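Putting the three changes together, the modified /etc/logrotate.d/apache2 ends up looking roughly like the one below (illustrative; keep the postrotate block exactly as shipped by your Debian release):

/var/log/apache2/*.log {
        daily
        missingok
        rotate 30
        size 1G
        #compress
        #delaycompress
        notifempty
        create 640 root adm
        sharedscripts
        postrotate
                if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
                        /etc/init.d/apache2 reload > /dev/null
                fi
        endscript
}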

Logrotate's apache log processing on RHEL / CentOS Linux works better on high load servers; by default on CentOS logrotate is not configured to do log gzipping at all.

Here is the default /etc/logrotate.d/httpd script on
CentOS release 5.6 (Final):

[hipo@centos httpd]$ cat /etc/logrotate.d/httpd
/var/log/httpd/*log {
        missingok
        notifempty
        sharedscripts
        postrotate
                /bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
        endscript
}

 

The Weekend

Monday, November 26th, 2007

On Saturday we had a training day at Design.BG. It wasn't too interesting; the boss spoke a lot about a new hierarchy being implemented in the corporation. Big words were said about how we should not say anymore that we make sites, in the office or out of it. He said: "We don't make sites! We make projects and solutions for the client" :). Mitko came back to Dobrich on Saturday (for the weekend) and on Sunday we went out twice. I haven't touched computers much this weekend :). On Sunday morning I went to the Liturgy and after that had a small walk with Stoyan. After that I watched the 3 series of "From Dusk till Dawn", a nice vampire story btw 🙂 And yes, I like the style of Tarantino.

At some point I felt very alone, like nobody cares about me. I called Lily and we spent some good time together :).

END—–

How to fix transmission unable to download and connect to torrent tracker on Ubuntu Maverick 10.10

Wednesday, April 27th, 2011

As you can read in my few previous posts, I have just installed a new Ubuntu 10.10 on a Toshiba Satellite L40 notebook.

Most of the things which are necessary for a fully working Linux desktop are already installed and the machine works fine; however, I just noticed there is an issue with the default gnome torrent client, transmission, being unable to download files from torrent trackers.

A few minutes of playing with transmission's settings revealed what was causing my torrent download problems.

It seems on Ubuntu 10.10 (probably on other Ubuntus and Debians too) the transmission bittorrent client by default tries to use incoming port number 53636 for torrent download connections.

As the computer is behind a firewall and does not have a real IP address, seeders cannot properly connect to the notebook's port 53636, and hence the transmission bittorrent client could not initialize any torrent downloads.
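(If you want to double check which incoming port your transmission instance is actually trying to use, a quick netstat grep similar to the one from my Skype post should show it, assuming transmission is running:)

netstat -tupan | grep -i transmission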

Fixing the issue is rather easy; to fix it I had to change the settings in transmission from the menus:

Edit -> Settings -> Network

You need to select the options:
 

  • Pick a random port on startup
  • Use UPnP or NAT-PMP to redirect connections

Next I had to restart transmission and my torrent downloads started 😉

The end of the work week :)

Friday, February 1st, 2008

One more week passed without serious server problems. Yesterday, after an upgrade to debian 4.0rc2 with

apt-get dist-upgrade and a reboot, the pc-freak box became unbootable.

I wasn’t able to fix it until today because the machine’s box seemed not to read CDs well. The problem consisted of this: after the boot process of the linux kernel had started, the boot up was interrupted with a message saying

/sbin/init is missing

and I was dropped to a busybox without being able to read anything from my filesystem. Thankfully nomen came to Dobrich for the weekend and today he brought me his cdrom-drive, so I booted with the debian cd.

Using Debian’s linux rescue I mounted the partition to check what was wrong. I suspected something was terribly wrong with lilo’s conf.

Looking closely at it I saw it was the lilo conf file: it was set up to load an initrd for the older kernel. Changing the line to the new initrd in /etc/lilo.conf and re-running lilo (/sbin/lilo)

fixed the mess and pc-freak booted successfully! 🙂

Yesterday I had to do something kinky. It was requested by a client to have access to a mysql service on one of the company servers; the problem was that the client didn’t have a static IP, so I didn’t have a good way to put it into the current firewall.

Every time the adsl they use got restarted, a new absolutely random IP from all the BTC IP ranges was assigned.

The solution was to make a port redirect from a non-standard mysql port (XXXXX) which pointed to the standard 3306 service. I had to tell the firewall not to check the IPs coming to the non-standard port (XXXXX) against the 3306 service fwall rules.

Thanks to the help of a guy in irc.freenode.net #iptables, jengelh, I figured out the solution.

To complete the requested task it was needed to mark all packets coming into port (XXXXX) using the iptables mangle option and to add a rule to ACCEPT all marked packets.

The rules looked like this

/sbin/iptables -t mangle -A PREROUTING -p tcp --dport XXXXX -j MARK --set-mark 123456
/sbin/iptables -t nat -A PREROUTING -d EXTERNAL_IP -i eth0 -p tcp --dport XXXXX -j DNAT --to-destination EXTERNAL_IP:3306

/sbin/iptables -t filter -A INPUT -p tcp --dport 3306 -m mark --mark 123456 -j ACCEPT

Something I wondered a bit was whether /proc/sys/net/ipv4/ip_forward should be enabled in order for the above redirect to work; in case you’re wondering too, well, it doesn’t need to be 🙂 The working week was sort of quiet, with no serious problems with servers and work, and no serious problems at school (although I see me and my colleagues becoming more and more unserious about studying). My grandparents decided to make me a gift and give me money to buy a laptop and I’m pretty happy about this 🙂 All that is left is to choose a good machine with hardware supported both by FreeBSD and Linux.

END—–

How to solve “[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/” – Error Cause and Solution

Thursday, January 5th, 2012

While configuring the JWchat domain, I've come across an error:

pcfg_openfile: unable to check htaccess file, ensure it is readable

The exact error I got in /var/log/apache2/error.log looked like so:

[crit] [client xxx.xxx.xxx.xxx] (13)Permission denied: /var/lib/ejabberd/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable, referer: http://jabber.mydomain.com/

The error message suggested /var/lib/ejabberd/.htaccess is missing or not readable, however after checking I've seen .htaccess existed and was readable:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess

At first glimpse it seems like the message is misleading and not true, however when I switched to the www-data user (the user under which Apache runs on Debian), I figured out the error's claim about unreadability is exactly correct:

www-data@debian:$ ls -al /var/lib/ejabberd/.htaccess
ls: cannot access /var/lib/ejabberd/.htaccess: Permission denied

This permission denied was quite strange, especially considering the .htaccess is readable for all users:

debian:~# ls -al /var/lib/ejabberd/.htaccess
-rw-r--r-- 1 www-data www-data 114 2012-01-05 07:44 /var/lib/ejabberd/.htaccess

After a thorough look at what might be going wrong, thankfully I figured it out. All the issues were caused by wrong permissions on /var/lib/ejabberd/ . You can see below that the executable flag for all users (including apache's www-data) is missing:

debian:/var/lib# ls -ld /var/lib/ejabberd/
drw-r--r-- 3 ejabberd ejabberd 4096 2012-01-05 07:45 /var/lib/ejabberd/

Solving the error hence is as easy as adding the +x flag to /var/lib/ejabberd :

debian:/var/lib# chmod +x /var/lib/ejabberd

Another way to fix the error is to chmod to 755 the directory which holds .htaccess:
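debian:/var/lib# chmod 755 /var/lib/ejabberd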

From now onwards, the pcfg_openfile: unable to check htaccess file, ensure it is readable error is no more 😉