Posts Tagged ‘test’

Text-mode monitoring of server connection busyness (RX / TX traffic) in ASCII graphs with speedometer / easily monitor network traffic performance

Friday, May 4th, 2012

While reading some posts online related to the MS-Windows TcpView network traffic analyzing tool, I came across a very nice tool for tracking connection speed on Linux (Speedometer). If I have to compare it, speedometer is somewhat similar to the nethogs and iftop bandwidth measuring utilities.

What differentiates speedometer from iftop / nethogs / iptraf is that it is more suitable for visualizing a single network file or data transfer.
The graphs speedometer draws are way easier to understand than iftop's graphs.

Even complete newbies can understand it with no need for extraordinary knowledge of networking. This makes Speedometer a top tool for visually seeing the amount of traffic flowing through a server network interface (eth0) … (eth1) etc.

What speedometer shows is similar to the Midnight Commander's (mc) file transfer status bar, except that the statistics are not limited to a single file transfer but can show overall statistics of the network traffic passing through the server (though according to its manual it can also be used to track individual file transfers).

Its simplicity of basic use makes speedometer a nice tool for tracking network congestion issues on Linux, and therefore a must-have tool for every server admin. Below you see a screenshot of my terminal running speedometer on a remote server.

Speedometer ASCII traffic graphs tracking server network busyness, screenshot in a byobu / screen-like virtual terminal emulator

1. Installing speedometer on Debian / Ubuntu and Debian derivatives

For Debian and Ubuntu server administrators speedometer is already packaged as a deb so its installation is as simple as:

debian:~# apt-get --yes install speedometer
....

2. Installing speedometer from source for other Linux distributions CentOS, Fedora, SuSE etc.

Speedometer is written in the Python programming language, so in order to install and use it on other Linux platforms, it is necessary to have installed (preferably) an up-to-date Python interpreter (Python 2.6 or higher).
Besides that it is necessary to have installed urwid (the console user interface library for Python), available for download via excess.org/urwid/
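A quick way to check whether a suitable interpreter and the urwid library are already present (my own sanity check, not part of speedometer's documentation):

$ python -V
$ python -c "import urwid" && echo "urwid is installed"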

 

Hence to install speedometer on RedHat based Linux distributions one has to follow these steps:

a) Download & Install python urwid library

[root@centos ~]# cd /usr/local/src
[root@centos src]# wget -q http://excess.org/urwid/urwid-1.0.1.tar.gz
[root@centos src]# tar -zxvvf urwid-1.0.1.tar.gz
....
[root@centos src]# cd urwid-1.0.1
[root@centos urwid-1.0.1]# python setup.py install
running install
running build
running build_py
creating build
creating build/lib.linux-i686-2.4
creating build/lib.linux-i686-2.4/urwid
copying urwid/tests.py -> build/lib.linux-i686-2.4/urwid
copying urwid/command_map.py -> build/lib.linux-i686-2.4/urwid
copying urwid/graphics.py -> build/lib.linux-i686-2.4/urwid
copying urwid/vterm_test.py -> build/lib.linux-i686-2.4/urwid
copying urwid/curses_display.py -> build/lib.linux-i686-2.4/urwid
copying urwid/display_common.py -> build/lib.linux-i686-2.4/urwid
....

b) Download and install python-setuptools

python-setuptools is another requirement of speedometer; happily, on CentOS and Fedora the RPM package is already there and installable with yum:

[root@centos ~]# yum -y install python-setuptools
....

c) Download and install Speedometer

[root@centos urwid-1.0.1]# cd /usr/local/src/
[root@centos src]# wget -q http://excess.org/speedometer/speedometer-2.8.tar.gz
[root@centos src]# tar -zxvvf speedometer-2.8.tar.gz
.....
[root@centos src]# cd speedometer-2.8
[root@centos speedometer-2.8]# python setup.py install
Traceback (most recent call last):
File "setup.py", line 26, in ?
import speedometer
File "/usr/local/src/speedometer-2.8/speedometer.py", line 112
n = n * granularity + (granularity if r else 0)
^

While running the speedometer-2.8 installation on CentOS 5.6, I hit the
"n = n * granularity + (granularity if r else 0)"
syntax error.

After consulting some people in #python (irc.freenode.net), I figured out this error is caused by the outdated version of the Python interpreter installed by default on CentOS Linux 5.6: the conditional expression syntax (x if condition else y) was only introduced in Python 2.5, so an older interpreter chokes on that line. On CentOS 5.6 the Python version is:

[root@centos ~]# python -V
Python 2.4.3

As I said earlier, speedometer 2.8's minimum requirement is Python 2.6. Happily there is a quick way to update Python 2.4 to Python 2.6 on CentOS 5.6, as there is an RPM repository maintained by Chris Lea which contains an RPM binary of Python 2.6.

To update python 2.4 to python 2.6:

[root@centos speedometer-2.8]# rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
[root@centos speedometer-2.8]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
[root@centos speedometer-2.8]# yum install python26

Now the newly installed python 2.6 is executable under the binary name python26, hence to install speedometer:

[root@centos speedometer-2.8]# python26 setup.py install
[root@centos speedometer-2.8]# chown root:root /usr/local/bin/speedometer
[root@centos speedometer-2.8]# chmod +x /usr/local/bin/speedometer

[root@centos speedometer-2.8]# python26 speedometer -i 1 -tx eth0

The -i will instruct speedometer to refresh the screen graphs once a second.

3. Using speedometer to keep an eye on sent / received traffic and network congestion

To observe only the amount of sent traffic via a network interface (eth0) with speedometer, use:

debian:~# speedometer -tx eth0

To only keep an eye on received traffic through eth0 use:

debian:~# speedometer -rx eth0

To watch over both TX and RX (Transmitted and Received) network traffic:

debian:~# speedometer -tx eth0 -rx eth0

If you want to watch TX and RX traffic in separate windows while running speedometer, you can run speedometer -tx eth0 and speedometer -rx eth0 in separate xterm windows, like in the screenshot below (a small launch sketch follows it):

Monitor Received and Transmitted server Network traffic in two separate xterm windows with speedometer ascii graphs
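A minimal way to launch both views at once from a single shell (a sketch, assuming you are in an X session with xterm available) could look like:

xterm -e 'speedometer -tx eth0' &
xterm -e 'speedometer -rx eth0' &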

4. Using speedometer to test network maximum possible transfer speed between server (host A) and server (host B)

The speedometer manual suggests a few examples, one of which is:

How fast is this LAN?

host-a$ cat /dev/zero | nc -l -p 12345
host-b$ nc host-a 12345 > /dev/null
host-b$ speedometer -rx eth0

When I read this example in speedometer's manual it wasn't completely clear to me what the author really meant, but a bit later, when I thought the example over, I got his point.

The idea behind this example is that a constant stream of zeros taken from /dev/zero is fed over a pipe (|) to nc, which binds port number 12345; anyone connecting from another host machine, let's say a server with hostname host-b, to port 12345 on machine host-a will start receiving the streamed /dev/zero content.

Then, to finally measure the streamed traffic between the host-a and host-b machines, speedometer is started to visualize the received traffic on network interface eth0, thus measuring the amount of traffic flowing from host-a to host-b.

I gave the example a try, using as my two test nodes my home desktop PC (running an arcane version of Ubuntu) and my Debian Linux notebook.

First, on the Ubuntu PC I issued:

hipo@hip0-desktop:~$ cat /dev/zero | nc -l -p 12345

Note that I had previously installed netcat, as nc is not installed by default on Ubuntu and Debian. If you don't have nc installed yet, install it with:

apt-get --yes install netcat

"cat /dev/zero | nc -l -p 12345" will not produce any output, but will display just a blank line.

Then on my notebook I ran the second command example, given in the speedometer manual:

hipo@noah:~$ nc 192.168.0.2 12345 > /dev/null

Here 192.168.0.2 is the local network IP address of my desktop PC. My desktop PC is connected via a normal 100Mbit switch to my routing machine and receives its internet via NAT. The second test machine (my laptop) gets its internet through a Wi-Fi connection from a wireless router, which is connected via a UTP cable to the same switch my desktop PC is plugged into.

Finally, to test / get my network's maximum throughput I had to use:

hipo@noah:~$ speedometer -rx wlan0

Here I monitor my wlan0 interface, as this is my laptop's wireless card interface over which I have connectivity to my local network and via which, through the Wi-Fi router, I get connected to the internet.

Below is a captured snapshot showing approximately the maximum network throughput from:

Desktop PC -> to my Thinkpad R61 laptop

Using Speedometer to test network throughput between two network server hosts, screenshot, Debian Squeeze Linux

As you can see in the shot, the maximum network throughput is approximately between 2.55MB/s min and 2.59MB/s max. The speed is quite low for a 100Mbit local network, but this is normal, as most laptop wireless adapters hardly transfer traffic at more than 10 to 20 Mbits per second.

If the same network throughput test is conducted between two machines both connected to the same 100Mbit switch, the traffic should be at least 8 MB/sec.

There is something else to take into consideration that probably makes the provided example's network throughput measurement a bit inaccurate: the pipe ( | ) used to feed the /dev/zero content into nc adds its own overhead and slows the stream down.
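One way to reduce that overhead (my own variation, not from the speedometer manual) is to drop the pipe entirely and let nc read /dev/zero through plain input redirection:

host-a$ nc -l -p 12345 < /dev/zero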

5. Using speedometer to visualize the maximum writing speed to a local hard drive on Linux

In the speedometer manual, I've noticed another interesting application of this nifty tool.

speedometer can be used to track and visualize the maximum writing speed a hard disk drive or hard drive partition can support on Linux OS:

A copy/paste from the manual text is as follows:

How fast can I write data to my filesystem? (with at least 1GB free)
dd bs=1000000 count=1000 if=/dev/zero of=bigfile &
speedometer bigfile

However, when I tried copy/pasting the example in a terminal to test the maximum writing speed to an external USB hard drive, only the dd command started; speedometer failed to initialize and display graphs of the file creation speed.

I found a little "hack" that makes the manual's example work by adding a 3-second sleep, like so:

debian:/media/Expansion Drive# dd bs=1000000 count=1000 if=/dev/zero of=bigfile & sleep 3; speedometer bigfile

Here is a screenshot of the bigfile created by dd and tracked "in real time" by speedometer:

How fast is writing data to a local expandable USB hard disk, Debian Linux speedometer screenshot

Actually the results returned for this external USB drive are quite high; the probable reason is that it is connected to my laptop over USB protocol version 3.

6. Using Speedometer to keep an eye on file download in progress

This application of speedometer is mostly useless, especially on Linux machines used as desktops.

However, on some occasions, if files are transferred over ssh or in non-interactive FTP / Samba file transfers between Linux servers, it can come in handy.

To visualize the download and writing speed of, let's say, an FTP-transferred .AVI movie (during the actual file transfer), on the download host issue:

# speedometer Download-Folder/What-goes-around-comes-around.avi

7. Estimating approximate time for file transfer

There is another section in the speedometer manual pointing out the program's use for calculating the time remaining for a file transfer.

The example text provided in (man speedometer) is:

How long it will take for my 38MB transfer to finish?
speedometer favorite_episode.rm $((38*1024*1024))

At first glance it is hard to understand (like the other manual example). A bit of reasoning and I comprehended what the man page author meant by the obscure calculation:

$((38*1024*1024))

This is a formula in which 38 has to be substituted with the size of the transferred file in megabytes; the shell arithmetic $((38*1024*1024)) then expands it to a byte count. The manual's author used a 38MB file, so this is why he put $((38* … in the formula.
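For instance, rather than hard-coding the megabyte count, you could feed speedometer the exact byte size of the source copy of the file (a sketch assuming GNU stat and that the final size is known up front; the path is purely illustrative):

speedometer favorite_episode.rm $(stat -c %s /path/to/source/favorite_episode.rm)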

I gave it a try (just for the sake of seeing how it works) with a file 2500MB in size; in the two screenshots below I show my preparation to copy the file and the actual copying / "real time" transfer tracking with speedometer's percentage completion bar.

xterm terminal copy file and estimate file copying operation speed on linux with speedometer preparation

Two xterm terminals one is copying a file the other one uses speedometer to estimate the time remaining to complete the file transfer from expansion USB hard drive to my laptop harddrive

 

Fixing QMAIL mail server SMTP auto-configure issues in Thunderbird and other mail IMAP / POP3 mobile clients

Friday, July 13th, 2012

One of the QMAIL mail servers, set up on a Debian host, has been creating some auto-configuration issues. Every time a new mail user tries to use Thunderbird's built-in account auto-configuration, the auto-config fails, leaving the client unable to use his mailbox through the POP3 or IMAP protocols.

For about two years now Thunderbird and many other modern POP3 and IMAP desktop and mobile mail clients have been using auto-configuration by default, hence it was unthinkable to manually change settings for new clients of the QMAIL install; besides that, most office users are always confused about whether they have to manually change the SMTP or POP3 host for a server.

Below is a screenshot displaying the warning during email auto-configuration:

Thunderbird new Mail account setup auto config warning SMTP not OK

The orange color of the button for the newly auto-detected smtp.mail-domain.com indicates that something is not right with the SMTP host.

Obviously something was wrong with smtp.mail-domain.com, hence I checked where smtp.mail-domain.com resolves with the host command. What I found was that the Address (A) DNS record for smtp.mail-domain.com was pointing to an IP address our company previously used for the mail server. At present the correct mail server host name is mx.mail-domain.com, and the QMAIL installation on mx.mail-domain.com is configured to be the actual SMTP server.

By default, however, Thunderbird and many other POP3 / IMAP mail clients automatically assume that the default SMTP host for a mail domain is configured under the host name smtp.mail-domain.com. This is really strange, especially when the primary MX record for the mail-domain.com domain is pointing to mx.mail-domain.com, e.g.:

qmail:~# host -t MX mail-domain.com
mail-domain.com mail is handled by 10 mx.mail-domain.com.
mail-domain.com mail is handled by 20 mail.mail-domain.com.
mail-domain.com mail is handled by 30 mail-domain.com.

The whole warning was caused by the fact that mx.mail-domain.com was resolving to an IP like xxx.xxx.xxx.xxx, whereas smtp.mail-domain.com was resolving to yyy.yyy.yyy.yyy.

Both the xxx.xxx.xxx.xxx and yyy.yyy.yyy.yyy hosts were configured with a different qmail SMTP hostname, i.e.:

The server under IP xxx.xxx.xxx.xxx (mx.mail-domain.com) was configured in /var/qmail/control/me to be mx.mail-domain.com, while the other, old one, yyy.yyy.yyy.yyy (mail.mail-domain.com), had mail.mail-domain.com in /var/qmail/control/me.
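A quick way to see which hostname each qmail instance considers itself is to read that control file directly; on the current mail exchanger the output should simply echo the configured name (a sketch, using the standard qmail control file mentioned above):

qmail:~# cat /var/qmail/control/me
mx.mail-domain.com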

As smtp.mail-domain.com was still actually resolving to mail.mail-domain.com, emails were improperly being sent with a configured DNS hostname of smtp.mail-domain.com, while the actual one on the server was mail.mail-domain.com.

It took me about an hour of pondering over what was causing the oddities until I figured out the issue explained here. As the DNS records for the sample domain mail-domain.com were handled by GoDaddy, to fix the mess I logged in to GoDaddy and:

a) deleted the A DNS record for smtp.mail-domain.com;
b) created a new CNAME record for smtp.mail-domain.com as a domain alias for mx.mail-domain.com (a quick way to verify the new record is shown just below).
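Once the change has propagated, the new alias can be checked with the same host tool used earlier for the MX records; sample output along the following lines would confirm the fix:

qmail:~# host -t CNAME smtp.mail-domain.com
smtp.mail-domain.com is an alias for mx.mail-domain.com.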

A few minutes afterwards I tried configuring the same email account in Thunderbird once again, and this time both imap.mail-domain.com and smtp.mail-domain.com turned green, indicating everything is configured fine.

To be 100% sure all was working fine, I first fetched all email via the IMAP protocol without hassles, then sent a test email to my Gmail account; thankfully the sent email was delivered to Gmail, indicating both the Get Mail and Send Mail functions now worked fine.

Thunderbird icedove new mail account setup auto config Okay
 

How to delete millions of files on busy Linux servers (work around "Argument list too long")

Tuesday, March 20th, 2012

How to delete millions or many thousands of files in the same directory on GNU / Linux and FreeBSD

If you try to delete more than 131072 files on Linux with rm -f *, where the files are all stored in the same directory, you will get an error:

/bin/rm: Argument list too long.

I've blogged earlier on deleting multiple files on Linux and FreeBSD, and this is not my first time facing this error.
Anyway, as time has passed, I've found a few other new ways to delete large multitudes of files from a server.

In this article I will shortly explain a few approaches to deleting a few million obsolete files to clean up some space on your server.
Here are 4 methods you can use to clean up your tons of junk files.

1. Using Linux find command to wipe out millions of files

a.) Finding and deleting files using find's -exec switch:

# find . -type f -exec rm -fv {} \;

This method works fine but it has one downside: file deletion is slow, because an external rm command is invoked for each found file.

For half a million files or more, using this method will take "long". However, from a hard-disk-stress point of view it is not so bad, as this way of deleting does not put too much strain on the server's hard disk. A batched alternative is sketched just below.
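A middle ground, not covered in the original examples here but worth knowing, is to batch the removals so that rm is invoked once per group of files instead of once per file, either with find's "+" exec terminator or via xargs:

# find . -type f -exec rm -f {} +
# find . -type f -print0 | xargs -0 rm -f
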
b.) Finding and deleting a big number of files with find's -delete argument:

Luckily, there is a better way to delete the files: using find's built-in -delete argument:

# find . -type f -delete

c.) Deleting and printing out deleted files with find's -print arg

If you would like to output on your terminal what files find is deleting in "real time", add -print:

# find . -type f -print -delete

To prevent your server hard disk from being stressed, and hence save yourself from "outages" in normal server operation, it is good to combine the find command with ionice, e.g.:

# ionice -c 3 find . -type f -print -delete

Just note that ionice cannot guarantee that find's operations will not severely affect hard disk I/O requests. On heavily busy servers with high amounts of disk I/O writes, even applying ionice will not prevent the server from hanging! Be sure to always keep an eye on the server while deleting the files, no matter whether with or without ionice. If, during find's execution, the server starts lagging in serving its ordinary client requests, stop the command immediately by killing it from another ssh session or tty (if physically on the server).
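If you do need to stop it, a small sketch of locating and killing the running find from a second ssh session (adjust the pattern to match your exact command line, and substitute the placeholder PID) would be:

# pgrep -fl 'find . -type f'
# kill <PID-printed-by-pgrep>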

2. Using a simple bash loop with rm command to delete "tons" of files

An alternative way is to use a bash loop to iterate over each of the files in the directory and issue /bin/rm on each loop element (file), like so:

for i in *; do
rm -f "$i";
done

If you'd like to print what you will be deleting, add an echo to the loop:

# for i in *; do echo "Deleting : $i"; rm -f "$i"; done
The bash loop worked like a charm in my case, so I warmly recommend this method whenever you need to delete more than 500,000 files in a directory.

3. Deleting multiple files with perl

Deleting multiple files with perl is not a bad idea at all.
Here is a perl one-liner to delete all files contained within a directory:

# perl -e 'for(<*>){((stat)[9]<(unlink))}'

If you prefer to use a more human-readable perl script to delete a multitude of files, use delete_multple_files_in_dir_perl.pl

Using the perl interpreter to delete thousands of files is quick, really, really quick.
I did not benchmark exactly how quick it is on the server, but I guess the delete rate should be similar to the find command's. It's possible that in some cases the perl loop is even quicker …

4. Using a PHP script to delete multiple files

Using a short PHP script to delete files one by one in a loop, similar to the bash script above, is another option.
To do the deletion with PHP, use this little PHP script:

<?php
// Directory whose files will be removed
$dir = "/path/to/dir/with/files";
$dh = opendir($dir);
$i = 0;
while (($file = readdir($dh)) !== false) {
    $file = "$dir/$file";
    if (is_file($file)) {
        unlink($file);
        // Print progress every 1000 deleted files
        if (!(++$i % 1000)) {
            echo "$i files removed\n";
        }
    }
}
closedir($dh);
?>

As you see, the script reads the directory defined in $dir and loops through it, deleting each regular file it encounters.
You should already know PHP is slow, so this method is only useful if you have to delete many thousands of files on a shared hosting server with no (ssh) shell access.

This PHP script is taken from Steve Kamerman's blog. I would also like to express my big gratitude to Steve for writing such a wonderful post. His post actually became the inspiration for this article.

You can also download the sample PHP delete-millions-of-files script here

To use it, rename delete_millioon_of_files_in_a_dir.php.txt to delete_millioon_of_files_in_a_dir.php and run it through a browser.

Note that you might need to run it multiple times, because many shared hosting servers are configured to kill a PHP script which keeps running for too long.
Alternatively the script can be run through the shell with the PHP CLI:

php delete_millioon_of_files_in_a_dir.php

5. So what is the "best" way to delete millions of files on Linux?

In order to find out which method is quicker in terms of execution time, I did a home-brew benchmark on my ThinkPad notebook.

a) Creating 509072 sample files.

Again, I used a bash loop to create many thousands of files for the benchmark.
I didn't want to put this load on a production server, so I used my own notebook to conduct the benchmarks. As my notebook is not a server the results might be partially off, but I believe they're still a pretty good indicator of which deletion method is better.

hipo@noah:~$ mkdir /tmp/test
hipo@noah:~$ cd /tmp/test;
hipo@noah:/tmp/test$ for i in $(seq 1 509072); do echo aaaa >> $i.txt; done

I had to wait a few minutes until I had the 509072 files created. Each of the files, as you can read, contains the sample "aaaa" string.

b) Calculating the number of files in the directory

Once the command completed, to make sure all 509072 files existed, I used a find + wc command (under time) to count the number of files contained in the directory:

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f | wc -l
509072

real 0m1.886s
user 0m0.440s
sys 0m1.332s

It's interesting that using an ls command to count the files is less efficient than using find:

hipo@noah:/tmp/test$ time ls -1 |wc -l
509072

real 0m3.355s
user 0m2.696s
sys 0m0.528s

c) Benchmarking the different file deletion methods with time

– Testing delete speed of find

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f -delete
real 15m40.853s
user 0m0.908s
sys 0m22.357s

You see, using find to delete the files is neither too slow nor lightning quick.

– How fast is the perl loop at mass file deletion?

hipo@noah:/tmp/test$ time perl -e 'for(<*>){((stat)[9]<(unlink))}'

real 6m24.669s
user 0m2.980s
sys 0m22.673s

Deleting my 509072 sample files took 6 minutes and 24 seconds. This is roughly 2.5 times faster than find! GO-GO perl 🙂
As you can see from the results, perl is a great and time-saving way to delete 500 000 files.

– The approximate deletion rate of the for + rm bash loop

hipo@noah:/tmp/test$ time for i in *; do rm -f $i; done

real 206m15.081s
user 2m38.954s
sys 195m38.182s

You see, the execution took 206 minutes and 15 seconds of real time = about 3 HOURS and 26 MINUTES!!!! This is extremely slow! But it works like a charm, as running the deletion didn't impact my normal laptop browsing. While the script was running I was mostly browsing through a few not-so-heavy (non-flash) websites and doing some other stuff in gnome-terminal 🙂

As you can imagine, running a bash loop is a bit CPU intensive, but it puts less stress on the hard disk's read/write operations. Therefore it is clearly a good practice to use it whenever deletion of many files on a dedicated server is required. A sketch of running it with lowered priority follows.
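If you also want to soften the CPU hit, the same loop can be wrapped in nice (and ionice, just like find above); a minimal sketch:

# nice -n 19 ionice -c 3 bash -c 'for i in *; do rm -f "$i"; done'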

d) My production server file deleting experience

On a production server I tested only two of the listed methods to delete my files. The production server where I tested is running Debian GNU / Linux Squeeze 6.0.3. There I had the task of deleting a few million files.
The methods tried on the server were:

– The find . -type f -delete method.

– for i in *; do rm -f $i; done

The results from using the find -delete method were quite sad, as the server almost hanged under the heavy hard disk load the command produced.

With the for loop all went smoothly. The files were being deleted for a long, long time (a few hours), but while it was running the server continued with no interruptions.

While the bash loop was running, the server load average kept at a steady 4.
Taking my experience into account, if you're running a production server and you're still wondering which delete method to use to wipe a multitude of files, I would recommend you go the bash for loop + /bin/rm way. Yes, it is extremely slow, so expect it to run for hours, but it does not put too much extra load on the server.

Using the PHP script will probably be slow and inefficient compared to both find and the bash loop. I haven't given it a try yet, but I suppose it will be either equal in time or at least a few times slower than bash.

If you have tried the PHP script and have some observations, please drop a comment to tell me how it performs.

To sum it up:

Even though there are "hacks" to clean up a messy directory full of a few million junk files, such a directory should never exist in the first place.

Frankly, keeping millions of files within the same directory is a very stupid idea.
Doing so will have a severe negative impact on the directory listing performance of your filesystem in the long term.

If you know better (more efficient) ways to delete a multitude of files in a dir please share in comments.

Beside myself

Sunday, October 12th, 2008

There is not much to say. Recently I'm experiencing a mix of spiritual and emotional fluctuations, ups and downs. I feel so alone quite often. There are not many valuable people (considering my interests). Day by day I'm asking myself the question "Hey man, why are you studying HRQM, this stupid secretary stuff." I'm confused quite a lot and in a state of denial, or better to say I feel kind of lost because I'm out of my comfort zone. The teachers here in the HRQM stream claim that when a man is frightened and out of his comfort zone, then he is learning a lot. They might be right about that, I don't know. On Friday we had that Business Ethics test. Before the test we watched the movie "The Wizard of Oz", a movie from the distant year 1939. Right after the class was over I went home and laundered my clothes. Then we had a dinner. Today I woke up around 11:00, had my breakfast at around 13:00 and near 13:30 I went out for a walk. I went to the city center and walked along the river Nederrijn. A little later I walked through the city center around the open market which was located right before the St. Eusebius Chapel. I went past a wagon which sells Bibles in different languages and tries to draw people back close to God, and spoke for a while with one nice old man who said he has been a Christian for 40 years already. Then I went shopping to the grocery stores Aldi and Albert Heijn and went back "home" to Honigkamp… That's mostly how my day passed… I should thank God for still caring for me and providing me with all necessary for my daily living. Thanks Lord! END—–

The last two weeks

Saturday, December 16th, 2006

Heya. For a few weeks I couldn't muster the courage to write on the blog. But today I'm spiritually well (Praise be to God!), so I was able to write this. The day passed in a very traditional way. I went to the college at 10:15 instead of 08:15; again I was not able to get out of bed. We had some International Law lectures up to 11:30, after that we drank coffee with Nomen. Smoked a few cigarettes. Later on I had German from 12:30 to 15:45. We had a test between 14:15 and 15:45 (I was not ready for that test, of course :] ). No matter, praise be to God, I managed to write stuff in the test. After that I spoke for 1.5 h with the college sysadmin; we discussed my idea to migrate some Windows 98 boxes to just dumb terminals connecting to a central Terminal Server. On Monday maybe we'll start doing the terminal server. Other things: I saw Nomen and Habib at 17:10 in the college. Mitko gave a promise to Malin to check something on his computer, so we had to go to Malin's home. So we went there; nothing special there. We put Habib in front of Malin's home door and he rang, while Mitko and I hid and watched. Dori (Malin's wife) went out looking shocked at Habib; she was probably wondering who that strange guy was. Habib said: "Malin, Vkyshti" :]. Dori said: "Malin ne vkyshti"; then we showed ourselves and laughed ;] We tried to fix Malin's problem with the audio drivers but were not able to; the problem seemed to be a hardware one. After that we went to Mino's coffee shop; we saw Miro and Gosho with Zuio, Kiro and Maya. We drank some beer and went to our homes. End of story. The last days were hard for me. I really have no idea where my life is going; no matter, I hope God will think of me in a good way, the best way. I had a big spiritual pain over the last days, and I said to God a lot of times I hope he will take me away from this hellish earth. But I guess his plan for me is another. We will live and we will see. Now God, even now you see me. Please help me fix the mess I am in and forgive me for all my awful deeds. Will go now. Blessings in the name of Jesus Christ.

Still Here

Wednesday, January 17th, 2007

Aloha. Still here. Two days ago I ran Heroes 3 under my FreeBSD box successfully; there was a terrible bug in fullscreen mode which needed a fix, so I used the patch from Loki's site to patch the Heroes 3 start binary with xdelta. The Loki installer was a terrible pain in the ass; I had to use my l337test sk!llZ :} to be able to patch the binary by hand. About the last post: yes, I was desperate; still not good, still living. Today I was at an exam again; a failure probably, nothing new.

As I often used to say in the past, if something starts bad then it ends bad too. See, my life shows this very well. I'm suffering terribly and still waiting for something to happen that sets me free. What would be the turning point? Will there be a turning point at all? No idea. If God is as good and powerful as he says in the Bible and his promises are true, then he will deliver me and set me free on good ground. Still hoping … Prodigy — Speedway, else the earth is probably his favourite experiment.

The Economics Exam. Or the day of a standard man :]

Tuesday, January 30th, 2007

Today I had an exam in marketing. The exam started 50 minutes late because the teachers had some sort of meeting. I was able to get most of the test answers from one colleague, but I'm not sure whether her answers are correct. I hope that, with God's help, I will pass. After that, me and some others from my group tried to get the answers for our next exam, which is tomorrow in the Accounting discipline. Unluckily we were not able to find anything. As usual I don't know anything and I'm hoping for a miracle and God's mercy to pass the exam. I invited Habib to come home to explain some of the matters to me. But my mind was too overloaded with information, so I was not in the mood for studying. After that we went out with Habib, Mitko, Toto and Sami. All started well until Zuio's father came to our table (we were drinking beer at the fountain). He came and started kissing all of the guys around and talking total bullshit to Habib and other persons in the cafe; a terrible picture, the classical "Bai Ganio" in action. After that we walked for some time with Habib on the way to his home and drank a coffee at the "Zhurnalist" cafe. Now I'm home again. After some problems, luckily, I was able to get Skype's microphone to work under my FreeBSD. I have to sit down and study for a few hours. Thank God I didn't have any problems with my servers. Glory is for the Lord of Hosts. END—–

Drunk

Monday, February 26th, 2007

I drank 100 g of rum and a beer after that, and I got drunk. I have some problems with one of the servers in DBG. I hope the machine's hard drive didn't die. If the hdd is dead it would be very, very bad for me. Tomorrow I have a test in Management. As usual I haven't studied :]] I'm listening to Pantera and I want to drink more .. which is not good at all. END—–

Blah Blah

Tuesday, February 27th, 2007

It was a really stressing day. It all started with a lot of calls to NetInfo's office, then to our office in Varna, to explain what was happening to one of our servers, design-bg.net. In the end, after 2 hours of stress and even more, everything fell into place; praise the LORD for this, he helped me. Right after design-bg.net was up, mail.design.bg almost froze; with God's help everything came back to normal after a simple reboot. From 12:00 to 12:40 I drank coffee with some friends, after that I spent some time in the library. At 14:15 the management test turned out to be in a small room and I was able to copy stuff from my colleagues during the test :] Yeah, CopyCat :]. Mitko and Habib were my guests today and we ate spinach soup with green salad and plant-based yellow cheese. Thanks to the Lord for being merciful over my sins. Yesterday I got drunk; I drank 200 g of rum and 1 beer. After that I went home singing and listening to Metallica's songs simultaneously; there was odd movement, red eyes and headbanging for almost an hour :]. I spend so much on absolutely useless things, I am ashamed of myself. I acted like an animal. I really wonder why the world still exists if it's such an awful place to be. I really doubt the meaning of my life. Things seem to go in a bad way for me and a lot of people around me. I really hope the Lord will interfere and everything will fall into place, but who knows the paths the Creator has created for us? END—–

Durankulak’s beach

Monday, August 27th, 2007

On Saturday we were at a Cisco course with Nomen, Niki (a friend, a programmer) and the other Niki (Mitko's brother). The course went smoothly up to some point; after that Niki (the programmer) received a call with awful news… After the course I, Nomen and Nomen's brother went to the Chinese restaurant and ate rice with vegetables and spaghetti with vegetables and meat. An hour later we were at Nomen's home; we had to take the Cisco Chapter One and Chapter Two tests. Luckily I got 100% right answers on both of the two (I have to be honest that I used cheating and tricks on the tests, so I probably deserve less). After that Nomen suggested going to Carapec or somewhere on the beach on the Bulgarian coast. In the end we ended up on the beach the next morning at Durankulak's camping beach, after we had set up a fire for the night and slept in tents. There were a lot of problems during the whole trip (I won't go into details) but thanks to God Almighty all ended well in the end. Talking about God, I'm smoking cigarettes again and I have to stop (I hope God will help again). Also, something I have to note is that I'm a sinner but God is faithful; although I sin badly, the Lord is gracious to me still. PRAISE THE LORD!!! HalleluJah! :] END—–