Posts Tagged ‘processing’

What is Vertical scaling and Horizontal scaling – Vertical and Horizontal hardware / services scaling

Friday, June 13th, 2014

[Diagram: horizontal vs vertical scaling explained]

If you're coming from a small or middle-sized company to a corporation like HP or IBM, you probably won't have a clearly defined idea of the 2 types (2 dimensions) of system Scaling (Horizontal and Vertical scaling). I know from my personal experience that in small companies all that is needed is to guarantee a setup that causes as few problems as possible, without too much formal definition of things and with much less planning. Being a sysadmin in a middle-sized company is another story: often you don't get the opportunity to discuss the issues to solve with other admins, you have to act as "one man (machine) for all", and thus to solve office server and services tasks you often end up with some custom solution.
Hence, for novice system administrators it will probably be unclear what the difference between Horizontal and Vertical Scaling is.

[Diagram: scale-up vs scale-out server infrastructure]

 

Vertical Scaling (scale vertically or scale up): adding more resources (CPU / RAM / DISK) to your server (the database or application server still remains a single one).
Vertical Scaling is much more used in small and middle-sized companies and in mid-range applications and products. A very common example of Vertical Scaling nowadays is to buy expensive hardware and use it as a Virtual Machine hypervisor (VMware ESX). Where a database is involved, Vertical Scaling without the use of multiple virtual machines might not be the best solution: even though the hardware might suffice, database lock contention can still impose problems. Reasons to scale vertically include increasing IOPS (Input / Output Operations), increasing CPU / RAM capacity, and increasing disk capacity.
Because Vertical Scaling usually means an upgrade of the server hardware whenever improved performance is targeted, the risk of downtime with it is much higher than with horizontal scaling, even if virtualization is used.
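Before scaling a machine up it helps to have a baseline of what it currently has; a quick sketch of the usual checks on a Linux host (nothing vendor-specific, just standard tools; iostat comes from the sysstat package) could be:

# CPU count and model
nproc
grep 'model name' /proc/cpuinfo | sort | uniq -c

# Memory and swap in megabytes
free -m

# Disk capacity and usage per filesystem
df -h

# Rough I/O picture over a few seconds (needs the sysstat package)
iostat -x 1 3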

Horizontal Scaling (scale horizontally or scale out): adding more processing units (physical machines) to your infrastructure, be it an application / web server or a database.
Horizontal scaling means increasing the number of nodes in the cluster; it reduces the responsibilities of each member node by spreading the keyspace wider and providing additional end-points for client connections. The capacity of each individual node does not change, but its load decreases, because the load is distributed between separate server nodes. Reasons to scale horizontally include increasing I/O concurrency, reducing the load on existing nodes, and increasing disk capacity.
Horizontal Scaling has historically been much more used for high-end computing and for applications and services. The Internet, and web services in particular, gave a boom to Horizontal Scaling; most companies nowadays that provide well-known web services like Google (Gmail, YouTube), Yahoo, Facebook, eBay, Amazon etc. rely heavily on horizontal scaling. Horizontal Scaling is a must-use technology whenever high availability of (server) services is required.
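As a tiny illustration of scale-out in practice, here is a minimal sketch of spreading web traffic across several nodes with an nginx reverse proxy; the IP addresses, port and server name are purely illustrative:

# /etc/nginx/conf.d/app.conf - round-robin load balancing over three app nodes
upstream app_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        # scaling out further is just one more "server" line in the upstream block
        proxy_pass http://app_backend;
    }
}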


Fixing Qmail 451 qq temporary problem (#4.3.0) / @4000000050587780174c60dc status: qmail-todo stop processing asap / status: exiting

Wednesday, September 19th, 2012

I’m in the process of installing a plain new Qmail mail (SMTP) server following the updated QmailRocks guide: Thibs QmailRocks install guide for Debian 6.0 Squeeze
The install has gone smoothly so far and I’ve already been at this installation for about 5 hours or so. I’m done with the base install and am following Thibs' instructions to implement the validrcptto feature in Qmail.

Anyone who works with Qmail should already know that the lack of validrcptto brings tons of SPAM problems and useless Qmail load, because Qmail attempts to deliver to non-existing mail boxes on the local mail server ….


Fixing this whole mess is done with validrcptto. I myself have installed validrcptto numerous times and almost every time I ended up in some kind of mess before fixing it once and for all; this time, of course (quite traditionally), the “story” repeated itself to piss me off for a while 🙂

After following the steps literally as described in Thibs' great Qmail install tutorial, I ended up with a Qmail mail server unable to deliver e-mails properly.

To debug why mails were not being delivered properly by the mail server, I used telnet:


root@qmail-host:/var/qmail/control# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP
HELO localhost
250 This is Mail Pc-Freak.NET
MAIL FROM:<hipo@pc-freak.net>
250 ok
RCPT TO:<hipo@pc-freak.net>
250 ok
DATA
354 go ahead
asdfdsfafsd
.
451 qq temporary problem (#4.3.0)

Some time back, while configuring another fresh Qmail install, I ended up with exactly the same delivery error – I took the time to document how I fixed this weird qq temporary issue here

As I thought one error in “normal” software should correspond to one cause, I re-read my previous post and closely checked everything that had been wrong in the past when I encountered the error; guess what, this time it wasn’t due to a non-running (missing) clamav-daemon. Even though this was not the issue, it still partially pointed me to the cause (a problem with qmail-scanner.pl / spamd / pyzor / razor / dcc or whatever in this overall complexity ..).

The first logical thing was to check the logs. In /var/log/qmail/qmail-smtpd/current everything was looking good; my log looked like so:


root@qmail-host:/# tail -n 10 /var/log/qmail/qmail-smtpd/current
@40000000505877b91ab3aba4 tcpserver: end 23727 status 0
@40000000505877b91ab3af8c tcpserver: status: 0/30
@40000000505877f6273acefc tcpserver: status: 1/30
@40000000505877f6273ba9bc tcpserver: pid 23882 from 127.0.0.1
@40000000505877f6273f8dd4 tcpserver: ok 23882 mail.pc-freak.net:127.0.0.1:25 localhost:127.0.0.1::46769
@40000000505877fd1a3c647c qmail-smtpd[23882]: MFCHECK pass [127.0.0.1] pc-freak.net
@40000000505877fd1a3c935c qmail-smtpd[23882]: MAIL FROM:
@400000005058780123ba5eb4 qmail-smtpd[23882]: RCPT TO:

@4000000050587ccd179210b4 tcpserver: end 23882 status 256
@4000000050587ccd1792149c tcpserver: status: 0/30
root@qmail-host:/# tail -n 5 /var/log/qmail/qmail-smtpd/current
@40000000505877fd1a3c647c qmail-smtpd[23882]: MFCHECK pass [127.0.0.1] pc-freak.net

My second guess was to check in /var/log/qmail/qmail-send/current, where I found errors like:


root@qmail-host:/# tail -n 10 /var/log/qmail/qmail-send/current
@4000000050584f8e0b799194 status: local 0/10 remote 0/120
@4000000050584f8e0b79957c end msg 9610091
@4000000050584fde2f5ebf44 status: qmail-todo stop processing asap
@4000000050584fde2f5ec32c status: exiting
@4000000050584fde32d2a884 status: local 0/10 remote 0/120
@4000000050584fe8136a44ac status: qmail-todo stop processing asap
@4000000050584fe8136a4894 status: exiting
@4000000050584fe8138b884c status: local 0/10 remote 0/120
@4000000050585014232903c4 status: qmail-todo stop processing asap
@4000000050585014232907ac status: exiting
@40000000505850142363e5fc status: local 0/10 remote 0/120
@40000000505851030773efa4 status: qmail-todo stop processing asap
@40000000505851030774320c status: exiting
@400000005058510307b5f214 status: local 0/10 remote 0/120

As you can see yourself, the errors don't give any insight into what the reason could be, so I checked /var/log/mail.log, just to find more errors there:


Sep 18 16:22:04 qmail-host qmail-scanner-queue.pl: X-Qmail-Scanner-2.10st:[pcfreak134797452279623171]

d_m: output spotted from /usr/bin/reformime -x/var/spool/qscan/tmp/qmail-host/I134797452279623171/ (sh: /usr/bin/reformime: not found#012) - that shouldn't happen!

As the error points out, the whole issue is caused by a missing binary – /usr/bin/reformime. Logically I had to install reformime, so I did a quick apt-cache search reformime and saw that reformime is part of the maildrop deb package. I thought it was installed, but after checking with:


dpkg -l | grep -i maildrop

I realized it was missing and installed it:


qmail-host:/# apt-get --yes install maildrop
....
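Before restarting Qmail it doesn't hurt to confirm the missing binary is now really in place; a quick sanity check (output omitted):

qmail-host:/# ls -l /usr/bin/reformime
qmail-host:/# which reformime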

That’s all it took. After a qmail restart, i.e.:


qmail-host:/# qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.

The qq temporary error was solved and from then on qmail received and sent mails normally with validrcptto enabled. Cheers 😉


How to set up Qmail auto reply (Out of the Office), vacation message manually using .qmail message processing file

Tuesday, February 14th, 2012

[Image: Qmail logo / how to set up a qmail auto reply out of the office vacation message]

I had to set up a QMAIL auto reply (Out of the Office) message on 5 email addresses, and since I haven't done it for a long time it took me about 20 minutes to consult the Qmail documentation (Life With Qmail, http://lifewithqmail.org, a great website!) and read a couple of online forum threads until I finally remembered how I used to set up a vacation message manually via qmail's .qmail file.

Of course, setting a qmail auto reply can always be done via QmailAdmin or VQadmin, the Qmail / Vpopmail web frontends; however, on many Qmail mail servers QmailAdmin and/or VQadmin are absent for some reason, or on big mail servers the server doesn't run Apache at all. Hence it is good to know how to set a qmail vacation message directly over a plain SSH terminal connection, and this is how this article got born.

So here is how I enable a qmail auto reply "manually", through .qmail, for my email address info@my-email-domain.com:

1. Set a /home/vpopmail/domains/my-email-domain.com/info/.qmail file with the following content:

| /usr/bin/autorespond 86400 3 /home/vpopmail/domains/my-email-domain.com/info/vacation/message /home/vpopmail/domains/my-email-domain.com/info/vacation

Here autorespond will send the message file back to a given sender at most 3 times within an 86400-second (24-hour) window, using the vacation directory to keep track of the addresses it has already replied to.

2. Create /home/vpopmail/domains/my-email-domain.com/info/vacation directory

linux:~# mkdir -p /home/vpopmail/domains/my-email-domain.com/info/vacation/

3. Create /home/vpopmail/domains/my-email-domain.com/info/vacation/message file with auto reply message

First create the message file with touch command:

linux:~# touch /home/vpopmail/domains/my-email-domain.com/info/vacation/message

Then, using vim or mcedit etc., put in an auto-reply vacation message similar to the sample below:

From: info@my-email-domain.com
Subject: We have received your message. Thank you!

Dear Customer, we thank you for the interest in our services.
A member of our team will reply to your enquiry shortly.

4. Set proper permissions for vacation/message and .qmail files

/home/vpopmail/domains/my-email-domain.com/info/vacation/message and /home/vpopmail/domains/my-email-domain.com/info/.qmail files have to be owned by user/group vpopmail:vchkpw, e.g.:

linux:~# chown -R vpopmail:vchkpw /home/vpopmail/domains/my-email-domain.com/info/vacation
linux:~# chown vpopmail:vchkpw /home/vpopmail/domains/my-email-domain.com/info/.qmail

If you are a qmail administrator who often has to create auto reply messages for employees going on holiday (in a middle-sized company office), setting up the out of the office auto reply manually one by one is a time-consuming, annoying and "crazy" task. Therefore, some time ago while I was still employed in a Bulgarian mid-sized company called Design.BG, I wrote a tiny shell script which creates vacation messages for qmail email users by passing it a few arguments.

Here is my create_vpopmail_vacation.sh shell script
Note that this script might have a lot of bugs and is not well tested, so read it carefully and test it before you put it to daily use 😉
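For readers who just want the general idea, below is a minimal sketch of how such a helper could look, assuming the /home/vpopmail/domains layout used above (this is not the original script, and all names and defaults are only illustrative):

#!/bin/bash
# create_vpopmail_vacation.sh - rough sketch of a vacation auto-reply helper for vpopmail + qmail
# Usage: ./create_vpopmail_vacation.sh <user> <domain> "<subject>" "<body text>"

if [ $# -ne 4 ]; then
    echo "Usage: $0 <user> <domain> <subject> <body>" >&2
    exit 1
fi

USER_PART="$1"
DOMAIN="$2"
SUBJECT="$3"
BODY="$4"

MAILBOX_DIR="/home/vpopmail/domains/$DOMAIN/$USER_PART"
VACATION_DIR="$MAILBOX_DIR/vacation"

if [ ! -d "$MAILBOX_DIR" ]; then
    echo "Mailbox directory $MAILBOX_DIR does not exist" >&2
    exit 1
fi

mkdir -p "$VACATION_DIR"

# Write the auto-reply message in the format autorespond expects
cat > "$VACATION_DIR/message" <<EOF
From: $USER_PART@$DOMAIN
Subject: $SUBJECT

$BODY
EOF

# Hook autorespond into delivery: at most 3 replies per sender per 86400 seconds
echo "| /usr/bin/autorespond 86400 3 $VACATION_DIR/message $VACATION_DIR" > "$MAILBOX_DIR/.qmail"

# vpopmail must own the files it reads during delivery
chown -R vpopmail:vchkpw "$VACATION_DIR" "$MAILBOX_DIR/.qmail"

echo "Vacation auto-reply enabled for $USER_PART@$DOMAIN"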
Happy Hacking! 😉


Cause and solution for Qmail sent error “Requested action aborted: error in processing Server replied: 451 qq temporary problem (#4.3.0)”

Friday, October 28th, 2011

One of the qmail servers I manage started returning strange errors today in SquirrelMail webmail and via POP3/IMAP connections with Thunderbird.

What was rather strange is that if the email didn’t contain a link to a webpage or an attachment, i.e. the mail consisted of just plain text, the mail was sent properly; if it did, however, it failed to send with an error message of:

Requested action aborted: error in processing Server replied: 451 qq temporary problem (#4.3.0)

After looking through the logs and doing some quick searching in Google, I came across some online threads reporting that this whole issue is caused by a malfunction of qmail-scanner.pl (the script checking mail for viruses).

After a close examination of what was happening I found out /usr/sbin/clamd was not running at all?!
Then I remembered that a bit earlier I had applied some updates on the server with apt-get update && apt-get upgrade; some of the packages which were updated were exactly clamav-daemon and clamav-freshclam.
Hence, the reason for the error:

451 qq temporary problem (#4.3.0)

was pretty obvious: qmail-scanner.pl, which uses the clamd daemon to check incoming and outgoing mail for viruses, failed to respond, so any mail containing content which needed to go through clamd for a check and back to qmail-scanner.pl did not make it, and therefore qmail returned the weird error message.
Apparently the earlier update of clamav-daemon had failed to properly restart it via the init script /etc/init.d/clamav-daemon.
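A quick way to confirm whether clamd is actually up (just a sketch; any process check would do):

linux:~# ps ax | grep -v grep | grep clamd
linux:~# pgrep -l clamd || echo "clamd is not running"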

The fix was very simple; all I had to do was launch clamav-daemon again:

linux:~# /etc/init.d/clamav-daemon restart

Afterwards the error was gone and all mails worked just fine 😉


How to debug mod_rewrite .htaccess problems with RewriteLog / Solve mod_rewrite broken redirects

Friday, September 30th, 2011

It's a common thing that CMS systems and many developers' custom .htaccess files cause issues where websites depending on mod_rewrite fail to work properly. The most common issues are broken redirects or mod_rewrite rules which behave differently among the different mod_rewrite versions that come with different versions of Apache.

Every time there are such problems it's necessary to use mod_rewrite's RewriteLog functionality.
Even though the RewriteLog mod_rewrite config directive is well described on httpd.apache.org, I decided to drop a little post here, as I'm pretty sure many novice admins might not know about the RewriteLog directive and might benefit from this small article.
The rewrite rules that need debugging usually live either in the specific website's .htaccess (located in the site's root directory) or in httpd.conf, apache2.conf etc., depending on the Apache config file naming the Linux / BSD distribution uses; the RewriteLog directives themselves go into the main Apache configuration or a virtual host definition.

To enable RewriteLog, near the end of the Apache configuration file it's necessary to place the following directives:

1. Edit the Apache config and place the following directives:

RewriteLogLevel 9
RewriteLog /var/log/rewrite.log

RewriteLogLevel defines the level of logging that gets written to /var/log/rewrite.log.
The higher the RewriteLogLevel number, the more debugging related to mod_rewrite request processing gets logged.
RewriteLogLevel 9 is actually the highest loglevel there is. Setting RewriteLogLevel to 0 instructs mod_rewrite to stop logging. In many cases a RewriteLogLevel of 3 is enough to debug most redirect issues; however, I prefer to see more, so I almost always use a RewriteLogLevel of 9.
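For reference, a minimal sketch of how the two directives could sit inside a virtual host definition (the ServerName, DocumentRoot and log path here are only illustrative):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/website

    RewriteEngine On
    RewriteLog /var/log/rewrite.log
    RewriteLogLevel 9
</VirtualHost>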

2. Create /var/log/rewrite.log and set writable permissions

a. Create /var/log/rewrite.log

freebsd# touch /var/log/rewrite.log

b. Set writable permissions

Either chown the file to the user with which the Apache server is running, or chmod it to permissions of 777.

On FreeBSD, the chown to allow the webserver to write to the file would be:

freebsd# chown www:www /var/log/rewrite.log

On Debian and alike distros:

debian:~# chown www-data:www-data /var/log/rewrite.log

On CentOS, Fedora etc.:

[root@centos ~]# chown apache:apache /var/log/rewrite.log

On any other distribution, if you don’t want to bother checking the uid:gid, the permissions can be set with chmod 777, e.g.:

linux# chmod 777 /var/log/rewrite.log

Next, once RewriteLog is in the config, the usual webserver restart is required to make the changes active.

To restart Apache On FreeBSD:

freebsd# /usr/local/etc/rc.d/apache2 restart
...

To restart Apache on Debian and derivatives:

debian:~# /etc/init.d/apache2 restart
...

On Fedora and derivative distros:

[root@fedora ~]# /etc/init.d/httpd restart
...

It's a common error to forget to set proper permissions on /var/log/rewrite.log; this has puzzled me many times when enabling RewriteLog's logging.

Another important note: when debugging for mod_rewrite is enabled, one often forgets to disable the logging afterwards, and if /var/log is placed on a small partition or is on an old server with little space, the RewriteLog quickly fills up the disk and might create website downtime. Hence, always make sure RewriteLog is disabled once rewrite debugging is no longer needed.

The way I usually disable it is by commenting it out in the config, like so:

#RewriteLogLevel 9
#RewriteLog /var/log/rewrite.log

Finally, to check what the mod_rewrite processor is doing on the fly, it's handy to use the well-known tail -f:

linux# tail -f /var/log/rewrite.log

A bit of time spent watching the requests should be enough to point to the exact problem causing broken redirects or general website malfunction.
Cheers 😉


Using perl and sed to substitute strings in multiple files on Linux and BSD

Friday, August 26th, 2011

[Image: Using perl and sed to replace strings in files on Linux, FreeBSD, OpenBSD, NetBSD and other Unix]
On many occasions when administering Linux, BSD, SunOS or any other *nix, there is a need to substitute strings inside a file, or a group of files containing a certain string, with another one.

The task is not too complex, and many of the senior sysadmins out there will certainly have faced this requirement already and have a good idea of file substitution with perl and sed; however, I'm quite sure there are dozens of system administrators out there who do not know how, and still haven't faced a situation where there is a requirement to substitute from a command shell or via a scripting language.

This article targets exactly these system administrators who are not 100% sys op Gurus 😉

1. Substitute text strings inside files on Linux and BSD with perl

The Perl programming language was originally created to do a lot of text manipulation, and most Linux / Unix based hosts today have a working copy of perl installed; therefore, using perl as a means of substituting one string in a file with another is maybe the best way to complete the task.
Another good thing about perl is that text processing with it is said to be, in most cases, a bit faster than with sed.
However, it still depends on the string being substituted; I haven't done benchmark tests to say 100% positively that perl is always quicker, but my common sense suggests it will be.

Now, enough talk; a very simple way to substitute a recurring text string inside a file with another chosen one is like so:

debian:~# perl -pi -e 's/foo/bar/g' file1 file2

This will substitute the string foo with bar everywhere it’s matched in file1 and file2

However, the above command is a bit “dangerous”, as it does not preserve a backup copy of the original files in which the string is substituted.
Therefore the above command should only be used when one is 100% sure about the string changes to be made.

Hence, a better idea when conducting the text substitution is to also keep the original file backed up under, let's say, a .bak extension. To achieve that I use perl as follows:

freebsd# perl -i.bak -p -e 's/syzdarma/magdanoz/g;' file1 file2

This command creates copies of the original files file1 and file2 under the names file1.bak and file2.bak; in file1 and file2 every occurrence of the string syzdarma will get substituted with magdanoz, thanks to the option /g, which means substitute globally.

2. Substitute string in all files inside directory using perl on Linux and BSD

Every now and then there is a need to do manipulations on large numbers of files. I can't right now remember a good scenario where I had to change all occurrences of a matching string to another one in all files located inside a directory, but anyhow, I've done this on a number of occasions.

A good way to do a mass file string substitution on Linux and BSD hosts equipped with a bash shell is via the commands:

debian:/root/textfiles:# for i in $(echo *.txt); do perl -i.bak -p -e 's/old_string/new_string/g;' $i; done

Here the text files have the default .txt file extension.

The above bash loop goes through each of the .txt files located in /root/textfiles and substitutes old_string with new_string everywhere (globally).

An alternative to the above example, for replacing a recurring text string in all files across multiple directories, is possible using a combination of the shell commands grep, perl, sort, uniq and xargs.
Let's say one wants to match a custom string in files everywhere inside the root directory and all its descendant directories and substitute it with another one; this can be done with the cmd:

debian:~# grep -R --files-with-matches 'old_string' / | sort | uniq | xargs perl -pi~ -e 's/old_string/new_string/g'

This command will look up the string old_string in all files under / (the root directory) and, wherever it occurs, substitute it with new_string (this command's idea was borrowed from http://linuxadmin.org, so thanks).

Using a combination of 5 commands, however, is not very wise in terms of efficiency.

Therefore, to save some system resources, it's better to take advantage of the find command in combination with xargs; here is how:

debian:~# find / -type f | xargs grep -sl 'old_string' | uniq | xargs perl -pi~ -e 's/old_string/new_string/g'

Once again the find command example will do exactly the same as the substitute method with grep -R …

As enough has been said about substituting text strings inside files using perl, I will further explain how text strings can be substituted using sed.

The main reason why sed could be a better choice in some cases is that not all Unices are equipped with a perl interpreter by default. In general, the number of servers that have sed installed is surely higher than the number with a perl language interpreter.

3. Substitute text strings inside files on Linux and BSD with sed stream editor

On many occasions, where a website is hosted, one needs to quickly change a string inside all files located in a directory, to resolve issues with static URLs directly encoded in the HTML.
To achieve this task, here is some code using two little bash loops in conjunction with the sed, echo and mv commands:

debian:/var/www/website# for i in $(ls -1); do cat $i |sed -e "s#index.htm#http://www.webdomain.com/#g">$i.new; done
debian:/var/www/website# for i in $(ls *.new); do mv $i $(echo $i |sed -e "s#.new##g"); done

The above sed expression “s#index.htm#http://www.webdomain.com/#g” instructs sed to substitute every appearance of the text string index.htm with the new text string http://www.webdomain.com/.

The first bash for loop creates copies of all the files with the substituted string as file1.new, file2.new, file3.new etc.
The second for loop uses mv to overwrite the original input files file1, file2, file3, etc. with the newly created ones file1.new, file2.new, file3.new.

There is a way shorter way to accomplish the same text substitution task with a simpler one-liner, using only sed and the shell's globbing; here is how:

debian:/var/www/website# sed -i 's/old_string/new_string/g' *

Above command will change old_string to new_string inside all files in directory /var/www/website

Where a change has to be made to fewer than about 1024 files, this method might be more efficient; however, where a text substitution has to be done on, let's say, 5000+ files, the above simplistic version will not work. An error of Argument list too long will prevent the sed -i 's/old_string/new_string/g' from completing its task.
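One way around the Argument list too long limit (a sketch; the path and file extension are just placeholders) is to let find hand the files to sed in batches instead of relying on shell globbing:

debian:/var/www/website# find . -type f -name '*.html' -exec sed -i 's/old_string/new_string/g' {} +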

The above two-liner for loop should also work without problems on FreeBSD and the rest of the BSD derivatives, though I have not tested it yet; hence any feedback from FreeBSD guys is most welcome.

Consider that in order to have the for loops commands work on FreeBSD or NetBSD, they have to be run under a bash shell.
That’s all folks; thanks to the Lord for letting me write this nice article. I hope it gives some insight into how multiple-file text replacement on Unix works.
Cheers 😉


How to check Host is up with Nagios for servers with disabled ICMP (ping) protocol

Friday, July 15th, 2011

At the company where I administer some servers, they’re running Nagios to keep track of the servers' status and instantly report if a problem with connectivity to a certain server occurs.

Now, one of the servers which has host UP checks configured is up, but because of heavy ICMP denial of service attacks against the servers, the ICMP ping protocol is completely disabled on it.

In Nagios this host was constantly showing as DOWN in the usual red color, so Nagios reported an issue even though all services on the client were running fine.

As this is quite annoying, I checked whether Nagios supports host checking without doing the ICMP ping test. It appeared it does, through something called in Nagios Submit passive check result for host.

Enabling “Submit passive check result for this host” can be done straight from Nagios’s web interface (so I don’t even have to edit configurations! ;).
Here is how I did it. In Nagios I had to navigate to:

Hosts -> Click over my host (hosting1) which showed in red as down

Nagios disable ICMP ping report for hosts

You can see my down host, the one I clicked on, showing in red in the above pic.

On the next Nagios screen I had to select Disable active checks of this host

Nagios Disable active ICMP checks of this host
and press on the Commit button.

Next, the following text appears in the browser:

Your command request was successfully submitted to Nagios for processing.

Note: It may take a while before the command is actually processed.

Afterwards I had to click on Submit passive check result for this host and, in the Check Output field, type in:

check_tcp -p 80

Here is the Screenshot of the Command Options dialog:

Nagios submit passive check with check TCP -p 80

That’s all; now Nagios should start checking the down host by querying whether the webserver on port 80 is up and running instead of pinging it.
The server is also no longer shown in Nagios’s Down host list.
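For admins who prefer editing the Nagios object configuration files over clicking through the web interface, a minimal sketch of the same idea (the host name, address, command name and the generic-host template are assumptions to adapt) might look like:

# Command that checks a host by probing TCP port 80 instead of pinging it
define command {
    command_name    check-host-tcp80
    command_line    $USER1$/check_tcp -H $HOSTADDRESS$ -p 80
}

define host {
    use                 generic-host
    host_name           hosting1
    address             192.0.2.10
    check_command       check-host-tcp80
    max_check_attempts  3
}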


Computer and Technology use, the Internet, Mobile Phones and all kinds of technical screen-based equipment negatively alter the human brain

Tuesday, April 26th, 2011

[Image: Computers, Internet and Technology evil Terminator picture]

This is according to the latest scientific research, conducted at Stanford University, USA.

People who actively use computers and the internet were the object of the research in 2009.

Social networks, tablets, smartphones etc. provide more and more possibilities for us to access information.

Most modern people today tend to lose approximately between 8 and 10 hours a day either using the Internet, a PC, Word / Excel, their mobile phone or some other kind of mobile gadget like, let's say, an iPad.

Most of today’s technological goods that we use to make our lives easier are multitasking.
The brain itself is not adapted to work in such a multi-tasking mode; as a direct consequence of being in contact with this multi-tasking for long periods of time, it gets altered.
Suddenly it starts multitasking itself, or in other words starts processing information in parallel.

As the amount of information online is constantly increasing and we're in contact with more and more of it, and moreover our brains get altered into working in this multi-tasking way, brain overflow (information overload) is becoming a more and more frequent event.

The consequence of this complexity is starting to impact us seriously: we tend to get addicted to technology usage, and day by day it seems that the amount of information our brains are able to process is decreasing.

Logically enough, the long-term consequence of an internet addiction or any other kind of technology addiction, plus the tremendous amount of information we think over daily, is starting to show its negative consequences on our psyche and (soul).

The brain starts changing the way it takes in information as it adapts itself to "not remember", since the information to be processed daily is so much that it couldn't really comprehend it all.

A good example of multi-tasking which most, if not all, users on the Internet today practise daily is one of the most terrible things ever created: Facebook. In one of my previous articles I blogged about why social networks are a big evil (read it here), and it seems this new information about brain alteration caused by multi-tasking is just another supporting reason why it's better not to use social networks like Facebook and Twitter.

According to the Stanford University research, the endless amount of information has been proven to be pernicious for our (brains) minds and is in many ways similar to an excessive amount of sugar in the body.

The scientists who conducted the research recommend that heavy computer and tech users (like me) exercise self-control and go on a tech diet (e.g. not use technology at all for at least 1 or 2 days every week).

Another serious kind of damage proven by the Stanford scientists' research is that people whose brains have severe exposure to internet or phone usage tend to have very serious problems with concentration and are very easily distracted.
In the long term this obviously leads to a chaotic way of living.
Suddenly it seems technology is slowly becoming even more deadly and destructive than drugs.

Many people would say this kind of research is not true, but I can confirm that many of the proven facts are things I have experienced myself in my daily life, so I believe what the research has proven is mostly true.

This research is just another one, coming a month after other scientists proved that Mobile Phone use leads to alteration of the brain chemistry.
Apart from all the aforementioned negative consequences of technology use for the human brain, there is also the problem of technology today being heavily used as a way to spy on personal privacy. I would be glad to hear in the comments section from other people like me who have problems with concentration and have a very short-term memory (I myself have a serious problem with that one).


Make picture transparent with the Gimp on Linux

Tuesday, November 16th, 2010

[Image: GIMP logo]
I’m trying to learn some basic design these days as an attempt to fill my huge gap of knowledge in graphics processing.
I’ve never been too good with visual stuff and have always been focused on the command line and console; however, for some time now design has started being quite an interesting thing to me, and I find it quite handy and challenging to learn some basic designing.

I’m not really a Windows guy and thus my Photoshop skills are next to zero.
Since The GIMP is the substitute for Photoshop for Linux users, and I had a task for one of the websites I’m developing to make some pictures transparent, I had to learn how to make pictures transparent with The GIMP.
After some reading online and some experimenting with GIMP, it appeared to me that it’s actually very easy to make pictures transparent with GIMP.
So I’ve come up with a small article here on how to make an image or a picture transparent with GIMP in simple steps, in order to help people who are trying to achieve the same easy task:

1. Open Gimp and place your mouse cursor on the picture

Here, press the 2nd or 3rd mouse button to show the menu.

2. Select Layer -> Transparency -> Alpha to Selection

In that menu select Layer -> Transparency -> Alpha to Selection

Gimp Alpha to Selection Menu

3. Use Fuzzy Select Tool and select the picture background

Gimp fuzzy select background

4. From Gimp Window pane main menu choose the Clear option

Edit -> Clear (Delete)
gimp edit clear menu

That’s all; now your picture background should be removed. If some parts of the picture still need to be purged, just follow the above steps again and remove them.
I should say I thought making a picture transparent with GIMP would be a more complex task than it really was; quite nice, one more step in my development as a designer 🙂
