Posts Tagged ‘downside’

Speed up Apache webserver by including htaccess mod_rewrite rules in VirtualHosts / httpd.conf

Wednesday, November 12th, 2014

speed-up-apache-through-include-htaccess-from-config
There are plenty of Apache Performance Optimization things to do on a new server. However many sysadmins miss the .htaccess mod_rewrite rules optimization, which often leads to dramatic performance benefits and lower webserver response time, making the website much more attractive both for Search Engine Crawlers and for the End User experience.

Normally most Apache + PHP CMS systems, websites, blogs etc. are configured to use various goodies of .htaccess files (mostly mod_rewrite rules, directory htpasswd authentication and allow / deny directives). All the most popular open-source Content Management Systems like Drupal, Joomla, WordPress, TYPO3, Symphony CMS make use of a .htaccess file usually living in the DocumentRoot of the virtualhost (website/s), defined in httpd.conf, apache2.conf, /etc/apache2/sites-enabled/customvhost.com or whichever config the Vhost resides in…

It is also not an uncommon practice to enable .htaccess files to make the programmer's life easier (allowing the coder to add and remove URL rewrite rules that make URLs pretty and SEO friendly, handle website redirection or give life to a framework, as is the case with the Zend PHP Framework).

However, though having the possibility to dynamically use .htaccess inside the site DocRoot or the site's subdirectories is great for developers, it is not a very good idea to have .htaccess turned on in a Production server environment.

Having

AllowOverride All

switched on for a directory in order to have .htaccess enabled makes the webserver look for a .htaccess file and re-read its content dynamically on each client request.
This has a negative influence on overall server performance and makes the Apache prefork children or workers (in case the mpm-worker engine is used) waste time parsing the .htaccess file, leading to slower request processing.

Normally a Virtualhost with enabled .htaccess looks like so:

<VirtualHost 192.168.0.5:80>
ServerName your-website.com:80 …
DocumentRoot /var/www/website
<Directory /var/www/website>
AllowOverride All …
</Directory> …
</VirtualHost>

And here is a VirtualHost configured to keep the mod_rewrite .htaccess rules permanently loaded in memory on Apache server start-up:
 

<VirtualHost 192.168.0.5:80>
ServerName your-website.com:80 …
DocumentRoot /var/www/website
<Directory /var/www/website>
AllowOverride None
Include /var/www/website/.htaccess …
</Directory> …
</VirtualHost>

Now the CMS uses the previous .htaccess rules just as before, however to put more rewrite rules into the file you will need to restart the webserver, which is a downside of loading the rewrite rules through the Include directive. Using the Include directive instead of AllowOverride leads to 7 to 10% faster individual page loads.
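Note that a full stop/start is not necessary; a configuration syntax check followed by a graceful restart is usually enough to load the newly added rules (a small sketch, assuming the standard apachectl / apache2ctl tool is present on the server):

apache2ctl configtest
apache2ctl graceful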

I have to mention that the Include directive, though faster, has a security downside, because a .htaccess file loaded with Include via httpd.conf does not obey the <Directory> … </Directory> security rules. Also, including the .htaccess from the configuration for the main website directory could make the Deny / Allow access rules of .htaccess files in sub-directories invalid, and this could expose the site to security risk. Another security downside is that the Include directive allows loading a full set of Apache directives, including other Apache configuration files (for example you can even override VirtualHost pre-set directives such as ErrorLog, ScriptAlias etc.), and not only the standard .htaccess directives allowed by AllowOverride All. This gives a potential website attacker who gains write permissions over the included /var/www/website/.htaccess access to this full set of VirtualHost directives and not only the standard .htaccess allowed ones.

Because of the increased security risk most people recommend not to Include the .htaccess rules, however for those who want to get the few percent page load acceleration of using a static Include from the Apache config, just set your Included .htaccess file to be owned by user/group root, e.g.:

chown root:root /var/www/website/.htaccess
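It is also a good idea to make sure the file is writable by root only; something like the below should be enough (a small sketch, adjust the path to your own DocumentRoot):

chmod 644 /var/www/website/.htaccess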

How to make a mysql root user to login interactive with mysql cli passwordless

Wednesday, June 29th, 2011

MySQL Logo Passwordless root login .my.cnf

I'm accessing the mysql servers via localhost with the mysql cli on a daily basis.
With time I've figured out that it's pretty inconvenient to always log in with my root mysql password, I mean to enter it each time, e.g.:

root@mysql-server:~# mysql -u root
Enter password:
...

Thus to make my life a way easier I decided to store my mysql root password in a way that allows my root admin user to log in to the mysql server without being asked for a password. This saves time and nerves, as I'm not supposed to look up the password file where I store my server's mysql root pass.

To allow the mysql cli interface to log in passwordless to the SQL server I had to create the file /root/.my.cnf, readable only by my root user, and store my MySQL username and password there.

Here is a sample /root/.my.cnf file:

root@mysql-server:~# cat /root/.my.cnf
[client]
user="root"
password="mysecretMySQLPasswordgoeshere"

Now the next time I use the mysql console interface to access my mysql server I don't have to supply the password; here is how much easier the mysql login is afterwards:

root@mysql-server:~# mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3520
Server version: 5.0.77 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

The only downside of permanently storing the mysql server root user and password in .my.cnf is from a security standpoint.
If for instance somebody roots my server, where I have stored my root user/pwd in .my.cnf, he will immediately be able to get access to the MySQL server.

Another possible security flaw with using the mysql passwordless login "trick" is if somebody forgets to set proper file permissions on .my.cnf

Once again, the file should possess permissions of:

root@mysql-server:~# ls -al /root/.my.cnf
-rw------- 1 root root 90 Apr 2 00:05 /root/.my.cnf

Any other permissions might allow non-privileged users to read the file and gain unauthorized admin access to the SQL server.
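If the file was created with looser permissions, tightening them is a one-liner (a small sketch):

root@mysql-server:~# chmod 600 /root/.my.cnf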
 

How to convert FLV to AVI and AVI to FLV Videos on Linux and BSD with avidemux and ffmpeg – Simple video editing with LiVES

Tuesday, May 8th, 2012

I'm starting to learn some video editing, as I need it sometimes for building client websites.
As a Linux user I needed to have some kind of software for amateur video editing.
For Microsoft Windows OS, there are tons of video editor programs both free and proprietary (paid).
Windows users can for instance use the free software program VirtualDub (licensed under GPL license) to easily cut movie scenes from a video.

Unfortunately VirtualDub doesn't have a Linux or BSD version, so in my case I had to look for another soft.

VirtualDub running on Microsoft Windows XP Screenshot (Biomassa)

I consulted a friend of mine who recommended a video editor program called LiVES.

If you haven't done any video editing previously on Linux (as was my case), you will certainly be happy to try LiVES

Debian GNU / Linux LiVES video editor logo bootscreen shot

LiVES can extract only the sound from videos, cut selected parts (frames) from videos and do plenty of other nice stuff. It is just a great piece of software for anyone who needs to do simple (newbie) video editing.

With LiVES even an amateur video editor like me could immediately learn how to chop movie scenes

Screenshot opened video for editting with LiVES Linux movie editor Debian Squeeze Linux shot

To master the basics and edit one video in FLV format took me about 1 hour of time, as in the beginning it was confusing to get comfortable with the program's scenes selector.

One downside of LiVES is its failure to open the FLV file I wanted to edit.
In order to be able to edit the flv movie I hence first had to convert the FLV to AVI or MPEG, as these two (video multimedia formats) are supported by the LiVES video editor.

After completing my video scene chopping on the AVI file I had to convert it back to FLV.

In order to complete the conversion between FLV and AVI format on my Debian Linux, I used a program called avidemux

Avidemux has a nice GUI interface and, like LiVES, also has support for video editing, though I have never successfully done any video edits with it.

Avidemux IMHO is user friendly (completely intuitive). To convert the FLV to AVI, all I had to do was simply open the FLV file, press (CTRL+S) and select the output file extension format to be AVI.

Further on, I used LiVES to cut the desired parts from my video of choice. Once the cuts were complete I saved the new cut version of the video to AVI.
Then I needed the video again in FLV to upload it in Joomla, so I used the ffmpeg command line tool to do the AVI to FLV file conversion, like so:

hipo@noah:~$ /usr/bin/ffmpeg -i my_media_file.avi my_video_file.flv
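The opposite direction works the same way with ffmpeg, so the initial FLV to AVI pre-conversion could also be done from the command line instead of avidemux (a small sketch, the file names are of course just examples):

hipo@noah:~$ /usr/bin/ffmpeg -i my_video_file.flv my_media_file.avi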

Hope this article helps someone aiming to do basic video editing on Linux with LiVES, needing just like me FLV to AVI and AVI to FLV conversions.

Text mode (console) browsing with tabs with Elinks / Text browsers – (lynx, elinks, links and w3m) useful HTTP debugging tools for Linux and FreeBSD servers

Friday, April 27th, 2012

These last days, I'm starting to think the GUI use is making me brainless so I'm getting back to my old habits of using the console.
I still remember with a grain of nostalgia how much more efficient I used to be when the way I interacted with my computer was primarily the text mode console.
Actually, I'm starting to get the idea that the newer a piece of software is, the more inefficient it makes your use of the computer, not to mention the hardware resources required by newer software are constantly increasing.

With this said, I started occasionally browsing again like in the old days by using the links text browser.
In the old days I mostly used lynx and its more advanced "brother" text browser links.
The main difference between lynx and links is that lynx does not have any support for the terrible "javascript", whereas links supports most of JavaScript ver 2.
Also links has Midnight Commander-like pull down menus on the screen top, handy for people who prefer some more interactivity.

In the past I remember I also used to browse graphically in normal consoles (ttys) with a hacked version of links called xlinks, suitable for people who would like to have a graphical browser in console (ttys).

I used xlinks quite heavily in the past, when I had a slower computer, a P166MHz with 64MB of memory and a 2.5 GB HDD (what times, boy, what times).
Maybe when I have time I will install it on my PC and start using it again like in the old days to boost my computer use efficiency…
I remember the only major xlinks downside was that it didn't include support for Adobe Flash (though this is due to the bad non-free software nature of Adobe and its lack of proper support for free software, and not a failure of the xlinks developers). Anyways for me this wasn't a big trouble since ex-Macromedia (Adobe) Flash support is not something essential for most of my work…

links2 is actually the naming of links version 2. elinks emerged later (if I remember correctly, as a fork project of links).
The elinks difference from links is that it supports tabbed browsing as well as colors (the links browser displays results monochrome).

Having tabbed browsing support in a tty console is a great thing…
I personally believe text browsing, if properly used, can in many ways outperform graphic browsing in terms of performance and time spent to obtain data. I'm convinced text browsing is superior for two reasons:
1. with text there are way fewer elements to obstruct your attention.
– No annoying graphical flash banners, no annoying attention-grabbing pictures

2. Navigating web pages using the keyboard is more efficient than the mouse
– Using keyboard shortcuts is always quicker than the mouse; generally the keyboard has always been a quicker way to access computer commands.

Another reason to use text browsing is that it is mostly the text part of a page that matters, and most of the pages that provide images to better explain a topic are bloated (this is my personal view though, I'm sure the designer guys will argue with me :D).
Here is a screenshot of my links text browser in action. I'm sorry the image is a bit unreadable, but after taking a screenshot of the console and resizing it with GIMP this is what I got …

Links text console browser screenshot with 2 tabs opened Debian GNU / Linux

For all those new to Linux who haven't tried text browsing yet and for those interested in computer history, I suggest you install and give a try to the following text browsers:
 

  • lynx (supports colorful text console browsing)
    lynx text console browser Debian Squeeze GNU / Linux Screenshot

  • links
    Links www text console browser screenshot on Debian Linux

  • elinks (supports color-filled text browsing and tabs)
    elinks opened duckduckgo.com google alternative search engine in mlterm terminal Debian Linux

  • w3m
    w3m one of the oldest text console browsers screenshot Debian Linux Squeeze 6.2

By the way, having these 4 text browsers installed is very useful for debugging purposes for system administrators too, so in any case I think these 4 web browsers are absolutely required software for newly installed GNU / Linux or BSD* based servers.
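For instance, a quick way to check from the server console how a page renders or what HTTP headers it returns is to use the browsers' dump options (a small sketch; I believe these options are present in current lynx and elinks builds, but double-check the man pages on your system):

debian:~# lynx -dump http://localhost/
debian:~# lynx -head -dump http://localhost/
debian:~# elinks -dump http://localhost/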

For Debian and derivative Linux distributions, the 4 browsers are available as deb packages, so install them with the following apt one-liner:
 

debian:~# apt-get --yes install w3m elinks links lynx
….

FreeBSD users can install the browsers from the ports tree, e.g.:
 

freebsd# cd /usr/ports/www/w3m
freebsd# make install clean
….
freebsd# cd /usr/ports/www/elinks
freebsd# make install clean
….
freebsd# cd /usr/ports/www/links
freebsd# make install clean
….
freebsd# cd /usr/ports/www/lynx
freebsd# make install clean
….

In links the tabs functionality appeared somewhere around 2000 or 2001 (at least that was the first time I saw links with tabbed browsing enabled). The first time I saw links support opening multiple pages under tabs within the same screen was on Redhat Linux 9.

Opening multiple pages in tabs in the text browser is done by pressing the t key and typing in the desired URL to open inside.
For more than 2 tabs, t has to be pressed again and the same procedure goes on and on.
It was pretty hard for me to figure out how to do text browsing with tabs; though I found a way to open new tabs, it took me some 10 minutes of pondering how to switch between the newly opened links browser tabs.

Hence, I thought it would be helpful to mention here how tabs can be switched in the links text browser. Actually it turned out to be pretty easy to switch tabs back and forward.

Moving 1 tab backwards is done with the < key, whereas switching one tab forward is done with the > key.

On UK and US qwerty keyboard layouts, moving a tab backward and forward is done by holding shift and pressing < (holding both keys simultaneously), and analogously by pressing shift + >
 

How to delete millions of files on busy Linux servers (Work around "Argument list too long")

Tuesday, March 20th, 2012

How to delete millions or many thousands of files in the same directory on GNU / Linux and FreeBSD

If you try to delete more than 131072 files on Linux with rm -f *, where the files are all stored in the same directory, you will get the error:

/bin/rm: Argument list too long.

I've blogged earlier on deleting multiple files on Linux and FreeBSD and this is not my first time facing this error.
Anyways, as time passed, I've found a few other new ways to delete large multitudes of files from a server.

In this article, I will shortly explain a few approaches to delete a few million obsolete files to free some space on your server.
Here are 4 methods you can use to clean up your tons of junk files.

1. Using Linux find command to wipe out millions of files

a.) Finding and deleting files using find's -exec switch:

# find . -type f -exec rm -fv {} \;

This method works fine but it has 1 downside: file deletion is too slow, as for each found file an external rm command is invoked.

For half a million files or more, using this method will take "long". However from the point of view of stressing the server hard disk it is not so bad, as the file deletion does not put too much strain on the server hard disk.
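By the way, newer find versions can also batch the found files into as few rm invocations as possible by terminating -exec with + instead of \;, which avoids the one-process-per-file overhead (a small sketch; check your find man page, as very old find builds might lack the + terminator):

# find . -type f -exec rm -f {} +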
b.) Finding and deleting a big number of files with find's -delete argument:

Luckily, there is a better way to delete the files, by using find's embedded -delete argument:

# find . -type f -delete

c.) Deleting and printing out deleted files with find's -print arg

If you would like to output on your terminal what files find is deleting in "real time", add -print:

# find . -type f -print -delete

To prevent your server hard disk from being stressed, and hence save yourself from "outages" of the server's normal operation, it is good to combine the find command with ionice, e.g.:

# ionice -c 3 find . -type f -print -delete

Just note that ionice cannot guarantee find's operations will not affect hard disk i/o requests severely. On heavily busy servers with high amounts of disk i/o writes, applying ionice will still not prevent the server from getting hung! Be sure to always keep an eye on the server while deleting the files, no matter whether with or without ionice. If throughout find's execution the server gets lagged in serving its ordinary client requests or whatever, stop the execution of the cmd immediately by killing it from another ssh session or tty (if physically on the server).

2. Using a simple bash loop with rm command to delete "tons" of files

An alternative way is to use a bash loop to iterate over each of the files in the directory and issue /bin/rm on each of the loop elements (files), like so:

for i in *; do
rm -f "$i";
done

If you'd like to print what you will be deleting add an echo to the loop:

# for i in *; do \
echo "Deleting : $i"; rm -f "$i"; \
done

The bash loop worked like a charm in my case, so I really warmly recommend this method whenever you need to delete more than 500 000+ files in a directory.

3. Deleting multiple files with perl

Deleting multiple files with perl is not a bad idea at all.
Here is a perl one-liner to delete all files contained within a directory:

# perl -e 'for(<*>){((stat)[9]<(unlink))}'

If you prefer to use a more human readable perl script to delete a multitude of files, use delete_multple_files_in_dir_perl.pl
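Such a script essentially boils down to a readdir / unlink loop; here is a minimal sketch of what it might look like (this is my own illustration, not necessarily the exact script linked above):

#!/usr/bin/perl
# delete every plain file in the directory given as first argument
use strict;
use warnings;

my $dir = shift @ARGV or die "Usage: $0 /path/to/dir\n";
opendir(my $dh, $dir) or die "Cannot open $dir: $!\n";
while (defined(my $file = readdir($dh))) {
    my $path = "$dir/$file";
    next unless -f $path;    # skip . .. and sub-directories
    unlink $path or warn "Could not delete $path: $!\n";
}
closedir($dh);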

Using the perl interpreter to delete thousands of files is quick, really, really quick.
I did not benchmark on the server exactly how quick it is, but I guess the delete rate should be similar to the find command. It's possible that in some cases the perl loop is even quicker …

4. Using a PHP script to delete multiple files

Using a short php script to delete files one by one in a loop, similar to the above bash script, is another option.
To do the deletion with PHP, use this little PHP script:

<?php
// directory whose files will be removed one by one
$dir = "/path/to/dir/with/files";
$dh = opendir($dir);
$i = 0;
while (($file = readdir($dh)) !== false) {
    $file = "$dir/$file";
    if (is_file($file)) {
        unlink($file);
        // report progress on every 1000 deleted files
        if (!(++$i % 1000)) {
            echo "$i files removed\n";
        }
    }
}
closedir($dh);
?>

As you see, the script reads the directory defined in $dir and loops through it, going file by file and doing a delete on each of the loop elements.
You should already know PHP is slow, so this method is only useful if you have to delete many thousands of files on a shared hosting server with no (ssh) shell access.

This php script is taken from Steve Kamerman's blog. I would also like to express my big gratitude to Steve for writing such a wonderful post. His post actually became the inspiration for this article to become reality.

You can also download the php delete million of files script sample here

To use it, rename delete_millioon_of_files_in_a_dir.php.txt to delete_millioon_of_files_in_a_dir.php and run it through a browser.

Note that you might need to run it multiple times, because many shared hosting servers are configured to kill a php script which keeps running for too long.
Alternatively the script can be run through the shell with the PHP cli:

php delete_millioon_of_files_in_a_dir.php

5. So what is the "best" way to delete a million files on Linux?

In order to find out which method is quicker in terms of execution time I did a home-brew benchmark on my ThinkPad notebook.

a) Creating 509072 sample files.

Again, I used a bash loop to create many thousands of files in order to benchmark.
I didn't want to put this load on a production server and hence I used my own notebook to conduct the benchmarks. As my notebook is not a server the benchmarks might be partially incorrect, however I believe they're still a pretty good indicator of which deletion method would be better.

hipo@noah:~$ mkdir /tmp/test
hipo@noah:~$ cd /tmp/test;
hipo@noah:/tmp/test$ for i in $(seq 1 509072); do echo aaaa >> $i.txt; done

I had to wait a few minutes until I had the 509072 files created at hand. Each of the files, as you can read, contains the sample "aaaa" string.

b) Calculating the number of files in the directory

Once the command completed, to make sure all 509072 files existed, I used a find + wc cmd to count the number of files contained in the directory:

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f |wc -l
509072

real 0m1.886s
user 0m0.440s
sys 0m1.332s

It's interesting that using an ls command to count the files is less efficient than using find:

hipo@noah:/tmp/test$ time ls -1 |wc -l
509072

real 0m3.355s
user 0m2.696s
sys 0m0.528s

c) Benchmarking the different file deleting methods with time

– Testing delete speed of find

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f -delete
real 15m40.853s
user 0m0.908s
sys 0m22.357s

You see, using find to delete the files is neither too slow nor lightning quick.

– How fast is the perl loop at multitude file deletion?

hipo@noah:/tmp/test$ time perl -e 'for(<*>){((stat)[9]<(unlink))}'

real 6m24.669s
user 0m2.980s
sys 0m22.673s

Deleting my sample 509072 files took 6 mins and 24 secs. This is roughly 2.5 times faster than find! GO-GO perl 🙂
As you can see from the results, perl is a great and time saving way to delete 500 000 files.

– The approximate deletion speed of the for + rm bash loop

hipo@noah:/tmp/test$ time for i in *; do rm -f $i; done

real 206m15.081s
user 2m38.954s
sys 195m38.182s

You see the execution took 206 minutes and 15 seconds = almost 3 HOURS and a HALF!!!! This is extremely slow! But it works like a charm, as running the deletion didn't impact my normal laptop browsing. While the script was running I was mostly browsing through a few not so heavy (non flash) websites and doing some other stuff in gnome-terminal 🙂

As you can imagine, running a bash loop is a bit CPU intensive, but it puts less stress on the hard disk read/write operations. Therefore it is clearly a good practice to use it whenever deletion of many files on a dedicated server is required.

d) My production server file deleting experience

On a production server I tested only two of all the listed methods to delete my files. The production server where I tested is running Debian GNU / Linux Squeeze 6.0.3. There I had the task to delete a few million files.
The tested methods tried on the server were:

– The find . -type f -delete method.

– for i in *; do rm -f $i; done

The results from using the find -delete method were quite sad, as the server almost hung under the heavy hard disk load the command produced.

With the for loop all went smoothly. The files were deleted over a long, long time (like a few hours), but while it was running the server continued with no interruptions.

While the bash loop was running, the server load average kept at a steady 4.
Taking my experience in mind, if you're running a production server and you're still wondering which delete method to use to wipe some multitude of files, I would recommend you go the bash for loop + /bin/rm way. Yes, it is extremely slow, expect it to run for hours, but it puts not too much extra load on the server.

Using the PHP script will probably be slow and inefficient compared to both find and the bash loop. I didn't give it a try yet, but I suppose it will be either equal in time or at least a few times slower than bash.

If you have tried the php script and you have some observations, please drop a comment to tell me how it performs.

To sum it up:

Even though there are "hacks" to clean up such a messy directory full of a few million junk files, such a directory should never exist in the first place.

Frankly, keeping millions of files within the same directory is a very stupid idea.
Doing so will have a severe negative impact on the directory listing performance of your filesystem in the long term.

If you know better (more efficient) ways to delete a multitude of files in a dir, please share them in the comments.

How to take area screenshots in GNOME – Take quick area selection screenshots in G* / Linux and BSD

Thursday, March 15th, 2012

Quick Area screenshot in GNOME how to make quick area selection screenshots in Linux and FreeBSD gnome-screenshot shot

Often when you do something on your PC, you need to make a quick screenshot of a screen area. Yes, GNOME's feature to take a complete screenshot of the screen with Print Screen SysRQ and a consequent picture edit with GIMP is one way, but this is far from quick. This method of chopping an area out of a complete display screenshot usually takes from 40 secs to 1 minute to properly cut and save a selection of the whole picture.
Another common use that I love in GNOME is the ALT + Print Screen SysRQ key combination. alt + print scr sysrq is handy when taking a single window screenshot is desired. Anyways, often you only need to make a screenshot of a tiny area of the screen. Many people might think this is not possible currently in GNOME, but they would be wrong, as there are no impossible but only hard things to achieve on Linux / FreeBSD 😉

There are at least two ways of taking a quick area screen snapshot using a predefined command.

1. Taking quick area screenshot by using ImageMagick's import command

To use import you will need to have ImageMagick installed, the swiss army knife of command line image manipulation 😉
For an area screenshot with import, press ALT+F2 and type inside the Run Application box:

Screenshot GNOME run application GNU / Linux Debian ImageMagick import area screenshot

import -frame screenshot.png

Now make a selection of the exact screen area you would like to screenshot into the file screenshot.png
Note that the screenshot.png file will be saved by default in your home directory, as it is read from the $HOME shell variable:

hipo@noah:~$ echo $HOME
/home/hipo
hipo@noah:~$ ls -al screenshot.png
-rw-r--r-- 1 hipo hipo 4950 Mar 14 21:11 screenshot.png

You see my $HOME equals /home/hipo, therefore the just grabbed screenshot.png is saved there.

One downside of taking the screenshot with import is that the picture snapshot is not further editable, if it has to be further processed with GIMP or some other graphic editor program.

In the screenshot below I show you one screen area of my XMMS taken with the import -frame screenshot.png cmd:

XMMS Screen Area Screenshot import screenshot

Trying to open the screenshot.png file with GIMP displays the following error in GIMP:

PNG image message PNG the file specifies offset that caused the layer to be positioned outiside image GIMP screenshot

Not all area snapshots taken with import -frame create this issue; sometimes screenshots open in GIMP but only an area of the screenshot.png is visible in gimp.

Thankfully, there is a workaround for this issue: convert the import generated PNG picture to JPEG with ImageMagick's convert and then edit the .JPEG with GIMP etc.:

hipo@noah:~$ convert screenshot.png screenshot.jpg

Hence, to permanently work around it in case you intend to apply (GIMP) modifications, once the area snapshot is made instruct import to save its output picture as .jpeg, e.g.:

hipo@noah:~$ import -frame screenshot.jpeg

2. Taking quick area screenshot using gnome-screenshot cmd

Once again invoke the GNOME command launcher by pressing Alt+F2 (holding Alt and pressing F2) and type in the launch box:

gnome-screenshot -a

gnome-screenshot Run Application in GNOME 2.30 on Debian GNU / Linux

Below is a small area from my desktop, chopped out with gnome-screenshot 🙂

GNOME desktop area chop screenshot with gnome-screenshot on my home Debian Linux

You see on the above screenshot a tiny (picture) icon of one of the greatest, if not the greatest Bulgarian saint – saint John of Rila. St. John lived as a hermit for many years in the Rila mountain and by God's grace possessed an incorruptible body. His incorruptible body is still kept and can be venerated in the Rila Monastery. The monastery is located 160 km from Bulgaria's capital city Sofia.

St. John's first established Bulgarian monastery, the Rila Monastery, is currently the biggest functioning monastery in Bulgaria. The saint's monastery is considered one of the most holy places in Bulgaria. If you travel or plan a holiday in Bulgaria, I warmly recommend you go there and venerate the saint's incorruptible relics.

3. Binding keys to allow quick area screenshot taking with gnome-screenshot in GNOME

This configuration is for GNOME 2.x and is tested to work on my Debian (Squeeze 6.0), GNOME ver. 2.30.2; it should work in earlier Ubuntu versions shipped with GNOME 2.2.xx too. As I've read on the Internet it works well with Ubuntu 10.10. Binding a key for screenshot area grab should be working properly also on any GNOME 2.2.x supporting OS, including the BSD family OSes (FreeBSD, OpenBSD, NetBSD).

a) setting gnome-screenshot key binding for interactive screenshot area grab

Navigate the mouse cursor to the GNOME main menus panel at the top left, where you see (Applications, Places, System).
Therein use the menus:

System -> Preferences -> Keyboard Shortcuts -> Add ->

Alternatively if you prefer you can directly invoke the Keyboard Shortcuts configuration with command:

hipo@noah:~$ gnome-keybinding-properties

Further on, assign a shortcut by filling in something like:

name: grab-screen-area
command: gnome-screenshot -i -a

GNOME add keyboard shortcut map key for area interactive screenshot

Press Apply and next map a key to the newly defined key binding:

GNOME add keyboard shortcut map key

Under the Shortcut column click on Disabled and assign some key combination to invoke the cmd, for example Ctrl+F4

The command gnome-screenshot -i makes gnome-screenshot show an interactive take-screenshot dialog like the one in the screenshot below.

GNOME screenshot interactive screenshot select area grab shot

b) creating gnome-screenshot -a area screenshot key binding for quick area screenshots "on the fly"

The procedure is precisely the same as with adding the interactive screenshot; under the Keyboard Shortcuts GNOME config assign a new key binding by pressing the Add button and adding:

name: grab-screen-area1
command: gnome-screenshot -a

Once again, in the Shortcut column, in the line starting with grab-screen-area1, add your desired key switch. I personally like Ctrl+Print Screen SysRQ as it is close to the default GNOME key combination assigned for taking a screenshot of a window, Alt+Print SysRq

It was logical that this key binding should work and a direct selection mouse cursor should appear once the assigned key combination is pressed, however for some reason this is not working (hmm, maybe due to a bug)??

Thankfully it is always possible to substitute the just assigned gnome-screenshot -a key binding with import -frame /home/hipo/Desktop/screenshot.png

If you have followed my article literally so far and you did try to place a bind for gnome-screenshot -a, modify grab-screen-area1 to be something like:

name: grab-screen-area1
command: import -frame /home/hipo/Desktop/screenshot.png

Modify the path /home/hipo/Desktop/screenshot.png to wherever you prefer the region screen capture to be stored.

c) bind keys for delayed screenshot

This is also a handy binding, especially if every now and then you need to make screenshots of the screen after a few seconds' delay.
Add one more keyboard shortcut;

name: grab-screen-area2
command: gnome-screenshot -d 5

Assign a key to make a screenshot of the active display after a delay of 5 seconds. I prefer Ctrl+F5.

From then on, every time you would like to make an area screenshot, just use the defined keys:

Ctrl+F4 - will prompt you interactively for the precise type of screenshot you would like to take
Ctrl+Print SysRQ - will prompt you for a direct area to select and once selected will immediately screenshot it
Ctrl+F5 - would do delayed screenshot of entire screen after a delay of 5 seconds

4. Adding border and drop shadow effects with gnome-screenshot

Actually, there are plenty of interesting things to do with screenshots which I never thought were possible.
While reading gnome-screenshot's man page, I stumbled upon an interesting argument:

-e, --effect=EFFECT,
Add an effect to the outside of the screenshot border. EFFECT can be ``shadow'' (adding drop shadow), ``border'' (adding
rectangular space around the screenshot) or ``none'' (no effect). Default is ``none''.

This would have been a nice feature, but as of the time of writing this article it is unfortunately not working in GNOME 2.30.2. I'm not sure if this is a local Debian bug, however I suspect that on other Linux distributions with a different GNOME build configuration this feature might be working well. My guess here is that the drop shadow effect and border effect are not working because gnome-screenshot was compiled without (support for ImageMagick?).
Anyways, the way the feature is supposed to work is by invoking the commands:

hipo@noah:~$ gnome-screenshot --border-effect=shadow
hipo@noah:~$ gnome-screenshot --border-effect=border

The same basic effects are also available through GIMP's menus:

Image -> Effects

5. Setting default behaviour of gnome-screenshot in gconf-editor GConf (Gnome config registry db)

Experienced GNOME users should already know about the existence of gconf-editor and the gnome registry database. For those who don't and are coming from an MS-Windows background, gconf-editor is the GNOME (graphical environment) equivalent of the Microsoft Windows registry regedit command.

gconf-editor can be used to tune the way screenshots are taken by default. To do so, launch the gconf-editor cmd and navigate to the sub-structure:

/ -> apps -> gnome-screenshot

gconf-editor GNOME screenshot border effect none default gnome-screenshot gnome behaviour

The settings in the above screenshot are the configuration used by default by gnome-screenshot right after install.
You can play with the options to change the default way a Print Screen SysRQ key press will take screenshots.
Here is one example of changing the default gnome-screenshot GNOME behaviour:

GConf Editor GNOME screenshot, border effect drop shadow and include border option set on Linux Debian

As you can see in the above screenshot, I've changed my default gnome-screenshot snap taking to include a drop shadow effect:

Name                | Value
border_effect       | shadow
include_border      | (tick on)
last_save_directory | file://home/hipo/Desktop

As you see, you can also control where gnome-screenshot saves its screenshots by default; out of the box they are saved in $HOME/Desktop.
If you prefer some custom directory to contain only screenshots, for instance $HOME/Screenshots, create the directory:
hipo@noah:~$ mkdir ~/Screenshots

and then change the value for last_save_directory gconf var:

last_save_directory | file://home/hipo/Screenshots
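If you prefer the command line over clicking in gconf-editor, the same key can (I believe) also be changed with gconftool-2; a small sketch, assuming the key path shown in gconf-editor above:

hipo@noah:~$ gconftool-2 --type string --set /apps/gnome-screenshot/last_save_directory "file://home/hipo/Screenshots"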

Once the settings are applied, screenshots taken with the Print Screen SysRQ key will be made with the Shadow Border effect and saved in /home/hipo/Screenshots

Strangely enough, changing the gnome-screenshot default screenshotting values to include screenshot effects like drop shadow or a screenshot border works just fine this way.
Even though gnome-screenshot --border-effect=shadow and gnome-screenshot --border-effect=border don't directly affect the current screenshot to be made, I later noticed that typing these two commands in the gnome-terminal does change the border settings for the screenshot border in gconf-editor.

If you enjoyed this article and you intend to become "a professional screenshotter" :), you might also enjoy my two other articles:

Happy screenshotting 😉

How to make wordpress Update Plugins prompt to permanently store password / Get rid of annoying updates wordpress prompt

Thursday, February 2nd, 2012

I'm managing a few wordpress installations which require me to type in:
Hostname, FTP Username and FTP Password, every single time a plugin update is issued and I want to upgrade to the new version.
Below is a screenshot of this annoying behaviour:

How to get rid of update plugins wordpress username password prompt

As you can see in the above screenshot, there is no way through the Update Plugins web interface to store the password permanently. Hence the only option to store it permanently is to manually edit wp-config.php (the file located in the wordpress docroot, e.g. /path/to/wordpress/wp-config.php). Inside the file find the line:

define ('WPLANG', '');

Right after it, put code similar to:

define('FS_METHOD', 'ftpsockets');
define('FTP_BASE', '/path/to/wordpress/');
define('FTP_CONTENT_DIR', '/path/to/wordpress/wp-content/');
define('FTP_PLUGIN_DIR', '/path/to/wordpress/wp-content/plugins/');
define('FTP_USER', 'Username');
define('FTP_PASS', 'Password');
define('FTP_HOST', 'localhost');

Change the above defines:
/path/to/wordpress/ – with your wordpress location directory.
Username and Password – with your respective FTP username and password. The localhost value of FTP_HOST assumes the FTP server runs on the same machine as the webserver; change it if your FTP host differs.

That's all, from now onwards the User/Password prompt will not appear anymore. Consider there is a security downside to storing the FTP User/Pass in wp-config.php: if someone is able to intrude into the wordpress install and access the documentroot of the wordpress install, he will be able to obtain the ftp user/pass and log in to the server directly via the FTP protocol.
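To reduce the risk a bit, it is also a good idea to make sure wp-config.php is not readable by other system users (a small sketch; the exact owner and group depend on how your webserver / PHP run, so adjust to your setup):

chmod 640 /path/to/wordpress/wp-config.php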

How to add Apache 301 redirect to VirtualHost in Apache

Sunday, September 25th, 2011

I've had two domain names which were pointing to the same website content.
As one can read in any SEO guide around, this is a really bad practice, as search engines automatically think there is duplicate site content and this automatically has a negative effect on the site pagerank.
To deal with a situation where multiple domains are pointing to the same website, it is suggested by many SEO specialists that a 301 redirect is created from all the domain websites to the single website domain which will open the actual website.

Making the 301 redirect from the sample domain my-redirect-domain.com to www.mydomain.com can be done with a virtualhost definition in either httpd.conf or the respective file containing the domain virtualhost definitions.
Here is the exact VirtualHost code I use to make a 301 redirect.

<VirtualHost *>
ServerAdmin support@mydomain.com
ServerName my-redirected-domain.com
ServerAlias my-redirected-domain.com www.my-redirected-domain.com
RewriteEngine on
RewriteRule ^/(.*) http://www.mydomain.com/$1 [L,R=301]
</VirtualHost>

After placing the VirtualHost redirect, an apache restart is required.
Further on, when a Google or Yahoo Bot visits the website and makes any request to my-redirect-domain.com or www.my-redirect-domain.com, it will be redirected with a 301 returned code to www.mydomain.com
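A quick way to check that the redirect really returns a 301 and keeps the requested path is to look at the response headers, for example with curl (a small sketch using the example domains from above; the real output will include more headers):

curl -I http://www.my-redirected-domain.com/support/
HTTP/1.1 301 Moved Permanently
Location: http://www.mydomain.com/support/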

This kind of redirect however can have a negative impact on the Apache CPU use (performance), especially if my-redirect-domain.com is a high traffic domain. This is because the redirect is done with mod_rewrite.

Therefore it might be better on high traffic domains to create the redirect without mod_rewrite, by using the Redirect directive in a vhost like:

<VirtualHost *>
ServerAdmin support@mydomain.com
ServerName my-redirected-domain.com
Redirect 301 / http://www.mydomain.com/
</VirtualHost>

The difference when using the plain Apache Redirect directive like in the above example is that it only does simple prefix matching: passed URLs like, let's say, http://www.my-redirected-domain.com/support/ get the remainder of the path appended and end up at http://www.mydomain.com/support/, but you lose the flexibility of mod_rewrite, so any more complex rewriting or dropping of URL components still has to be done the RewriteRule way.