Posts Tagged ‘file formats’

10 must-know and extremely useful Linux commands for every sys admin

Tuesday, July 30th, 2013

There is plenty of precious command line knowledge every Linux admin should have. In this article I have simply collected some commands I use often and find interesting to know. The commands below are nothing special and many experienced sys admins will already know them. However, I'm pretty sure novice admins and budding Linux enthusiasts will find them useful. I know there is much more to be said on the topic, so anyone is most welcome to share the commands they use.
 
1. Delete all files in directory except files with certain file extension

A handy trick is to delete all files in a directory except those with certain file extensions. To do so:

root@linux:~# rm !(*.c|*.py|*.txt|*.mp3)
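
Note that !( ) is a bash "extended globbing" pattern; if the shell reports a syntax error on the command above, enable extglob first and then re-run it:

root@linux:~# shopt -s extglob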

2. Write command output to multiple files (tee)

The normal way to write to a file is with the redirection operators ">" (overwrite the file) or ">>" (append to the file). However, when you need to write output to multiple files at once, there is a command called tee, i.e.:

root@linux:~# ps axuwwf | tee file1 file2 file3
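
By default tee overwrites the listed files; to append to them instead (like ">>"), pass it the -a flag:

root@linux:~# ps axuwwf | tee -a file1 file2 file3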

3. Search for text in a plain text file, printing a number of lines after each match

If you need to print a given number of lines (here 5) after each match of "search_text", use:

root@linux:~# grep -A 5 -i "search_text" text_file.txt
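
Similarly, -B prints a number of lines before each match and -C prints lines of context both before and after it:

root@linux:~# grep -B 5 -i "search_text" text_file.txt
root@linux:~# grep -C 5 -i "search_text" text_file.txt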

4. Show all files where text string is matched with GREP (Search for text recursively)

Searching for a text match is extremely helpful in system administration. I use grep's recursive capability almost on a daily basis:

root@websrv:/etc/dovecot# grep -rli text *
conf.d/10-auth.conf
conf.d/10-mail.conf
dovecot.conf

-l instructs grep to print only the names of files containing the string, -r stands for recursive search, and -i makes the match case-insensitive (the text is found no matter whether it is written in capital or small letters).
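
If you want to see the matching lines themselves together with their line numbers rather than just the file names, swap -l for -n:

root@websrv:/etc/dovecot# grep -rni text *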

5. Finding files and running a command on each file matched

On Linux, the find command makes it possible to search for files and run a command on each file matched.
Let's say we want to look in the current directory for all the .swp (temporary) files so often left behind by VIM and wipe them out:

root@linux:~# find . -iname '*.swp*' -exec rm -f {} \;
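
With GNU find the same cleanup can be done without spawning rm for every file, using the built-in -delete action:

root@linux:~# find . -iname '*.swp*' -delete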

6. Convert DOS line endings (EOL) to UNIX with sed

If you happen not to have the dos2unix command installed and you need to translate DOS line endings (\r\n – carriage return, new line) to UNIX ones (\n – new line only), you can do it with sed:

root@linux:~# sed 's/.$//' filename
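
With GNU sed you can also strip only the trailing carriage return and edit the file in place rather than printing to stdout:

root@linux:~# sed -i 's/\r$//' filename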

7. Remove duplicate lines from a file with awk

cat test.txt
test
test
test duplicate
The brown fox jump over ...
Richard Stallman rox

root@linux:~# awk '!($0 in array) { array[$0]; print }' test.txt
test
test duplicate
The brown fox jump over ...
Richard Stallman rox

To remove duplicate lines from all files in a directory, the same can easily be scripted with a bash for loop:

root@linux:~# for i in *; do
awk '!($0 in array) { array[$0]; print }' "$i" > "$i.dedup" && mv "$i.dedup" "$i";
done
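
If preserving the original line order is not important, sort -u produces the same deduplicated result:

root@linux:~# sort -u test.txt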

8. Print only selected columns from text file

To print only the text in the 1st and 6th columns of a plain text file with awk:

root@linux:~# awk '{print $1,$6;}' filename.txt

To print all existing users on the system together with their login shell (the 1st and 7th fields of /etc/passwd):

root@linux:~# awk -F':' '{print $1, $7}' /etc/passwd
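
The same user/shell listing can also be produced with cut, splitting on the ':' delimiter and picking fields 1 and 7:

root@linux:~# cut -d: -f1,7 /etc/passwd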

9. Open file with VIM text editor starting from line

I only use vim for console text editing, and I often have to fix a file that fails to compile at a certain line number, so I open it with vim directly at the line I need. To open a file and set the cursor to line 35:

root@linux:~# vim +35 /home/hipo/current.c
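
vim can also open a file with the cursor placed on the first line matching a search pattern ("main" below is just an example pattern):

root@linux:~# vim +/main /home/hipo/current.c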

10. Run the last command with the "!!" bash shortcut

Let's say the last command you ran was uname -a:

root@websrv:/home/student# uname -a
Linux websrv 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1 i686 GNU/Linux

To re-run it simply type "!!":

root@websrv:/home/student# !!
uname -a
Linux websrv 3.2.0-4-686-pae #1 SMP Debian 3.2.46-1 i686 GNU/Linux

root@websrv:/home/student#
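
Two related bash history shortcuts are worth knowing: "!$" expands to the last argument of the previous command, and "sudo !!" re-runs the previous command with sudo (handy when a command fails for lack of privileges). For example (the file path below is just an illustration):

root@websrv:/home/student# ls -l /etc/dovecot/dovecot.conf
root@websrv:/home/student# vim !$
vim /etc/dovecot/dovecot.conf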

 

Create PDF file from (png, jpg, gif ) images / pictures in Linux

Tuesday, September 14th, 2010

I recently received a number of images in JPEG format as feedback on a project plan put together by a team I'm participating in at the university where I study.

Somebody from my project group scanned or photographed each page of the hard copy feedback and sent the pictures to my mail.

I received 13 images, so I had to open them one by one to read the feedback on each page of the Project Plan. This was really unhandy, so I decided to try generating a single PDF file from all my picture files.

Thankfully it turned out to be very easy and trivial using the good old ImageMagick.

In order to complete the task of generating one PDF from a number of pictures, all I did was:

1. Switch to the directory where I saved all my jpeg images

debian:~# cd /home/hipo/Desktop/my_images_directory/

2. Use the convert binary, part of the imagemagick package, to generate the actual PDF file from the group of images

debian:~# convert *.jpg outputpdffile.pdf

If the images are numbered and cover many scanned pages, you can of course pass all the images to the /usr/bin/convert binary explicitly, for instance:

debian:~# convert 1.jpg 2.jpg 3.jpg 4.jpg 5.jpg outputpdffile.pdf

Even though in my case I had to produce the PDF from multiple JPEG (JPG) pictures, convert is not restricted to JPEG input; you can also build the PDF from other graphical file formats.

For instance, to convert multiple PNG pictures to a single PDF file, the command is absolutely the same except for the file extension of the graphic files, e.g.:

debian:~# convert 1.PNG 2.PNG 3.PNG 4.PNG 5.PNG OUTPUT-PDF-FILE.PDF
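
If the resulting PDF turns out too big, convert also accepts a -quality option (0-100) to control how strongly the embedded images are compressed; the value below is just an example:

debian:~# convert -quality 75 *.jpg outputpdffile.pdf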

In the end I was quite happy to see that Linux is so flexible and that such trivial tasks can be completed so easily.