Posts Tagged ‘tree’

Delete empty files and directories under directory tree in Linux / UNIX / BSD

Wednesday, October 21st, 2020

delete-empty-directories-and-files-freeup-inodes-by-empty-deleting-directoriers-or-files

Sometimes you end up with a multitude of empty files on your server. The reasons can vary: /tmp may get overflown with session store files on a busy website, a bug in badly written PHP / Python / Perl / Ruby code may be at fault, or a Content Management System ( CMS ) website based on WordPress / Joomla / Drupal / Magento / Shopify etc. may have a broken plugin that keeps filling some specific directory with plenty of meaningless empty files that never get wiped out if you don't take care of them. It can also happen if you offer your users a public file sharing service such as WebFTP and one of the local (hacked) UNIX user accounts decides to make you look like a fool by running an endless loop that creates files until your small server filesystem of a few terabytes gets filled up with useless empty files, and due to the exhausted inode count on the filesystem the services running on the machine become dysfunctional …

Hence on servers with shared users, or simply on webservers, it is always a good idea to keep an eye on the used inode count of the filesystems, and in case you notice a sudden increase of used FS inodes, to check, as part of the investigation, the amount of empty files on the system's attached SCSI / SSD / SAS or whatever drive.
 

1. Show a list of free inodes on server


Getting the inode count once logged in is done with the df command:

root@linux-server:~# df -i
Filesystem        Inodes   IUsed     IFree IUse% Mounted on
udev             2041464     516   2040948    1% /dev
tmpfs            2046343    1000   2045343    1% /run
/dev/sdb2       14655488 1794109  12861379   13% /
tmpfs            2046343       4   2046339    1% /dev/shm
tmpfs            2046343       8   2046335    1% /run/lock
tmpfs            2046343      17   2046326    1% /sys/fs/cgroup
/dev/sdc6        6111232  6111232   0   100% /var/www
/dev/sda1       30162944 3734710  26428234   13% /mnt/sda1
/dev/sdd1      122093568 8011342 114082226    7% /backups
tmpfs            2046343      13   2046330    1% /run/user/1000

 

2. Show all empty files and directories count

 

### count empty directories ###
root@linux-server:~# find /path/ -empty -type d | wc -l

### count empty files only ###
root@linux-server:~# find /path/ -empty -type f | wc -l

 

3. List all empty files in directory or root dir

As you can see in the above example, the inodes on one of the server filesystems (/dev/sdc6 mounted on /var/www) are depleted.
The next step is to analyze what is happening in that web directory and whether a multitude of empty files is eating up all the inodes.
 

root@linux-server:~# find /var/www -type f -empty > /root/empty_files_list.txt


As you can see I'm redirecting the output to a file, because in the case of many empty files I would have to wait for ages and the console would get filled up with data I would be unable to analyze easily.

If in your case the problem is another directory, let's say the root dir:

root@linux-server:~#  DIR='/';
root@linux-server:~# find $DIR -type f -empty > /root/empty_files_list.txt

4. Getting empty directories list


In some cases it might be that the server is overflown with empty directories instead. This is also a thing some malicious cracker guy could do to your server if he can't root the server with some exploit but wants to bug you and 'show off his script kiddie 3l337 magic tricks' :). This is easily done with a Perl / Python or Bash shell endless loop inside which millions of randomly named empty directories, instead of files, are created.

To look for empty directories hence use:

root@linux-server:~# DIR='/home';
root@linux-server:~# find $DIR -type d -empty > /root/empty_directories_list.txt

 

5. Delete all empty files only to clean up inodes

Deleting the empty files will automatically free up the inodes they occupy. To delete them:

root@linux-server:~# cd /path/containing/multiple/empty-dirs/
root@linux-server:~# find . -type f -empty -exec rm -fr {} \;

 

6. Delete all empty directories only to clean up inodes

root@linux-server:~# find . -type d -empty -exec rm -fr {} \;

 

7. Delete all empty files and directories to clean up inodes

root@linux-server:~# cd /path/containing/multiple/empty-dirs/
root@linux-server:~# find . -empty -delete

 

8. Use find + xargs to delete if the count of files to delete is too high

root@linux-server:~# find . -empty | xargs rm -r
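Note that the plain find | xargs pipe will break on file names containing spaces or newlines. If you are on Linux (GNU find / xargs), a safer variant, sketched here under that assumption, is to use null-delimited output:

root@linux-server:~# find . -empty -print0 | xargs -0 rm -r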


That's all folks! Enjoy your filesystem, now that it has gotten back the inodes lost to the junk empty files and directories.

Happy cleaning  🙂

Cracking zip protected password files on GNU/Linux and FreeBSD

Wednesday, October 5th, 2011

crack-zip-freebsd

It's not very common, but sometimes it happens that you have to crack some file downloaded from thepiratebay.com or some other big torrent tracker. An example scenario would be downloading a huge words dictionary (e.g. rainbow tables), which was protected by the author with a password and zipped.

Fortunately Marc Lehmann developed a piece of software called fcrackzip which is capable of brute forcing zip file passwords straight on UNIX-like operating systems (GNU/Linux, FreeBSD).

fcrackzip is available from the package repositories on Debian and Ubuntu Linux to install via apt:

linux:~# apt-get install fcrackzip
...

fcrackzip is also available on FreeBSD via the ports tree and can be installed with:

freebsd# cd /usr/ports/security/fcrackzip
freebsd# make install clean

On Debian it's worthy to have a quick look on the README file:

linux:~# cat /usr/share/doc/fcrackzip/README
See fcrackzip.txt (which is derived from the manpage), or fcrackzip.html

There is a web page with more information at
http://lehmann.home.ml.org/fcrackzip.html or
http://www.goof.com/pcg/marc/fcrackzip.html

A sample password-protected .zip file is included as "noradi.zip". It's
password has 6 lower case characters, and fcrackzip will find it (and a
number of false positives) with

fcrackzip -b -c a -p aaaaaa ./noradi.zip

which will take between one and thirty minutes on typical machines.

To find out which of these passwords is the right one either try them out
or use the --use-unzip option.

Marc

Cracking the noradi.zip password protected sample file on my dual core 1.8 GHz box with 2 GB of RAM took 30 seconds.

linux:~# time fcrackzip -u -b -c a -p aaaaaa noradi.zip

PASSWORD FOUND!!!!: pw == noradi

real 0m29.627s
user 0m29.530s
sys 0m0.064s

Of course the sample password set for noradi.zip is pretty trivial; with more complex passwords cracking can sometimes take up to 30 minutes, an hour or much longer, it all depends on the specific case. But at least now we, the free software users, have a new tool in the growing arsenal of free software programs 😉

Here are the options passed to the above fcrackzip command:

-u : try to decompress the file with each candidate password using unzip. This is necessary to precisely find the archive password, otherwise fcrackzip just prints a number of possibly matching passwords and you have to try each of them one by one. Note that this option depends on a working unzip being installed.

-c a : use the lower case character set (a-z) for the generated passwords (other charset specifiers are A for upper case and 1 for digits).

-b : select brute force mode, which tries all possible combinations of the specified characters.

-p aaaaaa : init-password string; with six characters the search starts from a password 6 characters long.

fcrackzip is partly written in assembler and is thus generally fast. To reduce the CPU load fcrackzip puts on the processor, it is also capable of using an external words dictionary file via the option:

-D : dictionary mode; the passwords are read from a file (given with -p), which should contain one word per line and be alphabetically sorted beforehand, let's say with sort.
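For instance, a dictionary-mode invocation would look something like the sketch below; the wordlist and archive file names are just assumed examples:

linux:~# fcrackzip -u -D -p wordlist.txt file.zip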

fcrackzip also supports brute forcing multiple files in one run, for example if you have 10 password protected zip files it can try to brute force the passwords of all of them.

As of the time of writing, fcrackzip has reached version 1.0 and seems to be pretty stable. Happy cracking.
Just to make sure fcrackzip's source is not lost somewhere down the line in the long future to come, I've created a fcrackzip download mirror here

On God and computers and how computers copy God’s creation

Friday, October 25th, 2013

People are copying Gods creation the-tree model people don't invent they copy

I've been thinking for a long time about how computers and the technology around them copy God's creation. This kind of thought popped up in my mind right after I became a believer. As I have a strong IT background, I tend to view things in the world through the prism of my IT knowledge. If I have to learn a new science, my mind tends to compare how it translates to my previous knowledge obtained in IT. Probably some other people out there have the same kind of thinking? I'm not sure whether this is a geek way of thinking or whether it is usual and people from other fields of science also tend to understand the world through the accumulated knowledge of the profession they practice.

Anyways, since the days I came to believe in Jesus Christ, I also started to compare my knowledge so far with what I've read in the Holy Bible and in the book of The Lives of the Saints (which, by the way, is unknown to most of the protestant world). It is very interesting that if you look deeply into how all Information Technology knowledge is organized, you can see how computers resemble the visible God's creation. In reality I came to the realization of how modern man deceives himself. We think that with every new modern technology we have achieved something new and revolutionary which didn't exist before. But is it really true?

Let's take some technology like Microsoft Active Directory (using LDAP) for example. LDAP structures data in a tree form where each branch can have a number of sub-branches (variables). In reality it appears LDAP is not new; it is a translation of previously existing knowledge in the universe, served in a different kind of form. Let me give some other examples; let's pick the Internet. We claim it is a new invention and from a human point of view it is. But if we look at it through the prism of the existing created world, it is just an interconnection between "BIG DATA". In the real world it is absolutely the same: the latest research already tells us that everything in the world is data and all data in the world is interconnected. So obviously the Internet is another copy of the wonderful things God created in the material world and, for those who can accept it, in the spiritual world. Many hard-core atheists will argue that we copy things in the world but that the whole material world is just a coincidence. But having in mind that the world is so perfectly tuned "for living beings to exist", it is near to impossible that all this life and perfection emerged at random.

The tree structure model exists everywhere in operating systems and programming. We can see it in the hierarchy of a file system, we can see it in hashes (arrays) in programming, and all of this just copies an over-simplified model of a real tree (which we know well from Biology is innumerable times more complex). Probably the future of computing is in Biotechnologies and people's attempts to copy how living organisms work. We know well from science fiction and cyberpunk that the future should be mostly in Bio-technologies, but even this high-tech next generation technology will be based on existing things. Meaning man doesn't invent something truly different; he copies a model and then modifies the model according to the environment, or just makes a combination of a number of models to achieve the next one.

Sorry for the rant post, but I've been thinking about this for quite a while and I thought I should spit it out here; I'm interested to hear what people think and what the arguments for or against my thesis are.

How to count how many files are in a directory with find on Linux

Tuesday, February 21st, 2012

how to count how many directories are on your linux server

Did you ever need to count how many files there are in a directory?
Knowing the concrete number of files in a directory is not a seldom task, and it is very useful, especially in scripts or simply for the sake of learning.

The quickest and maybe the easiest way to count all files in a directory in Linux is with a combination of find and wc commands:

Here is how:

linux:~# cd ascii
linux:~/ascii# find . -type f -iname '*' -print |wc -l
407

This will find and list all matched files in any directory and subdirectories, print them out and count them with wc command.
The -type f argument instructs find to look only for files.

Another helpful variant of finding and listing all files in a directory and its subdirectories is to list and count all files with a certain file extension under a directory. For example, let's count all text files (.txt) contained in a directory and all levels of sub-directories:

linux:~/ascii# find . -type f -iname '*.txt' -print |wc -l
401

If you need to check the number of files in a directory for multiple directories on a server and you're aiming to do it efficiently, issuing the above find … | wc code will definitely not be a good choice. If used, it will generate heavy load on the system and, along with that, will take ages to complete when issued on directories containing a large number of files.

Thankfully, if efficiency is targeted, there is a command written in C called tree which is more efficient than find.
To count the number of files in a dir using tree instead:

linux:~# cd ascii
linux:/ascii# tree | tail -n 1
32 directories, 407 files

By default tree prints info for both the number of found files and directories.
To print out only the files matched, awk comes handy, e.g.:

linux:/ascii# tree |tail -n 1| awk '{ print $3 }'
407

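One caveat worth mentioning: unlike find, tree skips hidden (dot) files and directories by default, so on directories containing dotfiles its count can come out lower than the find counts above. Assuming you want them included, pass -a:

linux:/ascii# tree -a | tail -n 1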
To list only the number of files in a directory, without descending into its sub-directories, a combination of ls + wc is also possible:

linux:~/ascii# ls -l | grep ^- | wc -l
68

The result the above command produces may be one more than the real number of files, as depending on your ls defaults it can also count the ".." directory entry (in UNIX / Linux everything is a file).

A short one-liner script that calculates the number of files correctly by subtracting 1, and hence presents the correct result, is like so:

linux:~/ascii# var=$(ls -l | grep ^- | wc -l); var=$(($var - 1)); echo $var

ls can also be used to calculate the number of 1st-level sub-directories under a certain directory, for instance:

linux:~/ascii# ls -l |grep ^d|wc -l
25

You see the ascii directory has 25 subdirectories in its 1st level.

To check symlinks under a directory with ls the command would be:

linux:~/ascii# ls -l | grep ^l | wc -l
0

Note that the above 3 ls | grep … examples will not work properly if the directory contains files with SUID or some other special properties set.
Hence to get the same 3 results for regular files, directories and symbolic links, a one-liner similar to the one below can be used instead:

linux:~/ascii# for t in files links directories; do echo `find . -type ${t:0:1} | wc -l` $t; done 2> /dev/null
407 files
0 links
33 directories

This will show statistics about all files, links and directories across all directory sub-levels.
In case you need to count only the files, links and directories at the top level, without directory recursion, use:

linux:~/ascii# for t in files links directories; do echo `find . -maxdepth 1 -type ${t:0:1} | wc -l` $t; done 2> /dev/null
68 files
0 links
26 directories

Anyways the above bash loop will be slow for directories containing thousands of files. For better performance, the equivalent of the above bash loop rewritten in perl would be:

linux:~/ascii# ls -l |perl -e 'while(<>){$h{substr($_,0,1)}+=1;} END {foreach(keys %h) {print "$_ $h{$_}\n";}}'
- 68
d 25
t 1
linux:~/ascii#
In any case the most preferable and efficient way to count files and directories is by using the tree command.
In my view, always using the tree command instead of code "hacks" is the smart idea.

In Slackware the tree command is part of the base install; on Debian and CentOS Linux, tree is not part of the base system and requires installation via apt / yum, e.g.:

debian:~# apt-get --yes install tree
...

[root@centos:~ ]# yum -y install tree

Happy counting 😉

swap_pager_getswapspace: failed, MySQL troubles on FreeBSD 7.2 cause and solution

Tuesday, May 3rd, 2011

Every now and then my FreeBSD router's dmesg ( /var/log/dmesg.today ) logs get filled with error messages like:

pid 86369 (httpd), uid 80, was killed: out of swap space
swap_pager_getswapspace(14): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(11): failed
swap_pager_getswapspace(12): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(14): failed
swap_pager_getswapspace(16): failed
swap_pager_getswapspace(8): failed

Using swapinfo while the swap_pager_getswapspace(16): failed messages were being logged, I figured out that swap memory over-use is definitely the bottleneck behind the troubles. To find this I used the command:

freebsd# swapinfo
Device 1K-blocks Used Avail Capacity Type
/dev/ad0s1b 49712 45920 3792 92% Interleaved

After some investigation, I’ve figured out that the MySQL server is causing the kernel exceeded swap troubles.

My current MySQL server version is installed from the ports tree, i.e. I'm using the bsd port /usr/ports/databases/mysql51-server/ and it appears to work just fine.

However I have noticed that the mysql-server is missing a my.cnf file, which means the mysql server is running with some kind of default configuration.

Strangely in the system process list it appeared it is using a default my.cnf file located in /var/db/mysql/my.cnf

Below you see the paste from the ps command:

freebsd# ps axuww | grep -i my.cnf | grep -v grep
mysql 7557 0.0 0.1 3464 1268 p1 I 12:03PM 0:00.01 /bin/sh /usr/local/bin/mysqld_safe --defaults-extra-file=/var/db/mysql/my.cnf --user=mysql --datadir=/var/db/mysql --pid-file=/var/db/mysql/pcfreak.pid
mysql 7589 0.0 5.1 93284 52852 p1 I 12:03PM 0:59.01 /usr/local/libexec/mysqld --defaults-extra-file=/var/db/mysql/my.cnf --basedir=/usr/local --datadir=/var/db/mysql --user=mysql --pid-file=/var/db/mysql/pcfreak.pid --port=3306 --socket=/tmp/mysql.sock

Nevertheless, even though the sql server was started with it, the file /var/db/mysql/my.cnf did not exist! This was really weird for me, as I'm used to having a default my.cnf from my previous experience with Linux servers.

Thus the next logical thing I did was to create a my.cnf conf file in order to have a proper limiting configuration for the sql server.

The FreeBSD my.cnf skeleton files are found in /usr/local/share/mysql/; here are the five files one can use as a starting basis for further configuration of the mysql-server:

freebsd# ls -al /usr/local/share/mysql/my-*.cnf
-r--r--r-- 1 root wheel 4948 Aug 12 2009 /usr/local/share/mysql/my-huge.cnf
-r--r--r-- 1 root wheel 20949 Aug 12 2009 /usr/local/share/mysql/my-innodb-heavy-4G.cnf
-r--r--r-- 1 root wheel 4924 Aug 12 2009 /usr/local/share/mysql/my-large.cnf
-r--r--r-- 1 root wheel 4931 Aug 12 2009 /usr/local/share/mysql/my-medium.cnf
-r--r--r-- 1 root wheel 2502 Aug 12 2009 /usr/local/share/mysql/my-small.cnf

I have chosen to use my-medium.cnf as a skeleton to tune up, as my server is not a high iron one, e.g. the host I run mysql on is a simple dual core 1.2 GHz system.

Further on I copied the /usr/local/share/mysql/my-medium.cnf to /var/db/mysql/my.cnf e.g.:

freebsd# cp -rpf /usr/local/share/mysql/my-medium.cnf /var/db/mysql/my.cnf

As a next step, to properly tune up the default values of the newly copied my.cnf for my specific server, I used the Tuning-Primer MySQL tuning script.

Using tuning-primer.sh is really easy, as all I did was download it, launch it and follow the script's suggestions to correct some of the values already in my.cnf.
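For completeness, here is a minimal sketch of fetching and running it with FreeBSD's native fetch tool; the download URL is the one the script was historically distributed from and may well have changed since, so treat it as an assumption:

freebsd# fetch http://www.day32.com/MySQL/tuning-primer.sh
freebsd# chmod +x tuning-primer.sh
freebsd# ./tuning-primer.sh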

I have finally ended up with the following my.cnf after using tuning-primer.sh to optimize mysql server to work with my bsd host

Now I really hope the shitty swap_pager_getswapspace: failed errors would not haunt me once again by crashing my server and causing mem overheads.

Still I wonder why the port developer Alex Dupre – ale@FreeBSD.org – chose not to provide the default mysql51-server port with some kind of my.cnf file. I hope he had a good reason.

How to mount NTFS Windows XP filesystem on FreeBSD, NetBSD, OpenBSD

Friday, May 11th, 2012

Mounting NTFS hdd partitions on FreeBSD logo picture

A friend of mine brought home a Seagate External Hard Disk Drive that uses USB 3 as its communication media. I needed to attach the hard disk to my FreeBSD router to transfer him some data. The external HDD is formatted with NTFS as its main file partition, and hence to make the file transfers I had to somehow mount the NTFS partition of the HDD.

FreeBSD and the other BSDs, just like Linux, do not have full (read / write) NTFS file system support built into the base system.
In order to add write support for the plugged-in NTFS hdd I looked in the ports tree:

freebsd# cd /usr/ports
freebsd# make search name='ntfs'
Port: fusefs-ntfs-2010.10.2
Path: /usr/ports/sysutils/fusefs-ntfs
Info: Mount NTFS partitions (read/write) and disk images
Maint: ports@FreeBSD.org
B-deps: fusefs-libs-2.7.4 libiconv-1.13.1_1 libtool-2.4 libublio-20070103 pkg-config-0.25_1
R-deps: fusefs-kmod-0.3.9.p1.20080208_7 fusefs-libs-2.7.4 libiconv-1.13.1_1 libublio-20070103 pkg-config-0.25_1
WWW: http://www.tuxera.com/community/

Port: ntfsprogs-2.0.0_1
Path: /usr/ports/sysutils/ntfsprogs
Info: Utilities and library to manipulate NTFS partitions
Maint: ports@FreeBSD.org
B-deps: fusefs-libs-2.7.4 libiconv-1.13.1_1 libublio-20070103 pkg-config-0.25_1
R-deps: libublio-20070103 pkg-config-0.25_1
WWW: http://www.linux-ntfs.org/
freebsd# cd sysutils/fusefs-ntfs/
freebsd# ls
Makefile distinfo files/ pkg-descr pkg-plist
freebsd# cat pkg-descr
The ntfs-3g driver is an open source, freely available read/write NTFS
driver, which provides safe and fast handling of the Windows XP, Windows
Server 2003 and Windows 2000 filesystems. Almost the full POSIX filesystem
functionality is supported, the major exceptions are changing the file
ownerships and the access rights.
WWW: http://www.tuxera.com/community/

Using ntfs-3g I managed to successfully mount the NTFS partition on my old PC running FreeBSD ver. 7.2.

1. Installing fusefs-ntfs support on BSD

Before I could use ntfs-3g to mount the partition, I had to install the fusefs-ntfs bsd port, with:

freebsd# cd /usr/ports/sysutils/fusefs-ntfs
freebsd# make install clean
.....

I was curious whether ntfsprogs provides other utilities to do the ntfs mount, but whilst trying to install it I realized it is already installed as a dependency package of fusefs-ntfs.

The fusefs-ntfs package provides a number of utilities for creating, mounting, fixing and doing various manipulations with Microsoft NTFS filesystems.

Here is a list of all the executable utilities helpful in NTFS fs management:

freebsd# pkg_info -L fusefs-ntfs\* | grep -E "/bin/|/sbin|README"
/usr/local/bin/lowntfs-3g
/usr/local/bin/ntfs-3g
/usr/local/bin/ntfs-3g.probe
/usr/local/bin/ntfs-3g.secaudit
/usr/local/bin/ntfs-3g.usermap
/usr/local/bin/ntfscat
/usr/local/bin/ntfscluster
/usr/local/bin/ntfscmp
/usr/local/bin/ntfsfix
/usr/local/bin/ntfsinfo
/usr/local/bin/ntfsls
/usr/local/sbin/mkntfs
/usr/local/sbin/ntfsclone
/usr/local/sbin/ntfscp
/usr/local/sbin/ntfslabel
/usr/local/sbin/ntfsresize
/usr/local/sbin/ntfsundelete
/usr/local/share/doc/ntfs-3g/README
/usr/local/share/doc/ntfs-3g/README.FreeBSD

The README and README.FreeBSD make wonderful reading for those who want to get more in-depth knowledge on using the above-listed utilities.

One utility worth mentioning, which I have used in the past, is ntfsfix. ntfsfix resolves issues with NTFS partitions which were not unmounted cleanly on system shutdown (electricity outage, system hang-up etc.).
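Its basic invocation simply takes the device holding the NTFS partition, for example (with the da0s1 device from later in this post):

freebsd# ntfsfix /dev/da0s1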

2. Start fusefs (ntfs) and configure it to auto load on system boot

Once fusefs-ntfs is installed, if it is necessary for ntfs fs mounts to be permanently supported on the BSD system, add fusefs_enable="YES" to /etc/rc.conf (the FreeBSD services auto load conf).

freebsd# echo 'fusefs_enable="YES"' >> /etc/rc.conf

One note to make here is that you also need to have dbus_enable="YES" and hald_enable="YES" in /etc/rc.conf; not having these two in rc.conf will prevent fusefs from starting properly. Do a quick grep to make sure these two variables are enabled:
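freebsd# grep -E 'dbus_enable|hald_enable' /etc/rc.conf

If either variable is missing or commented out, add it the same way as fusefs_enable above before starting fusefs.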

Afterwards the fusefs load-up script should be run:

freebsd# /usr/local/etc/rc.d/fusefs start
Starting fusefs.

Another alternative way to load ntfs support on the BSD host is to directly load fuse.ko kernel module:

freebsd# /sbin/kldload fuse.ko

3. Mounting the NTFS partition

In my case, the Seagate hard drive was detected as da0, where the NTFS partition was detected as s1 (da0s1):

freebsd# dmesg|grep -i da0
da0 at umass-sim0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-4 device
da0: 40.000MB/s transfers
da0: 953869MB (1953525164 512 byte sectors: 255H 63S/T 121601C)
GEOM_LABEL: Label for provider da0s1 is ntfs/Expansion Drive.
GEOM_LABEL: Label for provider da0s1 is ntfs/Expansion Drive.

Therefore, to mount it one can use either mount_ntfs (to quickly mount in read-only mode) or ntfs-3g (for read / write mode).

If you just need to quickly mount a disk drive to copy some data and umount it, with no need for writing to the NTFS partition, do:

freebsd# /sbin/mount_ntfs /dev/ad0s1 /mnt/disk

Note that the mount_ntfs command is a native BSD command and has nothing to do with ntfs-3g; mounting NTFS with it is therefore not the same as mounting it via the ntfs-3g cmd:

freebsd# /usr/local/bin/ntfs-3g -o rw /dev/da0s1 /mnt/disk/

Something I've noticed while using ntfs-3g is that it fails to properly exit even when the ntfs-3g shell execution is over:

freebsd# ps ax |grep -i ntfs|grep -v grep
18892 ?? Is 0:00.00 /usr/local/bin/ntfs-3g -o rw /dev/da0s1 /mnt/disk/

I dunno if this is some kind of ntfs-3g bug, a feature present in all versions of FreeBSD, or something local to FreeBSD 7.2.

Though ntfs-3g keeps appearing in the process list, praise God, as of the time of writing NTFS support on FreeBSD proved to be stable.
Read / write disk operations on the NTFS partitions I tested it with work great. Just about 5 years ago I still remember write mode being experimental. Now it seems NTFS mounts can be used with no hassle even on production machines.

4. Auto mounting NTFS partition on FreeBSD system boot

There are two approaches to 'the problem' I can think of.
The better way to auto mount on boot (in my view) is through /etc/fstab.

If /etc/fstab + ntfs-3g is to be used, you will however need to change the default /sbin/mount_ntfs command to point to /usr/local/bin/ntfs-3g, i.e.:

freebsd# mv /sbin/mount_ntfs /sbin/mount_ntfs.orig
freebsd# ln -s /usr/local/bin/ntfs-3g /sbin/mount_ntfs

Then to mount /dev/da0s1 via /etc/fstab add line:

/dev/ad0s1 /mnt/disk ntfs rw,late 0 0

To not bother with a text editor, run:

freebsd# echo '/dev/ad0s1 /mnt/disk ntfs rw,late 0 0' >> /etc/fstab

I've read in posts on the FreeBSD forums that there is also a way to use ntfs-3g for mounting partitions without substituting the original bsd /sbin/mount_ntfs. The exact fstab line suggested, which needs no prior mv of /sbin/mount_ntfs to /sbin/mount_ntfs.orig and linking it to ntfs-3g, is:

/dev/ad0s1 /disk ntfs rw,mountprog=/usr/local/bin/ntfs-3g,late 0 0

For any other NTFS partitions, for instance /dev/ad0s2, /dev/ad2s1 etc., simply change the partition name and mount points.

The second alternative for adding the NTFS partition to auto mount is through /etc/rc.local, whose content is executed very late in the system boot:

echo '/usr/local/bin/ntfs-3g -o rw /dev/da0s1 /mnt/disk/' >> /etc/rc.local

One disadvantage of using /etc/rc.local for mounting the partition is the ntfs-3g process hanging around in the proc list:

freebsd# ps ax |grep -i ntfs|grep -v grep
18892 ?? Is 0:00.00 /usr/local/bin/ntfs-3g -o rw /dev/da0s1 /mnt/disk/

Though I haven't tested it yet, the same methodology should work perfectly on PC-BSD, DragonFlyBSD, NetBSD and OpenBSD.
I will be glad if someone who runs any of the other BSDs can confirm that following these instructions works fine on those BSDs too.

Editing binary files in console and GUI on FreeBSD and Linux

Thursday, April 26th, 2012

I've recently wanted to edit one binary file, because compiled into the binary there was a text string with a word I didn't like and therefore wanted to delete. I know I could dig into the source of the proggie with grep and directly substitute my "unwanted text" there, but I wanted to experiment and see what kind of hex binary text editors exist for the Free OSes.
All those who lived through the DOS computer era should certainly remember that the DOS hex editors were very enjoyable. It was not a rare case, in those good old days, where one could simply use a hex editor to "hack" a game and add extra player lives, or modify some vital game parameter like putting himself first in the top scores list. I even remember some DOS programs and games could be cracked with a text editor … Well, those were the times. Now back to the current situation: as a Free Software user for the last 12 years I was interested to see what the DOS hexeditor-like alternatives for FreeBSD and Linux are, and hence in this article I will present my findings.

With a quick search in the FreeBSD ports tree and the Debian installable packages list, I've found a number of programs allowing one to edit binary files in console and GUI.

Here is the list of hex editors I will briefly review in this article:

  • hexedit
  • dhex
  • chexedit
  • hte
  • hexer
  • hexcurse
  • ghex
  • shed
  • okteta
  • bless
  • lfhex

1. hexedit on Linux and BSD – basic hex editor

I've already used hexedit on Linux, though that was quite some time ago.

My previous experience with hexedit is not too rosy; I found it difficult to use on Redhat and Debian Linux back in the day. hexedit is definitely not the choice of people who are not "initiated" in hex editing.
Anyways, if you want to give it a try, you can install it on FreeBSD with:

freebsd# cd /usr/ports/editors/hexedit
freebsd# make install clean

On Debian the hexedit install package is named the same, so installation is with apt:

debian:~# apt-get --yes install hexedit

hexedit screenshot Debian Linux Squeeze

2. Hex editing with chexedit

I’ve installed chexedit the usual way from ports:

freebsd# cd /usr/ports/editors/chexedit
freebsd# make install clean

chexedit uses the ncurses text console library, so the interface is very similar to midnight commander (mc), as you can see from the screenshot below:

Chexeditor FreeBSD 7.2 OS Screenshot

Editing the string compiled into the binary was an easy task with chexedit, as most of the commands are clearly visible. Anyways, replacing a certain text string contained within the binary file with some other string is not easy with chexedit, as you need to know the corresponding hex value representing each character of the text string.
I'm not a low level programmer, so I don't know the hex values of the keyboard characters, and hence my competence came down to the point where I could substitute the text string I wanted with some unreadable characters, by simply filling my whole text string with AA AA AA AA values…
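By the way, if you do need the hex values of the characters of a replacement string, you don't have to know them by heart; a quick way to obtain them (shown here with an assumed placeholder string) is the standard od tool:

debian:~# printf 'my-new-string' | od -A x -t x1

This prints each byte of the string as a hex value, which you can then type into the editor.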

On Debian, chexedit is packaged in the deb package ncurses-hexedit. Hence to install it on Deb run:

debian:~# apt-get --yes install ncurses-hexedit

Further on, the binary to run, contained within the ncurses-hexedit package, is:

debian:~# hexeditor

3. Hex editing on BSD and Linux with hte

Just after trying out chexedit, I found out about the existence of an even more sophisticated hexeditor console program available on both FreeBSD and Linux.
The program is called hte (sounds to me a bit like the Indian word for elephant, "Hatti" :))

hte is installable on Debian with cmd:

debian:~# apt-get install ht

On FreeBSD the port name is identical, so to install it I executed:

freebsd# cd /usr/ports/editors/hte
freebsd# make install clean

hte is started on Debian Linux (and presumably other Linux distros) with:

$ hte

On FreeBSD you need to run it with ht command:

freebsd# ht

You see how hte looks like in below screenshot:

ht has the look & feel of midnight commander and I found it easier to use than chexedit and hexeditor.

4. hexer VI like interface for Linux

As I was looking through the available packages ready to install, I’ve tried hexer

debian:~# apt-get install --yes hexer

hexer follows the same standard commands as VIM, e.g. i for insert, a for append etc.

Hexer Debian Linux vim like binary editor screenshot

It was interesting to find out hexer was written by a Bulgarian fellow, Peter Pentchev 🙂
(Proud to be Bulgarian)

http://people.freebsd.org/~roam/ – Peter Pentchev has his own page on FreeBSD.org

As a vim user I really liked the idea; the only thing I didn't like is that there is no easy way to just substitute a string within the binary with another string.

5. hexcurse another ncurses library based hex editor

On Deb install and run via:

debian:~# apt-get --yes install hexcurse
debian:~# hexcurse /usr/bin/mc

Hexcurse Debian Linux text binary editor screenshot

hexcurse is also available on FreeBSD; to install it use:

freebsd# cd /usr/ports/editors/hexcurse
freebsd# make install clean
….

To access the editor functions press CTRL + the first letter of the word in the bottom menu, e.g. CTRL+H, CTRL+S etc.
Something I disliked about it is that the program's search works only in hex, so I cannot look for a text string within the binaries with it.
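A workaround for locating a text string before opening the binary in any of these hex editors is GNU strings with the -t x flag, which prints the (hex) offset of every printable string it finds; the binary path and the searched word below are of course just placeholders:

debian:~# strings -t x /path/to/binary | grep 'unwanted-word'

You can then jump straight to that offset inside the hex editor.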

6. ghex – Editing binary files in a graphical environment

If you're running a graphical environment, take a look at ghex. ghex is a GNOME (graphical hex) editor. Installing ghex on Debian is with:

debian:~# apt-get --yes install ghex
….

To run ghex from terminal type:

debian:~# ghex2

GHex2 GNOME hex binary editor screenshot

To install ghex on FreeBSD (and I assume other BSDs), install via port:

freebsd# cd /usr/ports/editors/ghex
freebsd# make install clean

The GNOME hex editor has plenty of tools useful for developers who need to debug binary files.

Some nice tools one can find are under the menus:

Windows -> Character Table

This will show a complete list of every keyboard character in ASCII, Hex, Decimal, Octal and Binary.

Screenshot ghex Character table Debian Linux

Another useful embedded tool in ghex is:

Windows -> Type Conversion Dialog

Ghex type convertion dialog screenshot

Note that if you want to use the Type Conversion Dialog tool to find the binary values representing a text string, you will have to type in the letters one by one and save the output to a text file; later you can go and use the same editor to edit the text string within the binary file you like.

I'm not a programmer, but surely for programmers, or for people who want to learn some binary counting, these 2 ghex embedded tools are valuable.

To conclude, even though there is plenty of software for hex editing on Linux and BSD, none of it is as easy to use as the old DOS hexedit tool. Maybe it would be a nice idea if someone actually rewrote the DOS tool and packaged it for the various free operating systems; I'm sure many people would find it helpful to have a 1:1 equivalent of the DOS tool.

7. Shed pico like interfaced hex editor

For people who use pico / nano as their default text editor in Linux, shed will probably be the editor of choice as it follows the command shortcuts of pico. On Deb based distros, to install it run:

debian:~# apt-get install --yes shed

shed pico like hex binary editor Linux

Shed has no BSD port as of the time of writing.

8. Okteta a KDE GUI hex editor

For KDE users, I found a program called okteta. It is available for Deb based Linuxes as a deb; to install it:

debian:~# apt-get --yes install okteta

Screenshot Okteta Debian GNU / Linux Squeeze

As of the time of writing this article there is no okteta port for the BSDs.
Okteta has plenty of functions, even more than ghex. Something distinctive about it is that it supports opening multiple files in tabs.

9. lfhex a large file text editor

lfhex is said to be a large (binary) file hex editor. I have not really tested it myself, I just ran it to see how it looks. I don't have a need to edit large binary files, but I guess there are people with such requirements too 🙂

lfhex - Linux The Large file hex editor

To install lfhex on Debian:

debian:~# apt-get install --yes lfhex

lfhex has also a FreeBSD port installable via:

freebsd# cd /usr/ports/editors/lfhex
freebsd# make install clean

10. Bless a GUI tool for editing large hex (binary) files

Here is the description directly taken from the BSD port /usr/ports/editors/bless

Bless is a binary (hex) editor, a program that enables you to edit files as a sequence of bytes. It is written in C# and uses the Gtk# bindings for the GTK+ toolkit.

To install and use it on deb based Linuxes:

debian:~# apt-get install --yes bless
….

On BSD installation is again from port:

freebsd# cd /usr/ports/editors/bless
freebsd# make install clean
….

Something that makes bless maybe a more desirable choice for GUI users than ghex is its support for tabs, although opening multiple binaries in tabs will be useful only to a few heavy debuggers.

Bless GUI hex editor Debian Linux tabs opened screenshot

11. Ghextris – an ultra hard hacker tetris game 🙂

For the absolute hackers / geeks, there is a tetris game called ghextris. The game is the hardest tetris game I have ever played in my life. It requires more than a regular IQ and a lot of practice if you want to become really good at it.

To enjoy it:

debian:~# apt-get --yes install ghextris

Ultra hrad hardcore hackers game ghextris screenshot

Unfortunately there is no native port of ghextris for BSD (yet). Anyhow, it can probably be run using the Linux emulation or even compiled from source.
Well, that's all I found on hex editing; I'll be happy to hear some feedback on your favourite editor.

How to upgrade single package with their dependencies on Debian and Ubuntu Linux

Friday, March 16th, 2012

Debian GNU / Linux apt-get upgrade a package selection of a whole bunch of packages ready to upgrade apt artistic logo

Are you a Debian System Administrator, and did you recently run apt-get update && apt-get upgrade only to find out there are plenty of new packages to upgrade? Do you need only a pre-selected number of packages to be upgraded with apt?
I ran apt-get update && apt-get upgrade on one of our company Debian servers, only to see a number of packages to be upgraded, among which there were some I didn't want to upgrade. Here is a little paste output from apt-get upgrade:

debian:~# apt-get update && apt-get upgrade
Hit http://security.debian.org squeeze/updates Release.gpg
...
Hit http://security.debian.org squeeze/updates/main amd64 Packages
Fetched 128 kB in 0s (441 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
Do you want to continue [Y/n]
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

At first sight it seems logical to issue apt-get upgrade packagename to upgrade only a single package with its package dependencies, instead of the whole group of packages above. However, doing:
apt-get upgrade imagemagick will still try to upgrade all the packages instead of just imagemagick and its dependency package libmagickcore3:

debian:~# apt-get upgrade imagemagick
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Do you want to continue [Y/n]

Upgrading all packages is not a good idea in my case, since upgrading mysql-server will require a MySQL server restart (something we cannot afford to do right now) on this production server.
A MySQL server restart during upgrade is never a good idea, especially on busy (heavily loaded) production SQL servers.
A restart of a MySQL server serving thousands of requests per second can often lead to crashed tables and hence temporary server downtime etc.

Still, it is a good idea to upgrade the rest of the packages to their newer versions, for example imagemagick, at, libfreetype6 and so on.

In order to upgrade only these 3 and their respective package dependencies, issue:

debian:~# apt-get --yes install imagemagick at libfreetype6

Repeat the apt-get install command, passing it each single package name you want to be upgraded, and voila, you're done :).
Be sure the apt-get install packagename upgrade doesn't also require an upgrade of mysql-server, mysql-client, mysql-common or mysql-server-core-5.1, or of any other package you want to preserve from upgrading.
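A handy way to double-check this before touching anything is apt-get's simulation mode (-s), which only prints what would be installed or upgraded without actually doing it:

debian:~# apt-get -s install imagemagick at libfreetype6

If mysql-server or any other package you want to keep back shows up in the simulated list, leave the offending package out of the install command.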

How to fix “Fatal error: Call to undefined function: curl_init()” on FreeBSD and Debian

Saturday, June 18th, 2011

After installing the Tweet Old Post wordpress plugin and giving it a try, I was returned an error by my PHP code interpreter:

Call to undefined function: curl_init()

As I consulted with uncle Google's indexed forums 😉 discussing the issue, I found out the whole problem is caused by a missing php curl module.

My current PHP installation is installed from the ports tree on FreeBSD 7.2. Thus, in order to include support for php curl, it was necessary to install the port /usr/ports/ftp/php5-curl :

freebsd# cd /usr/ports/ftp/php5-curl
freebsd# make install clean

(note that I’m using the php5 port and it’s surrounding modules).

Fixing the Call to undefined function: curl_init() error on Linux hosts should, I suppose, follow the same logic, e.g. one will have to install php5-curl to resolve the issue.
Fixing the missing curl_init() function support on Debian, for example, is as easy as using apt to install the php5-curl package, like so:

debian:~# apt-get install php5-curl
...
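Keep in mind the web server usually needs to be restarted (or PHP-FPM reloaded) before the freshly installed module is picked up; the exact restart command depends on how Apache / PHP is set up, so the first line below is only an assumed example. You can then verify the module is visible to the CLI interpreter with php -m:

debian:~# /etc/init.d/apache2 restart
debian:~# php -m | grep -i curl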

Now my tweet-old-post curl requirement is matched and the error is gone, hooray 😉