Posts Tagged ‘source code’

Create SSH Tunnel to MySQL server to access remote filtered MySQL port 3306 host through localhost port 3308

Friday, February 27th, 2015

create_ssh_tunnel_to-mysql_server-to-access-remote-filtered-mysql-on-port-3306-secure_ssh_traffic
On our Debian / CentOS / Ubuntu Linux and Windows servers we're running multiple MySQL servers and our customers sometimes need to access these servers.
This is usually a problem, because the MySQL DB servers run in a DMZ zone behind a strong firewall and, for security reasons, the SQL servers are configured to listen only for connections coming from localhost. I mean that in the config files across our Debian Linux servers and CentOS / RHEL Linux machines, /etc/mysql/my.cnf and /etc/my.cnf, the setting for bind-address is 127.0.0.1:
 

[root@centos ~]# grep -i bind-address /etc/my.cnf 
bind-address            = 127.0.0.1
##bind-address  = 0.0.0.0


For source code developers who access the development SQL servers only through a VPN-secured DMZ network, there are a few MySQL servers with remote access allowed from all hosts, e.g. on those I have configured:
 

[root@ubuntu-dev ~]# grep -i bind-address /etc/my.cnf 

bind-address  = 0.0.0.0


However, clients insisted on having remote access to their MySQL databases, but since this is pretty insecure we decided not to configure the MySQL servers to listen on all available IP addresses / network interfaces.
MySQL access is allowed only through phpMyAdmin, accessible via the client's web interface, which on some servers is cPanel and on others Kloxo (an open source cPanel-like, very nice web hosting platform).

For some stubborn clients who wanted mysql CLI and MySQL desktop client access, to be able to easily analyze their databases with desktop clients such as MySQL Workbench, there is a "hacker's" workaround: create and use an SSH tunnel to the MySQL server from their local Windows PCs, using the standard OpenSSH Linux client from Cygwin, MobaXterm (which already comes with the SSH client pre-installed and has an easy GUI for creating SSH tunnels), or eventually Putty's Plink (command line interface) to create the tunnel.

Anyways, the preferred and recommended (easiest) way to achieve a tunnel between MySQL and the local PC (no matter whether it is a Windows or Linux client system) is to use the standard ssh client and the command below:
 

ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com


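Once the tunnel is up, the MySQL client on the local PC connects to the forwarded port on 127.0.0.1 instead of the remote host. A quick example (the user name and database name are placeholders to adjust):

mysql -h 127.0.0.1 -P 3308 -u your-db-user -p your-database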
By default the SSH tunnel will be kept open for about 3 minutes and, if not used, it will automatically close. To get around this issue you might want to raise the timeout (let's say to 15 minutes). To do so, the user has to add in:
 

~/.ssh/config

ServerAliveInterval 15
ServerAliveCountMax 4

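If you don't want these keep-alive settings applied globally, the same directives can also be scoped to the tunnel host only with a Host block in ~/.ssh/config (the hostname is the example one used above):

Host your-server.your-domain.com
    ServerAliveInterval 15
    ServerAliveCountMax 4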

Note that sometimes, even though the ssh tunnel timeout value is raised, it may not take effect if there is some NAT (Network Address Translation) with a low timeout setting at the firewall level. If you face constant SSH tunnel timeouts you can use the few lines of bash code below to auto-respawn the SSH tunnel connection (Windows users should use MobaXterm or install the bash shell Cygwin package in advance):
 

while true
do
  # re-establish the tunnel every time the ssh process exits
  ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com
  sleep 15
done

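As an alternative to the hand-written loop, the autossh tool (packaged in most Linux distributions) respawns the ssh process for you; a rough equivalent of the above loop would be something like:

autossh -M 0 -o ServerAliveInterval=10 -T -N -L 3308:localhost:3306 your-server.your-domain.com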

Below is a MySQL Workbench screenshot connected through the server where this blog is located, after establishing an ssh tunnel to the remote mysql server via port 3308 on localhost:

mysql-workbench-database-analysis-and-management-gui-tool-convenient-for-data-migratin-and-queries-screenshot-

There is also another, alternative way to access remote firewall-filtered MySQL servers without running complex commands to set up a tunnel, which we recommend for clients (SQL developers / SQL designers): using HeidiSQL (a useful tool for web developers who have to deal with MySQL and MSSQL hosted DBs).

heidisql-show-host_processlist-screenshot

To connect to remote MySQL server through a Tunnel using Heidi:

mysql_connection_configuration-heidi-mysql-gui-connect-tool

 

In the ‘Settings’ tab

1. In the dropdown list of ‘Network type’, please select SSH tunnel

2. Hostname/IP: localhost (even though you are connecting remotely)

3. Username & Password: your mysql user and password

Next, in the tab SSH Tunnel:

1. specify plink.exe, or download it first and point Heidi to where it's located

2. Host + port: the remote IP of your SSH server (which should be the MySQL server as well), port 22 if you haven't changed anything

3. Username & password: SSH username (not MySQL user)

 

heidi-connection_ssh_tunnel_configuration-heidi-sql-tool-screenshot
 

Generating Static Source Code Auditing reports with Spike PHP Security Audit Tool

Saturday, April 24th, 2010

I'm conducting a PHP audit on a server, and one of the audit criteria I follow is a
static PHP source code audit of the PHP files physically located on the Linux server.
Auditing tons of source code manually is a kind of impossible task, therefore I needed a quick way to at least
partly, or even fully, automate the auditing of the PHP applications' source code.
A quick search in Google pointed me to a PHP application tool – Spike Security Audit.
This small PHP-written app is quite handy. It is able to either check a certain PHP source code file for WARNINGS or ERRORS, or do a complete security source code analysis of a bunch of PHP files in a directory, including all the other PHP source files in subdirectories.

Once executed, the PHP Security Audit Tool generates a nice source code analysis report in HTML that can easily be reviewed later with some browser.

The use of the tool is pretty straight forward, all you have to do is download it from Spikeforge – the project’s official webpage and unzip it e.g.


debian-server:~# wget http://developer.spikesource.com/frs/download.php/136/spike_phpSecAudit_0.27.zip
debian-server:~# unzip spike_phpSecAudit_0.27.zip

Then you have to invoke run.php with the PHP CLI, which you need to have installed first.
If you don't have the PHP CLI yet, please install it with the command:


debian-server:~# apt-get install php5-cli

Now you have to execute the run.php script bundled with the spike php security audit program source code.


debian-server:~# php run.php

Please specify a source directory/file using --src option.

Usage run.php options

Options:
--src Root of the source directory tree or a file.
--exclude [Optional] A directory or file that needs to be excluded.
--format [Optional] Output format (html/text). Defaults to 'html'.
--outdir [Optional] Report Directory. Defaults to './style-report'.
--help Display this usage information.

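So, putting the options together, a typical full run over one source tree might look something like this (the paths are just examples):

debian-server:~# php run.php --src /var/www/my-php-site --exclude /var/www/my-php-site/cache --format html --outdir /var/www/code-analysis/my-php-site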
As you can see, spike php security audit has only a few command line options and they're quite easy to understand.
However, in my case I had to audit a couple of directories which contained source code.
I also wanted the generated reports to be cyclic, let's say on a per-day basis, because I wanted to have the PHP application analysis generated daily.
For that reason I decided to write a small shell script to aid the usage of php spike audit; I've called the script code-analysis.sh

The automation source code analysis script for PHP Spike Audit can be downloaded here.
The script has a few configuration options that you might need to modify before you can put it to execute on a crontab.

These are:


# Specify your domain name on which php spike audit reports will be accessed
domain_name='yourdomainname.com';
# put here the location where phpspike run.php execute is located
spike_phpsec=/usr/local/spike_phpSecAudit_0.27/run.php;
# specify here which will be the directory where the php source code analysis reports will be stored by php spike
log_dir=/root/code-analysis/;
# in that part you have to specify the physical location of the php cli; it's located by default in /usr/bin/php on Debian GNU / Linux.
php_bin=/usr/bin/php;
# the directory below should be set to a directory where the reports that will be visible from the webserver will be stored
www_dir=/var/www/code-analysis;

# in the variables

directory[1]='/home/source-code1/'; ..
directory[2]=''; ..

# you should configure the directories containing php source code to be audited by the php spike audit tool.

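For reference, the core of such a script boils down to looping over the configured directories and invoking run.php for each of them. Below is a minimal sketch (not the exact downloadable script) built around the variables above:

# loop over all configured source directories and audit each one
for i in "${!directory[@]}"; do
    src="${directory[$i]}";
    [ -z "$src" ] && continue;          # skip empty entries
    report_dir="$www_dir/report-$i-$(date +%Y-%m-%d)";
    mkdir -p "$report_dir";
    # run the static analysis and store the html report
    $php_bin $spike_phpsec --src "$src" --format html --outdir "$report_dir" >> "$log_dir/spike-audit.log" 2>&1;
done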
After you have prepared the code-analysis.sh script to your custom liking, you can now put it to be executed periodically
using crontab or some other unix system scheduler of choice.

To do that edit your root crontab.

crontab -u root -e

and put in it.

# code analysis results
05 3 * * * /usr/local/bin/code-analysis.sh >/dev/null 2>&1

Now you can edit your /etc/apache2/apache2.conf or your httpd.conf, depending on your Linux or Unix architecture, and make an Alias like:


Alias /code-analysis "/var/www/code-analysis"

Now your php source code analysis from the php spike audit tool will be generated daily.
You will be able to access them via web using http://yourdomain.com/code-analysis/

That way you can review the PHP source code written or changed in your PHP applications on a daily basis, and you can far more easily track your coding mistakes, as well as watch for possible security issues in your code.

For the sake of security I’ve also decided to protect the /code-analysis Apache directory with a password using the following .htaccess file:


AuthUserFile /var/www/code-analysis/.htpasswd
AuthGroupFile /dev/null
AuthName "Login to access PHP Source Code Analysis"
AuthType Basic

<Limit GET>
require valid-user
</Limit>
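Keep in mind that for Apache to honour this .htaccess at all, the directory needs AllowOverride enabled in the main configuration. A minimal sketch to put next to the Alias (the path matches the example above, Apache 2.2 style access directives as used elsewhere in this post):

<Directory "/var/www/code-analysis">
    AllowOverride AuthConfig
    Order allow,deny
    Allow from all
</Directory>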

If you decide to protect yours as well you have to also generate the .htpasswd file using the following command:


debian-server:~# htpasswd -c /var/www/code-analysis/.htpasswd admin

You will be asked for a password. The code-analysis.sh script will also take care of generating an HTML file for you, including links to the reports for all the audited PHP source code directories.

Now accessing http://yourdomain.com/code-analysis/ will give you a shiny look at the generated PHP source application reports.

Extracting pages and page ranges, protect with password and remove password from PDF on GNU / Linux with QPDF – Linux Manipulating PDF files from command line

Friday, August 8th, 2014

qpdf-logo-extract-pages-page-ranges-protect-pdf-with-password-remove-password-from-pdf-linux-qpdf-manipulating-pdf-files-on-gnu-linux-and-bsd
If you're a Linux user and you need to script certain page extraction from PDF files, protect a PDF file with a password, decrypt it (remove password protection from a PDF) or do some kind of structural transformation of an existing PDF file, you can use the QPDF command line utility. qpdf is in active development and is a very convenient tool for website developers (PHP / Perl / Python), as on websites it is often necessary to write code to cut / tailor / restructure PDFs.

1. Install QPDF from deb / rpm package

qpdf is installable by default from the deb repositories on Debian / Ubuntu GNU / Linux (and deb derivatives); to install it, apt-get it:

apt-get install --yes qpdf

On RPM based distributions CentOS / SuSE / RHEL / Fedora Linux, to install qpdf fetch the respective distribution binary from rpmfind.net, or to install the latest version of qpdf build it from source code.

2. Install QPDF from source

To build the latest qpdf from source:

  • on RPM based distributions install with yum the following packages:

yum -y install zlib-devel pcre-devel gcc gcc-c++

  • on Deb based Linuxes, you will need to install:

apt-get install --yes build-essential gcc dpkg-dev g++ zlib1g-dev


Then, to build, grab the latest qpdf source from here:

 

cd /usr/local/src
wget -q https://www.pc-freak.net/files/qpdf-5.1.2.tar.gz
tar -zxvf qpdf-5.1.2.tar.gz
cd qpdf-5.1.2/
./configure
make
make install


Once it is installed, if you get an error at qpdf runtime:
 

/usr/local/bin/qpdf: error while loading shared libraries: libqpdf.so.13: cannot open shared object file: No such file or directory

To solve the error, find libqpdf.so.13 in your compile directory and copy it to /usr/lib or /usr/local/lib:

 cp -rpf ./libqpdf/build/.libs/libqpdf.so.13 /usr/local/lib


3. Decrypt password encrypted (protected) PDF file

If you have time and you like reading, be sure to check the extensive qpdf manual.

To remove the password from a password-protected PDF file with qpdf:

qpdf --password=SECRET-PASSWORD --decrypt input-file.pdf output-file.pdf

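The reverse operation, protecting a PDF with a password, is done with the --encrypt switch; a quick example (the passwords and the 128-bit key length are placeholders you may want to change):

qpdf --encrypt USER-PASSWORD OWNER-PASSWORD 128 -- input-file.pdf protected-file.pdf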
QPDF has a vast range of split and merge features. It can combine all the files in a folder (*.pdf), you can use it to try to recover damaged PDF files, extract individual pages from a PDF, dump and reverse a page range, make a new PDF with the old PDF's pages reversed (pages 1,2,3,4 become 4,3,2,1), and apply one PDF file's metadata to multiple files.

4. Try to Recover damaged PDF file


To try to recover some damaged file with qpdf:
 

qpdf file-to-repair.pdf repaired-file.pdf

5. Extract certain pages or page range from PDF

It is recommended to use the version built from source to extract a certain page range from a PDF:
 

/usr/local/bin/qpdf --empty --pages input-file.pdf 1-5 -- outfile-file.pdf


If you wanted to take pages 1–5 from file1.pdf and pages 11–15 from file2.pdf in reverse, you would run
 

qpdf file1.pdf --pages file1.pdf 1-5 file2.pdf 15-11 -- outfile.pdf

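For completeness, here are two of the merge / reverse features mentioned earlier, roughly as given in the qpdf manual (file names are examples): combining all PDFs in the current folder into one, and writing a copy of a file with its pages in reverse order (z stands for the last page):

qpdf --empty --pages *.pdf -- all-files-merged.pdf
qpdf input-file.pdf --pages input-file.pdf z-1 -- reversed-pages.pdf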
 

Improve Website Apache Webserver SEO without Website source code modifications with Google PageSpeed module on Debian, Ubuntu, CentOS, Fedora and SuSE Linux

Thursday, December 18th, 2014

Improve-website-apache-webserver-seo-without-website-source-code-modifications-with-Google-PageSpeed-Apache-module

For hosting companies and even for personal websites, speed performance is becoming an increasingly important factor that carries more and more weight in the overall PageRank, and it is one of the key things for successful site Search Engine Optimization (positioning) in search engines for a website that was not specially crafted to be SEO friendly.

Virtually all search engines (Google / Yahoo / Bing etc.) give better pagerank to websites which load faster and have little or no downtime, because a faster loading time of a website's pages means a better user experience and is an indicator that the website is well maintained.

Often websites deployed for the purposes of a business or a community, written on top of open source CMS / blog systems such as Joomla, Drupal and WordPress, are by default not made to provide fantastic speed right after deployment without installing custom plugins and tuning the website, i.e.:

  • Content size optimization (gzipping)
  • More efficient way to deliver CSS / Javascript (minify JS / CSS files into single ones)
  • HTML optimization
  • Stripping page comments
  • Adding <head> if missing on pages etc.

Therefore, as I said in many of my previous LAMP optimization articles, page (opening) speed can make for a really bad user / client experience when the site grows too big or is badly optimized and gives degraded page load times (often waiting 20 / 30 seconds for the page to load!). Having pages lagging on big information sites or e-shops both ruins the company's image on the market and quickly convinces the user to use another service from the thousands already available, and thus drives out (potential) customers.

As programming code maintenance and improvement is usually very costly, for companies that want to save money or can't afford it (because of the shrinking budgets dictated by the global economic crisis), the best thing to do is to ask your sysadmin to squeeze the best out of the web service and servers without major (backend code) infrastructural changes.

To speed up Apache and create proper page caching without installing external PHP caching modules on the server such as Eaccelerator / PHP APC, and without extra CMS modules such as, let's say, WordPress W3 Total Cache, there is the Google-developed external Apache webserver module – PageSpeed.

Here is Google Pagespeed Module overview :
 

PageSpeed speeds up your site and reduces page load time. This open-source webserver module automatically applies web performance best practices to pages and associated assets (CSS, JavaScript, images) without requiring that you modify your existing content or workflow.


What does Apache Google PageSpeed actually do?
 

  • Automatic website and asset optimization
  • Latest web optimization techniques
  • 40+ configurable optimization filters
  • Free, open-source, and frequently updated
  • Deployed by individual sites, hosting providers, CDNs


1. Install PageSpeed on Debian / Ubuntu (and deb derivatives) Linux

a) Download and install module 

On 64 bit deb based Linux:

cd /usr/local/src
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb 
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install


On 32 bit Linux:

cd /usr/local/src
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb
dpkg -i mod-pagespeed-stable_current_i386.deb
apt-get -f install


b) Restart Apache
 

sudo /etc/init.d/apache2 restart

Important files and folders placed on the server by the deb installer are:

/usr/bin/pagespeed_js_minify – binary that does Javascript minification
/etc/apache2/mods-available/pagespeed.conf – Pagespeed config
/etc/apache2/mods-available/pagespeed.load – Load module directives in Apache
/etc/cron.daily/mod-pagespeed – mod_pagespeed cron script for checking and installing latest updates.
/var/cache/mod_pagespeed – Mod Pagespeed caching folder (useful to install memcached to increase caching performance even further; see the sketch below)
/var/log/pagespeed – Directory to store pagespeed log files

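If you do decide to back the cache with memcached, mod_pagespeed can be pointed at it from pagespeed.conf; a minimal sketch (the server address is an example, memcached itself has to be installed and running):

# use a local memcached instance as PageSpeed cache backend
ModPagespeedMemcachedServers localhost:11211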
 

2. Install PageSpeed on RPM based distros (CentOS, Fedora, RHEL / SuSE Linux)


RPM 64 bit package install:
 

rpm -Uvh https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_x86_64.rpm

 


32 bit pack version:
 

rpm -Uvh https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.rpm


Modify the pagespeed module config if needed, then restart Apache:

sudo /etc/init.d/httpd restart


Important config files and folders created during the RPM install are:

  • /etc/cron.daily/mod-pagespeed : mod_pagespeed cron script for checking and installing latest updates.
  • /etc/httpd/conf.d/pagespeed.conf : The main configuration file for Apache.
  • /usr/lib/httpd/modules/mod_pagespeed.so : mod_pagespeed module for Apache.
  • /var/www/mod_pagespeed/cache : File caching directory for web sites.
  • /var/www/mod_pagespeed/files : File generation prefix for web sites.

3. Configuring Google PageSpeed module

 

To configure PageSpeed you can either edit the bundled pagespeed.conf installed by the package (/etc/apache2/mods-available/pagespeed.conf, /etc/httpd/conf.d/pagespeed.conf), or insert configuration items inside the Apache VirtualHost config files, or even, if you need flexibility and don't have direct access to the Apache config files (on shared hosting servers where the module is available), through .htaccess.
Anyway, try to avoid adding pagespeed directives to .htaccess as it will be too slow and inefficient.
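If you prefer per-site control, the module can also be switched on and tuned from within a VirtualHost instead of the global pagespeed.conf. A minimal sketch (the domain, paths and filter choice are just examples):

<VirtualHost *:80>
    ServerName www.your-domain.com
    DocumentRoot /var/www/your-domain

    # enable mod_pagespeed for this site only and add one extra filter
    ModPagespeed on
    ModPagespeedEnableFilters remove_comments
</VirtualHost>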

Configuration is managed by setting different so-called "Rewrite Levels". The default behavior is to use the level "CoreFilters", a set of filters (module behavior configs) which according to Google is safe to use. A PageSpeed filter is an action applied to web-delivered files.

Default config setting is hence:
 

ModPagespeedRewriteLevel CoreFilters

Disabling default set of filters is done with:
 

ModPagespeedRewriteLevel PassThrough

"Corefilters" default filter set as of time of writting this article:
 

add_head
combine_css
convert_jpeg_to_progressive
convert_meta_tags
extend_cache
flatten_css_imports
inline_css
inline_import_to_link
inline_javascript
rewrite_css
rewrite_images
rewrite_javascript
rewrite_style_attributes_with_url

Complete documentation on Configuring PageSpeed Filters is here.

If caching is turned on, default PageSpeed caching is kept in /var/cache/mod_pagespeed/.
Enabling some of the non-CoreFilters that are sometimes useful for SEO (reducing the served / returned page size):
 

ModPagespeedEnableFilters pedantic,remove_comments

By default pagespeed enables some filters (such as inline_css, inline_javascript and rewrite_images, which optimizes images by removing excess pixels), and my little experience with pagespeed shows that in some cases these could break websites, so in my case I found it useful to disable some of the filters:

 

vim /etc/apache2/mods-available/pagespeed.conf

 

ModPagespeedDisableFilters rewrite_images,convert_jpeg_to_progressive,inline_css,inline_javascript

 

4. Testing if PageSpeed is enabled (pagespeed_admin)

By default PageSpeed has an admin interface which is only allowed to be accessed from the server's localhost (127.0.0.1). To get basic statistics, either install a text browser like lynx / elinks, or add more access IPs in the pagespeed config / vhost pagespeed.conf by including more Allow lines like below:

 

    <Location /pagespeed_admin>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1
        Allow from 192.168.1.1
        Allow from xxx.xxx.xxx.xxx

        #Allow from All
        SetHandler pagespeed_admin
    </Location>
    <Location /pagespeed_global_admin>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1

        Allow from 192.168.1.1
        Allow from xxx.xxx.xxx.xxx
        SetHandler pagespeed_global_admin
    </Location>

 

Once pagespeed_admin is configured, access it with your favourite browser at:

http://127.0.0.1/pagespeed_admin
http://127.0.0.1/pagespeed_global_admin

improve-website-apache-webserver-seo-without-source-code-modifications-google-pagespeed_admin_panel

Another way to test whether it is enabled is by creating a PHP file with the good old <? phpinfo(); ?> code showing PHP's enabled / disabled features:

pagespeed-in-phpinfo-x-mod-pagespeed-output-screenshot-apache-webserver
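Another quick check that needs no PHP at all is to look at the response headers; when the module is active Apache adds an X-Mod-Pagespeed header (the URL is an example):

curl -I http://localhost/ | grep -i x-mod-pagespeed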

I also tested the pagespeed unstable release, but experienced some segmentation faults reported in both error.log and access.log, so I finally decided to keep using the stable release.

PageSpeed is a great way to boost your server sites' performance, however it comes at a certain cost: expect your server CPU load to jump drastically (in my case it jumped more than twice). There are Linux servers where enabling the module could totally stone the server, so before implementing the module on a production system environment, always test first thoroughly with pagespeed loaded on a UAT (testing) environment, using ab or Siege (Apache benchmarking tools).

Linux: Configure Midnight Commander to use mcedit instead of nano or vi text editor

Friday, June 21st, 2013

reverting mc text editor to mcedit fix problem with mcedit not working in linux

I have used the Midnight Commander console file manager on all kinds of UNIX-like servers since my early days as a system administrator. mc comes with its own text editor, mcedit, which is often very handy for reading config files or pieces of source code. Many times I had to modify files which were spitting errors I couldn't track down in vim, joe or whatever text editor was at hand on the server, and after checking the file with mcedit I caught my config or source code mistake. I guess many other admins have had similar nice experiences with mcedit, the internal file editor of GNU Midnight Commander. Nowadays I mostly install Debian Linux on newly configured servers, and using mc to navigate the file system is very useful. I prefer mc to open files for editing with F4 (the Edit keyboard shortcut) using its default mcedit; however, for some reason most Debian / Ubuntu and other Linuxes nowadays set the global environment text editor to nano. I totally dislike this text editor and always change mc to use mcedit. This is done straight from the MC menus by:

Pressing F9 -> Going to Options -> Configuration -> (Setting mark on) -> Use Internal Edit

unix terminal file manager midnight commander configuration menu screenshot

linux console file manager midnight commander use internal edit menu unchecked screenshot
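The same setting can also be flipped directly in mc's configuration file instead of going through the menus; on recent Debian releases that is ~/.config/mc/ini (on older ones ~/.mc/ini), where the relevant key should look roughly like:

[Midnight-Commander]
use_internal_edit=true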


How to count lines of PHP source code in a directory (recursively)

Saturday, July 14th, 2012

Count PHP and other programming languages lines of source code (source code files count) recursively

Being able to count the number of PHP source code lines for a website is a major piece of statistical information for timely auditing of projects and evaluating real project management costs. It is an inevitable part of any software project evaluation to count the number of source lines programmers have written.
In many small and middle sized software and website development companies it is the system administrator's task to provide this information, or to quickly script something that gives info on the exact total number of source lines for the projects.

Even for personal use, out of curiosity, it is useful to know how many lines of PHP source code a WordPress or Joomla website (with its plugins) contains.
Anyone willing to count the number of PHP source code lines under one directory level could do it with:

server:~# cd /var/www/wordpress-website
server:/var/www/wordpress-website:# wc -l *.php
17 index.php
101 wp-activate.php
1612 wp-app.php
12 wp-atom.php
19 wp-blog-header.php
105 wp-comments-post.php
12 wp-commentsrss2.php
90 wp-config-sample.php
85 wp-config.php
104 wp-cron.php
12 wp-feed.php
58 wp-links-opml.php
59 wp-load.php
694 wp-login.php
236 wp-mail.php
17 wp-pass.php
12 wp-rdf.php
15 wp-register.php
12 wp-rss.php
12 wp-rss2.php
326 wp-settings.php
451 wp-signup.php
110 wp-trackback.php
109 xmlrpc.php
4280 total

This will count and show statistics for each and every PHP source file within wordpress-website (non-recursively). To get only information about the total number of PHP source code lines within the directory, one could grep it, e.g.:

server:/var/www/wordpress-website:# wc -l *.php |grep -i '\stotal$'
4280 total

The command grep -i '\stotal$' has \s at the beginning and $ at the end of the total keyword in order to avoid erroneously matching PHP source code file names which contain total in the file name; for example total.php, total_blabla.php, blabla_total_bla.php etc.

In the grep regular expression, \s matches a whitespace character (so total must be preceded by whitespace), and "$" is placed at the end of total to tell grep to match only lines ending in the string total.

So far, so good… Now, it is most common that instead of counting the PHP source code lines for the first directory level only, you need the complete number of PHP, C, Python or whatever source code lines recursively, i.e. for the source code of a website or project kept in multiple sub-directories. To count lines of programming code recursively for any existing filesystem directory, use find in conjunction with xargs:

server:/var/www/wp-website1# find . -name '*.php' | xargs wc -l
1079 ./wp-admin/includes/file.php
2105 ./wp-admin/includes/media.php
103 ./wp-admin/includes/list-table.php
1054 ./wp-admin/includes/class-wp-posts-list-table.php
105 ./wp-admin/index.php
109 ./wp-admin/network/user-new.php
100 ./wp-admin/link-manager.php
410 ./wp-admin/widgets.php
108 ./wp-content/plugins/akismet/widget.php
104 ./wp-content/plugins/google-analytics-for-wordpress/wp-gdata/wp-gdata.php
104 ./wp-content/plugins/cyr2lat-slugs/cyr2lat-slugs.php
,,,,
652239 total

As you see, the command counts and displays the number of source code lines encountered in each and every file; for big directory structures the screen gets flooded, so piping through | less is nice, e.g.:

find . -name '*.php' | xargs wc -l | less

Displaying the lines of code for each file within the directories is sometimes unnecessary; when just the total number of programming source code lines is required, for example for scripting purposes, it is useful to get only the total number of source lines:

server:/var/www/wp-website1# find . -name '*.php' | xargs wc -l | grep -i '\stotal$'

Another, shorter and less CPU intensive one-liner to calculate the lines of code is:

server:/var/www/wp-website1# ( find ./ -name '*.php' -print0 | xargs -0 cat ) | wc -l

Here is one other shell script which displays all file names within a directory with the respective calculated lines of code

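A minimal sketch of what such a per-file counting script might look like (this is just an illustration, not the linked script; it counts .php files under the directory passed as first argument):

#!/bin/bash
# print lines of code for every .php file under a directory plus a grand total
dir=${1:-.};
total=0;
while IFS= read -r -d '' file; do
    lines=$(wc -l < "$file");
    printf '%8d  %s\n' "$lines" "$file";
    total=$((total + lines));
done < <(find "$dir" -type f -name '*.php' -print0)
echo "Total: $total lines of PHP code";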
For more professional and bigger projects, using pure Linux bash and command line scripting might not be the best approach. For counting huge amounts of programming source code and displaying various statistics about it, there are two other tools – SLOCCount
as well as cloc (count lines of code).

Both tools are written in Perl, so for IT managers concerned about the speed of calculating project sources (if very frequent source audits are necessary) these tools might be a bit sluggish. However, for most projects they should be of great added value; actually SLOCCount has already been used for calculating the development costs of GNU / Linux and other projects of high importance for the Free Software community, and it has therefore proven to work well with ENORMOUS source line counts written in programming languages of heterogeneous origin.

The sloccount and cloc packages are available in the default Debian and Ubuntu Linux repositories, so if you're a Debian user like me you're in luck:

server:~# apt-cache search cloc$
cloc - statistics utility to count lines of code
server:~# apt-cache search sloccount$
sloccount - programs for counting physical source lines of code (SLOC)
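Once installed, both tools are run by simply pointing them at the source tree; example invocations (the path is just an example):

server:~# cloc /var/www/wp-website1
server:~# sloccount /var/www/wp-website1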

Well that's all folks, Cheers en happy counting 😉

FreeBSD Jumbo Frames network configuration short how to

Wednesday, March 14th, 2012

FreeBSD Jumbo Frames Howto configure FreeBSD

Recently I wrote a post on how to enable Jumbo Frames on GNU / Linux, therefore I thought it would be useful to write how the Jumbo Frames network boost can be achieved on FreeBSD too.

I will skip the details of what Jumbo Frames are, as I explained them thoroughly in the previous article. Just in short, to remind you what Jumbo Frames are and why you might need them: they are a way to increase the network MTU transfer frames from an MTU of 1500 to an MTU of 9000 bytes.

It is interesting to mention that, according to the specifications, the maximum Jumbo Frames MTU possible for assignment is MTU=16128.
Just like on Linux, to be able to take advantage of the bigger Jumbo Frames increase in network throughput, you need to have gigabit NIC card(s) on the router / server.

1. Increasing MTU to 9000 to enable Jumbo Frames "manually"

Just like on Linux, the network tool to use is ifconfig. For those who don't know, ifconfig on Linux is part of the net-tools package and was rewritten from scratch especially for the GNU / Linux OS, whereas BSD's ifconfig is based on source code taken from 4.2BSD UNIX.

As you know, network interface naming on FreeBSD is different; there is no strict naming like on Linux (eth0, eth1, eth2), rather the interfaces are named after the NIC card vendor's driver, for instance an Intel(R) PRO/1000 NIC is em0, a RealTek is rl0 etc.

To set Jumbo Frames Maximum Transmission Units of 9000 on a FreeBSD host with Realtek and Intel gigabit ethernet cards use:

freebsd# /sbin/ifconfig em0 192.168.1.2 mtu 9000
freebsd# /sbin/ifconfig rl0 192.168.2.2 mtu 9000

!! Be very cautious here, as if you're connected to the system remotely over ssh you might lose the connection to it because of broken routing.

To prevent routing loss problems if you're executing the above two commands remotely, you'd better run them in a GNU screen session:

freebsd# screen
freebsd# /sbin/ifconfig em0 192.168.1.2 mtu 9000; /sbin/ifconfig rl0 192.168.2.2 mtu 9000; \
/etc/rc.d/netif restart; /etc/rc.d/routed restart

2. Check MTU settings are set to 9000

If everything is fine the commands will return empty output; to check further that the MTU is properly set to 9000, issue:

freebsd# /sbin/ifconfig -a | grep -i em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
freebsd# /sbin/ifconfig -a | grep -i rl0
rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000

3. Reset routing for default gateway

If you have some kind of routing assigned to the em0 and rl0 network interfaces, it will be affected by the MTU change and the routing will be gone. To reset the routing to the previously assigned one, you have to restart the BSD init script taking care of assigning routing at system boot time:

freebsd# /etc/rc.d/routing restart
default 192.168.1.1 done
add net default: gateway 192.168.1.1
Additional routing options: IP gateway=YES.

4. Change MTU settings for NIC card with route command

There is also a way to assign a higher MTU without "breaking" the working routing, i.e. avoiding network downtime, with the BSD route command:

freebsd# grep -i defaultrouter /etc/rc.conf
defaultrouter="192.168.1.1"
freebsd# /sbin/route change 192.168.1.1 -mtu 9000
change host 192.168.1.1

5. Finding the new MTU NIC settings on the FreeBSD host

freebsd# /sbin/route -n get 192.168.1.1
route to: 192.168.1.1
destination: 192.168.1.1
interface: em0
flags: <UP,HOST,DONE,LLINFO,WASCLONED>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 9000 1009

6. Set Jumbo Frames to load automatically on system load

To make the increased MTU of 9000 for Jumbo Frames support permanent on a FreeBSD system, the /etc/rc.conf file is used:

The variables for the em0 and rl0 NICs are ifconfig_em0 and ifconfig_rl0.
The lines to place in /etc/rc.conf should be similar to:

ifconfig_em0="inet 192.168.1.2 netmask 255.255.255.0 media 1000baseTX mediaopt half-duplex mtu 9000"
ifconfig_rl0="inet 192.168.2.2 netmask 255.255.255.0 media 1000baseTX mediaopt half-duplex mtu 9000"

Change in the above lines the IP addresses and the netmask 255.255.255.0 to your corresponding addresses and netmask.
Also, in the above example you see the half-duplex ifconfig option set instead of full-duplex in order to prevent duplex mismatches. full-duplex could be used instead if you're completely sure the host on the other side is configured to support full-duplex connections. Otherwise, if you try to set full-duplex with the other side set to half-duplex or auto-duplex, a duplex mismatch will occur. If this happens, instead of taking advantage of the increased Jumbo Frames MTU, the network connection could become slower than originally with the standard ethernet MTU of 1500. Another bad side effect of ending up with a duplex mismatch can be a high number of lost packets and degraded throughput…

7. Setting Jumbo Frames for interfaces assigning dynamic IP via DHCP

You may need to assign an MTU of 9000 to gigabit network interfaces which receive their TCP/IP network configuration from a DHCP server.
First, tell the em0 and rl0 network interfaces to dynamically assign IP addresses via the DHCP protocol by adding in /etc/rc.conf:

ifconfig_em0="DHCP"
ifconfig_rl0="DHCP"

Secondly, make two files, /etc/start_if.em0 and /etc/start_if.rl0, and include in each file respectively:

ifconfig em0 media 1000baseTX mediaopt full-duplex mtu 9000
ifconfig rl0 media 1000baseTX mediaopt full-duplex mtu 9000

Copy / paste in root console:

echo 'ifconfig em0 media 1000baseTX mediaopt full-duplex mtu 9000' >> /etc/start_if.em0
echo 'ifconfig rl0 media 1000baseTX mediaopt full-duplex mtu 9000' >> /etc/start_if.rl0

Finally, to load the new MTU for both interfaces, reload the IPs with the increased MTUs:

freebsd# /etc/rc.d/routing restart
default 192.168.1.1 done
add net default: gateway 192.168.1.1

8. Testing if Jumbo Frames is working correctly

To test if the MTU-sized packets are transferred correctly through the network you can use ping or tcpdump.

a.) Testing Jumbo Frames enabled packet transfers with tcpdump

freebsd# tcpdump -vvn | grep -i 'length 9000'

You should get output like:

16:40:07.432370 IP (tos 0x0, ttl 50, id 63903, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 85825:87285(1460) ack 668 win 14343
16:40:07.432588 IP (tos 0x0, ttl 50, id 63904, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 87285:88745(1460) ack 668 win 14343
16:40:07.433091 IP (tos 0x0, ttl 50, id 63905, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 23153:24613(1460) ack 668 win 14343
16:40:07.568388 IP (tos 0x0, ttl 50, id 63907, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 88745:90205(1460) ack 668 win 14343
16:40:07.568636 IP (tos 0x0, ttl 50, id 63908, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 90205:91665(1460) ack 668 win 14343
16:40:07.569012 IP (tos 0x0, ttl 50, id 63909, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 91665:93125(1460) ack 668 win 14343
16:40:07.569888 IP (tos 0x0, ttl 50, id 63910, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 93125:94585(1460) ack 668 win 14343

b.) Testing if Jumbo Frames are enabled with ping

Testing Jumbo Frames with ping command on Linux

linux:~# ping 192.168.1.1 -M do -s 8972
PING 192.168.1.1 (192.168.1.1) 8972(9000) bytes of data.
9000 bytes from 192.168.1.1: icmp_req=1 ttl=52 time=43.7 ms
9000 bytes from 192.168.1.1: icmp_req=2 ttl=52 time=43.3 ms
9000 bytes from 192.168.1.1: icmp_req=3 ttl=52 time=43.5 ms
9000 bytes from 192.168.1.1: icmp_req=4 ttl=52 time=44.6 ms
--- 192.168.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 2.397/2.841/4.066/0.708 ms

If you instead get an output like:

From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- 192.168.1.1 ping statistics ---
0 packets transmitted, 0 received, +4 errors

This means that only packets with a maximum MTU of 1500 could be transmitted and hence something is not okay with the Jumbo Frames config.
Another helpful command for debugging MTU, and for showing which host in a hop queue supports jumbo frames, is Linux's traceroute.

To debug a path between host and target, you can use:

linux:~# traceroute --mtu www.google.com
...

If you want to test the Jumbo Frames configuration from a Windows host, use the MS Windows ping command like so:

C:\>ping 192.168.1.2 -f -l 8972
Pinging 192.168.1.2 with 8972 bytes of data:
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Ping statistics for 192.168.1.2:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 2ms, Average = 2ms

Here the -l 8972 value is actually equivalent to an MTU of 9000: 8972 = 9000 - 20 (20-byte IP header) - 8 (ICMP header).
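The same don't-fragment ping test can also be run from the FreeBSD box itself; on FreeBSD's ping the flag that sets the Don't Fragment bit is -D (the address is the example one used above):

freebsd# ping -D -s 8972 192.168.1.1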

How to mount ISO image files in Graphical Environment (GUI) on Ubuntu and Debian GNU/Linux

Saturday, January 14th, 2012

Mounting ISO files in Linux is easy with the mount command, however remembering the exact command one has to issue is a hard task, because mounting ISO files is not a common task.

Mounting ISO files directly by clicking on the ISO file is very nice, especially for lazy people uninitiated with the command line 😉

Besides that, I'm sure many Windows users are curious whether there is an equivalent program to DaemonTools for Linux / BSD*.

The answer to this question is YES!
There are two major programs which can be used as a DaemonTools substitute on Linux:

These are Furius ISO Mount and AcetoneISO.
AcetoneISO is better known; I've used it some long time ago and, if I'm correct, it used to be one of the first ISO mount GUI programs for Linux. There is also a project called GMount-ISO (GMountISO) which, at the time of writing this article, seems to be dead (at least I couldn't find the source code).

Luckily, Furius ISO Mount and AcetoneISO are pretty easy to install and either one of the two nowadays exists in most Linux distributions.
The programs can probably also be run on the BSD platform quite easily using the BSD Linux emulation.
If someone has tried mounting ISOs with a GUI on Free/Net/OpenBSD, I'd be interested to hear how.

1. Mount ISO files GUI in GNOME with Furius ISO Mount

Furius ISO Mount is a simple Gtk+ interface to the mount -t iso9660 -o loop command.

To start using the program on Debian / Ubuntu, install it with apt;

debian:~# apt-get install furiusisomount
The following extra packages will be installed:
fuseiso fuseiso9660 libumlib0
The following NEW packages will be installed:
furiusisomount fuseiso fuseiso9660 libumlib0

To access the program in GNOME after install use;

Applications -> Accessories -> Furious ISO Mount

Screenshot ISO Mount Tool Debian GNU/Linux Screenshot
 

When mounting, it is important to choose the Loop option to mount the ISO instead of Fuse.

After the program is installed, to associate (.iso) ISO files so they are permanently opened with furiusisomount, right-click the .iso file and choose Open With -> Other Application -> (Use a custom command) -> furiusisomount

GNOME Open with menu Debian GNU / Linux

2. Mount ISO Files in KDE Graphical Environment with AcetoneISO

AcetoneISO is built on top of KDE's Qt library and is way more feature rich than Furius ISO Mount.
Installing AcetoneISO on Ubuntu and Debian is done with:

debian:~# apt-get install acetoneiso
The following NEW packages will be installed:
acetoneiso gnupg-agent gnupg2 libksba8 pinentry-gtk2 pinentry-qt4
0 upgraded, 6 newly installed, 0 to remove and 35 not upgraded.
Need to get 3,963 kB of archives.
After this operation, 8,974 kB of additional disk space will be used.
...

Screenshot Furius ISO Mount Tool Debian GNU/Linux ScreenShot

AcetoneISO supports:
 

  • conversion between different ISO formats
  • burn images to disc
  • split ISO image volumes
  • encrypt images
  • extract password protected files

The complete list of the rich functionality AcetoneISO offers can be found at http://www.acetoneteam.org/viewpage.php?page_id=6
To start the program via the GNOME menus use;

Applications -> Accessories -> Sound & Video -> AcetoneISO

I personally don't like AcetoneISO, as I'm not a KDE user and I see the functionality this program offers as too rich and mostly unnecessary for the simple purpose of mounting an ISO.

3. Mount ISO image files using the mount command

If you're a console guy and still prefer mounting ISOs with the mount command instead of using fancy GUI stuff, use:

# mount -t iso9660 -o loop /home/binary/someiso.iso /home/username/Iso_Directory_Name
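And to detach the image once you're done with it:

# umount /home/username/Iso_Directory_Name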

 

How to change from default main menu text to another text in Joomla

Tuesday, December 21st, 2010

To change the Main Menu link menu entry in Joomla, from the Joomla administrator I tried:

Menus -> Main Menu -> Menus

I changed the title but the change didn't appear on my Joomla based website.

I tried to change it directly in the source code of the website by looking for the 'Main Menu' string with:

debian:/home/mysite/www# grep -rli 'Main Menu' *

but it appeared too complicated, and after trying a few string changes in a few files I decided to drop this kind of approach.

A bit of investigation online led me to how to achieve what I was trying directly from Joomla.

Here is how. In Joomla administrator move to:

Extensions -> Module Manager

In the list shown by the Module Manager, the modules appear under the Module Name column; there you have to click on the Main Menu text and change it to whatever you like.

The new text you entered will appear on the Joomla website immediately, enjoy.