Posts Tagged ‘doesn’

HasciiCam – a program supposed to stream ASCII video over the network on GNU / Linux

Tuesday, May 22nd, 2012

Richard M. Stallman (RMS) Face portrait rendered in ASCII art from a video with hasciicam
To continue with my recent ASCII-centered articles, I found hasciicam.
hasciicam is a program to stream ASCII video over the network on Linux, and it can probably easily be made to work on FreeBSD too.

The project concept is interesting from a fun (play) point of view, however it is not too usable, as we all know faces rendered in ASCII characters don't look too pretty.

Below is the Debian (Squeeze) package description:

noah:~# apt-cache show hasciicam|grep -i description -A 7
Description: (h)ascii for the masses: live video as text
Hasciicam makes it possible to have live ASCII video on the web. It
captures video from a tv card and renders it into ascii, formatting the
output into an html page with a refresh tag or in a live ASCII window or
in a simple text file as well, giving the possibility to anybody that has a
bttv card, a Linux box and a cheap modem line to show a live ASCII video
feed that can be browsable without any need for plugin, java etc.
Homepage: http://ascii.dyne.org/

On the hasciicam project webpage, the hardware you need to have is stated as:
 

"As hardware you need to have a webcam or a videocard supported by "video 4 linux", most of the gear you can buy around should work well."

To install and test it I run:

noah:~# apt-get --yes install hasciicam

Though the project website states it is supposed to display video fine with most 'Linux ready' webcams, it didn't with this very standard one.

Here is the exact WebCamera model as identified to the kernel:

noah:~# dmesg|grep -i camera
[ 1.433661] usb 2-2: Product: USB2.0 Camera
[ 10.107840] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1e4e:0102)
[ 10.110660] input: USB2.0 Camera as /devices/pci0000:00/0000:00:1d.7/usb2/2-2/2-2:1.0/input/input11

By the way, I use the very same camera daily for Skype video calls, and the camera also works with no problems for saving video or pictures inside Cheese.

The just-installed deb contains only one binary file, /usr/bin/hasciicam. To test it with the camera I issued:

noah:~# hasciicam -d /dev/video0
HasciiCam 1.0 - (h)ascii 4 the masses! - http://ascii.dyne.org
(c)2000-2006 Denis Roio < jaromil @ dyne.org >
watch out for the (h)ASCII ROOTS

Device detected is /dev/video0
USB2.0 Camera
1 channels detected
max size w[640] h[480] - min size w[48] h[32]
Video capabilities:
VID_TYPE_CAPTURE can capture to memory
!! error in ioctl VIDIOCGMBUF: : Invalid argument

Unfortunately, as you can see from the output, it failed to capture from the web camera (the VIDIOCGMBUF ioctl it relies on belongs to the old Video4Linux 1 API, while this UVC camera speaks Video4Linux 2).
The exact camera, besides its kernel detection naming, is a cheap external USB 2.0 (fake brand / no-name) "universal" Web PC Camera (SUPER .3mega pixel).

For those who have further interest in building and installing hasciicam on Linux platforms other than Debian and Ubuntu, or whoever wants to look at the code, check the project webpage. For those who are less of a programmer (like me): the project is written in the C programming language and uses aa-lib to render the video to ASCII.

On the site you will notice two totally schizophrenic-looking pictures of, presumably, the project's head developer …

hasciiart video streamed ASCII screenshot of some crazy looking guy smoking marijuanna or smth

As I read in the hasciicam manual page (man hasciicam), it is said to be able to generate plain-text ASCII and HTML files, as well as write the output directly to the console, which can later probably be streamed over the network.
Sadly, as it didn't work with my camera, I couldn't do any testing of its network capabilities.

Streaming of ASCII could be done by pushing the .html output to a webserver and setting up a bit of PHP or JavaScript to loop and refresh the browser over the uploaded files every second or so.
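A minimal sketch of such a push loop (assuming hasciicam writes its HTML output to /tmp/hascii.html and that you have passwordless scp access to the webserver; both the path and the host are hypothetical):

while :; do
# upload the freshly rendered ASCII frame into the webserver docroot
scp /tmp/hascii.html user@webserver:/var/www/ascii/video-ascii.html
sleep 1
done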

Also, I assume the ASCII video output saved in a plain console could be streamed via netcat or some tiny scripted Perl or bash script and directly observed via a telnet or ssh connection.
One playful way I can think of to check a stored video without the use of FTP is to login via ssh and do:

$ ssh someuser@somehost
$ watch -n 1 "cat video-ascii.html"

🙂
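The netcat idea could look something like the following sketch (the output file path and the port 9999 are assumptions; note the -l -p syntax is for the traditional netcat, while the OpenBSD variant takes just nc -l 9999):

# on the server: replay the stored ASCII output to whoever connects
$ while :; do cat /tmp/video-ascii.txt; sleep 1; done | nc -l -p 9999
# on the client machine:
$ nc server-host 9999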

Well, something disturbing about hasciicam (from a purely Christian point of view) is that it was developed by some kind of non-profit organization called RastaSoft; on the project website, some of its authors have written JAH BLESS.

As I didn't succeed in seeing it work, I'll be interested to hear from anyone who has read this article, given it a try, and can report the web camera model used.

Facebook use in organizations is harmful to company business – How to block Facebook access on a company or organization network with Linux routers

Wednesday, May 2nd, 2012

Facebook harms company and organization employee efficiency picture, Falling company efficiency diagram due to facebook employee use

I don't know if someone has thought about this topic before, but in my view Facebook use in organizations has a negative influence on a company's overall efficiency!
Think for a while: facebook's website is one of the largest Internet-based "people time-stealing machines", so to say. I mean, most people use facebook for pretty much useless stuff on a daily basis (don't they?). The whole original idea of facebook was to be a hang-out site for college people with a lot of time to spend on nothing.
Yes, it is true that some companies use facebook successfully for their advertising purposes and for spreading awareness of a company brand or product name, but it is also true that many people in company administration jobs, like secretaries, accountants and probably even CEOs, lose a great deal of time on facebook's useless games, picture viewing etcetera.

Even people in government administration jobs who have access to the internet often access facebook from their workplace. Not to mention, the mobility of people nowadays doesn't even require facebook to be accessed from a desktop PC. Many people employed within companies who do not have to work in front of a computer screen already have modern mobile "smart phones", as business people incorrectly call these mini computer devices, which allow them to browse the NET, including facebook.

Sadly, Microsoft (.NET) programmers and many programmers on various other platforms, software beta testers and sysadmins are starting to adopt this "facebook, lose your time for nothing" culture. Many of my friends actively use Facebook, (probably) because they're feeling lonely in front of the computer screen and they want to have interaction with someone.

Anyways, the effect of this constant fb use and similar social networks is clear. If the employed personnel in a company have to do their work on a computer or behind any Internet-plugged device, a big part of the use of the device is being 'invested' in facebook to kill some time, instead of investing the same time in innovation within the company or doing the assigned tasks in the best possible way.

Even those who use facebook only occasionally from their workplace (by occasionally I mean when they don't have any work to do at the workplace) are constantly distracted (their focus on work stolen) by the open browser window hanging around; respectively, when it comes time to do some kind of work, their work efficiency drops severely.
You might wonder how I know that an open facebook browser tab would interact badly with the rest of the employee's work. Well, let me explain. It's a well-known, scientifically proven fact that the human mind is not designed to do multiple tasks simultaneously (we're not computers, though even computers don't work perfectly when simultaneous tasks are at hand).
Therefore, using facebook in parallel with their daily job, most people nowadays try to "multi-task" their job, and hence this is reflected in poor work productivity per employee. The chain result of the worsened productivity per employee is then seen at the end of the fiscal quarter or fiscal year in bad productivity levels, bad or worsened quality of product and hence poor fiscal financial results.

I've worked some time ago for a company whose CEO had realized that the use of certain Internet resources like facebook, gmail and yahoo mail hurts employee work productivity, and therefore the executive directors asked me to filter out facebook, GMAIL and mail.yahoo, as well as a few other websites which consumed a big portion of the employees' time …
Well, apparently this CEO was smart and realized the harm these internet-based resources did to his business. Nowadays, however, many company head executives have not realized the bad effect of the heavy use of public internet services on their work force, and never ask the system administrator to filter out these "employee efficiency thieves".

I hope this article will eventually be read by someone at a middle or small-sized company with deteriorating efficiency, and that it will motivate some companies to introduce an anti-facebook and gmail use policy to boost up the company performance.

As one can imagine, if you sum up all the harm facebook has imposed on companies all around the world by simply tempting employees to do facebooking and not their work, this definitely worsens the already severe economic crisis raging around …
The topic of how facebook use destroys many businesses is quite huge, and I'm probably missing a lot of harmful aspects to business that can be imposed by just a simple "innocent facebook use", so I will be glad to hear from people in the comments whether anyone at all benefits from facebook use in a company office (I seriously doubt there is even one).

Suppose you are a company that does a big portion of its job behind a computer screen over the internet, via a Software as a Service internet-based service, and suppose you have a project deadline you have to meet. The project deadline is way more likely to be met if you filter out facebook.
Disabling employees' access to facebook, adding a company policy to prohibit social network use, and rules & regulations prohibiting time-consuming internet spaces should easily produce good productivity results for a company.
Though the employees can still find a way to access their favourite out-of-the-job internet services, it will be way harder.
If the employees' work progress is monitored by installed cameras, there won't be many people willing to cheat and use Facebook, Gmail or any other service prohibited by the company's internal codex.

Though these are draconian measures, my personal view is that it's better for a company to have such a policy, instead of paying its employees to browse facebook….

I'm not aware what the situation is within many of the companies nowadays and how many of them prohibit fb, hyves, google plus and the other kinds of "anti-social" networks.
But I truly hope more and more organization chairmen / company management will comprehend the damage facebook does to their business and will issue a new policy to prohibit the use of facebook and other alike shitty services.

In the meantime, for those running an organization that routes its traffic through a GNU / Linux powered router and who'd like to prohibit facebook use to increase company employees' efficiency, use these few lines of bash code + iptables:

#!/bin/sh
# Simple iptables firewall rules to filter out www.facebook.com
# Leaving www.facebook.com open from your office will have impact on employees output ;)
# Written by hip0
# 05.03.2012
# look up the network block (CIDR) which facebook's IP belongs to
get_fb_network=$(whois 69.63.190.18 | grep CIDR | awk '{ print $2 }')
# drop traffic originating from the router box itself ...
/sbin/iptables -A OUTPUT -p tcp -d "${get_fb_network}" -j DROP
# ... and, since this is a router, also the traffic forwarded for the LAN clients
/sbin/iptables -A FORWARD -p tcp -d "${get_fb_network}" -j DROP

Here is also the same tiny filter-out-facebook shell script (blocks access to facebook) as a standalone script.

If the script's logic is followed, I guess facebook can easily be blocked on other company networks too, where the router runs CISCO, BSD etc.
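For instance, a rough FreeBSD equivalent of the iptables script above, using ipfw, might look like this (a sketch only; the rule number 100 is arbitrary and I haven't tested it):

#!/bin/sh
# look up facebook's network block and deny TCP traffic towards it with ipfw
fb_net=$(whois 69.63.190.18 | grep CIDR | awk '{ print $2 }')
/sbin/ipfw add 100 deny tcp from any to ${fb_net}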
I will be happy to hear if someone has done research on how much a company's efficiency increases when facebook gets filtered out in the company office. My guess is that efficiency will increase by at least 30% as a result of prohibiting just facebook.

Please drop me a comment if you have an argument against or for my thesis.

Text mode (console) browsing with tabs with Elinks / Text browsers (lynx, elinks, links and w3m) – useful HTTP debugging tools for Linux and FreeBSD servers

Friday, April 27th, 2012

These last days, I'm starting to think GUI use is making me brainless, so I'm getting back to my old habits of using the console.
I still remember with a grain of nostalgia how much more efficient I used to be when the way I interacted with my computer was primarily a text-mode console.
Actually, I'm starting to get the idea that the newer a piece of software is, the more inefficient it makes your use of the computer, not to mention that the hardware resources required by newer software are constantly increasing.

With this said, I started occasionally browsing again like in the old days, using the links text browser.
In the old days I mostly used lynx and its more advanced "brother" text browser, links.
The main difference between lynx and links is that lynx does not have any support for the terrible "javascript", whereas links supports most of JavaScript ver 2.
Also, links has Midnight Commander-like pull-down menus at the top of the screen, handy for people who prefer some more interactivity.

In the past, I remember, I also used to browse graphically in normal consoles (ttys) with a hacked version of links called xlinks, a variation of links suitable for people who would like to have a graphical browser in the console (ttys).

I used xlinks quite heavily in the past, when I had a slower computer, a P166Mhz with 64MB of memory and a 2.5 GB HDD (what times those were, boy, what times).
Maybe when I have time I will install it on my PC and start using it again like in the old days, to boost my computer use efficiency…
I remember the only major xlinks downside was that it didn't include support for Adobe Flash (though this is due to the bad non-free nature of Adobe software and its lack of proper support for free software, and not a failure of the xlinks developers). Anyways, for me this wasn't big trouble, since ex-Macromedia (Adobe) Flash support is not something essential for most of my work…

links2 is actually the name of links version 2. elinks emerged later (if I remember correctly, as a fork of links).
elinks' difference from links consists in that it supports tabbed browsing as well as colors (the links browser displays results in monochrome).

Having tabbed browsing support in a tty console is a great thing…
I personally believe text browsing, if properly used, can in many ways outperform graphical browsing in terms of performance and time spent to obtain data. I'm convinced text browsing is superior for two reasons:
1. With text there are way fewer elements to obstruct your attention.
– No annoying graphical flash banners, no annoying attention-grabbing pictures

2. Navigating web pages using the keyboard is more efficient than the mouse.
– Using keyboard shortcuts is always quicker than the mouse; generally the keyboard has always been a quicker way to access computer commands.

Another reason to use text browsing is that it is mostly the text part of a page that matters; most pages that provide images to better explain a topic are bloated (this is my personal view though, I'm sure designer guys will argue with me :D).
Here is a screenshot of my links text browser in action. I'm sorry the image is a bit unreadable, but after taking a screenshot of the console and resizing it with GIMP, this is what I got …

Links text console browser screenshot with 2 tabs opened Debian GNU / Linux

For all those new to Linux who haven't tried text browsing yet, and for those interested in computer history, I suggest you install and give a try to the following text browsers:
 

  • lynx (supports colorful text console browsing)
    lynx text console browser Debian Squeeze GNU / Linux Screenshot

  • links
    Links www text console browser screenshot on Debian Linux

  • elinks (supports colors-filled text browsing and tabs)
    elinks opened duckduckgo.com google alternative search engine in mlterm terminal Debian Linux

  • w3m
    w3m one of the oldest text console browsers screenshot Debian Linux Squeeze 6.2

By the way, having the 4 text browsers installed is very useful for debugging purposes for system administrators too, so in any case I think these 4 web browsers are absolutely required software for newly installed GNU / Linux or BSD* based servers.
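For example, here are two quick HTTP-debugging one-liners of the kind I have in mind (the URLs are placeholders):

# dump a page as plain text, e.g. to check a vhost answers from the server itself
linux:~# lynx -dump http://localhost/ | head -20
# fetch only the HTTP response headers of a page
linux:~# lynx -head -dump http://www.your-domain-url.com/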

For Debian and the derivatives Linux distributions, the 4 browsers are available as deb packages, so install them with following apt 1 liner:
 

debian:~# apt-get --yes install w3m elinks links lynx
….

FreeBSD users can install the browsers from the ports tree, using cmd:
 

freebsd# cd /usr/ports/www/w3m
freebsd# make install clean
….
freebsd# cd /usr/ports/www/elinks
freebsd# make install clean
….
freebsd# cd /usr/ports/www/links
freebsd# make install clean
….
freebsd# cd /usr/ports/www/lynx
freebsd# make install clean
….

The tabs functionality in links appeared somewhere around 2000 or 2001 (at least that was the first time I saw links with tabbed browsing enabled). The first time I saw links support opening multiple pages under tabs within the same screen was on RedHat Linux 9.

Opening multiple pages in tabs in the text browser is done by pressing the t key and typing in the desired URL to open inside.
For more than 2 tabs, t has to be pressed again and the same procedure goes on and on.
It was pretty hard for me to figure out how to do text browsing with tabs; though I found a way to open new tabs, it took me some 10 minutes of pondering how to switch between the newly opened links browser tabs.

Hence, I thought it would be helpful to mention here how tabs can be switched in the links text browser. Actually, it turned out to be pretty easy to switch tabs back and forward.

Moving one tab backwards is done with the < key, whereas switching one tab forward is done with the > key.

On UK and US QWERTY keyboard layouts, moving a tab backward or forward is done by holding shift and pressing < or > respectively (both keys held simultaneously).
 

How to upgrade single package with their dependencies on Debian and Ubuntu Linux

Friday, March 16th, 2012

Debian GNU / Linux apt-get upgrade a package selection of a whole bunch of packages ready to upgrade apt artistic logo

Are you a Debian System Administrator who recently ran apt-get update && apt-get upgrade, finding out there are plenty of new packages for upgrade? Do you need only a pre-selected number of packages to be upgraded with apt?
I ran apt-get update && apt-get upgrade on one of our company Debian servers, just to see there were a number of packages to be upgraded, among which there were some I didn't want to upgrade. Here is a little paste output from apt-get upgrade:

debian:~# apt-get update && apt-get upgrade
Hit http://security.debian.org squeeze/updates Release.gpg
...
Hit http://security.debian.org squeeze/updates/main amd64 Packages
Fetched 128 kB in 0s (441 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
Do you want to continue [Y/n]
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

At first sight it seems logical that issuing apt-get upgrade packagename would upgrade only a single package with its package dependencies, instead of the whole group of packages above. However, doing:
apt-get upgrade imagemagick will still try to upgrade all the packages, instead of just imagemagick and its dependency package libmagickcore3:

debian:~# apt-get upgrade imagemagick
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
at imagemagick libdbd-pg-perl libfreetype6 libmagickcore3 libmagickcore3-extra libmagickwand3 libmysqlclient16 mysql-client
mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1
14 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Do you want to continue [Y/n]

Doing an all-packages upgrade is not a good idea in my case, since upgrading mysql-server will require a MySQL server restart (something we cannot afford to do right now) on this production server.
A MySQL server restart during upgrade is never a good idea, especially on productive, busy (heavily loaded) SQL servers.
A restart of a MySQL server serving thousands of requests per second can often lead to crashed tables and hence temporary server downtime etc.

Still, it is a good idea to upgrade the rest of the packages to their newer versions; for example, imagemagick, at, libfreetype6 and so on.

In order to upgrade only these 3 and their respective package dependencies, issue:

debian:~# apt-get --yes install imagemagick at libfreetype6

Repeat the apt-get install command, passing every single package name you want upgraded, and voila, you're done :).
Be sure the apt-get install packagename upgrade doesn't also require an upgrade of mysql-server, mysql-client, mysql-common or mysql-server-core-5.1, or any of the package names you want to preserve from upgrading.
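A handy way to double-check beforehand is to run apt-get in simulation mode and make sure nothing MySQL-related shows up among the planned actions (a sketch):

debian:~# apt-get --simulate install imagemagick at libfreetype6 | grep -i mysql

If the grep prints nothing, the selected upgrade will leave the MySQL packages untouched.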

Boost local network performance (Increase network thoroughput) by enabling Jumbo Frames on GNU / Linux

Saturday, March 10th, 2012

Jumbo Frames boost local network performance in GNU / Linux

So what are Jumbo Frames? And why, when and how can they increase the network throughput on Linux?

Jumbo Frames are Ethernet frames with more than 1500 bytes of payload. They can carry up to 9000 bytes of payload. Many Gigabit switches and network cards support them.
Jumbo frames are a networking standard on many educational networks like AARNET. Unfortunately, most commercial ISPs don't support them, and therefore enabling Jumbo frames will rarely increase bandwidth throughput for transfers over the internet.
Hopefully, in the years to come, with the constant increase of bandwidth and betterment of connectivity, jumbo frame transfers will be supported by most ISPs as well.
Where Jumbo frames support is just great is small local home networks and company / corporation office intranets.

Thus enabling Jumbo Frames is absolutely essential for "local" ethernet networks where large file transfers occur frequently: networks where there is often video or audio streaming in high (e.g. HD) quality, with servers running file sharing services like Samba, local FTP sites, webservers etc.

One other advantage of enabling jumbo frames is the reduction of general server overhead and a decrease in CPU load / (CPU usage) when transferring large or enormously sized files. Therefore, having jumbo frames enabled on office network routers with GNU / Linux or any other *nix OS is vital.

Jumbo Frames traffic is supported in the GNU / Linux kernel since version 2.6.17+; in earlier 2.4.x kernels it was possible through external third-party kernel patches.

1. Manually increase MTU to 9000 with ifconfig to enable Jumbo frames

debian:~# /sbin/ifconfig eth0 mtu 9000

The default MTU on most GNU / Linux distributions (if not all) is 1500; to check the currently set MTU, use ifconfig:

linux:~# /sbin/ifconfig eth0|grep -i mtu
UP BROADCAST MULTICAST MTU:1500 Metric:1

To take advantage of Jumbo Frames, all that has to be done is increase the default Maximum Transmission Unit from 1500 to 9000

For those who don't know, the MTU is the largest physical packet size that can be transferred over the network, measured by default in bytes. If information that exceeds, let's say, an MTU of 1500 (bytes) has to be transferred over the network, it will be chopped up and transferred in several packets, each of size 1500.

MTUs differ on different network topologies. Just for info, here are the main default MTUs for the main network types existing today:
 

  • 16 MBit/Sec Token Ring – default MTU (17914)
  • 4 Mbits/Sec Token Ring – default MTU (4464)
  • FDDI – default MTU (4352)
  • Ethernet – def MTU (1500)
  • IEEE 802.3/802.2 standard – def MTU (1492)
  • X.25 (dial up etc.) – def MTU (576)
  • Jumbo Frames – def max MTU (9000)

Setting the MTU packet frames to 9000 to enable Jumbo Frames is done with:

linux:~# /sbin/ifconfig eth0 mtu 9000

If the command returns nothing, this most likely means the server can now communicate on eth0 with MTUs of 9000 each, and therefore the network throughput will be better. Otherwise, if the network card or its driver is not a gigabit one, the cmd will return the error:

SIOCSIFMTU: Invalid argument

2. Enabling Jumbo Frames on Debian / Ubuntu etc. "the Debian way"

a.) Jumbo Frames on ethernet interfaces with a static IP address assigned

Raising the MTU to 9000 just for one time (non-persistently) can again be done manually with ifconfig:

debian:~# /sbin/ifconfig eth0 mtu 9000

To make the setting persistent, edit /etc/network/interfaces; for each of the interfaces you would like to set Jumbo Frames on, you should have records similar to:

iface eth0 inet static
address 192.168.0.5
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

For each of the interfaces (eth1, eth2 etc.), add a chunk similar to the one above, changing the IPs, Gateway and Netmask respectively.

If the server is with two gigabit cards (eth0, eth1) supporting Jumbo frames add to /etc/network/interfaces :

iface eth0 inet static
address 192.168.0.5
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

iface eth1 inet static
address 192.168.0.6
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

b.) Jumbo Frames on ethernet interfaces with dynamic IP obtained via DHCP

Again in /etc/network/interfaces put:

auto eth0
iface eth0 inet dhcp
post-up /sbin/ifconfig eth0 mtu 9000

3. Setting Jumbo Frames on Fedora / CentOS / RHEL "the Redhat way"

Enabling jumbo frames on all Gigabit lan interfaces (eth0, eth1, eth2 …) in Fedora / CentOS / RHEL is done through files:
 

  • /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/sysconfig/network-scripts/ifcfg-eth1

etc. …
Append at the end of each respective config:

MTU=9000

[root@fedora ~]# echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth0


A quick way to set the Maximum Transmission Unit to 9000 for all network interfaces on Redhat-based distros is by executing the following loop:

[root@centos ~]# for i in /etc/sysconfig/network-scripts/ifcfg-eth*; do echo 'MTU=9000' >> $i; done

P.S.: Be sure that all your interfaces support MTU=9000; otherwise setting the MTU will return the SIOCSIFMTU: Invalid argument error.
The above loop is to be used only in case you have a group of identical machines with LAN cards supporting Gigabit networks and loaded kernel drivers supporting an MTU of up to 9000.

Some Intel and Realtek Gigabit cards support only a maximum MTU of 7000, 7500 etc., so if you own a card like this, check what the maximum MTU the card supports is and set that in the LAN device configuration.
If increasing the MTU is done on a remote server through an SSH connection, be extremely cautious, as restarting the network might leave your server inaccessible.

To check if each of the server's interfaces is "Gigabit ready":

[root@centos ~]# /sbin/ethtool eth0|grep -i 1000BaseT
1000baseT/Half 1000baseT/Full
1000baseT/Half 1000baseT/Full

If you're 100% sure there will be no troubles with enabling MTU > 1500, initiate a network reload:

[root@centos ~]# /etc/init.d/network restart
...

4. Enable Jumbo Frames on Slackware Linux

To list the ethernet devices and check they are Gigabit ones issue:

bash-4.1# lspci | grep [Ee]ther
0c:00.0 Ethernet controller: D-Link System Inc Gigabit Ethernet Adapter (rev 11)
0c:01.0 Ethernet controller: D-Link System Inc Gigabit Ethernet Adapter (rev 11)

Setting up jumbo frames on Slackware Linux can be done in two ways: the Slackware way and the "universal" Linux way:

a.) the Slackware way

On Slackware Linux, all kinds of network configuration are done in /etc/rc.d/rc.inet1.conf

Usual config for eth0 and eth1 interfaces looks like so:

# Config information for eth0:
IPADDR[0]="10.10.0.1"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
# Config information for eth1:
IPADDR[1]="10.1.1.1"
NETMASK[1]="255.255.255.0"
USE_DHCP[1]=""
DHCP_HOSTNAME[1]=""

To raise the MTU to 9000, the variables MTU[0]="9000" and MTU[1]="9000" have to be included after each interface's config block, e.g.:

# Config information for eth0:
IPADDR[0]="172.16.1.1"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
MTU[0]="9000"
# Config information for eth1:
IPADDR[1]="10.1.1.1"
NETMASK[1]="255.255.255.0"
USE_DHCP[1]=""
DHCP_HOSTNAME[1]=""
MTU[1]="9000"

Finally, restart the network for the new MTU to take effect:

bash-4.1# /etc/rc.d/rc.inet1 restart
...

b.) The "Universal" Linux way

This way works on most, if not all, Linux distributions.
Insert in /etc/rc.local:

/sbin/ifconfig eth0 mtu 9000 up
/sbin/ifconfig eth1 mtu 9000 up

5. Check if Jumbo Frames are properly enabled

There are at least two ways to display the MTU settings for eths.

a.) Grepping the MTU from ifconfig output

linux:~# /sbin/ifconfig eth0|grep -i mtu
UP BROADCAST MULTICAST MTU:9000 Metric:1
linux:~# /sbin/ifconfig eth1|grep -i mtu
UP BROADCAST MULTICAST MTU:9000 Metric:1

b.) Using ip command from iproute2 package to get MTU

linux:~# ip route get 192.168.2.134
local 192.168.2.134 dev lo src 192.168.2.134
cache mtu 9000 advmss 1460 hoplimit 64

linux:~# ip route show dev wlan0
192.168.2.0/24 proto kernel scope link src 192.168.2.134
default via 192.168.2.1

You see the MTU is now set to 9000, so the two server LANs are now able to communicate with increased network throughput.
Enjoy the accelerated network transfers 😉
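As a final sanity check, you could also verify that jumbo frames really pass end-to-end with a ping carrying a non-fragmentable jumbo-sized payload (a sketch; 192.168.2.1 is assumed to be another jumbo-enabled host on the same LAN, and 8972 = 9000 bytes MTU minus 20 bytes IP header minus 8 bytes ICMP header):

linux:~# ping -M do -s 8972 -c 3 192.168.2.1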

 

How to resolve (fix) WordPress wp-cron.php errors like "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404 / What is wp-cron.php and what it does

Monday, March 12th, 2012

fix wordpress wp-cron.php 404 HTTP error, what is wp-cron.php schedule logo

One of the WordPress websites hosted on our dedicated server constantly produces wp-cron.php 404 error messages like:

xxx.xxx.xxx.xxx - - [15/Apr/2010:06:32:12 -0600] "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404

I did not know until recently what wp-cron.php does, so I checked in google and read a bit. Many of the places I've read are a bit unclear and don't give a good explanation of what exactly wp-cron.php does. I wrote this post in the hope it will shed some more light on wp-cron.php and how this major 404 issue is solved.
So, what is wp-cron.php doing?

 

  • wp-cron.php acts like a cron scheduler for WordPress.
  • wp-cron.php is a wp file that controls routine actions for a particular WordPress install.
  • It updates the data in the SQL database on every request, every day or every hour etc. (depending on how it's set up).
  • wp-cron.php is executed automatically by default after EVERY PAGE LOAD!
  • It checks all pending comments for spam with Akismet (if akismet or a similar anti-spam plugin is installed).
  • It sends all scheduled emails (e.g. sends a commenter an email when someone replies to his comment, sends emails to newsletter subscribers etc.).
  • It posts scheduled articles online at the set day and time.

Suppose you're writing a new post and you want to take advantage of the WordPress functionality to schedule the post to appear online at a specific time:

What is wordpress wp-cron.php, Scheduling wordpress post screenshot

The "Publish immediately" field's scheduled execution is issued at the set time thanks to the periodic wp-cron.php invocation.

Another example of wp-cron.php operation is in handling the flushing of old WP HTML caches generated by some wordpress caching plugin like W3 Total Cache.
wp-cron.php takes care of dozens of other things silently in the background. That's why many wordpress plugins depend heavily on proper periodic wp-cron.php execution. Therefore, if something is wrong with wp-cron.php, this leaves a wordpress-based blog or website partially working, or not working at all.
 

Our company wp-cron.php errors case

In our case the:
212.235.185.131 - - [15/Apr/2010:06:32:12 -0600] "POST /wp-cron.php?doing_wp_cron HTTP/1.0" 404
is occurring in the Apache access.log (after each unique visitor request to wordpress!), and this is because wp-cron.php is invoked on each new site visitor request.
This puts a "vain load" on the Apache Server, which constantly attempts to invoke the script … always returning a not found 404 error.

As a consequence, the WP website experiences "weird" problems all the time. An illustration of a problem caused by the improper wp-cron.php execution is when we are adding new plugins to WP.

Let's say a new wordpress extension is downloaded, installed and enabled in order to add new, useful functionality to the site.

Most of the time this new plugin will malfunction if, for example, it is supposed to add some kind of new HTML form or change something on some or all of the wordpress-generated HTML pages.
These troubles are a result of wp-cron.php's inability to update settings in the wp SQL database after each new user request to our site.
So the newly added plugin's website functionality does not show up at all, until the WP cache directory is manually deleted with rm -rf /var/www/blog/wp-content/cache/

I don't know how this whole wp-cron.php mess occurred; however, my guess is whoever installed this wordpress messed something up in the install procedure.

Anyways, as I researched thoroughly, I read many people complaining of having experienced the same wp-cron.php 404 errors. As I read, most of those people's troubles were caused by their shared hosting prohibiting wp-cron.php execution.
It appears many shared hosting providers choose to disable the default wordpress wp-cron.php execution. The reason is probably that the script puts a heavy load on shared hosting servers and causes trouble with server overloads.

Anyhow, since our company server is a dedicated server, I can tell for sure that in our case wordpress had no restrictions on how and when wp-cron.php is invoked.
I've also seen some posts online claiming the wp-cron.php issues are caused by improper localhost records in /etc/hosts; after a thorough examination I did not find any hosts problems:

hipo@debian:~$ grep -i 127.0.0.1 /etc/hosts
127.0.0.1 localhost.localdomain localhost

You can see from the above paste that our server's /etc/hosts has a perfectly correct 127.0.0.1 record.

Changing the default way wp-cron.php is executed

As I've learned, it is generally a good idea for WordPress-based websites which get tens of thousands of visitors to alter the default way wp-cron.php is handled. Doing so will gain some efficiency and improve server hardware utilization.
Invoking the script after each visitor request can put a heavy, "useless" burden on the server CPU. On most wordpress-based websites, the script does not need to make frequent changes in the DB, as new comments on posts do not happen that often. On most wordpress installs out there, big changes in the wordpress are not common.

Therefore, a good frequency to execute wp-cron.php for wordpress blogs getting only a couple of user comments per hour is a half-hour cron routine.

To disable the automatic invocation of wp-cron.php after each visitor request, open /var/www/blog/wp-config.php and near line 30 or 40 put:

define('DISABLE_WP_CRON', true);

An important note to make here is that the position in wp-config.php where define('DISABLE_WP_CRON', true); is placed matters. If, for instance, you put it at the end of the file or near the end, the setting will not take effect.
With that said, be sure to put the define somewhere among the file's initial defines, or it will not work.
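A quick way to check where the define landed (a sketch; in a stock wp-config.php the line number printed should be well before the "That's all, stop editing!" marker):

linux:~# grep -n 'DISABLE_WP_CRON' /var/www/blog/wp-config.php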

Next, for the Apache non-root privileged user (let's say www-data, httpd or www, depending on the Linux distribution or BSD Unix type), add a cron job invoking wp-cron.php via the php CLI every half an hour:

linux:~# crontab -u www-data -e

0,30 * * * * cd /var/www/blog; /usr/bin/php /var/www/blog/wp-cron.php >/dev/null 2>&1

To assure the php CLI (Command Line Interface) interpreter is capable of properly interpreting wp-cron.php, check wp-cron.php for syntax errors with cmd:

linux:~# php -l /var/www/blog/wp-cron.php
No syntax errors detected in /var/www/blog/wp-cron.php

That's all; the 404 wp-cron.php error messages will not appear anymore in access.log! 🙂
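To verify, you could watch the log for a while and confirm the 404s are gone (a sketch; adjust the log path to your Apache vhost setup):

linux:~# tail -f /var/log/apache2/access.log | grep wp-cron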

Just for those who can't find the root cause of the /wp-cron.php?doing_wp_cron HTTP/1.0" 404 and fix the issue in some other way (I'll be glad to hear how!), there is also another, external way to invoke wp-cron.php: a request made directly to the webserver via a short cron invocation of wget or the lynx text browser.

– Here is how to call wp-cron.php every half an hour with lynx. Put inside any non-privileged user's crontab something like:

01,30 * * * * /usr/bin/lynx -dump "http://www.your-domain-url.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1

– Call wp-cron.php every 30 mins with wget:

01,30 * * * * /usr/bin/wget -q "http://www.your-domain-url.com/wp-cron.php?doing_wp_cron"

Invoking wp-cron.php less frequently saves the server from processing wp-cron.php thousands of useless times a day.

The effect of altering the way wp-cron.php works should be seen immediately, as the server load should drop a bit.
Consider that you might need to play with the script execution frequency until you get the best-fit cron timing. In my company's case there are only up to 3 new articles posted a week, hence a high frequency of wp-cron.php invocations is useless.

For a blog where new posts occur once a day, a script schedule frequency of 6 up to 12 hours should be OK.

 

Recommended logrotate practices on heavily loaded (busy) Apache Linux servers

Wednesday, March 7th, 2012

Apache logrotate Debian good configuration for heavy loaded servers

If you are the sysadmin of an Apache webserver running on Debian Linux and relying on logrotate to rotate logs, you might want to change the default way log rotation is done.

Little changes in the way Apache log files are rotated on busy servers can have positive outcomes on the overall CPU burden of the server. A good log rotation strategy can also prevent your server from occasional extra overheads or downtimes.

The way Debian GNU / Linux processes logs is well planned for small servers; however, the default Apache log rotation routine doesn't fit well for servers which process millions of client requests each day.

I happen to administrate a few servers which are constantly under heavy load and occasionally have overload troubles because of Debian's default logrotate mechanism.

To cope with the situation, I have made a few modifications to /etc/logrotate.d/apache2 and decided to share them here, hoping this might help you too.

1. Rotate the Apache access.log log file daily instead of weekly

On Debian, Apache's logrotate script is in /etc/logrotate.d/apache2

The default file content will be like so:

debian:~# cat /etc/logrotate.d/apache2
/var/log/apache2/*.log {
weekly
missingok
rotate 52
size 1G
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
/etc/init.d/apache2 reload > /dev/null
fi
endscript
}

To change the rotation from weekly to daily, change:

weekly

to

daily

2. Disable access.log log file gzip compression

By default the apache2 logrotate script is tuned to compress the rotated files (e.g. copy access.log to access.log.1 and gzip it, copy access.log to access.log.2 and gzip it etc.). On servers where logs are many gigabytes, once logrotate initiates its scheduled work it will have to compress an enormous log record of Apache requests. On very busy Apache servers, from my experience, the log can grow to approximately 8 / 10 gigabytes in just one day.
I'm sure there are even busier servers out there, whose log files grow to over 100GB for just a single day.
Gzipping a 100GB file takes an enormous toll on the CPU, and often takes a long time as well. When this logrotate gzipping occurs at a moment when the server's CPU cores are already heavily loaded from Apache serving HTTP requests, the Apache server becomes inaccessible to most of the clients.
End clients then experience various oddities, for example Apache dropped-connection errors, the webserver returning empty pages, or a simple inability to respond to the client browser.
Sometimes, as a result of the overload, even a secure shell connection to the SSHD on the server is impossible …

To prevent your server from these rotation overloads, remove logrotate's default access.log gzipping by commenting out the compress directive, i.e. change:

compress

to

#compress

3. Change the maximum number of log rotations kept by logrotate to 30

By default logrotate is configured to create and keep up to 52 rotated and gzipped access.log files; changing this to a lower number is good practice (in my view) in cases where log files grow daily to 10 or more GBs. Doing so will save a lot of disk space and reduce the chance that the hard disk gets filled up because of the multiple enormous rotated access.log files.

To tune the maximum number of kept rotated logs to 30, change:

rotate 52

to

rotate 30
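Putting the three changes together, the modified /etc/logrotate.d/apache2 would end up looking something like this (a sketch based on the stock Debian file quoted above; note that delaycompress becomes a no-op once compress is commented out):

/var/log/apache2/*.log {
daily
missingok
rotate 30
size 1G
#compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
/etc/init.d/apache2 reload > /dev/null
fi
endscript
}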

Logrotate's Apache log processing on RHEL / CentOS Linux works better on high-load servers; by default on CentOS logrotate is not configured to do log gzipping at all.

Here is the default /etc/logrotate.d/httpd script from CentOS release 5.6 (Final):

[hipo@centos httpd]$ cat /etc/logrotate.d/httpd
/var/log/httpd/*log {
missingok
notifempty
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/httpd.pid 2>/dev/null` 2> /dev/null || true
endscript
}

 

The Weekend

Monday, November 26th, 2007

On Saturday we had a Training day at Design.BG. It wasn't too interesting; the Boss spoke a lot about a new hierarchy being implanted into the corporation. Big words were said about how we should not say anymore that we make sites, in the office or out of it. He said: "We don't make sites! We make projects and solutions for the client" :). Mitko came back to Dobrich on Saturday (for the weekend) and Sunday, and we went out twice. I haven't touched computers much this weekend :). On Sunday morning I went to Liturgy, and after that we had a small walk with Stoyan. After that I watched the 3 series of "From Dusk till Dawn", a nice vampire story btw 🙂 And yes, I like the style of Tarantino.

At some point I felt very alone, like nobody cares about me. I called Lily and we spent some good time together :).

END—–

The day Today

Friday, March 28th, 2008

I got up in the morning at somewhere around 09:30. I do my physical exercises every day. What I have to say is that I'm trying to fast; you know, it's the Orthodox fasting period. Orthodox Easter should be on 27 May, if I'm not mistaken. In the morning we had Marketing II classes with a teacher called Stanislav. Nobody had done the homework and only three of us showed up for the classes, so the teacher was furious. After that we had lectures with Ruelof on the topic of Marketing Research. I didn't have much to do for work. Our project manager said they are going to give me a phone so they can reach me cheaper and easier. We were supposed to have some English classes just after the Marketing Planning lectures, but the teacher Valio said we won't have them because he has some job to do. After school I worked on the servers. Servers and stuff work just fine, thanks to God. Unfortunately my health issues continue on and on. I'm starting to lose my temper; I try to pray a little every night before I go to bed and every evening. I still hope God will hear me and heal me. Ahm, what else… in the evening I went to the 2nd-hand furniture shop of my cousin Zlatina and her husband Ivailo; I spent a little less than an hour there. After that I went to Yasho, a 1st-year colleague (a metal head and brother of one of my class mates), and we watched Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb together. I really enjoy this Kubrick film :). That's most of the day. I figured out that the qmail chkuser patch wasn't working properly on one of the servers, and I did some tweaks to make it work. Also, I'm reading a little book on MySQL, although I almost don't have time to read it. Well, that's it. Let's call it a day. END—–

Not feeling well

Saturday, May 31st, 2008

Yesterday and today I feel like hell. I'm trying to pray a bit, but still it doesn't help much; it seems like I'm in a big temptation again. To be honest, I'm sick of temptations; they have been too much for me all this time. Still, I hope God will fix things for me. END—–