Archive for the ‘Various’ Category

How to turn keyboard backlight on GNU / Linux, keyboard no backlight solution

Friday, October 20th, 2017


If you're a GNU / Linux user who happens to buy a backlit keyboard, a nice new laptop whose keyboard supports the increasingly common backlight glow, or if you happen to install GNU / Linux for a gamer friend, you might sometimes encounter a problem, even on major Linux distributions such as Debian / Ubuntu / Mint / Fedora, with the keyboard backlight not working.

Let's say you buy a Devastator II backlit keyboard or any other modern keyboard, you plug it into the Linux machine and there is no nice blinking light coming out of it; all the joy is gone, yes I know. The free software coolness would have been even more grandiose if your keyboard was shiny and glowing in color / colors 🙂

But wait, there is hope for your joy to be made complete.

To make the keyboard backlight switch on, just issue the following commands:

 

xmodmap -e 'add mod3 = Scroll_Lock'

 

# Turn on the keyboard bright lamps
xset led on

# Turns off the keyboard bright lamps
xset led off


If you want the keyboard backlight to be enabled permanently, the easiest solution is to

– add the needed command lines to /etc/rc.local

E.g. to do so, open /etc/rc.local and just add the lines before the exit 0 command:

 

vim /etc/rc.local

 

xmodmap -e 'add mod3 = Scroll_Lock'

# Turn on the keyboard bright lamps
xset led on

# Turns off the keyboard bright lamps
xset led off


If you prefer to have the keyboard's colorful backlight enabled and disabled from the X environment, let's say on GNOME, here is how to make yourself an icon that enables and disables the glowing.

That's handy because during daytime it is kind of meaningless for the keyboard to glow.

Here is the shell script:

#!/bin/bash
sleep 1
xset led 3
xmodmap -e 'add mod3 = Scroll_Lock'


I saved it as /home/hipo/scripts/backlight.sh

(don't forget to make it executable! To do so run):

 

chmod +x /home/hipo/scripts/backlight.sh


Then create  the .desktop file at /etc/xdg/autostart/backlight.desktop so that it runs the new shell script, like so:

[Desktop Entry]
Type=Application
Name=Devastator Backlight
Exec=/home/hipo/scripts/backlight.sh
Icon=system-run
X-GNOME-Autostart-enabled=true
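If you also want a single desktop icon that toggles the backlight on and off (handy since the glowing is pointless during daytime), a hypothetical toggle variant of the script could look like the sketch below; the flag-file path is just an assumption, adjust it to your liking:

#!/bin/bash
# hypothetical toggle variant: remember the last state in a flag file
STATE_FILE="$HOME/.kbd_backlight_on"
xmodmap -e 'add mod3 = Scroll_Lock'
if [ -f "$STATE_FILE" ]; then
    # backlight is currently on, switch it off
    xset led off
    rm -f "$STATE_FILE"
else
    # backlight is currently off, switch it on
    xset led on
    touch "$STATE_FILE"
fi

Point the Exec= line of a second .desktop launcher (or a panel shortcut) at that script and clicking the icon will flip the backlight state.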



How to use the find command to find files created on a specific date, and find files with a specific size on GNU / Linux

Monday, October 16th, 2017

How to use the find command to find files created on a specific date on GNU / Linux?

 

The easiest and most readable way (though not the most efficient one, especially for big hard disks with a lot of files) to do it is via:

 

find ./ -type f -ls |grep '12 Oct'

 


Example: To find all files modified on the 12th of October, 2017:

find . -type f -newermt 2017-10-12 ! -newermt 2017-10-13

To find all files accessed on the 29th of September, 2015:

$ find . -type f -newerat 2015-09-29 ! -newerat 2015-09-30

Or, files which had their permission changed on the same day:

$ find . -type f -newerct 2015-09-29 ! -newerct 2015-09-30

Note that 'c' actually refers to the inode change time (ctime), so if you haven't changed permissions or ownership of the file since it appeared, it would normally correspond to the creation date.

 

Another, more cryptic but perhaps more efficient way to find any file modified on October 12th, 2017 would be the below command:

 

find . -type f -mtime $(( ( $(date +%s) - $(date -d '2017-10-12' +%s) ) / 60 / 60 / 24 - 1 ))

 

 

 

You could also look for files between certain dates and times by creating two reference files with touch:

touch -t 0810010000 /tmp/f-example1
touch -t 0810011000 /tmp/f-example2

This will find all files modified between the dates & times of the two files in /tmp:


 

find / -newer /tmp/f-example1 -and -not -newer /tmp/f-example2

 


How to Find Files with a certain size on GNU / Linux?

 

Let's say you got cracked and someone uploaded a PHP shell file of 50296 bytes; that's a real scenario that just happened to me:

root@pcfreak:/var/www/blog/wp-admin/js# ls -b green.php 
green.php
root@pcfreak:/var/www/blog/wp-admin/js# ls -al green.php 
-rw-r--r-- 1 www-data www-data 50296 окт 12 02:27 green.php

root@pcfreak:/home/hipo# find /var/www/ -type f -size 50296c -exec ls {} \;
/var/www/blog/wp-content/themes/default/green.php
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/endless-loop/_index.html
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/common/_index.html
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/apacheroot/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/endless-loop/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/common/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/apacheroot/_index.html
/var/www/pcfreakbiz/wp-includes/css/media-views.css
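find can also match a size range rather than an exact byte count, which is handy when you only roughly know how big the uploaded file is; e.g. to list files between 40 KB and 60 KB under /var/www/ (the path and limits are just an example):

find /var/www/ -type f -size +40k -size -60k -exec ls -al {} \;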
 

 


OSCommerce how to change / reset lost admin password

Monday, October 16th, 2017


How to change / reset OSCommerce lost / forgotten admin password?

The password in OSCommerce is kept in the table "admin", so to reset the password connect to MySQL with the mysql cli client.

The first thing to do is to generate the new hash string; you can do that with a simple PHP script using the md5() function:

 

root@pcfreak:/var/www/files# cat 1.php
<?php
$pass=md5('password');
echo $pass;
?>

 

root@pcfreak:/var/www/files# php 1.php
5f4dcc3b5aa765d61d8327deb882cf99
root@pcfreak:/var/www/files#
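By the way, the same hash can be produced straight from the shell without writing a PHP script; the -n switch matters, as it stops echo from appending a newline that would change the hash:

echo -n 'password' | md5sum
5f4dcc3b5aa765d61d8327deb882cf99  -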

 

Our just generated string (for the plain-text password password) is the hash: 5f4dcc3b5aa765d61d8327deb882cf99

Next, to write the new hash string into the SQL database, we connect to MySQL:

 

$ mysql -u root -p

 


And issue the following query to set the new password hash:

 

UPDATE `DB`.`admin` SET `admin_password` = '5f4dcc3b5aa765d61d8327deb882cf99' WHERE `admin`.`admin_id` = 6;
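Note that DB above is a placeholder for your actual osCommerce database name and admin_id = 6 is just the id of the account I had to reset. If you are not sure which admin_id belongs to your administrator account, a quick lookup first will show it (assuming the standard osCommerce admin table layout):

SELECT admin_id, admin_email_address FROM `DB`.`admin`;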



How to fix unfixable broken package dependencies on Debian GNU / Linux – Fix package mismatch

Wednesday, September 27th, 2017


I just tried to upgrade my Debian Wheezy 7 to the latest stable Debian Stretch 9 without thinking too much, just replacing the word wheezy with stretch in /etc/apt/sources.list, so onwards it looked like so:
 

cat /etc/apt/sources.list

 

deb http://ftp.bg.debian.org/debian/ stretch main contrib non-free
deb-src http://ftp.bg.debian.org/debian/ stretch main

deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main 

# stretch-updates, previously known as 'volatile'
##deb http://deb.debian.org/debian/ stretch-updates main
deb-src http://deb.debian.org/debian/ stretch-updates main

 

I also made sure all the extra Google Chrome / Opera / Skype and Squeeze Backports repositories defined in files under the /etc/apt/sources.list.d directory were in place, which in my case were like so:

 

root@noah:/etc/apt/sources.list.d# ls
google-chrome.list  opera-stable.list  squeeze-backports.list
opera.list          skype-stable.list


These were commented out because they were producing extra apt update errors …

And afterwards ran as usual:

 

apt-get update
apt-get --yes upgrade


The upgrade command executed fine and a lot of packages got downloaded and reinstalled without much issue, so I thought everything would be fine and just proceeded with the attempt to finalize the distribution upgrade from major release 7 to major release 9 by running:

 

apt-get --yes dist-upgrade


But guess what, now I got dependency errors with cron and other installed packages that depend on package versions that are not going to be installed, as the apt-get tool informed me.

I tried to out-smart the dpkg dependency system and remove all the packages reported to have missing dependencies with a short bash loop, after dumping all the problematic packages showing dependency issues with commands such as:

apt-get -f dist-upgrade >> out.txt
for pkg in $(awk '{ print $1 }' out.txt); do echo "$pkg" >> to_delete.txt; done


Before proceeding further I had to manually edit a few lines in a text editor to remove some of the junk left from apt-get too.

So I was brave and just removed the dependency-missing packages with the following other for loop:

 

for i in $(cat to_delete.txt); do dpkg -r --force-all $i; done


Now I was hoping that rerunning:

 

apt-get autoremove

dpkg --configure -a

apt-get update -f
apt-get dist-upgrade -f


would no longer complain and I would just reinstall the removed packages in another for shell loop once every other package got installed.

But guess what, I was wrong … the system entered into another bunch of terrible dependency issues and got messed up so badly that there were at least 50 packages reporting a missing / broken or uninstallable deb version dependency …

I got totally angry. I knew already from experience that trying to jump over and skip a major release, e.g. upgrading Debian 7 straight to Debian 9 instead of first upgrading to Debian 8 Linux and then upgrading Debian 8 to Debian 9, has always produced the same mess, but I was lame and stupid enough to f**k it up again and was out of my mind swearing (a truly bad habit I'm not proud of) …

As the notebook had so far been working perfectly with Debian 7 and had tons of old installed software, and I was in a state where, if I restarted the system, it was very likely my ThinkPad R61 laptop wouldn't boot at all, I googled around to find a solution, unfortunately without any luck. So finally I used the good old and tested method to DO IT MYSELF and find the fix without Uncle Google's help, and by God's grace I did. After experimenting for a while with the aptitude package install / remove / update tool without much success, I finally found the solution to the totally messed up Debian package dependencies, and it all came down to simply reverting my /etc/apt/sources.list back to look like the following:

 

# deb cdrom:[Debian GNU/Linux 7.0.0 _Wheezy_ - Official amd64 CD Binary-1 20130504-14:44]/ wheezy main

##deb cdrom:[Debian GNU/Linux 7.0.0 _Wheezy_ - Official amd64 CD Binary-1 20130504-14:44]/ wheezy main

deb http://ftp.bg.debian.org/debian/ wheezy main contrib non-free
deb-src http://ftp.bg.debian.org/debian/ wheezy main

deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main

# wheezy-updates, previously known as 'volatile'
##deb http://deb.debian.org/debian/ wheezy-updates main
deb-src http://deb.debian.org/debian/ wheezy-updates main
##deb http://www.deb-multimedia.org wheezy main non-free
#deb http://ftp.debian.org/debian/ wheezy-backports main
###deb http://ftp.debian.org/debian/ wheezy-backports main contrib non-free
##deb http://dl.google.com/linux/chrome/deb/ wheezy main
#deb http://ftp2.de.debian.org/debian-volatile wheezy/volatile main
###deb http://www.deb-multimedia.org wheezy main non-free


and then a run of the following two dependency fix commands !!!!

 

aptitude upgrade --full-resolver

aptitude full-upgrade --full-resolver


After a while a Debian Linux system downgrade was initiated, the missing packages were found and downloaded from the correct wheezy repositories, and all broken and missing dependency packages were fixed !!! HOORAY IT WORKS AGAIN!!
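For the record, the safer path, which I should have taken from the start, is to step through the releases one at a time; a rough sketch (back up first and read the release notes before each step) would be:

# Debian 7 (wheezy) -> Debian 8 (jessie)
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
apt-get update && apt-get upgrade && apt-get dist-upgrade
# Debian 8 (jessie) -> Debian 9 (stretch)
sed -i 's/jessie/stretch/g' /etc/apt/sources.list
apt-get update && apt-get upgrade && apt-get dist-upgrade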

 



Create dummy packages on Debian, Ubuntu, Mint Linux how to

Tuesday, September 26th, 2017


Creating dummy packages on Debian, Ubuntu, Mint Linux in order to fulfill broken or missing package dependencies is a very useful thing to do, as many third party vendor packages such as Skype provide software compiled against a certain set of libraries for a specific Linux distribution release, and as distribution versions evolve the software becomes impossible to install because of missing dependencies.

Sadly, the third party maintainer almost always does not compile / provide a new .deb package of their software specific to the new, or sometimes older, Linux distribution release and does not consider that Linux versions change frequently, every 6 months or a year.

Hence the precompiled proprietary version of the program and the deb package provided by them depend on libraries or tools (contained within packages) that are either obsolete or do not match the package names and versions of the installed Debian / Ubuntu / Mint or other deb based distribution.

When this occurs, once you try to install the third party software you can't, because of the missing or unavailable packages within the currently installed Linux version.

Commonly the problem with the missing and unavailable packages is not due to an inability of the software to run with an older or newer (in other words, different) version of a library or tool from the ones provided by the apt repositories, but simply because the package was compiled by the third party vendor to depend on very specific versions of libraries and tools different from the ones provided by the respective Linux repositories, instead of making it cross distribution compatible.

Thus often the ugly workaround is to simply install the package in question without regard for dependencies with dpkg, i.e. run:
 

dpkg -i --force-all skype-install.deb


Though the package gets installed that way, its broken dependencies are still pending, and apt (on top of dpkg) is aware of that, so once you try to, let's say, patch the server or laptop against the newest security flaws and do a package update, you will end up in a broken package state with a bunch of dependency errors about the missing libraries / packages.

You might also end up in such a situation if you're using, let's say, the unstable version of Debian, or if you're mixing package sources from the Stable version with unstable ones in /etc/apt/sources.list.

Here is an example of what you might get once you try to install a package and there are broken, unmet (missing, uninstallable) dependencies caused by forcibly installed 3rd party software:

 

 

# apt-get install libmagickwand-dev
libmagickwand-dev : Depends: libmagickwand5 (= 8:6.7.7.10-6ubuntu3.4) but it is not going to be installed
Depends: libmagickcore5-extra (= 8:6.7.7.10-6ubuntu3.4) but it is not going to be installed
Depends: libmagickcore-dev (= 8:6.7.7.10-6ubuntu3.4) but it is not going to be installed E: Unable to correct problems, you have held broken packages.

The above example indicates that the required versions of libmagickcore5-extra and libmagickcore-dev can't be installed because the packages provided by the download locations defined in /etc/apt/sources.list (for the currently installed Ubuntu Linux) differ from the ones libmagickwand-dev requires; this is because I have extra package sources defined in /etc/apt/sources.list.

 


So what is the solution / workaround to missing and unavailable packages due to missing dependency package requirements?

 

The easiest and least "painful" way is to fool the package system by installing a dummy package with the name of the missing package requirement. So the next time some installed package on your system depends on a missing :i386 / amd64 or i586 architecture package, or some other weird older or newer package missing on the system, you can emulate that the package is present by:
 

Creating a Dummy Package with equivs (equivs-build, equivs-control)


Let's say you want to install the GXMame frontend package manually with dpkg; it is now obsolete and no longer available for install across Linux distributions. The package depends on xmame, but xmame is not available in the latest deb based Linux distributions, so the only workaround is to create xmame as a dummy package and install it, fooling the deb package management system into thinking it is there so it does not explode with errors on apt operations.

To create a dummy package use equivs, which is a tool used by Debian / Ubuntu developers for package packaging; here is how to do it:

root@jericho:/home/hipo# apt-get install --yes equivs
root@jericho:/home/hipo# equivs-control xmame

 

root@jericho:/home/hipo# cat xmame

 

Section: games
Priority: optional
Standards-Version: 2.5.5

 

Package: xmame
Version: 0.182-1
Section: games
Maintainer: Georgi Georgiev
Provides: mame
Architecture: all
Description: Dummy Xmame Arcade Emulator package

 


Edit xmame, or whatever dummy package file you have chosen to build, and change the Priority / Section / Maintainer / Version / Provides / Description fields.

Architecture: all is suitable if you want the dummy package to be compatible with all architectures; if you need a specific CPU architecture (i386, i586, amd64, armhf etc.) just set the proper one.
 

root@jericho:/home/hipo# vim xmame

 

 

root@jericho:/home/hipo# equivs-build xmame
dh_testdir
dh_testroot
dh_prep
dh_testdir
dh_testroot
dh_install
dh_install: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installdocs
dh_installdocs: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installchangelogs
dpkg-parsechangelog: warning:     debian/changelog(l5): badly formatted trailer line
LINE:  -- Georgi Georgiev  Mon, 25 Sep 2017 12:55:27 +0300
dpkg-parsechangelog: warning:     debian/changelog(l5): found end of file where expected more change data or trailer
dh_installchangelogs: Compatibility levels before 9 are deprecated (level 7 in use)
dh_compress
dh_fixperms
dh_installdeb
dh_installdeb: Compatibility levels before 9 are deprecated (level 7 in use)
dh_gencontrol
dpkg-gencontrol: warning:     debian/changelog(l5): badly formatted trailer line
LINE:  -- Georgi Georgiev  Mon, 25 Sep 2017 12:55:27 +0300
dpkg-gencontrol: warning:     debian/changelog(l5): found end of file where expected more change data or trailer
dh_md5sums
dh_builddeb
dpkg-deb: building package 'xmame' in '../xmame_0.182-1_all.deb'.

 

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!


Historically, a very common use of Debian dummy package creation was when installing the Qmail mail server instead of Postfix (because some binary packages such as bsd-mailx used to depend on Postfix).

Here is how a dummy postfix package was generated once the real one was already removed (because its presence on the system was interfering with the already installed Qmail mail agent):
 

 

root@jericho:/home/hipo# equivs-control postfix

 

The command would generate a default dummy package skeleton template such as the below:

 

### Commented entries have reasonable defaults.
### Uncomment to edit them.
# Source: <source package name; defaults to package name>
Section: misc
Priority: optional
# Homepage: <enter URL here; no default>
Standards-Version: 3.9.2

 

 

Package: <package name; defaults to equivs-dummy>
# Version: <enter version here; defaults to 1.0>
# Maintainer: Your Name <yourname@example.com>
# Pre-Depends: <comma-separated list of packages>
# Depends: <comma-separated list of packages>
# Recommends: <comma-separated list of packages>
# Suggests: <comma-separated list of packages>
# Provides: <comma-separated list of packages>
# Replaces: <comma-separated list of packages>
# Architecture: all
# Multi-Arch: <one of: foreign|same|allowed>
# Copyright: <copyright file; defaults to GPL2>
# Changelog: <changelog file; defaults to a generic changelog>
# Readme: <README.Debian file; defaults to a generic one>
# Extra-Files: <comma-separated list of additional files for the doc directory>
# Files: <pair of space-separated paths; First is file to include, second is destination>
#  <more pairs, if there's more than one file to include. Notice the starting space>
Description: <short description; defaults to some wise words>
 long description and info
 .
 second paragraph


To make the postfix named package you can modify the equivs-control generated file postfix to look like so:

 

Section: misc
Priority: optional
Standards-Version: 2.3.3

 

Package: postfix-dummy
Version: 2.7.0
Section: mail
Maintainer: Eric Lubow
Provides: mail-transport-agent
Architecture: all
Description: Dummy Postfix package

 

 

root@jericho:/home/hipo# equivs-build postfix
dh_testdir
dh_testroot
dh_prep
dh_testdir
dh_testroot
dh_install
dh_install: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installdocs
dh_installdocs: Compatibility levels before 9 are deprecated (level 7 in use)
dh_installchangelogs
dpkg-parsechangelog: warning:     debian/changelog(l5): badly formatted trailer line
LINE:  -- Eric Lubow  Mon, 25 Sep 2017 12:52:46 +0300
dpkg-parsechangelog: warning:     debian/changelog(l5): found end of file where expected more change data or trailer
dh_compress
dh_fixperms
dh_installdeb
dh_installdeb: Compatibility levels before 9 are deprecated (level 7 in use)
dh_gencontrol
dpkg-gencontrol: warning:     debian/changelog(l5): badly formatted trailer line
LINE:  -- Eric Lubow  Mon, 25 Sep 2017 12:52:46 +0300
dpkg-gencontrol: warning:     debian/changelog(l5): found end of file where expected more change data or trailer
dh_md5sums
dh_builddeb
dpkg-deb: building package 'postfix-dummy' in '../postfix-dummy_2.7.0_all.deb'.

 

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!


Once you have all your dummy packages built, just install them in the standard way with dpkg; to install the 2 generated packages xmame_0.182-1_all.deb and postfix-dummy_2.7.0_all.deb:

 

root@jericho:/home/hipo#  dpkg -i  xmame_0.182-1_all.deb; dpkg -i postfix-dummy_2.7.0_all.deb
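To double check that the deb package management system now considers the dummies installed, you can query dpkg:

root@jericho:/home/hipo# dpkg -l xmame postfix-dummy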

 


Transfer Contacts from Nokia to iPhone 3GS, 5, 6, 7 mobile phone for free without using a paid application, using Nokia PC Suite VCF file export and Email or with Gmail / iCloud Contacts Synchronization

Sunday, September 24th, 2017


If you wonder how to transfer contacts from an old Nokia mobile (with the already obsolete and unsupported "dumb" OS, as well as the newer phones with the now abandoned Symbian mobile OS) and you need to move them to an iPhone (with iOS) easily, you can do that by backing up your phone contacts to a Windows 7, 8, 10 PC and using the PC to transfer the contacts to the iPhone. This can be done either by email, through Google Mail synchronization, with Apple's iCloud (in case you're using it), by importing all the contacts exported from the Nokia phone into Outlook Express and using iTunes to import them to the iPhone, or, for the hardcore command line geeks, even by using the WAB.EXE (Windows Address Book) command in conjunction with Apple iTunes to do the import.

The steps described below should work on almost all Nokia mobile phones, such as the Nokia 9300i, Nokia E, N, X and Nokia 6000, 7000, 8000 series; copying contacts from your Nokia to your iPhone smartphone works well across iPhone 3GS, 4S, 5, 6 and iPhone 7.

1. Backup Nokia Phone Contacts to Windows PC with Nokia PC Suite

Before proceeding with the Nokia contacts transfer to the iPhone, make sure you create a backup, just in case something goes wrong; though this is not too likely, it is always a good idea to take preventive measures to make sure your contacts don't disappear.

To back up your phone contacts to an ordinary PC with Windows, use the good old Nokia PC Suite (download it from here).

a. Install Nokia PC Suite, run it and

b. connect your Nokia mobile phone with a USB cable; from the main menu choose the Address Book icon, the small blue book icon in Nokia PC Suite, just like in the screenshot below:

c. A new window, Nokia Communication Center, will open listing all existing Nokia phone contacts.

d. Go to the Windows PC to which you have just connected the Nokia phone and create somewhere, let's say on the Desktop, a new Windows folder called Nokia Contacts (or whatever you like it to be called); we'll use this folder to transfer the Nokia contacts into.

e. Go back to the Nokia Communication Center application, select a single contact (let's say the first one) and press CTRL + A to select all Nokia phone contacts.

f. After selecting, drag and drop the selected contacts into the just created Windows folder (Nokia Contacts).

That would output on your Windows PC, under that folder, all your phone contacts as separate vCard format files (.VCF).

Now that we have all the Nokia phone contacts stored each in a separate .VCF file, in order to make the files easily importable it is a very good idea to merge / combine all the .VCF vCard files into a single .VCF vCard file.

So how to merge all the produced .VCF extension files into a single .VCF file?

Open the Windows Command Prompt (Windows button + R, then type cmd.exe) and run:

C:\Users\default> cd "Desktop\Nokia Contacts"
C:\Users\default\Desktop\Nokia Contacts> copy *.vcf ALL-Contacts.vcf

….
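If you happen to do the merging on a GNU / Linux or Mac OS machine instead of Windows, the same can be achieved with cat:

cat *.vcf > ALL-Contacts.vcf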

2. Import the Single .VCF Contacts file to the iPhone by simply mailing it as an attachment

The simplest way to import the just created ALL-Contacts.vcf file is to simply mail it to yourself as an attachment. That works pretty well if your contacts list is not too big, let's say 500-1000 contacts; for really large contact lists the antivirus software configured on mail servers might block the attachment, but in most cases just mailing the single merged .VCF file should be the best and easiest way to import the Nokia contacts without using a third party paid application.

To import via email:

a. Send yourself an email with ALL-Contacts.vcf as an attachment
b. Check your email with the iPhone
c. Click on the attachment; once clicked, the iPhone will prompt you to import the .VCF contacts, import them and you're done 🙂

3. Perhaps the easiest way (in case you have a Gmail account) is to just import all the exported .VCF files into Gmail; once all the contacts are in Gmail, you can use iTunes to synchronize the Gmail contacts to your iPhone.


– Note that this method is bad practice from a security point of view, as all your contacts will stay synchronized into Gmail; well, you have the option to delete them of course, but still Google will have an idea of who your contacts are. Anyway, if you do it that way, once you have the phone contacts imported into Gmail:


a) Launch iTunes on the Windows PC and connect the iPhone to the PC with the USB data cable.

b) Select the iPhone in iTunes under "Devices"; that is the entry on the left sidebar of iTunes that shows the Summary page.

c) Click the "Info" tab on the right and tick the "Sync Contacts with" checkbox,
– select "Google Contacts" from the drop-down menu
– click the "Apply" or "Sync" button in the bottom-right corner of iTunes

 



4. Import the .VCF files directly into iCloud (if you're using iCloud; I hope you don't, as this completely compromises your security and stores your data on separate servers somewhere in clustered storage)


a. To import the VCF files to iPhone 5/4S/4/3GS via iCloud, make sure the "Contacts" option of iCloud on your iPhone is turned on by checking in:

Settings -> iCloud -> turn “Contacts” on.
 

b. Go to www.icloud.com in a browser and log into your iCloud account with the respective Apple ID and password.
c. Click the "Settings" button in the left corner and choose "Import vCard".
d. To check that the import from the vCard files to the iPhone was successful, go to your iPhone Address Book and check the contacts.



Finally, assuming that iCloud is enabled from within the iPhone settings, make sure to turn contacts synchronization off and back on to speed up the contacts transfer from iCloud to the iPhone.

For the lazy ones who don't want to bother and just want to pay some cash and have the import painless, there is also paid software such as CopyTrans that can help you transfer contacts.
 

Hope this helped someone out there,
For the rest Enjoy !


How to install Google Chrome Web browser on deb based distributions Debian 9 Stretch, Ubuntu 16 and RPM based ones Fedora 26 and CentOS 7, RHEL 7.3 Linux

Wednesday, September 20th, 2017

I'm not a fan of Google Chrome, as it does not respect users' freedom and is not compatible with the Free Software philosophy, just as it is bad that Google tracks all your requests from the browser, but as there is a very large chunk of computers with various Operating Systems on the Internet that use it as a default desktop browser, it is a must in the arsenal of browsers to have.

Hence Google Chrome is a necessary evil for those who use their computer for quality assurance (QA) of web based websites, for developers or web testers who have to make sure a developed website visualizes perfectly across all major web browsers; one major concern is how a website behaves with Google Chrome.

1. Install Google Chrome web browser on Debian and Ubuntu Linux

In Debian GNU / Linux 9, the Google Chrome browser is not among the non-free packages available and installable with apt-get.

In order to have it installed, you have to manually download the debian .deb package distributed by Google, e.g.:
 

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb


Once downloaded, the next step is to install it using the dpkg tool:

 

 

debian-linux:~# dpkg -i google-chrome-stable_current_amd64.deb
Selecting previously unselected package google-chrome-stable.
(Reading database … 300021 files and directories currently installed.)
Preparing to unpack google-chrome-stable_current_amd64.deb …
Unpacking google-chrome-stable (55.0.2883.87-1) …
Setting up google-chrome-stable (55.0.2883.87-1) …
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/x-www-browser (x-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/gnome-www-browser (gnome-www-browser) in auto mode
update-alternatives: using /usr/bin/google-chrome-stable to provide /usr/bin/google-chrome (google-chrome) in auto mode
Processing triggers for menu (2.1.47) …
Processing triggers for man-db (2.7.6.1-2) …
Processing triggers for desktop-file-utils (0.23-1) …
Processing triggers for gnome-menus (3.13.3-8) …
Processing triggers for mime-support (3.60)

..

 


A note to make here: you either have to be the root superuser, as in the above example, or you'll have to run it through the sudo command to gain administrator privileges:

 

debian-linux:~$ sudo dpkg -i google-chrome-stable_current_amd64.deb
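If dpkg complains about unmet dependencies on a freshly installed system, they can usually be pulled in afterwards with:

debian-linux:~$ sudo apt-get install -f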

 

Once the install completes, it will automatically add a shortcut to your GNOME or KDE desktop start menu (in newer GNOME versions that miss the standard start menu and instead use the application dock, you can directly search for it through the Unity dock search applet).

You can also manually run the browser in GNOME by pressing the good old (ALT+F2) to invoke the command run screen, or open gnome-terminal / konsole / xterm or whatever terminal you prefer and run it with the command:

debian-linux:~$ google-chrome

 

2. Installing Google Chrome on Fedora and CentOS Linux

There are RPM packages available in the Google provided RPM repository, so on Fedora and CentOS Linux the first thing to do is to add the necessary repository to the yum package manager.

For this you will need to create the file /etc/yum.repos.d/google-chrome.repo with a text editor and place inside:
 

[google-chrome]
name=google-chrome - $basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

 

 

You can also do it directly without using a text editor; this is especially helpful if you need to deploy Google Chrome on a bunch of Linux computers (let's say in a university computer laboratory) simultaneously, by issuing the following through a script that connects to the multiple hosts over ssh (note the quoted 'EOF', which prevents the shell from expanding $basearch before it is written to the file):
 

cat << 'EOF' > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome - $basearch
baseurl=http://dl.google.com/linux/chrome/rpm/stable/$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
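If yum later complains about the package signature, Google's signing key (the same URL referenced in the gpgkey line above) can be imported manually beforehand:

rpm --import https://dl-ssl.google.com/linux/linux_signing_key.pub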

 

 


Next step is to install it with yum on the single or multiple hosts with:

 

 

[root@fedora:~ ]# yum update

[root@fedora:~ ]# yum install google-chrome-stable


This should deploy Google Chrome, and from now on you can either start it from the Application menu or run it manually in a terminal:

 

 

[hipo@fedora:~ ]$ google-chrome

 


Though the above install example is specific to Fedora versions 21 through 26, the install across different CentOS 6, 7 and Red Hat Enterprise Linux (RHEL 7.3) versions is identical.



Block Web server overloading Bad Crawler Bots and Search Engine Spiders with .htaccess rules

Monday, September 18th, 2017


In my last post, I talked about the problem of Search Index Crawler Robots aggressively crawling websites and how to stop them (the article is here), explaining how to raise delays between Bot URL requests to a website and how to completely prohibit some bots from crawling with robots.txt.

As explained in that article, the consequence of too many badly written or aggressively behaving Spiders is "server stoning" and therefore degraded Web Server performance, or even a short-lived Denial of Service, depending on how well the initial Server Scaling was done.

The bots we want to filter are not to be confused with the legitimate bots that drive real traffic to your website, just for information.

The 10 Most Popular WebCrawler Bots as of the time of writing are:
 

1. GoogleBot (The Google Crawler bots, funnily bots become less active on Saturday and Sundays :))

2. BingBot (Bing.com Crawler bots)

3. SlurpBot (also famous as Yahoo! Slurp)

4. DuckDuckBot (The crawler bots of the DuckDuckGo.com search engine)

5. Baiduspider (The most famous Chinese search engine, used as a substitute for Google in China)

6. YandexBot (Russian Yandex Search engine crawler bots, used in Russia as a substitute for Google)

7. Sogou Spider (a leading Chinese Search Engine launched in 2004)

8. Exabot (A French Search Engine, launched in 2000, crawler for ExaLead Search Engine)

9. FaceBot (Facebook External Hit; this crawler crawls a certain webpage only once a user shares or pastes a link with video, music, a blog or whatever in a chat to another user)

10. Alexa Crawler (ia_archiver is the web crawler for Amazon's Alexa Internet Rankings; Alexa is a great site to evaluate the approximate popularity of a page on the internet, and the Alexa SiteInfo page has historically been the Swiss Army knife for anyone wanting to quickly evaluate a webpage's approximate ranking compared to other pages)

The above legitimate bots are known to follow most if not all of the W3C (World Wide Web Consortium, w3.org) standards and therefore respect the allowance or restriction commands given for a site in robots.txt, but unfortunately many of the so called Bad Bots or mirroring scripts mentioned in the previous article that burn your Web Server CPU and Memory either do not follow the /robots.txt prescriptions at all or only partially.

Hence, for the bots that do not respect robots.txt, the only way to get rid of most of the webspiders that are just loading your bandwidth and server hardware is to filter / block them with Apache directives (SetEnvIf and, optionally, mod_rewrite) placed in the .htaccess file.

Create, if not already existing in the DocumentRoot of your website, a .htaccess file with whatever text editor, or create it on your Windows / Mac OS desktop and transfer it via FTP / SecureFTP to the server.

I prefer to do it directly on server with vim (text editor)

 

 

vim /var/www/sites/your-domain.com/.htaccess

 

RewriteEngine On

IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

SetEnvIfNoCase User-Agent "^Black Hole” bad_bot
SetEnvIfNoCase User-Agent "^Titan bad_bot
SetEnvIfNoCase User-Agent "^WebStripper" bad_bot
SetEnvIfNoCase User-Agent "^NetMechanic" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker" bad_bot
SetEnvIfNoCase User-Agent "^EmailCollector" bad_bot
SetEnvIfNoCase User-Agent "^EmailSiphon" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit" bad_bot
SetEnvIfNoCase User-Agent "^EmailWolf" bad_bot
SetEnvIfNoCase User-Agent "^ExtractorPro" bad_bot
SetEnvIfNoCase User-Agent "^CopyRightCheck" bad_bot
SetEnvIfNoCase User-Agent "^Crescent" bad_bot
SetEnvIfNoCase User-Agent "^Wget" bad_bot
SetEnvIfNoCase User-Agent "^SiteSnagger" bad_bot
SetEnvIfNoCase User-Agent "^ProWebWalker" bad_bot
SetEnvIfNoCase User-Agent "^CheeseBot" bad_bot
SetEnvIfNoCase User-Agent "^Teleport" bad_bot
SetEnvIfNoCase User-Agent "^TeleportPro" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc" bad_bot
SetEnvIfNoCase User-Agent "^Telesoft" bad_bot
SetEnvIfNoCase User-Agent "^Website Quester" bad_bot
SetEnvIfNoCase User-Agent "^WebZip" bad_bot
SetEnvIfNoCase User-Agent "^moget/2.1" bad_bot
SetEnvIfNoCase User-Agent "^WebZip/4.0" bad_bot
SetEnvIfNoCase User-Agent "^WebSauger" bad_bot
SetEnvIfNoCase User-Agent "^WebCopier" bad_bot
SetEnvIfNoCase User-Agent "^NetAnts" bad_bot
SetEnvIfNoCase User-Agent "^Mister PiX" bad_bot
SetEnvIfNoCase User-Agent "^WebAuto" bad_bot
SetEnvIfNoCase User-Agent "^TheNomad" bad_bot
SetEnvIfNoCase User-Agent "^WWW-Collector-E" bad_bot
SetEnvIfNoCase User-Agent "^RMA" bad_bot
SetEnvIfNoCase User-Agent "^libWeb/clsHTTP" bad_bot
SetEnvIfNoCase User-Agent "^asterias" bad_bot
SetEnvIfNoCase User-Agent "^httplib" bad_bot
SetEnvIfNoCase User-Agent "^turingos" bad_bot
SetEnvIfNoCase User-Agent "^spanner" bad_bot
SetEnvIfNoCase User-Agent "^InfoNaviRobot" bad_bot
SetEnvIfNoCase User-Agent "^Harvest/1.5" bad_bot
SetEnvIfNoCase User-Agent "Bullseye/1.0" bad_bot
SetEnvIfNoCase User-Agent "^Mozilla/4.0 (compatible; BullsEye; Windows 95)" bad_bot
SetEnvIfNoCase User-Agent "^Crescent Internet ToolPak HTTP OLE Control v.1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPickerSE/1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker /1.0" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit/3.50" bad_bot
SetEnvIfNoCase User-Agent "^NICErsPRO" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 5.01.4511" bad_bot
SetEnvIfNoCase User-Agent "^DittoSpyder" bad_bot
SetEnvIfNoCase User-Agent "^Foobot" bad_bot
SetEnvIfNoCase User-Agent "^WebmasterWorldForumBot" bad_bot
SetEnvIfNoCase User-Agent "^SpankBot" bad_bot
SetEnvIfNoCase User-Agent "^BotALot" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial/1.34" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.6" bad_bot
SetEnvIfNoCase User-Agent "^BunnySlippers" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 6.00.8169" bad_bot
SetEnvIfNoCase User-Agent "^URLy Warning" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.5.3" bad_bot
SetEnvIfNoCase User-Agent "^LinkWalker" bad_bot
SetEnvIfNoCase User-Agent "^cosmos" bad_bot
SetEnvIfNoCase User-Agent "^moget" bad_bot
SetEnvIfNoCase User-Agent "^hloader" bad_bot
SetEnvIfNoCase User-Agent "^humanlinks" bad_bot
SetEnvIfNoCase User-Agent "^LinkextractorPro" bad_bot
SetEnvIfNoCase User-Agent "^Offline Explorer" bad_bot
SetEnvIfNoCase User-Agent "^Mata Hari" bad_bot
SetEnvIfNoCase User-Agent "^LexiBot" bad_bot
SetEnvIfNoCase User-Agent "^Web Image Collector" bad_bot
SetEnvIfNoCase User-Agent "^The Intraformant" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot" bad_bot
SetEnvIfNoCase User-Agent "^BlowFish/1.0" bad_bot
SetEnvIfNoCase User-Agent "^JennyBot" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc/4.2" bad_bot
SetEnvIfNoCase User-Agent "^BuiltBotTough" bad_bot
SetEnvIfNoCase User-Agent "^ProPowerBot/2.14" bad_bot
SetEnvIfNoCase User-Agent "^BackDoorBot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^toCrawl/UrlDispatcher" bad_bot
SetEnvIfNoCase User-Agent "^WebEnhancer" bad_bot
SetEnvIfNoCase User-Agent "^TightTwatBot" bad_bot
SetEnvIfNoCase User-Agent "^suzuran" bad_bot
SetEnvIfNoCase User-Agent "^VCI WebViewer VCI WebViewer Win32" bad_bot
SetEnvIfNoCase User-Agent "^VCI" bad_bot
SetEnvIfNoCase User-Agent "^Szukacz/1.4" bad_bot
SetEnvIfNoCase User-Agent "^QueryN Metasearch" bad_bot
SetEnvIfNoCase User-Agent "^Openfind data gathere" bad_bot
SetEnvIfNoCase User-Agent "^Openfind" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s Link Sleuth 1.1c" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s" bad_bot
SetEnvIfNoCase User-Agent "^Zeus" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey Bait & Tackle/v1.01" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey" bad_bot
SetEnvIfNoCase User-Agent "^Zeus 32297 Webster Pro V2.9 Win32" bad_bot
SetEnvIfNoCase User-Agent "^Webster Pro" bad_bot
SetEnvIfNoCase User-Agent "^EroCrawler" bad_bot
SetEnvIfNoCase User-Agent "^LinkScan/8.1a Unix" bad_bot
SetEnvIfNoCase User-Agent "^Keyword Density/0.9" bad_bot
SetEnvIfNoCase User-Agent "^Kenjin Spider" bad_bot
SetEnvIfNoCase User-Agent "^Cegbfeieh" bad_bot

 

<Limit GET POST>
order allow,deny
allow from all
Deny from env=bad_bot
</Limit>

 


The above bad bot prohibition rules include the RewriteEngine On directive; however, for many websites this directive is already enabled directly in the VirtualHost section for the domain/s. If that is your case you might also remove RewriteEngine On from the .htaccess and the bad bot prohibition rules will still continue to work.
The above rules are also perfectly suitable for WordPress based websites / blogs in case you need to filter out obstructive spiders, though the rules would work on any website domain with mod_rewrite enabled.

Once you have implemented the above rules, you will not need to restart Apache, as .htaccess is read dynamically on each client request to the Webserver.
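Since mod_rewrite was mentioned above, the same kind of User-Agent filtering can also be expressed purely with mod_rewrite rules; a minimal sketch covering just a few of the agents from the list (extend the alternation as needed) would be:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (WebStripper|WebZip|EmailCollector|CheeseBot|SiteSnagger) [NC]
RewriteRule .* - [F,L]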

2. Testing .htaccess Bad Bots Filtering Works as Expected


In order to test that the new Bad Bot filtering configuration is working properly, there is a manual and somewhat more involved way with lynx (a text browser), assuming you have shell access to a Linux / BSD / *nix computer, or your own *NIX server / desktop computer running.
 

Here is how:
 

 

lynx -useragent="Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)" -head -dump http://www.your-website-filtering-bad-bots.com/

 

 

Note that lynx will provide a warning such as:

Warning: User-Agent string does not contain "Lynx" or "L_y_n_x"!

Just ignore it and press enter to continue.
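If lynx is not at hand, curl can make the same check; -A sets the User-Agent and -I requests only the headers, so a filtered agent should come back with a 403 Forbidden status:

curl -I -A "Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)" http://www.your-website-filtering-bad-bots.com/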

Two other use cases with lynx that I have historically used heavily are to pretend with Lynx that you're GoogleBot, in order to see how Google actually sees your website:
 

  • Pretend with Lynx You're GoogleBot

 

lynx -useragent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 

 

  • How to Pretend with Lynx Browser You are GoogleBot-Mobile

 

lynx -useragent="Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 


Or, for the lazy ones that don't have Linux / *nix at their disposal, you can use the WannaBrowser website.

WannaBrowser is a web based browser emulator which gives you the ability to change the User-Agent on each website request, so just set your User-Agent to any bot we just filtered, for example set the User-Agent to CheeseBot.

Once the .htaccess rules added earlier detect that your browser client is coming in with a prohibited user agent, you will immediately be filtered out and unable to access the website, with a message like:
 

HTTP/1.1 403 Forbidden

 

Just as I've talked a lot about Index Bots, I think it is worth also mentioning three great websites that can give you a lot of up to date information on the exact user-agents returned by Spiders, commonly known Bot traits, as well as a currently updated list of the Bad Bots etc.

Bot and Browser information resources on user-agents, bad bots and odd Crawlers and Bots specifics:

1. botreports.com
2. user-agents.org
3. useragentapi.com

 

An updated list of robot user-agents (crawler-user-agents) is also available on GitHub here, regularly updated by Caia Almeido.

There are also third party plugins (modules) available for Website Platforms like WordPress / Joomla / Typo3 etc.

Besides the ones listed on these websites, as well as the known Bad and Good Bots, there are perhaps hundreds of others that might end up crawling your website and that might or might not need to be filtered; therefore, before proceeding with any filtering steps, it is generally a good idea to monitor your HTTPD access.log / error.log, as if you happen to somehow mistakenly filter the wrong bot this might be a reason for Website Indexing Problems.

Hope this article gives you some valuable information. Enjoy ! 🙂

 



Finding top access IPs in Webserver or how to delay connects from Bots (Web Spiders) to your site to prevent connect Denial of Service

Friday, September 15th, 2017


If you're a sysadmin who has to deal with cracker attempts at DoS (Denial of Service) on single or multiple (clustered CDN or standalone) Apache Webservers, no matter whether working for some web hosting company or just running your private home brew web server, it is a very useful thing to inspect the Web Server log file (in the Apache HTTPD case that's access.log).

Sometimes Web Server overload and the follow-up Denial of Service (DoS) effect is not caused by evil crackers (mistakenly often called hackers) but by some data indexing Crawler Search Engine bots that are badly configured to aggressively crawl websites and hence cause high webserver load, flooding your servers with bad 404, 400, 500 or other requests; below is an example list of such obstructive bots.

1. Dealing with bad Search Indexer Bots (Spiders) with robots.txt

Just as I mentioned the word hackers above, I feel obliged to expose the lies the press and media have been spreading for years, confusing in people's minds the word cracker (computer intruder) with hacker. If you're one of those who mistakenly call security intruders hackers, I recommend you read Dr. Richard Stallman's article On Hacking to get the proper understanding that hacking is a cheerful attitude of mind and spirit, and a hacker could be anyone who has this kind of curious and playful mind. Very often hackers are computer professionals, and many times skillful programmers; a hacker tends to do things in very unstandard and weird ways to make fun out of life, but definitely follows the rule of doing no harm to the neighbor.

Well, after the short lyrical distraction above, let me continue.

Here is a short list of Search Index Crawler bots and mirroring utilities with very aggressive behaviour towards websites:

 

# mass download bots / mirroring utilities
1. webzip
2. webmirror
3. webcopy
4. netants
5. getright
6. wget
7. webcapture
8. libwww-perl
9. megaindex.ru
10. megaindex.com
11. Teleport / TeleportPro
12. Zeus
….

Note that some of the listed crawler bots are actually mirroring client tools (wget etc.); they're also included in the list of server hammering bots because websites are often mirrored by people who want to mirror content for the sake of good, but perhaps these days more often mirror (duplicate) your content for the sake of stealing; in SEO language this is called Content Stealing.


I've found a very comprehensive list of Bad Bots to block on Mike's tech blog; his provided example of a bad-bots robots.txt file is mirrored as a plain text file here.

Below is the list of Bad Crawler Spiders taken from his site:

 

# robots.txt to prohibit bad internet search engine spiders to crawl your website
# Begin block Bad-Robots from robots.txt
User-agent: asterias
Disallow:/
User-agent: BackDoorBot/1.0
Disallow:/
User-agent: Black Hole
Disallow:/
User-agent: BlowFish/1.0
Disallow:/
User-agent: BotALot
Disallow:/
User-agent: BuiltBotTough
Disallow:/
User-agent: Bullseye/1.0
Disallow:/
User-agent: BunnySlippers
Disallow:/
User-agent: Cegbfeieh
Disallow:/
User-agent: CheeseBot
Disallow:/
User-agent: CherryPicker
Disallow:/
User-agent: CherryPickerElite/1.0
Disallow:/
User-agent: CherryPickerSE/1.0
Disallow:/
User-agent: CopyRightCheck
Disallow:/
User-agent: cosmos
Disallow:/
User-agent: Crescent
Disallow:/
User-agent: Crescent Internet ToolPak HTTP OLE Control v.1.0
Disallow:/
User-agent: DittoSpyder
Disallow:/
User-agent: EmailCollector
Disallow:/
User-agent: EmailSiphon
Disallow:/
User-agent: EmailWolf
Disallow:/
User-agent: EroCrawler
Disallow:/
User-agent: ExtractorPro
Disallow:/
User-agent: Foobot
Disallow:/
User-agent: Harvest/1.5
Disallow:/
User-agent: hloader
Disallow:/
User-agent: httplib
Disallow:/
User-agent: humanlinks
Disallow:/
User-agent: InfoNaviRobot
Disallow:/
User-agent: JennyBot
Disallow:/
User-agent: Kenjin Spider
Disallow:/
User-agent: Keyword Density/0.9
Disallow:/
User-agent: LexiBot
Disallow:/
User-agent: libWeb/clsHTTP
Disallow:/
User-agent: LinkextractorPro
Disallow:/
User-agent: LinkScan/8.1a Unix
Disallow:/
User-agent: LinkWalker
Disallow:/
User-agent: LNSpiderguy
Disallow:/
User-agent: lwp-trivial
Disallow:/
User-agent: lwp-trivial/1.34
Disallow:/
User-agent: Mata Hari
Disallow:/
User-agent: Microsoft URL Control - 5.01.4511
Disallow:/
User-agent: Microsoft URL Control - 6.00.8169
Disallow:/
User-agent: MIIxpc
Disallow:/
User-agent: MIIxpc/4.2
Disallow:/
User-agent: Mister PiX
Disallow:/
User-agent: moget
Disallow:/
User-agent: moget/2.1
Disallow:/
User-agent: mozilla/4
Disallow:/
User-agent: Mozilla/4.0 (compatible; BullsEye; Windows 95)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 95)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 98)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows NT)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows XP)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows 2000)
Disallow:/
User-agent: Mozilla/4.0 (compatible; MSIE 4.0; Windows ME)
Disallow:/
User-agent: mozilla/5
Disallow:/
User-agent: NetAnts
Disallow:/
User-agent: NICErsPRO
Disallow:/
User-agent: Offline Explorer
Disallow:/
User-agent: Openfind
Disallow:/
User-agent: Openfind data gathere
Disallow:/
User-agent: ProPowerBot/2.14
Disallow:/
User-agent: ProWebWalker
Disallow:/
User-agent: QueryN Metasearch
Disallow:/
User-agent: RepoMonkey
Disallow:/
User-agent: RepoMonkey Bait & Tackle/v1.01
Disallow:/
User-agent: RMA
Disallow:/
User-agent: SiteSnagger
Disallow:/
User-agent: SpankBot
Disallow:/
User-agent: spanner
Disallow:/
User-agent: suzuran
Disallow:/
User-agent: Szukacz/1.4
Disallow:/
User-agent: Teleport
Disallow:/
User-agent: TeleportPro
Disallow:/
User-agent: Telesoft
Disallow:/
User-agent: The Intraformant
Disallow:/
User-agent: TheNomad
Disallow:/
User-agent: TightTwatBot
Disallow:/
User-agent: Titan
Disallow:/
User-agent: toCrawl/UrlDispatcher
Disallow:/
User-agent: True_Robot
Disallow:/
User-agent: True_Robot/1.0
Disallow:/
User-agent: turingos
Disallow:/
User-agent: URLy Warning
Disallow:/
User-agent: VCI
Disallow:/
User-agent: VCI WebViewer VCI WebViewer Win32
Disallow:/
User-agent: Web Image Collector
Disallow:/
User-agent: WebAuto
Disallow:/
User-agent: WebBandit
Disallow:/
User-agent: WebBandit/3.50
Disallow:/
User-agent: WebCopier
Disallow:/
User-agent: WebEnhancer
Disallow:/
User-agent: WebmasterWorldForumBot
Disallow:/
User-agent: WebSauger
Disallow:/
User-agent: Website Quester
Disallow:/
User-agent: Webster Pro
Disallow:/
User-agent: WebStripper
Disallow:/
User-agent: WebZip
Disallow:/
User-agent: WebZip/4.0
Disallow:/
User-agent: Wget
Disallow:/
User-agent: Wget/1.5.3
Disallow:/
User-agent: Wget/1.6
Disallow:/
User-agent: WWW-Collector-E
Disallow:/
User-agent: Xenu's
Disallow:/
User-agent: Xenu's Link Sleuth 1.1c
Disallow:/
User-agent: Zeus
Disallow:/
User-agent: Zeus 32297 Webster Pro V2.9 Win32
Disallow:/
Crawl-delay: 20
# Begin Exclusion From Directories from robots.txt
Disallow: /cgi-bin/

A very important variable among the ones passed in the above robots.txt is:
 

Crawl-Delay: 20

 


You might want to tune that variable; a Crawl-delay of 20 instructs all Web Spiders that respect robots.txt directives to wait 20 seconds between each and every client request, which is really useful for the Webserver, as fewer connects mean less CPU and Memory usage and less degraded performance caused by aggressive bots crawling your site like crazy, requesting resources 10 times per second or so …

As you can conclude from the naming of some of the bots, having them disabled would also protect your domain/s from Email harvesting Spiders and other undesired activities.


 

2. Listing IP address hits / how many connects per IP, used to determine which IPs are overloading the server with a huge number of connects

After saying a few words about SE bots, I think it is fair to also mention here a number of commands that help the sysadmin inspect Apache's access.log files.
Inspecting the log files regularly is really useful, as the number of malicious Spider Bots and Cracker users tends to rise with time, so having a good way to track the IPs that are stoning your webserver, and later prohibiting them softly from crawling either via robots.txt (not all of the Bots would respect that) or the .htaccess file, or as a last resort directly from the firewall, is really useful to know.
 

– The below command generates a list of IPs showing how many times each IP connected to the webserver (bear in mind that the commands are designed for the log field order as given by most GNU / Linux distributions plus the default Apache logging configuration):

 

webhosting-server:~# cd /var/log/apache2
webhosting-server:/var/log/apache2# cat access.log | awk '{print $1}' | sort | uniq -c | sort -n


The above command provides statistics based on the whole access.log file records; sometimes you will need to analyze just a chunk of the webserver log, let's say the last 12000 requests, here is how:
 

webhosting-server:~# cd /var/log/apache2
webhosting-server:/var/log/apache2# tail -n 12000 access.log | awk '{print $1}' | sort | uniq -c | sort -n
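A similar one-liner, assuming the default Apache combined log format where the User-Agent is the sixth double-quote delimited field, shows which user agents generate the most requests:

webhosting-server:/var/log/apache2# awk -F'"' '{print $6}' access.log | sort | uniq -c | sort -rn | head -n 10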


You can combine the above basic shell parser commands with the watch command to get top-like, periodically refreshing IP statistics of the most active clients on your websites.

Here is an example:

 

webhosting-server:~# watch "cat access.log | awk '{print \$1}' | sort | uniq -c | sort -n"

 


Once you have the top connecting IPs, if some IP is connecting, let's say, 8000-10000 times in a really short interval of time, 20-30 minutes or so, it is a good idea to investigate further where this IP originates from, and if it is some malicious Denial of Service, filter it out either in the Firewall (with iptables rules) or ask your ISP or webhosting provider to do you a favour and drop all the incoming traffic from that IP.

Here is how to investigate a bit more about a server stoner IP.
Let's assume that you found IP 176.9.50.244 to be making too many connects to your webserver:
 

webhosting-server:~# grep -i 176.9.50.244 /var/log/apache2/access.log|tail -n 1
176.9.50.244 - - [12/Sep/2017:07:42:13 +0300] "GET / HTTP/1.1" 403 371 "-" "Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)"

 

webhosting-server:~# host 176.9.50.244
244.50.9.176.in-addr.arpa domain name pointer static.244.50.9.176.clients.your-server.de.

 

webhosting-server:~# whois 176.9.50.244|less

 

The output you will get would be something like:

% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf

% Note: this output has been filtered.
%       To receive output for a database update, use the "-B" flag.

% Information related to '176.9.50.224 – 176.9.50.255'

% Abuse contact for '176.9.50.224 – 176.9.50.255' is 'abuse@hetzner.de'

inetnum:        176.9.50.224 – 176.9.50.255
netname:        HETZNER-RZ15
descr:          Hetzner Online GmbH
descr:          Datacenter 15
country:        DE
admin-c:        HOAC1-RIPE
tech-c:         HOAC1-RIPE
status:         ASSIGNED PA
mnt-by:         HOS-GUN
mnt-lower:      HOS-GUN
mnt-routes:     HOS-GUN
created:        2012-03-12T09:45:54Z
last-modified:  2015-08-10T09:29:53Z
source:         RIPE

role:           Hetzner Online GmbH – Contact Role
address:        Hetzner Online GmbH
address:        Industriestrasse 25
address:        D-91710 Gunzenhausen
address:        Germany
phone:          +49 9831 505-0
fax-no:         +49 9831 505-3
abuse-mailbox:  abuse@hetzner.de
remarks:        *************************************************
remarks:        * For spam/abuse/security issues please contact *
remarks:        * abuse@hetzner.de, not this address. *
remarks:        * The contents of your abuse email will be *
remarks:        * forwarded directly on to our client for *
….


3. Generate a list of the directories and files most requested by clients
 

webhosting-server:~# cd /var/log/apache2
webhosting-server:/var/log/apache2# awk '{print $7}' access.log | cut -d? -f1 | sort | uniq -c | sort -nk1 | tail -n10

(Take into consideration that this info is based only on the current records in /var/log/apache2/access.log and is short term; for long term statistics you have to merge all the existing gzipped /var/log/apache2/access.log.*.gz files.)
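
In the same spirit (this one is a small extra, assuming Apache's default combined log format where the User-Agent is the sixth double-quote delimited field), the most active User-Agent strings often expose the crawler bots straight away:

webhosting-server:/var/log/apache2# awk -F'"' '{print $6}' access.log | sort | uniq -c | sort -rn | head -n 10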

To merge all the old gzipped files into one single file, which you can later analyze with the commands shown above, run:

 

cd /var/log/apache2/
# work on copies in a separate directory so the original archives stay untouched
mkdir -p apache-gzipped
cp -p access.log.*.gz apache-gzipped/
cd apache-gzipped
# gzip -d replaces every .gz copy with its decompressed version
for i in *.gz; do gzip -d "$i"; done
# concatenate all decompressed logs into one file ready for analysis
for i in $(ls -1 | grep -v access_log_complete); do cat "$i" >> access_log_complete; done
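
As a side note, a shorter variant (a sketch relying only on the standard gzip tooling) is to let zcat decompress the rotated logs on the fly, without keeping extra copies around:

webhosting-server:/var/log/apache2# zcat access.log.*.gz | cat - access.log | awk '{print $1}' | sort | uniq -c | sort -n | tail -n 10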


Though the focus of the above article is Apache Webserver log analysis, the given command examples can easily be adapted to work properly with other Web Servers such as Lighttpd, Nginx etc.

The above commands will put a higher load on your server while they run, so on busy servers it is a better idea to first synchronize the access.log files to another, less loaded server. In most small and midsized companies this is done by a periodic synchronization of the logs to a dedicated log server, used usually only to store the various log files and later to run analysis software such as Awstats, Webalizer, Piwik, GoAccess etc.
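
A minimal sketch of such a periodic synchronization (assuming passwordless SSH key authentication is already in place and that the log server name and destination path below are just placeholders) could be a cron entry like:

# /etc/cron.d/sync-apache-logs – push the webserver logs to the log server every 30 minutes
*/30 * * * * root rsync -az /var/log/apache2/ log-server.example.com:/srv/weblogs/webhosting-server/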

Worth mentioning is one great must-have text console Apache tool for real-time analysis, for the lazy ones who don't want to type that much: apachetop. However, it will not be installed on most webhosting servers and VPS-es, so if you don't happen to own a self-hosted dedicated server or run a webhosting company (i.e. have root admin access on the server), but only have an ordinary server account, you can use the above commands to get an overall picture of abusive webserver IPs.
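
If you do have root, on Debian / Ubuntu the tool is packaged simply as apachetop, so getting it running is (a sketch, assuming the default Debian log location):

webhosting-server:~# apt-get install --yes apachetop
webhosting-server:~# apachetop -f /var/log/apache2/access.log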

logstalgia-visual-loganalyzer-in-reali-time-windows-linux-mac
 

If you have a Linux machine with a desktop GUI environment and have somehow mounted the weblog server partition remotely, another really awesome way to visualize the connect requests to a web server (Apache / Nginx etc.) in real time is Logstalgia.
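
A quick usage sketch (the second, SSH streaming variant assumes you have SSH access to the webserver; logstalgia's --sync option makes it read the new entries from standard input):

linux:~$ logstalgia /path/to/mounted/access.log
linux:~$ ssh webhosting-server tail -f /var/log/apache2/access.log | logstalgia --sync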

Well that's all folks, I hope this article taught you something new. Enjoy!

Thanks to segarkshtri.com.np for the article's neo-tux picture.


Share this on

Apache Webserver: How to Set up multiple SSL certificates on multiple domains running on one IP address with Apache SNI feature

Wednesday, September 13th, 2017

apache-ssl-handshake-how-client-talks-to-server-illustrated

In the recent past it was impossible to serve multiple different SSL .crt / .pem bundle certificates from one Apache Webserver; each certificate had to run under a separate domain or subdomain, preconfigured with a separate IP address. This has changed with the introduction of SNI (Server Name Indication) support in Apache. What SNI does is let the server send to a site visitor, initiating a connection on the encrypted SSL port (443, or whatever is configured), the certificate that matches the server name requested by the client.

Note that this article covers SNI on Apache HTTPD only, and sadly at the time of writing SNI can't be counted on for other services such as Mail Servers (SMTPS), (POP3S), (IMAPS) etc.
Older browsers do not properly communicate with WebServers using SNI, so for Websites whose aim is interoperability and a large audience of Web clients, the preferable way is still to set up each VirtualHost under a separate IP, just like in the good old days.

However, Small and MidSized businesses could save some cash by not having to buy separate IPs for each VirtualHost and just use SNI.
Besides that, people relatively rarely use old browsers without SNI support these days, so clients with browsers not supporting SNI would certainly be quite rare. The way to tell whether a browser supports SNI or not is to check whether it supports the TLS SNI extension.

One requirement in order for SNI to work properly is to have a registered domain, because SNI works based on the ServerName requested by the client.

On Debian GNU / Linux based distributions, you need to have the Apache Webserver installed with the mod_ssl module enabled:

 

linux:~# apt-get install --yes apache2

linux:~# a2enmod ssl

linux:~# /etc/init.d/apache2 restart


If you're not planning to get a certificate from a trusted source, especially if you're just a start-up business which is in the process of testing the environment (you still have not ordered a certificate via some domain registrar), you might want to generate a self-signed certificate with the openssl command and use that temporarily:

 

linux:~# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/your-domain.com/apache.key -out /etc/apache2/ssl/your-domain.com/apache.crt
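
Note that openssl will not create the target directory for you, so if /etc/apache2/ssl/your-domain.com does not exist yet, create it beforehand:

linux:~# mkdir -p /etc/apache2/ssl/your-domain.com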

Here, among the prompted questions, you need to fill in what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
For some fields you can keep the default value;
if you enter '.' the field will be left blank.

—–
 

Country Name (2 letter code) [AU]:BG
State or Province Name (full name) [Some-State]: Sofia
Locality Name (eg, city) []:SOF
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Pc-Freak.NET
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:your-domain.com                
Email Address []:webmaster@your-domain.com

 


(By the way, it might be interesting to mention here a list of the cheapest domain name registrars on the Internet as of January 2017 – source site here.)

 

The order below is approximate, estimated by price / quality and provided service:

 

1. BlueHost.com – Domains $6.95

2. NameCheap.Com – Annual fee $10.69

3. GoDaddy.com – Annual fee $8.99 for first year, $14.99$ for each additional year

4. HostGator.com – Annual fee $15.00

5. 1and1.com – Annual fee $0.99 for first year ($14.99 for each additional year)

6. Network Solutions – This was historically one of the first domain registrar companies, but the brand is pricey at $34.99

7. Register.com – Not sure

8. Hostway.com – $9.95 (first year and $9.95 renewals)

9. Moniker.com – Annual fee $11.99

10. Netfirms.ca – Annual fee $9.95 first year, Renewal fee is $11.99 per year

 

Note that domain pricing could vary depending on the type of domain name country extension, and many of the domain registrars would give you a discount if you purchase a domain name / SSL for 2 / 3+ years.

sni-illustrated-how-it-works-how-to-configure-multiple-domains-ssl-on-same-ip-apache-webserver

The next step in order to use SNI is to configure the WebServer VirtualHosts file:

 

linux:~# vim /etc/apache2/sites-available/domain-names.com

 

# Instruct Apache to listen for connections on port 443
Listen 443
# Listen for virtual host requests on all IP addresses
NameVirtualHost *:443

# Go ahead and accept connections for these vhosts
# from non-SNI clients
SSLStrictSNIVHostCheck off

<VirtualHost *:80>
        ServerAdmin webmaster@your-domain.com
        ServerName your-domain.com
        DocumentRoot /var/www

# More directives comes here

</VirtualHost>


<VirtualHost *:443>

        ServerAdmin webmaster@localhost
        ServerName your-domain.com
        DocumentRoot /var/www

        #   SSL Engine Switch:
        #   Enable/Disable SSL for this virtual host.
        SSLEngine on

        #   A self-signed (snakeoil) certificate can be created by installing
        #   the ssl-cert package. See
        #   /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
        #   If both key and certificate are stored in the same file, only the
        #   SSLCertificateFile directive is needed.
        SSLCertificateFile /etc/apache2/ssl/your-domain.com/apache.crt
        SSLCertificateKeyFile /etc/apache2/ssl/your-domain.com/apache.key

# More Apache directives could be inserted here
</VirtualHost>

 

<VirtualHost *:443>
  DocumentRoot /var/www/sites/your-domain2
  ServerName www.your-domain2.com

  # Other directives here

</VirtualHost>

Add as many SNI enabled VirtualHosts as you need, following the example above, or if you prefer, split the vhosts into separate configuration files per domain.

I also recommend checking out Apache's official documentation on SNI and NameBasedSSLVhostsWithSNI etc.
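
Once Apache is reloaded with the new VirtualHosts, a quick way to verify that the right certificate is served per ServerName is openssl's s_client with the -servername option (that option is what fills in the SNI field; your-server-ip and the domain names below are of course just the example names used throughout this article):

linux:~# openssl s_client -connect your-server-ip:443 -servername your-domain.com < /dev/null 2>/dev/null | openssl x509 -noout -subject
linux:~# openssl s_client -connect your-server-ip:443 -servername www.your-domain2.com < /dev/null 2>/dev/null | openssl x509 -noout -subject

Each command should print the subject of the certificate belonging to the respective VirtualHost.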


Hope this article was not too boring 🙂
Enjoy life

 


Share this on