Posts Tagged ‘common’

Adding custom user-based host IP aliases by loading a custom prepared /etc/hosts-like file as a non-root user on Linux – script to allow defining IPs that don't have DNS records under a user-preferred hostname

Wednesday, April 14th, 2021

adding-custom-user-based-host-aliases-etc-hosts-logo-linux

Say you have access to a remote Linux / UNIX / BSD server, i.e. a jump host, and from it you have to remotely access via ssh a bunch of other servers
which have existing IP addresses, but whose DNS-resolvable hostnames (via /etc/resolv.conf) are long and hard to remember, and you have no way to add a new alias to /etc/hosts because you don't have superuser admin privileges on the hop station.
To make your life easier you would hence want to add a simple host alias, to be able to easily telnet, ssh or curl to some aliased name like s1, s2, s3 … etc.


The question then becomes: how can you define the IPs so they are resolvable via easy-to-remember names, using a custom, user-specific /etc/hosts-like definition file?

Expanding the /etc/hosts predefined resolvable host records is pretty simple, as most UNIX / Linux systems support the HOSTALIASES environment variable.
HOSTALIASES hooks into the common technique for translating host names into IP addresses, using either getaddrinfo(3) or the obsolete gethostbyname(3). As mentioned in hostname(7), you can set the HOSTALIASES environment variable to point to an alias file, and you've got per-user aliases.

Create a ~/.hosts file

linux:~# vim ~/.hosts

with some content like:
 

g google.com
localhostg 127.0.0.1
s1 server-with-long-host1.fqdn-whatever.com 
s2 server5-with-long-host1.fqdn-whatever.com
s3 server18-with-long-host5.fqdn-whatever.com

linux:~# export HOSTALIASES=$HOME/.hosts
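Once the variable is exported, programs that resolve names through the standard libc resolver functions should consult the alias file, so a quick sanity check with the example ~/.hosts above would be:

linux:~# ping -c 1 g
linux:~# ssh s1

Here ping should resolve g to google.com and ssh should land on server-with-long-host1.fqdn-whatever.com (note that some setuid binaries may ignore HOSTALIASES for security reasons).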

The caveat of HOSTALIASES you should know about is that it only works for hostnames that are resolvable through DNS.
So if you want to be able to access unresolvable hostnames,
you can use a normal shell alias for the hostname you want in ~/.bashrc, with records like:

alias server-hostname="ssh username@10.10.10.18 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"
alias server-hostname1="ssh username@10.10.10.19 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"
alias server-hostname2="ssh username@10.10.10.20 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"

then, to access server-hostname1, you simply type the alias in the terminal, as shown below.
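For example:

linux:~# server-hostname1

and bash expands the alias into the whole ssh command towards 10.10.10.19 defined above.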

The more elegant solution is to use a bash script like the one below. Note that for this resolver function the ~/.hosts file should follow the /etc/hosts format, i.e. the IP address first, followed by the alias name(s), e.g. '10.10.10.18 s1':

# include below code to your ~/.bashrc
function resolve {
        hostfile=~/.hosts
        if [[ -f "$hostfile" ]]; then
                for arg in $(seq 1 $#); do
                        # skip arguments that look like command options (-v, -p, …)
                        if [[ "${!arg:0:1}" != "-" ]]; then
                                # look the argument up in ~/.hosts and take the first field (the IP)
                                ip=$(sed -n -e "/^\s*\(\#.*\|\)$/d" -e "/\<${!arg}\>/{s;^\s*\(\S*\)\s*.*$;\1;p;q}" "$hostfile")
                                if [[ -n "$ip" ]]; then
                                        # re-run the calling wrapper with the alias substituted by the IP
                                        command "${FUNCNAME[1]}" "${@:1:$(($arg-1))}" "$ip" "${@:$(($arg+1)):$#}"
                                        return
                                fi
                        fi
                done
        fi
        # no alias matched: run the original command untouched
        command "${FUNCNAME[1]}" "$@"
}

function ping {
        resolve "$@"
}

function traceroute {
        resolve "$@"
}

function ssh {
        resolve "$@"
}

function telnet {
        resolve "$@"
}

function curl {
        resolve "$@"
}

function wget {
        resolve "$@"
}

 

Now reload the bash login session configuration $HOME/.bashrc with:

linux:~# source ~/.bashrc

ssh / curl / wget / telnet / traceroute and ping will then work towards the IP addresses defined in ~/.hosts, just as if they had been defined system-wide in /etc/hosts.
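To confirm it is the wrapper functions being invoked rather than the plain binaries, the bash type builtin can be used:

linux:~# type -t ping ssh
function
function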

Enjoy
 

Hack: Using ssh / curl or wget to test TCP port connection state to remote SSH, DNS, SMTP, MySQL or any other listening service in PCI environment servers

Wednesday, December 30th, 2020

using-curl-ssh-wget-to-test-tcp-port-opened-or-closed-for-web-mysql-smtp-or-any-other-linstener-in-pci-linux-logo

If you work on PCI high-security environment servers in isolated local networks, where every package installed on the Linux / Unix system is of importance, it is pretty common that some basic tools are missing; in most cases it is considered a security hole to even have a simple telnet client installed on the system. I do have experience with such environments myself, and it is pretty daunting stuff, so in the best case you can use something like a simple ssh client, if you're lucky and the CentOS / Redhat / SUSE Linux or whatever distro has the openssh-client package installed.
If you're lucky enough to have ssh on board, you can use it in the same manner as netcat or the swiss army knife network mapper tool (nmap), to test whether a remote TCP service / port is opened or not. This is often useful if you don't have access to the CISCO / Juniper or other network / firewall equipment that sets the boundaries and security port restrictions between networks and servers.

Below is an example of how to use the ssh client to test port connectivity to, let's say, the Internet, i.e. the Google / Yahoo search engines.
 

[root@pciserver: /home ]# ssh -oConnectTimeout=3 -v google.com -p 23
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 23.
debug1: connect to address 172.217.169.206 port 23: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 23.
debug1: connect to address 2a00:1450:4017:80b::200e port 23: Cannot assign requested address
ssh: connect to host google.com port 23: Cannot assign requested address
root@pcfreak:/var/www/images# ssh -oConnectTimeout=3 -v google.com -p 80
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:807::200e] port 80.
debug1: connect to address 2a00:1450:4017:807::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80
ssh_exchange_identification: Connection closed by remote host
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=3
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 80.
debug1: connect to address 2a00:1450:4017:80b::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [142.250.184.142] port 80.
debug1: connect to address 142.250.184.142 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80c::200e] port 80.
debug1: connect to address 2a00:1450:4017:80c::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type 0
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: ssh_exchange_identification: HTTP/1.0 400 Bad Request
debug1: ssh_exchange_identification: Content-Type: text/html; charset=UTF-8
debug1: ssh_exchange_identification: Referrer-Policy: no-referrer
debug1: ssh_exchange_identification: Content-Length: 1555
debug1: ssh_exchange_identification: Date: Wed, 30 Dec 2020 14:13:25 GMT
debug1: ssh_exchange_identification:
debug1: ssh_exchange_identification: <!DOCTYPE html>
debug1: ssh_exchange_identification: <html lang=en>
debug1: ssh_exchange_identification:   <meta charset=utf-8>
debug1: ssh_exchange_identification:   <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
debug1: ssh_exchange_identification:   <title>Error 400 (Bad Request)!!1</title>
debug1: ssh_exchange_identification:   <style>
debug1: ssh_exchange_identification:     *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 10
debug1: ssh_exchange_identification: 0% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.g
debug1: ssh_exchange_identification: oogle.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0
debug1: ssh_exchange_identification: % 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_
debug1: ssh_exchange_identification: color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
debug1: ssh_exchange_identification:   </style>
debug1: ssh_exchange_identification:   <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
debug1: ssh_exchange_identification:   <p><b>400.</b> <ins>That\342\200\231s an error.</ins>
debug1: ssh_exchange_identification:   <p>Your client has issued a malformed or illegal request.  <ins>That\342\200\231s all we know.</ins>

ssh_exchange_identification: Connection closed by remote host

 

Here is another example of how to test whether a certain service, such as DNS (bind) or telnetd, is enabled and listening on a remote local network IP, again with ssh:

[root@pciserver: /home ]# ssh 192.168.1.200 -p 53 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 53.
debug1: connect to address 192.168.1.200 port 53: Connection timed out
ssh: connect to host 192.168.1.200 port 53: Connection timed out

[root@server: /home ]# ssh 192.168.1.200 -p 23 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 23.
debug1: connect to address 192.168.1.200 port 23: Connection timed out
ssh: connect to host 192.168.1.200 port 23: Connection timed out


But what if the Linux server you have to work on is so paranoid that even the ssh client is absent? Well, you can use anything else capable of connecting to a remote port, such as wget or curl. Web servers or application servers usually have wget or curl available, as they are an integral part of local shell scripts doing various operations needed for the services to function properly, or are simply used to test a local or remote listener service. If that's the case, we can use curl to connect and get the output of a remote service, simulating a normal telnet connection like this:

host:~# curl -vv 'telnet://remote-server-host5:22'
* About to connect() to remote-server-host5 port 22 (#0)
*   Trying 10.52.67.21… connected
* Connected to aflpvz625 (10.52.67.21) port 22 (#0)
SSH-2.0-OpenSSH_5.3

Now let's test whether we can connect remotely to a local-net remote IP's Qmail mail server with curl's telnet simulation mode:

host:~#  curl -vv 'telnet://192.168.0.200:25'
* Expire in 0 ms for 6 (transfer 0x56066e5ab900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x56066e5ab900)
* Connected to 192.168.0.200 (192.168.0.200) port 25 (#0)
220 This is Mail Pc-Freak.NET ESMTP

Fine, it works. Let's now check with curl whether a remote server that has a MySQL listener service on the standard MySQL TCP port 3306 is reachable:

host:~#  curl -vv 'telnet://192.168.0.200:3306'
* Expire in 0 ms for 6 (transfer 0x5601fafae900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5601fafae900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 107)
* Closing connection 0

As you can see, the remote connection returns binary data, which is unknown to a standard telnet terminal; thus, to get the received output we need to pass curl the suggested arguments:

host:~#  curl -vv 'telnet://192.168.0.200:3306' --output -
* Expire in 0 ms for 6 (transfer 0x55b205c02900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55b205c02900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
g


The same curl trick can be used to troubleshoot a remote port on a remote host from a Windows OS host, which does not have telnet installed by default but often has curl instead.

Also, when troubleshooting vSphere Replication, it is often necessary to troubleshoot port connectivity while the common Windows utilities are not available, and curl is available in the VMware vCenter Server Appliance command line interface.

On servers where curl is not there but wget is installed, you can use it as well to test a remote port:

 

# wget -vv -O /dev/null http://google.com:554 --timeout=5
--2020-12-30 16:54:22--  http://google.com:554/
Resolving google.com (google.com)… 172.217.169.206, 2a00:1450:4017:80b::200e
Connecting to google.com (google.com)|172.217.169.206|:554… failed: Connection timed out.
Connecting to google.com (google.com)|2a00:1450:4017:80b::200e|:554… failed: Cannot assign requested address.
Retrying.

--2020-12-30 16:54:28--  (try: 2)  http://google.com:554/
Connecting to google.com (google.com)|172.217.169.206|:554… ^C

As evident from the output, port 554 is filtered on Google's side, which is pretty normal.

If neither curl nor wget is there either, as a final alternative you can use some Perl, Ruby, Python or bash script etc. that opens a remote socket to the remote IP, as sketched below.
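Just for completeness, even without any scripting language installed, bash itself can open a TCP socket through its /dev/tcp pseudo-device (a bash-only feature, so this sketch assumes bash is the shell and coreutils' timeout is present):

host:~# timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.0.200/25' && echo 'port is open' || echo 'port is closed / filtered'
port is open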

How to set up dsmc client Tivoli ( TSM ) release version and process check monitoring with Zabbix

Thursday, December 17th, 2020

zabbix-monitor-dsmc-client-monitor-ibm-tsm-with-zabbix-howto

As part of monitoring IBM Spectrum Protect (the new name of IBM TSM), if you don't have the money to buy something like HP OpenView monitoring or another kind of paid monitoring system, but use the Zabbix open source solution as your main services and servers monitoring platform for your Linux server infrastructure, you will want to monitor at least whether the Tivoli dsmc backup client runs fine on each of the servers, i.e. that the common /usr/bin/dsmc process, which connects towards the remote IBM TSM server where the actual data storage is kept, is running normally as a backup solution.

At first glimpse it might seem like a weird kind of monitoring to have the TSM version frequently reported to a Zabbix server, but in reality this is quite useful, especially if you want a better overview of the actual IBM Spectrum Protect storage manager release across your multiple-server environment.
 
So the goal is to have the dsmc interactive storage manager version reported, as printed by
 

[root@server ~]# dsmc

IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 11.0
  Client date/time: 12/17/2020 15:59:32
(c) Copyright by IBM Corporation and other(s) 1990, 2020. All Rights Reserved.

Node Name: P01VWEBTEVI1.FFM.DE.INT.ATOSORIGIN.COM
Session established with server TSM_SERVER: AIX
  Server Version 8, Release 1, Level 10.000
  Server date/time: 12/17/2020 15:59:34  Last access: 12/17/2020 13:28:01

 

into Zabbix, and to set up reports in case your sysadmins have changed the IBM TSM version to a newer one. Thus, for non-sysadmins and less technical persons such as Service Delivery Managers (SDMs), it is much easier to track changes of the Tivoli version across multiple servers.

Enough talk, let me next show you how to set up the required monitoring with a small UserParameter one-liner bash script.
 

1. Create TSM Userparameter script


With a UserParameter key and content as below:

[root@server ~]# vim /etc/zabbix/zabbix_agentd.d/userparameter_TSM.conf

 

UserParameter=dsmc.version,cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13}'


The script output of TivSM version will be reported as so:

[root@server ~]# cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13}'
Client Version 8, Release 1, Level 11.0


 

If you want to get only a major version report from dsmc:

UserParameter=dsmc.version,cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9}'


The output as a major version you will get is

[root@server ~]# cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9}'
Client Version 8,

 

2. Restart the zabbix agent to load userparam script

To load the above configured UserParameter script, we need to restart the zabbix-agent client:

[root@server ~]# systemctl restart zabbix-agent

[root@server ~]#  systemctl status zabbix-agent
● zabbix-agent.service – Zabbix Agent
   Loaded: loaded (/usr/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-07-22 16:17:17 CEST; 4 months 26 days ago
 Main PID: 7817 (zabbix_agentd)
   CGroup: /system.slice/zabbix-agent.service
           ├─7817 /usr/sbin/zabbix_agentd -c /etc/zabbix/zabbix_agentd.conf
           ├─7818 /usr/sbin/zabbix_agentd: collector [idle 1 sec]
           ├─7819 /usr/sbin/zabbix_agentd: listener #1 [waiting for connection]
           ├─7820 /usr/sbin/zabbix_agentd: listener #2 [waiting for connection]
           ├─7821 /usr/sbin/zabbix_agentd: listener #3 [waiting for connection]
           └─7822 /usr/sbin/zabbix_agentd: active checks #1 [idle 1 sec]
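Before building the template, it is worth verifying that the agent resolves the new key locally; zabbix_agentd -t runs a single item test, and zabbix_get (a separate package) queries the running agent over the network. The output should look similar to:

[root@server ~]# zabbix_agentd -t dsmc.version
dsmc.version            [t|Client Version 8, Release 1, Level 11.0]

[root@server ~]# zabbix_get -s 127.0.0.1 -k dsmc.version
Client Version 8, Release 1, Level 11.0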

 

3. Create template for TSM Service check and TSM Version


You will need to create 1 trigger and 2 items, one for the service check and one for the TSM version reporting.

tsm-service-version-screenshot-zabbix
As you see, the necessary names / keys to create are:

Name / Key: TSM – Service State, key: proc.num[dsmcad]

Name / Key: TSM version, key: dsmc.version

 

3.1 Create the trigger


Now let's create the trigger that will report the Service State:

tsm-service-state-zabbix-screenshot

 

{Linux TSM:proc.num[dsmcad].last()}=0

 

3.2 Create the Items


zabbix-dsmc-proc-num-item-setting-screenshot-linux

 

Name: dsmcad
Key: proc.num[dsmcad]

 

tsm-version-item-zabbix-screenshot
 

Update interval: 1d
History Storage period: 90d
Applications: TSM


3.3 Create Zabbix Action

As usual, if you want to receive some email alerting, or let's say send an SMS in case the trigger matches, create the necessary Action with
instructions on how to solve the problem, if there is a Standard Operating Procedure (SOP), as it is often called in the corporate world, for that.

That's all folks ! 🙂

 

Postfix copy every email to a central mailbox (send a copy of every mail sent via mail server to a given email)

Wednesday, October 28th, 2020

Postfix-logo-always-bcc-email-option-send-all-emails-to-a-single-address-with-postfix.svg

Say you need to do a mail server migration, where you have a locally configured Postfix on a number of Linux hosts named:

Linux-host1
Linux-host2
Linux-host3

etc.


all configured to send email via the old mail send host (MailServerHostOld.com) in each Linux box's Postfix configuration /etc/postfix/main.cf.
Now, due to some infrastructure change in the network topology or anything else, you need to relay the mails via another, presumably properly configured, Linux relay host (MailServerNewHost.com).

Such migrations always carry the risk that some of the emails originating from local scripts running on Linux-host1, Linux-host2 …, or from some application or anything else set to send via them, might not get properly delivered to some external Internet-based mailboxes via the new relayhost MailServerNewHost.com.

E.g., in /etc/postfix/main.cf on the Linux-host* machines, you have the below config after the migration:

relayhost = [MailServerNewHost.com]

Let's say you want to make sure you don't end up with lost emails, as you can't be certain the new email server will deliver correctly to the old recipient addresses. What to do then?

To make sure the mails will not end up in an undelivered state and get lost forever after a week or so (depending on the mail queue retention period configured on the sending Linux MTAs and on the mail relay MailServerNewHost.com), it is a very good approach to temporarily keep a BCC (Blind Carbon Copy) of each mail sent through MailServerNewHost.com by the local Postfix instances configured on the Linux-host* machines.

Achieving that in Postfix is very easy: all you have to do is set on your MailServerNewHost.com the postfix config variable always_bcc, smartly included by the Postfix Mail Transfer Agent developers for cases exactly like this.

To forward a copy of all emails passed through the mail server, just place at the end of /etc/postfix/main.cf, after logging in via ssh on MailServerNewHost.com:

always_bcc=All-Emails@your-desired-redirect-email-address.com


Now all that's left is to reload Postfix to force the new configuration to load; on systemd based hosts, as is usual today, do:

# systemctl reload postfix
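To double-check that the option was picked up after the reload, postconf can print the active value (postconf -n lists only the explicitly set parameters):

# postconf always_bcc
always_bcc = All-Emails@your-desired-redirect-email-address.com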


Finally, to make sure all works as expected and mail is sent, do a test from the local MTAs.
 

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" georgi.stoyanov@remote-user-email-whatever-address.com

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" georgi.stoyanov@sample-destination-address.com


As you can see, I'm using -r to simulate a sender address; this is a feature of mailx and is not available on older Linux OS hosts that are bundled with the mail command only.
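If the test mails do not show up, you can also watch the copies being generated in the relay's mail log (the path differs per distro: /var/log/mail.log on Debian / Ubuntu, /var/log/maillog on RHEL / CentOS):

MailServerNewHost:~# tail -f /var/log/mail.log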
Now go and open the All-Emails@your-desired-redirect-email-address.com mailbox in Outlook (if it is an M$ Office 365 MX shared mailbox), Thunderbird or whatever email-fetching software that supports POP3 or IMAP (in case you configured the common all-email mailbox on some other Postfix / Sendmail / Qmail MTA), and check whether you started receiving a lot of emails 🙂

That's all folks enjoy ! 🙂

Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019

proftpd-linux-logo

Earlier I've blogged about an encountered problem that prevented Active mode FTP connections on CentOS.
As I'm working for a client building a brand new dedicated server purchased from the Contabo dedicated host provider, on a freshly installed Debian 10 GNU / Linux, I had to configure a new FTP server. For some time now I prefer ProFTPD over VSFTPD, because in my opinion it is more lightweight and hence a better choice for small UNIX server setups. During this I once again encountered the same problem of ACTIVE FTP not working from the FTP server to the FTP client host machine. But before explaining the fix, I find it worthy to briefly explain what an ACTIVE / PASSIVE FTP connection is.

 

1. What is ACTIVE / PASSIVE FTP connection?
 

In active mode, the client specifies which client-side port the data channel has been opened on, and the server starts the connection. In other words, the default FTP client communication, for historical reasons, is ACTIVE MODE. E.g.
the client, once connected to the server, tells the server to open an extra port or ports locally, via which the overall FTP data transfer will occur. In the early days of networking, when the FTP protocol was developed, security was not such a big concern; networks usually did not have firewalls at all, and the FTP data transfer machine was running just a single FTP server and nothing more. In these early days, when FTP was not even used over the Internet and FTP data transfers happened on local networks, this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever increasing complexity of the Internet and of networks, and the ever tightening firewalls due to viruses and worms trying to own and exploit networks while creating unnecessary bulk load, this has changed …

active-passive-ftp-explained-diagram
 

2. Installing and configuring the ProFTPD server's public ServerName

I've installed the server with the common command:

 

apt --yes install proftpd

 

And the only configuration changed in the default configuration file /etc/proftpd/proftpd.conf was:

ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the multitude of FTP vulnerability scan script kiddie crawlers from learning the exact OS of the server, so this was changed to:

 

ServerName "MyServerHostname"

 

Though security through obscurity is generally a bad practice, in this particular case doing so is a good one.
 

3. Create iptables firewall rules to allow ACTIVE FTP mode


But anyways, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 from the incoming source port range 1024:65535 (to enable ACTIVE FTP), with iptables INPUT and OUTPUT chain rules like this:

 

iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT


Once the firewall allows FTP Active / Passive connections and the FTP server is listening, to check that all is properly configured, verify the iptables rules and the FTP listener:
 

/sbin/iptables -L INPUT |grep ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN    
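On newer distributions where netstat is deprecated, ss from the iproute2 package gives the same check (output trimmed):

ss -tln | grep ':21'
LISTEN   0        128        *:21        *:*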

 

4. Loading the nf_nat_ftp module and net.netfilter.nf_conntrack_helper (for backward compatibility)


The next step, of course, was to add the necessary modules nf_nat_ftp and nf_conntrack_sane, which make FTP properly forward ports with the respective firewall states on any of the above source ports usually allowed by firewalls. Note that the given port range 1024:65535 might be too liberal for paranoid sysadmins, and in many cases those ports are not filtered; if you are a security freak you can use a smaller range such as 60000:65535.

 

Here is the time to mention, for sysadmins who haven't recently had the task of configuring a new (unencrypted) file transfer server (as today secure FTP is almost always used for file transfers for the sake of security), that they might be puzzled to find out that the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP Active connections work, is nowadays substituted by nf_nat_ftp and nf_conntrack_sane.

To make the 2 modules load permanently on next boot on Debian Linux, they have to be added to /etc/modules.

Here is how a sample /etc/modules that loads the modules on next system boot looks:

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
softdog
nf_nat_ftp
nf_conntrack_sane
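To also load the two modules immediately, without waiting for the next boot, and to verify they are in:

modprobe nf_nat_ftp
modprobe nf_conntrack_sane
lsmod | grep -E 'nf_nat_ftp|nf_conntrack_sane'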


Next to say is that in the newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_sane behaviour changed, so simply loading the modules will not work, and if you test it with some FTP client (I used gFTP / ncftp from my Linux desktop) you are about to get FTP 'No route to host' errors like:

 

Cannot create a data connection: No route to host

 

cannot-create-a-data-connection-no-route-to-host-linux-error-howto-fix


Sometimes, instead of the No route to host error, the error the FTP client returns is:

 

227 entering passive mode FTP connect connection timed out error


To make the nf_nat_ftp module work on newer Linux kernels, you hence have to enable the backwards compatibility kernel variable

 

 

/proc/sys/net/netfilter/nf_conntrack_helper

 

echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

 

To make it permanent, if you have enabled the legacy single-file /etc/rc.local boot place as I do on servers, add the echo line there (for how to enable rc.local on newer Linuxes check here),

or alternatively set it via sysctl:

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make the change permanent (e.g. loaded on next boot):

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf
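Then verify the toggle took effect:

sysctl net.netfilter.nf_conntrack_helper
net.netfilter.nf_conntrack_helper = 1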

 

5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD


Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default), and beside it add the following line:

 

PassivePorts 60000 65534
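Note that the chosen passive range must also be open on the firewall, otherwise passive transfers will still time out; following the same iptables style as in step 3 (adjust the range if you picked a different one):

iptables -A INPUT -p tcp --dport 60000:65534 -m state --state NEW,ESTABLISHED -j ACCEPT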

 

Just for information, if instead of ProFTPD you experience the error on PureFTPD, the configuration value to set in /etc/pure-ftpd.conf is:
 

PassivePortRange 30000 35000


That's all folks! Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it: the client should now talk to the remote server in ACTIVE FTP mode as expected (the automatic fallback to passive mode will not be triggered anymore), and you will no longer get strange errors and connection failures in FTP clients such as gFTP.

Cheers 🙂

Heroes of Might and Magic 2: Best old-school turn based strategic game to play on your Android mobile phone

Monday, June 16th, 2014

heroes-of-might-and-magic-2-for-your-mobile-smartphone-android-screenshot

Probably many people my age (I'm aged 30 now) spent many days and sleepless nights being totally addicted to playing probably the most addictive (and in my view the greatest) strategy game of all time – Heroes of Might and Magic II (HOMM2).
With that in mind, it will be great news that, if you own a smartphone, you can bring back some nice memories and play (for free) a port of Heroes 2 for Android.

The Free Heroes 2 Android port is made to support multiple screen sizes, so the game can be played on an Android tablet, a tiny-screen smartphone or a middle-sized mobile. Also, the Free Heroes 2 mobile port allows you to choose 'The Magnifying Glass' option on first game boot, so if you're on a tiny-screened mobile you can still zoom by pointing at a game object. The Free Heroes 2 Android port exists thanks to Gerhard Stein, who is also the author of the OpenTyrian mobile phone port and of the port of the amazing old computer jump-and-run arcade Commander Keen. The pointer controls of FHeroes2 are pretty convenient, and playing the game is almost as comfortable as playing with a PC mouse.
Free Heroes 2 is a port of the fheroes2 engine, a free implementation of the Heroes of Might and Magic II engine in SDL, and because SDL is platform independent, Free Heroes 2 is also available for both Windows and Linux. Maybe here is the time to mention, too, that the original Heroes 2 DOS game works perfectly on any modern Linux distribution when started through the DOSBox DOS emulator.

By default Free Heroes 2 has no game campaign support yet. In order to enable campaign support in Free Heroes 2, download the FULL Heroes 2 game and put its data files on the mobile SD card in the directory app-data/net.sourceforge.fheroes2, and the campaign option will be there too.

heroes_of_might_and_magic_ii_play-on-android-best-strategy-old-game-for-android

Heroes of Might and Magic 2 – The Succession Wars (or HEROES2, as it is widely known in gamer communities) is a turn-based strategy game from 1996, developed by Jon Van Caneghem's New World Computing company and marketed under the brand of The 3DO Company. Heroes II was voted the sixth-best PC game of all time by PC Gamer in May 1997. Heroes2 also has a game expansion pack called The Price of Loyalty, released in 1997, as well as Heroes of Might and Magic II – Gold, from 1998. The game's graphic design looks very beautiful and, combined with the soundtrack, makes playing it an awesome and calming experience. The game is especially notable for its soundtrack, which is all beautiful classical music.

Heroes-of-might-and-magic-2-best-games-of-all-time-screenshot-HOMM2
(Picture taken and copyrighted by Wikipedia)

Gameplay

The titular heroes are player characters who can recruit armies, move around the map, capture resources, and engage in combat. The heroes also incorporate some role-playing game elements; they possess a set of statistics that confer bonuses to an army, artifacts that enhance their powers, and knowledge of magical spells that can be used to attack enemies or produce strategic benefits. Also, heroes gain experience levels from battle, such that veteran heroes are significantly more powerful than inexperienced ones.

On a typical map, players begin a game with one town of a chosen alignment. Each town alignment hosts a unique selection of creatures from which the player can build an army. Town alignment also determines other unique traits such as native hero classes, special bonuses or abilities, and leanings toward certain skills or kinds of magic.

heroes-of-might-and-magic-town_castle-sorceress-screenshot

Towns play a central role in the games since they are the primary source of income and new recruits. A typical objective in each game is to capture all enemy towns. Maps may also start with neutral towns, which do not send out heroes but may still be captured by any player. It is therefore possible, and common, to have more towns than players on a map. When captured, a town retains its alignment type, allowing the new owner to create a mixed army. A player or team is eliminated when no towns or heroes are left under their control. Usually the last player or team remaining is the victor.

As heroes visit special locations called obelisks, pieces are removed from a jigsaw puzzle-like map, gradually revealing the Ultimate Artifact's location to the player. Once found, it confers immense bonuses capable of breaking a stalemate: the grail can be taken back to a town and used to build a special structure, while the Ultimate Artifact provides the bonuses directly through possession.

heroes-of-might-and-magic-2-battle-for-castle-screenshot

Whenever a player engages in battle

The game changes from the adventure map display to a combat screen, which is based on either a hexagonal or square grid. In this mode, the game mimics the turn-based tactics genre, as the engaged armies must carry through the battle without the opportunity to reinforce or gracefully retreat. With few exceptions, combat must end with the losing army deserting, being destroyed, or paying a heavy price in gold to surrender. Surrendering allows the player to keep the remaining units intact. Battles can be fought army against army, or castles / villages can be besieged and (captured) occupied. Owning a town gives your hero a daily income of money, later used to buy and upgrade castle buildings.

heroes-of-might-and-magic-ii-the-succession-wars-wizard-castle-building-options-screenshot

Your heroes can also take over mines producing different goods like minerals, sulfur, gold, gems etc. Constructing different buildings and recruiting war units for the army usually costs gold plus some kind of resource.

Game Story

Heroes II's history continues after Heroes I. The ending of Heroes I results in Lord Morglin Ironfist's victory. In the following years, he successfully unifies the continent of Enroth and secures his rule as king. Upon the king's death, his two sons, Archibald and Roland, vie for the crown. Archibald orchestrates a series of events that lead to Roland's exile. Archibald is then declared the new king, while Roland organizes a resistance. Each alignment is represented by one of the game's two campaigns: Archibald's campaign features the three "evil" town alignments, while Roland's campaign features the three "good" town alignments.

If Archibald is victorious, Roland's rebellion is crushed, and Roland himself is imprisoned in Castle Ironfist, leaving Archibald the uncontested ruler of Enroth. The canonical ending, however, results in Roland's victory, with Archibald being turned to stone by Roland's court wizard, Tanir.

If you're more interested in playing modern games and want some more entertaining modern titles, take a look at Kevin Martin's JoyofAndroid Best Android Games post here.

Enjoy

Baby boomers and Generation X, Y, Z – Generational Marketing and 4 Common personality stereotype traits of people born over the last 60 years

Saturday, August 18th, 2018

baby-bommers-and-x-y-z-generations

Those employed in the realm of social or Internet marketing definitely have to know of the existence of at least 4 different conditional stereotypes: the Baby Boomers, Generation X, Generation Y (the Millennials) and Generation Z.

According to sociologist Karl Mannheim (who is among the founding fathers of classical sociology), "All members of a generation share a similar collective experience"; in other words, people are categorized into generations depending on when they were born.

As stereotypes, these are generalizations about people born in different periods of time who share the same or similar traits.
Because of their age, the conditions they grew up in and the general spirit of their time, they tend to behave in more or less similar ways: how they think, how they save / spend money, and the common approach they share to life choices, attitude towards life and worldview.

But before proceeding to the 4 main conditional cohort stereotypes, it is worth mentioning, with a little bit of pre-history, how these four common-trait generations came into existence.

The world situation before WWI and WWII, and the First and Second World Wars themselves, played a pivotal role in forming the social conditions necessary for the development of the Baby Boomers.

* The depression Era people

Born in the period 1912 – 1921, they came to full maturity around 1930-1939, right before the outbreak of WWII (all of them are already deceased as of 2018). As a consequence of the war uncertainty, the havoc and the wartime conditions, they were very conservative, compulsive savers who tried their best to maintain low debt. They had the mindset (and felt the responsibility) to leave some kind of legacy to their children. They were very patriotic, oriented towards work before pleasure, had great respect for authority and a strong sense of moral obligation. For all these character traits, the strong belief in God that most people had at the time undoubtedly played a key role.

The next conditional stereotype of people that came to earth is:

* The World War II Generation
 

Born in the period 1922 to 1927, they came to mature age exactly in the terrible years of the Second World War.

People of that time were fighters either for or against the Axis Powers or the Allied Powers, with the common shared goal of fighting the enemy (of course, there were multitudes of people who were just trying to survive and did not take a side in this meaningless war).

The number of them still alive is estimated at a few million deathbed elders worldwide.

The above conditional generation types are mentioned mostly for historical reasons: most of the people belonging to the depression and WWII eras are dead, and the few millions remaining are overall at a less-consuming age (excluding the consumption of medicine, which is higher compared to youngsters).

I'll proceed further with the Baby Boomers and GEN X, Y and Z, who are de facto the still active members participating in society and the economy, more or less.

baby-boomers-generation-x-y-z-chart-table-by-year-of-birth

So what are these 4 generational stereotypes, and why are they so important for the modern marketer or business manager?

 

1. BABY BOOMERS, also called Boomers for short
 

we-are-who-are-baby-boomers

These are people defined by a birth year range from the early-to-mid 1940s until 1960 to 1964.

 In Europe and North America, boomers are widely identified with privilege, as many grew up in a time of widespread government subsidies in post-war housing and education, and increasing affluence.

As a group, baby boomers were considered the wealthiest, most active, and most physically fit generation up to the era in which they arrived, and were amongst the first to grow up genuinely expecting the world to improve with time. They were also the generation that received peak levels of income; they could therefore reap the benefits of abundant levels of food, apparel, retirement programs, and sometimes even "midlife crisis" products. The increased consumerism for this generation has been regularly criticized as excessive (and that's for a good reason).

One feature of the boomers was that they have tended to think of themselves as a special generation, very different from those that had come before or that came afterward. In the 1960s, as the relatively large numbers of young people became teenagers and young adults, they, and those around them, created a very specific rhetoric around their cohort and the changes they were bringing about. This rhetoric had an important impact on the self-perceptions of the boomers, as well as on their tendency to define the world in terms of generations, which was a relatively new phenomenon. The baby boom has been described variously as a "shockwave" and with metaphors such as "the pig in the python".

 

2. Generation X / GEN X

generation-x-who-are-they-gen-x-explained-picture

Generation X covers the people born in the period from 1960 forward in time until the 1980s. A specific feature of the 60s-80s period was the shifting societal values; perhaps the spring of this generation was also connected to the increasing role and spread of communism in the world.
Sometimes this generation is referred to as the "latchkey generation".
The term Generation X itself was popularized largely by Douglas Coupland in his 1991 novel Generation X: Tales for an Accelerated Culture.

A very common trait for Generation X was the reduced adult supervision over kids compared to previous generations, a result of increasing divorce rates and the growing share of single-parent upbringing (in most cases by the mother), where the single parent had to be actively involved in the workforce, physically lacked the time to spend enough of it with the children, and made increased use of childcare options.

They were dubbed the "MTV" (Music Television) generation, after the hit music TV channel most popular in the early 1990s.
The kids representing Generation X were described as slackers, cynical and disaffected.

The cultural influences dominating the tastes and feelings of the teen masses of that generation were musical genres such as punk, heavy metal, grunge and hip-hop, as well as indie films (independent films produced outside of the major film studio system).

According to much research, in midlife this generation is described as active, happy and achieving a work-life-balance kind of lifestyle.

People belonging to Generation X are described as people with entrepreneurial tendencies.

Just to name a few of the celebrities and successful people who belong to this generation: Google's founders Sergey Brin & Larry Page (born in 1973), Richard Stallman (founder of the Free Software movement), as well as movie and film celebrities such as George Clooney, Lenny Kravitz, Quentin Tarantino, Kevin Smith, David Fincher etc.

According to a United Kingdom survey of 2500+ workers conducted by Workfront, GEN X are found to be among the hardest working employees in today's workforce. They are also ranked highly by fellow workers for having a strong work ethic (59.5%), being helpful (55.4%) and very skilled (54.5% of respondents said so), and were marked as the best troubleshooters / problem solvers (41.6% claimed so).
According to research conducted by Viacom, Gen X have a high desire for flexibility and fulfillment at work.

3. Generation Y (Millenials) – GEN Y
 

who-are-generation-y-millenials-explained

Following Generation X, Generation Y came to earth. The birth period attributed to these kids is stretchy: the years describing this generation range from the 1980s – 1990s to the early 2000s, where the birth period of those people ends.
These kids are the descendants of GEN X and of the second-wave Baby Boomers.
In public this generation is referred to as the "echo boomers".

The Millennials' characteristics differ based on the region of birth; they're famous for their increased familiarity with communication, media and digital technologies.

millenials-focus-on-technology-innovation-and-their-technological-preferences

Their upbringing was marked by an increasingly liberal approach to politics.
The Great Recession of the 2000s had a major impact on this generation, because it caused historically high levels of unemployment among youngsters and led to possible long-term economic and social damage to this generation.

millennials-are-heavily-influenced-by-their-peers
Gen Y are less brand loyal, and the speed of the Internet has led the cohort to be similarly flexible and changing in its fashion and style consciousness, and in where and how it is communicated with.

As I was born in 1983, my generation and I belong to Generation Y, and even though Bulgaria before 1991 was a communist regime country, I have to agree that I and many of my friends share a very similar behavior and way of thinking to the GEN Y stereotype described. But as I was born in times of transition, and Bulgaria, as a Soviet Union satellite at the time, was lagging behind in fashion and international culture due to the communist regime, my generation and I also seem to share a lot of common stereotype characteristics with Generation X, such as the punk-rock, metal, hip-hop and MTV culture, and partly the GEN X-like overall view on life.

Among the most famous successful representatives of the Millennial generation are Mark Zuckerberg (Facebook's founder), Prince William (second in line to the British throne), Kim Jong Un (the leader and dictator of North Korea) etc.
 

 

4. Generation Z (GEN Z) / iGeneration / Generation Sensible (Post Millennials)
 

who-are-generation-y-millenials-explained

Following the Millennial generation comes GEN Z; demographers and researchers typically set the starting birth date period of this generation between the 1990s and the mid 2000s. As of the time of writing, there is still no clear consensus regarding the ending birth years.

This is the so-called Internet Generation, because this generation has used the Internet and smart mobile phone technology since a very young age; they are very comfortable with technology (kind of wired) and addicted to social media such as Facebook / Twitter / Instagram etc. Because of the level of digital communication, many people of this generation are more introvert-oriented and often have problems expressing themselves freely in groups. They also tend to lack physical communication and are more digital-community oriented, even though this depends a lot on the specific personality, and in some cases it is exactly the opposite.

 

 * Summary
 

As a marketer, a human resources hiring specialist, a CEO or some kind of project / business manager, it is a good idea to be aware of these 4 common stereotypes. However, as these are stereotypes (and a theory), like everything theorized the data is slightly biased and not fully true. Marketing practice shows that whoever conducts marketing and bases sales on this theorizing should consider it just one aspect of the campaign; those trying to sell stuff, ideas or ideology to any of these generations should be careful not to count 100% on the common traits found among the above 4 major groups, should consider the individuality every person has, and should just experiment a little bit to see what works and what doesn't.

It should also be mentioned that this diversification of stereotypes is mostly valid for US citizens and Westerners, but doesn't fully fit ex-communist countries or countries of the former Soviet Union; those countries show slightly different personality traits for people born in any of the defined year periods, and the same is more or less true for the poor parts of Africa, as well as for India, Vietnam, China and mostly all of the communist countries, ex and current. It should be said that the countries that belonged to the Soviet Union, many of which are current Russian Federation republics, have personality traits that are often a mixture of the 4 stereotypes, and even keep a lot of the traits typical for the WWI and WWII generations, which makes dealing with these people a very weird experience.

No matter the standard error that should always be considered when basing a marketing research hypothesis on generational marketing (using generational segmentation to target the best potential customers), having a general insight into these stereotypes and taking them into consideration could seriously help both in marketing and in HR-specific fields like change management.

generation-x-y-z-characteristics

If you're a marketer, I recommend you also take a quick look at the following very educative article on generational marketing: how to target each of GEN X, Y, Z and the Baby Boomers, and what works best for each of them.

No matter what, just like all theories, the theory of Boomers and generational segmentation is not completely true, but it gives good soil for reasoning and definitely helps people involved in sociology and business.

Comments and feedback on the article are mostly welcome as the topic is very broad and there is much more to be said …

Hope the article was interesting to you ….

What was your Generation like?

Block Web server over loading Bad Crawler Bots and Search Engine Spiders with .htaccess rules

Monday, September 18th, 2017

howto-block-webserver-overloading-bad-crawler-bots-spiders-with-htaccess-modrewrite-rules-file

In my last post I talked about the problem of search index crawler robots aggressively crawling websites and how to stop them (the article is here), explaining how to raise the delays between bot URL requests to a website and how to completely prohibit some bots from crawling via robots.txt.

As explained in that article, the consequence of too many badly written or aggressively behaving spiders is a "server stoning", and therefore degraded web server performance, or even a short-lived Denial of Service attack, depending on how well the initial server scaling was done.

The bots we want to filter are not to be confused with the legitimate bots that drive real traffic to your website. Just for information,

the 10 most popular web crawler bots as of the time of writing are:
 

1. GoogleBot (the Google crawler bots; funnily, the bots become less active on Saturdays and Sundays :))

2. BingBot (Bing.com crawler bots)

3. SlurpBot (also famous as Yahoo! Slurp)

4. DuckDuckBot (the crawler bot of the duckduckgo.com search engine)

5. Baiduspider (the crawler of Baidu, the most famous Chinese search engine, used as a substitute for Google in China)

6. YandexBot (the crawler bots of the Russian Yandex search engine, used in Russia as a substitute for Google)

7. Sogou Spider (the crawler of Sogou, a leading Chinese search engine launched in 2004)

8. Exabot (the crawler for the ExaLead search engine, a French search engine launched in 2000)

9. FaceBot (Facebook External Hit; this crawler crawls a certain webpage only once a user shares or pastes a link with video, music, a blog or whatever in chat to another user)

10. Alexa Crawler (ia_archiver is the web crawler for Amazon's Alexa Internet rankings; Alexa is a great site to evaluate approximate page popularity on the Internet, and the Alexa SiteInfo page has historically been the Swiss Army knife for anyone wanting to quickly evaluate a webpage's approximate ranking compared to other pages)

The above legitimate bots are known to follow most if not all of the W3C – World Wide Web Consortium (w3.org) standards, and therefore they respect the crawl allowance and restriction commands given for a site in robots.txt. Unfortunately, many of the so-called bad bots and mirroring scripts that burn your web server CPU and memory, mentioned in the previous article, follow the /robots.txt prescriptions either not at all or only partially.

Hence, for the bots that don't respect robots.txt, the only way to get rid of most of the web spiders that are just loading your bandwidth and server hardware is to filter / block them using Apache's mod_rewrite through an

.htaccess

file.

Create, if it does not already exist, an .htaccess file in the DocumentRoot of your website with whatever text editor, or create it on your Windows / Mac OS desktop and transfer it via FTP / Secure FTP to the server.

I prefer to do it directly on the server with vim (a text editor):

 

 

vim /var/www/sites/your-domain.com/.htaccess

 

RewriteEngine On

IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

SetEnvIfNoCase User-Agent "^Black Hole” bad_bot
SetEnvIfNoCase User-Agent "^Titan bad_bot
SetEnvIfNoCase User-Agent "^WebStripper" bad_bot
SetEnvIfNoCase User-Agent "^NetMechanic" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker" bad_bot
SetEnvIfNoCase User-Agent "^EmailCollector" bad_bot
SetEnvIfNoCase User-Agent "^EmailSiphon" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit" bad_bot
SetEnvIfNoCase User-Agent "^EmailWolf" bad_bot
SetEnvIfNoCase User-Agent "^ExtractorPro" bad_bot
SetEnvIfNoCase User-Agent "^CopyRightCheck" bad_bot
SetEnvIfNoCase User-Agent "^Crescent" bad_bot
SetEnvIfNoCase User-Agent "^Wget" bad_bot
SetEnvIfNoCase User-Agent "^SiteSnagger" bad_bot
SetEnvIfNoCase User-Agent "^ProWebWalker" bad_bot
SetEnvIfNoCase User-Agent "^CheeseBot" bad_bot
SetEnvIfNoCase User-Agent "^Teleport" bad_bot
SetEnvIfNoCase User-Agent "^TeleportPro" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc" bad_bot
SetEnvIfNoCase User-Agent "^Telesoft" bad_bot
SetEnvIfNoCase User-Agent "^Website Quester" bad_bot
SetEnvIfNoCase User-Agent "^WebZip" bad_bot
SetEnvIfNoCase User-Agent "^moget/2.1" bad_bot
SetEnvIfNoCase User-Agent "^WebZip/4.0" bad_bot
SetEnvIfNoCase User-Agent "^WebSauger" bad_bot
SetEnvIfNoCase User-Agent "^WebCopier" bad_bot
SetEnvIfNoCase User-Agent "^NetAnts" bad_bot
SetEnvIfNoCase User-Agent "^Mister PiX" bad_bot
SetEnvIfNoCase User-Agent "^WebAuto" bad_bot
SetEnvIfNoCase User-Agent "^TheNomad" bad_bot
SetEnvIfNoCase User-Agent "^WWW-Collector-E" bad_bot
SetEnvIfNoCase User-Agent "^RMA" bad_bot
SetEnvIfNoCase User-Agent "^libWeb/clsHTTP" bad_bot
SetEnvIfNoCase User-Agent "^asterias" bad_bot
SetEnvIfNoCase User-Agent "^httplib" bad_bot
SetEnvIfNoCase User-Agent "^turingos" bad_bot
SetEnvIfNoCase User-Agent "^spanner" bad_bot
SetEnvIfNoCase User-Agent "^InfoNaviRobot" bad_bot
SetEnvIfNoCase User-Agent "^Harvest/1.5" bad_bot
SetEnvIfNoCase User-Agent "Bullseye/1.0" bad_bot
SetEnvIfNoCase User-Agent "^Mozilla/4.0 (compatible; BullsEye; Windows 95)" bad_bot
SetEnvIfNoCase User-Agent "^Crescent Internet ToolPak HTTP OLE Control v.1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPickerSE/1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker /1.0" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit/3.50" bad_bot
SetEnvIfNoCase User-Agent "^NICErsPRO" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 5.01.4511" bad_bot
SetEnvIfNoCase User-Agent "^DittoSpyder" bad_bot
SetEnvIfNoCase User-Agent "^Foobot" bad_bot
SetEnvIfNoCase User-Agent "^WebmasterWorldForumBot" bad_bot
SetEnvIfNoCase User-Agent "^SpankBot" bad_bot
SetEnvIfNoCase User-Agent "^BotALot" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial/1.34" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.6" bad_bot
SetEnvIfNoCase User-Agent "^BunnySlippers" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 6.00.8169" bad_bot
SetEnvIfNoCase User-Agent "^URLy Warning" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.5.3" bad_bot
SetEnvIfNoCase User-Agent "^LinkWalker" bad_bot
SetEnvIfNoCase User-Agent "^cosmos" bad_bot
SetEnvIfNoCase User-Agent "^moget" bad_bot
SetEnvIfNoCase User-Agent "^hloader" bad_bot
SetEnvIfNoCase User-Agent "^humanlinks" bad_bot
SetEnvIfNoCase User-Agent "^LinkextractorPro" bad_bot
SetEnvIfNoCase User-Agent "^Offline Explorer" bad_bot
SetEnvIfNoCase User-Agent "^Mata Hari" bad_bot
SetEnvIfNoCase User-Agent "^LexiBot" bad_bot
SetEnvIfNoCase User-Agent "^Web Image Collector" bad_bot
SetEnvIfNoCase User-Agent "^The Intraformant" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot" bad_bot
SetEnvIfNoCase User-Agent "^BlowFish/1.0" bad_bot
SetEnvIfNoCase User-Agent "^JennyBot" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc/4.2" bad_bot
SetEnvIfNoCase User-Agent "^BuiltBotTough" bad_bot
SetEnvIfNoCase User-Agent "^ProPowerBot/2.14" bad_bot
SetEnvIfNoCase User-Agent "^BackDoorBot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^toCrawl/UrlDispatcher" bad_bot
SetEnvIfNoCase User-Agent "^WebEnhancer" bad_bot
SetEnvIfNoCase User-Agent "^TightTwatBot" bad_bot
SetEnvIfNoCase User-Agent "^suzuran" bad_bot
SetEnvIfNoCase User-Agent "^VCI WebViewer VCI WebViewer Win32" bad_bot
SetEnvIfNoCase User-Agent "^VCI" bad_bot
SetEnvIfNoCase User-Agent "^Szukacz/1.4" bad_bot
SetEnvIfNoCase User-Agent "^QueryN Metasearch" bad_bot
SetEnvIfNoCase User-Agent "^Openfind data gathere" bad_bot
SetEnvIfNoCase User-Agent "^Openfind" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s Link Sleuth 1.1c" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s" bad_bot
SetEnvIfNoCase User-Agent "^Zeus" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey Bait & Tackle/v1.01" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey" bad_bot
SetEnvIfNoCase User-Agent "^Zeus 32297 Webster Pro V2.9 Win32" bad_bot
SetEnvIfNoCase User-Agent "^Webster Pro" bad_bot
SetEnvIfNoCase User-Agent "^EroCrawler" bad_bot
SetEnvIfNoCase User-Agent "^LinkScan/8.1a Unix" bad_bot
SetEnvIfNoCase User-Agent "^Keyword Density/0.9" bad_bot
SetEnvIfNoCase User-Agent "^Kenjin Spider" bad_bot
SetEnvIfNoCase User-Agent "^Cegbfeieh" bad_bot

 

<Limit GET POST>
Order Allow,Deny
Allow from all
Deny from env=bad_bot
</Limit>
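Note that the blocking above is actually performed by mod_setenvif (the SetEnvIfNoCase directive) together with the access-control directives in the <Limit> block. If you would rather achieve the same with mod_rewrite itself, a minimal equivalent sketch for a couple of the agents would be:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^WebStripper [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [NC]
RewriteRule .* - [F,L]

The [F] flag returns 403 Forbidden to the matched client and [NC] makes the match case-insensitive.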

 


The bad-bot prohibition rules above include the RewriteEngine On directive; however, for many websites this directive is already enabled directly in the VirtualHost section for the domain(s). If that is your case, you can remove RewriteEngine On from .htaccess and the prohibition rules will still continue to work.
The rules are also perfectly suitable for WordPress-based websites / blogs in case you need to filter out obstructive spiders, though they will work on any website domain with mod_rewrite enabled.

Once you have implemented the above rules you will not need to restart Apache, as .htaccess is read dynamically on each client request to the webserver.
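Keep in mind, though, that Apache only honors .htaccess files when overrides are permitted for the directory in the main server configuration, something along these lines (a sketch; the path is illustrative):

<Directory /var/www/sites/your-domain.com>
    AllowOverride All
</Directory>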

2. Testing .htaccess Bad Bots Filtering Works as Expected


To test that the new bad-bot filtering configuration works as expected, the manual (and more involved) way is with lynx (a text browser), assuming you have shell access to a Linux / BSD / *NIX computer or run your own *NIX server / desktop.
 

Here is how:
 

 

lynx -useragent="Mozilla/5.0 (compatible; MegaIndex.ru/2.0; +http://megaindex.com/crawler)" -head -dump http://www.your-website-filtering-bad-bots.com/

 

 

Note that lynx will provide a warning such as:

Warning: User-Agent string does not contain "Lynx" or "L_y_n_x"!

Just ignore it and press enter to continue.

Two other lynx use cases that I have historically used heavily are pretending with Lynx that you are GoogleBot, in order to see how Google actually sees your website:
 

  • Pretend with Lynx You're GoogleBot

 

lynx -useragent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 

 

  • How to Pretend with Lynx Browser You are GoogleBot-Mobile

 

lynx -useragent="Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" -head -dump http://www.your-domain.com/

 


Or, for the lazy ones who don't have Linux / *NIX at their disposal, you can use the WannaBrowser website.

WannaBrowser is a web-based browser emulator which gives you the ability to change the User-Agent on each website request, so just set your User-Agent to any of the bot agents we have just filtered, for example CheeseBot.

The .htaccess rules added earlier, once they detect that your client comes in with a prohibited browser agent, will immediately filter it out and you will be unable to access the website, getting a response like:
 

HTTP/1.1 403 Forbidden
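If you have curl at hand, an equivalent quick check is to send a HEAD request with a forged User-Agent (a sketch; substitute your own domain):

curl -I -A "CheeseBot" http://www.your-website-filtering-bad-bots.com/

A filtered agent should get back the 403 status line above, while a normal browser User-Agent should get 200 OK.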

 

As I have talked a lot about index bots, I think it is also worth mentioning three great websites that can give you up-to-date information on the exact User-Agent strings spiders return, commonly known bot traits, as well as a currently maintained list of the bad bots:

Bot and browser resources with information on user-agents, bad bots and odd crawler and bot specifics

1. botreports.com
2. user-agents.org
3. useragentapi.com

 

An updated list of robot user-agents (crawler-user-agents) is also available on GitHub here, regularly updated by Caio Almeida.

There are also third-party plugins (modules) available for website platforms like WordPress / Joomla / Typo3 etc.

Besides the bots listed on these websites, and the well-known good and bad bots, there are perhaps hundreds of others that might end up crawling your website and that may or may not need to be filtered. Therefore, before proceeding with any filtering steps, it is generally a good idea to monitor your HTTPD access.log / error.log, because if you happen to mistakenly filter the wrong bot, this could cause website indexing problems.
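A quick way to see which User-Agents are hitting the server most often is a short shell pipeline (a sketch assuming the default Apache combined log format and a typical Debian log path):

awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

The sixth quote-delimited field of the combined log format is the User-Agent string, so this prints the top 20 agents by request count.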

Hope this article gives you some valuable information. Enjoy! 🙂

 

Check Windows Operating System install date, Full list of installed and uninstalled programs from command line / Check how old is your Windows installation?

Tuesday, March 29th, 2016

when-was-windows-installed-check-howto-from-command-line
Sometimes when you inherit some Windows / Linux OS servers or desktops, it is useful to know the operating system's install date. Usually the install date of the OS is close to the date of purchase of the system; this is especially true for Windows, but not necessarily true for Linux-based installs.

Knowing the install date is especially useful if you're not sure how outdated a certain operating system is. Knowing how long ago the current installation was performed can hint whether to make re-install plans in order to keep system security up to date, and can give you an idea whether the system is prone to common errors or security flaws from the time of installation.

 

1. Check out how old the Windows install is

Finding out the age of a Windows installation can be done on almost all NT 4.0-based Windows versions and onwards; getting the Winblows install date works the same way on Windows XP / Vista / 7 and 8.

Besides many useful things, such as detailed information about the configuration of your PC / notebook, systeminfo can also provide you with the install date; to get it, just run from the command line (cmd.exe):
 

C:\Users\hipo> systeminfo | find /i "install date"
Original Install Date:     09/18/13, 15:23:18 PM


check-windows-os-install-date-from-command-line-howto-screenshot

If you need to get the initial Windows system install date, however, it might be much better to use the WMIC command:

 

 

C:\Users\hipo>WMIC OS GET installdate
InstallDate
20130918152318.000000+180


The only downside of using WMIC, as you can see, is that it provides the Windows OS install date in a raw unparsed format (YYYYMMDDhhmmss plus a timezone offset in minutes), but for scripters that's great.
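If you need the raw timestamp as a human-readable date inside a batch script, a small sketch like the following should do (a hypothetical helper; it grabs the WMIC value and slices out the year / month / day / time substrings):

@echo off
setlocal
set "raw="
rem take the first non-header line of the WMIC output
for /f "skip=1 delims=" %%i in ('wmic os get installdate') do if not defined raw set "raw=%%i"
rem slice YYYYMMDDhhmmss into a readable form
echo Installed on: %raw:~0,4%-%raw:~4,2%-%raw:~6,2% %raw:~8,2%:%raw:~10,2%
endlocal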

2. Check Windows installed and uninstalled software and uptime from command line

Another common thing to check next to the Windows install date is the Windows uptime; the easiest way to see it is to run the Task Manager (from the command line, run taskmgr).

windows-task-manager-how-to-check-windows-operating-system-uptime-easily

For those who want to get the uptime from the Windows command line for scripting purposes, this can again be done with the systeminfo cmd, i.e.:

 

C:\> systeminfo | find "System Boot Time:"
System Boot Time:          03/29/16, 08:48:59 AM


windows-os-command-to-get-system-uptime-screenshot
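By the way, an old-school alternative to approximate the last boot time is the net statistics command; the "Statistics since" timestamp roughly corresponds to the moment the machine was booted (a sketch):

C:\> net statistics workstation | find "Statistics since"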

Another helpful Windows command-liner you might want to find out about is getting the full list of installed programs from the command line; this again is done with WMIC:

 

C:\> wmic /OUTPUT:my_software.txt product get name

 


get-a-full-list-of-installed-software-programs-on-windows-xp-vista-7-8-command-howto-screenshot

An alternative way to get a full list of installed software on a Windows OS is to use the Microsoft SysInternals psinfo command:

 

C:\> psinfo -s > software.txt
C:\> psinfo -s -c > software.csv


If you need to get a complete list of installed software together with its uninstall registry entries from the command line (e.g. for batch scripting purposes), you can query it from the Windows registry, like so:

 

C:\>reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall


Command Output will be something like on below shot:

windows-OS-show-get-full-list-of-uninstalled-programs-using-a-command-line-screenshot
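If you want the human-readable program names rather than just the registry subkey names, you can also recurse through the same key and filter on the DisplayName value (a sketch; on 64-bit Windows you may want to additionally query the Wow6432Node branch of the key):

C:\>reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall /s /v DisplayName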

Well that's all folks 🙂

 

Must have software on freshly installed Windows – Essential Software after fresh Windows install

Friday, March 18th, 2016

Install-update-multiple-programs-applications-at-once-using-ninite

If you're in the IT industry, even if you don't like installing Windows frequently or you're a complete Linux / BSD user, you will certainly have a lot of friends who will want help from you to re-install or fix their Windows 7 / 8 / 10 OS. At least this is the case with me: every year I'm kind of obliged to install fresh Windows on the newly bought notebooks / desktop PCs of friends or relatives.

Of course, the preferences for necessary software vary depending on whom the new Windows OS is installed for; however, there is more or less a standard list of Windows software used daily by most average computer users, which I tend to install on new Windows setups, and thus I have more or less systematized the process.

As a Free Software enthusiast I usually try to stick to free software where possible for each software category, and luckily nowadays there is a lot of non-proprietary, or at least free as in beer, software available out there.

For Windows sysadmins, for college and other public institution networks comprising multiple Windows computers that are not inside a domain, and also for people in computer repair shops where dozens of Windows pre-installs or sets of automatic software updates are necessary daily, make sure to take a look at Ninite.

ninite-automate-windows-program-deploy-and-update-on-new-windows-os-openoffice-screenshot

As official website introduces Ninite:

Ninite – Install and Update All Your Programs at Once

Of course, as Ninite is used by organizations such as NASA, Harvard Medical School etc., it is likely the tool reports your list of installed Windows software and various other Win PC statistical data to the Ninite developers, and most likely to the NSA, but this probably doesn't matter much, as that ship has likely sailed from the moment you chose to have a Windows OS installed on your PC.

ninite-choises-to-build-an-install-package-with-useful-essential-windows-software-screenshot
 

For Windows system administrators managing small and middle-sized networks of PCs that are not inside a Domain Controller, Ninite can definitely save hours and in some cases even days of boring install and maintenance work. HP Enterprise or HP Inc. employees or ex-employees would definitely love Ninite, because what it does is pretty much like the well-known HP internal tool PC COE.

Ninite can also prepare a single installer containing multiple applications based on the choices made on Ninite's website, which is also a great thing, especially if you need to deploy different types of user PCs (scientific / gamer / office work etc.).

Perhaps there are also other useful things to install on a fresh Windows installation; if you're using something I'm missing, let me know in the comments.