Posts Tagged ‘test’

Zabbix rkhunter monitoring: check if rootkits, trojans, viruses or suspicious OS activities are detected

Wednesday, December 8th, 2021

monitor-rkhunter-with-zabbix-zabbix-rkhunter-check-logo

If you're using rkhunter to monitor for malicious activities, binary changes, rootkits, viruses, malware, suspicious files and other possible or actual security breaches, perhaps you have configured your machines to report to some email.
But what if you want to have a scheduled rkhunter running on the machine and you don't want to rely too much on email alerting (especially because email alerts tend to be noticed by the sysadmin pretty late)?

We have been in that situation, and in this case me and my dear colleague Georgi Stoyanov developed a small rkhunter Zabbix userparameter check to track and alert if any traces of "Warning"s are matched in the traditional rkhunter log file /var/log/rkhunter/rkhunter.log.

Setting it up is pretty easy; you will need a recent version of zabbix-agent installed on the machine and connected to a Zabbix server, in my case this is:

[root@centos ~]# rpm -qa |grep -i zabbix-agent
zabbix-agent-4.0.7-1.el7.x86_64

Create the following userparameter file, placed inside /etc/zabbix/zabbix_agentd.d/userparameter_rkhunter_warning_check.conf:

[root@centos /etc/zabbix/zabbix_agentd.d ]# cat userparameter_rkhunter_warning_check.conf
# userparameter script to check if any Warning is inside /var/log/rkhunter/rkhunter.log and if found to trigger Zabbix alert
UserParameter=rkhunter.warning, (TODAY=$(date |awk '{ print $1" "$2" "$3 }'); if [ $(cat /var/log/rkhunter/rkhunter.log | awk "/$TODAY/,EOF" | /bin/grep -i '\[ Warning \]' | /usr/bin/wc -l) != '0' ]; then echo 1; else echo 0; fi)
UserParameter=rkhunter.suspected,(/bin/grep -i 'Suspect files: ' /var/log/rkhunter/rkhunter.log|tail -n 1| awk '{ print $4 }')
UserParameter=rkhunter.rootkits,(/bin/grep -i 'Possible rootkits: ' /var/log/rkhunter/rkhunter.log|tail -n 1| awk '{ print $4 }')
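
Before creating anything in the Zabbix web interface, it's worth checking that the agent resolves the new keys locally. Assuming your zabbix_agentd.conf includes the /etc/zabbix/zabbix_agentd.d/ directory (the package default), each of the commands below should print the key name together with its currently evaluated value:

[root@centos ~]# zabbix_agentd -t "rkhunter.warning"
[root@centos ~]# zabbix_agentd -t "rkhunter.suspected"
[root@centos ~]# zabbix_agentd -t "rkhunter.rootkits"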


2. Prepare Rkhunter Template, Triggers and Items


On the Zabbix Server that you access from the web control interface, you will have to prepare a new template called, let's say, Rkhunter with the necessary Triggers and Items.


2.1 Create Rkhunter Items
 

On the Zabbix Server side, you will have to configure 3 Items for the 3 userparameter script keys configured above, like so (a plain-text summary of the settings follows the screenshots below):

rkhunter-items-zabbix-screenshot.png

  • rkhunter.suspected Item configuration


rkhunter-suspected-files

  • rkhunter.warning Zabbix Item config

rkhunter-warning-found-check-zabbix

  • rkhunter.rootkits Zabbix Item config

     

     

    rkhunter-zabbix-possible-rootkits-item1
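
In case the screenshots are hard to read, the item settings are nothing fancy; a minimal configuration for each of the three items (shown here for rkhunter.warning, the other two differ only by key name) would look roughly like the below. The 1 hour update interval is just a suggestion, to be matched against your rkhunter cron schedule:

Name: rkhunter warning
Type: Zabbix agent
Key: rkhunter.warning
Type of information: Numeric (unsigned)
Update interval: 1h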

    2.2 Create Triggers
     

You need an overall of 3 triggers, as in the shot below (typed-out expressions follow the screenshots):

zabbix-rkhunter-all-triggers-screenshot
 

  • rkhunter.rootkits Trigger config

rkhunter-rootkits-trigger-zabbix1

  • rkhunter.suspected Trigger cfg

rkhunter-suspected-trigger-zabbix1

  • rkhunter warning Trigger cfg

rkhunter-warning-trigger1
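
If you prefer typing the trigger expressions instead of copying them from the screenshots, in Zabbix 4.0 trigger syntax they would look roughly like the below (SERVER-HOSTNAME being a placeholder for your monitored host or template). The thresholds follow the userparameter script logic, where rkhunter.warning returns 0 / 1 and the other two keys return counters:

{SERVER-HOSTNAME:rkhunter.warning.last()}=1
{SERVER-HOSTNAME:rkhunter.suspected.last()}>0
{SERVER-HOSTNAME:rkhunter.rootkits.last()}>0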

3. Reload zabbix-agent and test the keys


It is necessary to reload the zabbix agent for the new userparameters to start being sent to the remote zabbix server (through a proxy, if you have one configured).

[root@centos ~]# systemctl restart zabbix-agent


To test sending values for the keys to the server you can use zabbix_sender; to have this test tool you will need the zabbix-sender package installed on the server.

To trigger a manual test, if you happen to have some problems with the key (which shouldn't be the case), you can send a value to the respective key with the below command:

[root@centos ~ ]# zabbix_sender -vv -c "/etc/zabbix/zabbix_agentd.conf" -k "rkhunter.warning" -o "1"


Check on the Zabbix Server that the sent value is received; for any oddities, as usual, check what is inside /var/log/zabbix/zabbix_agentd.log for errors or warnings.

Hack: Using ssh / curl or wget to test TCP port connection state to remote SSH, DNS, SMTP, MySQL or any other listening service in PCI environment servers

Wednesday, December 30th, 2020

using-curl-ssh-wget-to-test-tcp-port-opened-or-closed-for-web-mysql-smtp-or-any-other-linstener-in-pci-linux-logo

If you work on PCI high security environment servers in isolated local networks, where each package installed on the Linux / Unix system is of importance, it is pretty common that some basic tools are missing; in most cases it is considered a security hole to even have a simple telnet installed on the system. I do have experience with such environments myself, and it is pretty daunting stuff, so in the best case you can use something like a simple ssh client, if you're lucky and the CentOS / RedHat / SuSE Linux (or whatever distro) has the openssh-client package installed.
If you're lucky to have ssh on board, you can use it in the same manner as telnet, netcat or the swiss army knife nmap network mapper tool, to test whether a remote TCP service / port is opened or not. This is often useful if you don't have access to the CISCO / Juniper or other network / firewall equipment which sets the boundaries and security port restrictions between networks and servers.

Below is an example of how to use the ssh client to test port connectivity to, let's say, the Internet, i.e. the Google / Yahoo search engines.
 

[root@pciserver: /home ]# ssh -oConnectTimeout=3 -v google.com -p 23
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 23.
debug1: connect to address 172.217.169.206 port 23: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 23.
debug1: connect to address 2a00:1450:4017:80b::200e port 23: Cannot assign requested address
ssh: connect to host google.com port 23: Cannot assign requested address
root@pcfreak:/var/www/images# ssh -oConnectTimeout=3 -v google.com -p 80
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:807::200e] port 80.
debug1: connect to address 2a00:1450:4017:807::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80
ssh_exchange_identification: Connection closed by remote host
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=3
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 80.
debug1: connect to address 2a00:1450:4017:80b::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [142.250.184.142] port 80.
debug1: connect to address 142.250.184.142 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80c::200e] port 80.
debug1: connect to address 2a00:1450:4017:80c::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type 0
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: ssh_exchange_identification: HTTP/1.0 400 Bad Request
debug1: ssh_exchange_identification: Content-Type: text/html; charset=UTF-8
debug1: ssh_exchange_identification: Referrer-Policy: no-referrer
debug1: ssh_exchange_identification: Content-Length: 1555
debug1: ssh_exchange_identification: Date: Wed, 30 Dec 2020 14:13:25 GMT
debug1: ssh_exchange_identification:
debug1: ssh_exchange_identification: <!DOCTYPE html>
debug1: ssh_exchange_identification: <html lang=en>
debug1: ssh_exchange_identification:   <meta charset=utf-8>
debug1: ssh_exchange_identification:   <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
debug1: ssh_exchange_identification:   <title>Error 400 (Bad Request)!!1</title>
debug1: ssh_exchange_identification:   <style>
debug1: ssh_exchange_identification:     *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 10
debug1: ssh_exchange_identification: 0% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.g
debug1: ssh_exchange_identification: oogle.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0
debug1: ssh_exchange_identification: % 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_
debug1: ssh_exchange_identification: color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
debug1: ssh_exchange_identification:   </style>
debug1: ssh_exchange_identification:   <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
debug1: ssh_exchange_identification:   <p><b>400.</b> <ins>That\342\200\231s an error.</ins>
debug1: ssh_exchange_identification:   <p>Your client has issued a malformed or illegal request.  <ins>That\342\200\231s all we know.</ins>
ssh_exchange_identification: Connection closed by remote host

 

Here is another example of how to test a remote host with ssh for whether a certain service such as DNS (bind) or telnetd is enabled and listening on a remote local network IP:

[root@pciserver: /home ]# ssh 192.168.1.200 -p 53 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 53.
debug1: connect to address 192.168.1.200 port 53: Connection timed out
ssh: connect to host 192.168.1.200 port 53: Connection timed out

[root@server: /home ]# ssh 192.168.1.200 -p 23 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 23.
debug1: connect to address 192.168.1.200 port 23: Connection timed out
ssh: connect to host 192.168.1.200 port 23: Connection timed out


But what if the Linux server you have to work on is so paranoid that even the ssh client is absent? Well, you can use anything else capable of connecting to a remote port, such as wget or curl. Web servers or application servers usually have wget or curl available, as they are an integral part of local shell scripts doing various operations needed for proper service functioning, or simply for testing local or remote listener services. If that's the case, we can use curl to connect and get the output of a remote service, simulating a normal telnet connection, like this:

host:~# curl -vv 'telnet://remote-server-host5:22'
* About to connect() to remote-server-host5 port 22 (#0)
*   Trying 10.52.67.21… connected
* Connected to aflpvz625 (10.52.67.21) port 22 (#0)
SSH-2.0-OpenSSH_5.3

Now let's test whether we can connect remotely to a local net IP's Qmail mail server with curl's telnet simulation mode:

host:~#  curl -vv 'telnet://192.168.0.200:25'
* Expire in 0 ms for 6 (transfer 0x56066e5ab900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x56066e5ab900)
* Connected to 192.168.0.200 (192.168.0.200) port 25 (#0)
220 This is Mail Pc-Freak.NET ESMTP

Fine, it works; let's now test whether a remote server that has a MySQL listener service on the standard MySQL TCP port 3306 is reachable with curl:

host:~#  curl -vv 'telnet://192.168.0.200:3306'
* Expire in 0 ms for 6 (transfer 0x5601fafae900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5601fafae900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 107)
* Closing connection 0

As you can see, the remote connection returns binary data which is unknown to a standard telnet terminal, thus to get the received output we need to pass curl the suggested arguments:

host:~#  curl -vv 'telnet://192.168.0.200:3306' --output -
* Expire in 0 ms for 6 (transfer 0x55b205c02900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55b205c02900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
g


The same curl trick can be used to troubleshoot a remote port on a remote host from a Windows OS host, which does not have telnet installed by default but may have curl instead.

Also, when troubleshooting vSphere Replication it is often necessary to check port connectivity while the common Windows utilities are not available; curl, on the other hand, is available in the VMware vCenter Server Appliance command line interface.

On servers where curl is not there but wget is installed, you can use it as well to test a remote port:

 

# wget -vv -O /dev/null http://google.com:554 --timeout=5
--2020-12-30 16:54:22--  http://google.com:554/
Resolving google.com (google.com)… 172.217.169.206, 2a00:1450:4017:80b::200e
Connecting to google.com (google.com)|172.217.169.206|:554… failed: Connection timed out.
Connecting to google.com (google.com)|2a00:1450:4017:80b::200e|:554… failed: Cannot assign requested address.
Retrying.

--2020-12-30 16:54:28--  (try: 2)  http://google.com:554/
Connecting to google.com (google.com)|172.217.169.206|:554… ^C

As evident from the output, port 554 is filtered by Google, which is pretty normal.

If curl or wget is not there either, as a final alternative you can use some perl, ruby, python or bash script that opens a remote socket to the remote IP.
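
For instance, bash itself can open TCP sockets through its built-in /dev/tcp pseudo-device, so even with no extra tools at all a minimal port probe sketch (assuming GNU coreutils timeout is present; host and port are script arguments) could look like this:

#!/bin/bash
# minimal TCP port check using bash's built-in /dev/tcp pseudo-device
# usage: ./portcheck.sh <host> <port>
host="$1"; port="$2";
if timeout 5 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "Port $port on $host is open";
else
    echo "Port $port on $host is closed or filtered";
fi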

IBM TSM dsmc console client: listing configured backups, checking scheduled backups and backup / restore operations howto

Friday, March 6th, 2020

tsm-ibm-logo_tivoli-dsmc-console-client-listing-backups-create-backups-and-restore-on-linux-unix-windows

Creating a simple home backup solution with some shell scripting and rsync is common. However, as a sysadmin in middle-sized or large corporations you will find most companies use some professional backup service such as IBM Tivoli Storage Manager (TSM); recently IBM renamed the product to IBM Spectrum Protect.

IBM TSM is a data protection platform that gives enterprises a single point of control and administration for backup and recovery, used for Private Cloud backups and other high end solutions where data criticality is top.
Usually in large companies TSM backup handling is managed by a separate team or teams, as managing a large TSM infrastructure is quite a complex task. However, my experience as a sysadmin shows that even without much in-depth TSM knowledge it is very useful to know how to manage at least basic incremental backup operations, such as: view what is set to be backed up, set up a new directory structure for backup, check the configured backup schedule, and check which files are included in and which excluded from the backup store.

TSM has multi-OS support and you can use it on the most mainstream operating systems: Windows, Mac OS X and Linux. In this specific article I'll be talking concretely about backing up data with tsm on Linux; Tivoli can theoretically be brought up even on FreeBSD machines via the Linuxemu BSD module and the 64-bit Tivoli Storage Manager RPMs.
In this small article I'll try to give a few useful operations for the novice admin that stumbles on a tsm backed-up server that needs some small maintenance.
 

1. Starting up the dsmc command line client

 

No matter the operating system, to start the client run:

# dsmc

 

tsm-check-backup-schedule-set-time

Note that dsmc should usually run as the superuser, so if you try to run it as a normal non-root user you will get an error message like:

 

[ user@linux ~]$ dsmc
ANS1398E Initialization functions cannot open one of the Tivoli Storage Manager logs or a related file: /var/tsm/dsmerror.log. errno = 13, Permission denied

 

Tivoli SM has extensive help, so to get the usage basics, type help:
 

tsm> help
1.0 New for IBM Tivoli Storage Manager Version 6.4
2.0 Using commands
  2.1 Start and end a client command session
    2.1.1 Process commands in batch mode
    2.1.2 Process commands in interactive mode
  2.2 Enter client command names, options, and parameters
    2.2.1 Command name
    2.2.2 Options
    2.2.3 Parameters
    2.2.4 File specification syntax
  2.3 Wildcard characters
  2.4 Client commands reference
  2.5 Archive
  2.6 Archive FastBack

Enter 'q' to exit help, 't' to display the table of contents,
press enter or 'd' to scroll down, 'u' to scroll up or
enter a help topic section number, message number, option name,
command name, or command and subcommand:    

 

2. Listing files listed for backups

 

A note to make here: as in most corporate products, tsm supports command aliases, so any command described in the help, like query, can be abbreviated with its first letters only; e.g. the query filespace tsm cmd can be abbreviated as

tsm> q fi

Commands can also be run in non-interactive mode, so if you want the output of q fi you can run it straight from the shell:

# dsmc q fi

 

tsm-check-included-excluded-files-q-file-if-backupped-list-backup-set-directories

This shows the directories and files that are set for backup creation with Tivoli.

 

3. Getting included and excluded backup set files

 

It is useful to know what the exact excluded files from the tsm backup set are; this is done with query inclexcl:
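
Run it from the interactive tsm prompt (abbreviated, like any other query command):

tsm> q inclexcl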

tsm-check-excluded-included-files

 

4. Querying for backup schedule time

Tivoli, as every other backup solution, creates its set of backed-up files in certain time slot periods.
To find out what the time slot for backup creation is, use:

tsm> q sched
Schedule Name: WEEKLY_ITSERV
      Description: ITSERV weekly incremental backup
   Schedule Style: Classic
           Action: Incremental
          Options: 
          Objects: 
         Priority: 5
   Next Execution: 180 Hours and 35 Minutes
         Duration: 15 Minutes
           Period: 1 Week  
      Day of Week: Wednesday
            Month:
     Day of Month:
    Week of Month:
           Expire: Never  

 

tsm-query-partitions-backupeed-or-not

 

5. Check which files have been backed up

If you want to make sure backups are really created, it is good to check which of the selected backup files already have a working backup copy.

This is done with query backup like so:

tsm> q ba /home/*

 

tsm-dsmc-query-user-home-for-backups

If you want to query all the current files and directories backed up under a directory and all its subdirectories you need to add the -subdir=yes option as below:

 

tsm> q ba /home/hipo/projects/* -subdir=yes
   
Size      Backup Date        Mgmt Class A/I File
   ----      -----------        ---------- --- ----
    512  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106
  1,024  08-12-2011 02:46:53    STANDARD    A  /home/hipo/projects/hsm41perf
    512  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hsm41test
    512  24-04-2012 00:22:56    STANDARD    A  /home/hipo/projects/hsm42upg
  1,024  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106/test
  1,024  12-09-2011 19:57:09    STANDARD    A  /home/hipo/projects/hfs0106/test/test2
 12,048  04-12-2011 02:01:29    STANDARD    A  /home/hipo/projects/hsm41perf/tables
 50,326  30-04-2012 01:35:26    STANDARD    A  /home/hipo/projects/hsm42upg/PMR70023
 50,326  27-04-2012 00:28:15    STANDARD    A  /home/hipo/projects/hsm42upg/PMR70099
 11,013  24-04-2012 00:22:56    STANDARD    A  /home/hipo/projects/hsm42upg/md5check  

 

  • To make tsm backup some directories on Linux / AIX / other unices:

 

tsm> incr /  /usr  /usr/local  /home /lib

 

  • For tsm to backup some standard netware drives, use:

 

tsm> incr NDS:  USR:  SYS:  APPS:  

 

  • To backup C:\ D:\ E:\ F:\ if TSM is running on Windows

 

tsm> incr C:  D:  E: F:  -incrbydate 

 

  • To back up entire disk volumes irrespective of whether files have changed since the last backup, use the selective command with a wildcard and -subdir=yes as below:

 

tsm> sel /*  /usr/*   /home/*  -su=yes   ** Unix/Linux

 

7. Backup selected files from a backup location

 

It is intuitive to think you can just add some wildcard characters to select what you want
to backup from a selected location, but this is not so; if you try something like below
you will get an error.

 

tsm> incr /home/hipo/projects/*/* -su=yes      
ANS1071E Invalid domain name entered: '/home/hipo/projects/*/*'


The proper way to select a certain folder / file for backup is with:

 

tsm> sel /home/hipo/projects/*/* -su=yes

 

8. Restoring tsm data from backup

 

To restore the config httpd.conf to a custom directory use:

 

tsm> rest /etc/httpd/conf/httpd.conf  /home/hipo/restore/

 

N.B.! In order for the above to work you need to have the trailing '/' slash at the end.

If you want to restore a file under a different name:

 

tsm> rest /etc/ntpd.conf  /home/hipo/restore/ntpd.conf.new

 

9. Restoring a whole backed-up partition

 

tsm> rest /home/*  /tmp/restore/ -su=yes

 

This uses the Tivoli 'Restoring multiple files and directories' mode; the files selected for restore ('*') are kept up to the last one recovered so far (mentioning this in case you accidentally cancel the restore).

 

10. Restoring files with back date 

 

By default the restore function will restore the latest available backed-up version of a file; if you need
to recover a file from a specific date, you need the '-inactive' and '-pick' options.
The 'pick' interface is interactive, so once the files are listed you can select the exact file from the date
you want to restore.

General restore command syntax is:
 

tsm> restore [source-file] [destination-file]

 


tsm> rest /home/hipo/projects/*  /tmp/restore/ -su=yes  -inactive -pick

TSM Scrollable PICK Window – Restore

     #    Backup Date/Time        File Size A/I  File
   --------------------------------------------------------------------------------------------------
   170. | 12-09-2011 19:57:09        650  B  A   /home/hipo/projects/hsm41test/inclexcl.test
   171. | 12-09-2011 19:57:09       2.74 KB  A   /home/hipo/projects/hsm41test/inittab.ORIG
   172. | 12-09-2011 19:57:09       2.74 KB  A   /home/hipo/projects/hsm41test/inittab.TEST
   173. | 12-09-2011 19:57:09       1.13 KB  A   /home/hipo/projects/hsm41test/md5.out
   174. | 30-04-2012 01:35:26        512  B  A   /home/hipo/projects/hsm42125upg/PMR70023
   175. | 26-04-2012 01:02:08        512  B  I   /home/hipo/projects/hsm42125upg/PMR70023
   176. | 27-04-2012 00:28:15        512  B  A   /home/hipo/projects/hsm42125upg/PMR70099
   177. | 24-04-2012 19:17:34        512  B  I   /home/hipo/projects/hsm42125upg/PMR70099
   178. | 24-04-2012 00:22:56       1.35 KB  A   /home/hipo/projects/hsm42125upg/dsm.opt
   179. | 24-04-2012 00:22:56       4.17 KB  A   /home/hipo/projects/hsm42125upg/dsm.sys
   180. | 24-04-2012 00:22:56       1.13 KB  A   /home/hipo/projects/hsm42125upg/dsmmigfstab
   181. | 24-04-2012 00:22:56       7.30 KB  A   /home/hipo/projects/hsm42125upg/filesystems
   182. | 24-04-2012 00:22:56       1.25 KB  A   /home/hipo/projects/hsm42125upg/inclexcl
   183. | 24-04-2012 00:22:56        198  B  A   /home/hipo/projects/hsm42125upg/inclexcl.dce
   184. | 24-04-2012 00:22:56        291  B  A   /home/hipo/projects/hsm42125upg/inclexcl.ox_sys
   185. | 24-04-2012 00:22:56        650  B  A   /home/hipo/projects/hsm42125upg/inclexcl.test
   186. | 24-04-2012 00:22:56        670  B  A   /home/hipo/projects/hsm42125upg/inetd.conf
   187. | 24-04-2012 00:22:56       2.71 KB  A   /home/hipo/projects/hsm42125upg/inittab
   188. | 24-04-2012 00:22:56       1.00 KB  A   /home/hipo/projects/hsm42125upg/md5check
   189. | 24-04-2012 00:22:56      79.23 KB  A   /home/hipo/projects/hsm42125upg/mkreport.020423.out
   190. | 24-04-2012 00:22:56       4.27 KB  A   /home/hipo/projects/hsm42125upg/ssamap.020423.out
   191. | 26-04-2012 01:02:08      12.78 MB  A   /home/hipo/projects/hsm42125upg/PMR70023/70023.tar
   192. | 25-04-2012 16:33:36      12.78 MB  I   /home/hipo/projects/hsm42125upg/PMR70023/70023.tar
        0---------10--------20--------30--------40--------50--------60--------70--------80--------90--
<U>=Up  <D>=Down  <T>=Top  <B>=Bottom  <R#>=Right  <L#>=Left
<G#>=Goto Line #  <#>=Toggle Entry  <+>=Select All  <->=Deselect All
<#:#+>=Select A Range <#:#->=Deselect A Range  <O>=Ok  <C>=Cancel
pick> 


To navigate in the pick interface you can select individual files to restore via the number seen on the left side.
To scroll up / down use 'U' and 'D' as described in the legend.

 

11. Restoring your data to another machine

 

In certain circumstances, it may be necessary to restore some, or all, of your data onto a machine other than the original from which it was backed up.

In ideal case the machine platform should be identical to that of the original machine. Where this is not possible or practical please note that restores are only possible for partition types that the operating system supports. Thus a restore of an NTFS partition to a Windows 9x machine with just FAT support may succeed but the file permissions will be lost.
TSM does not handle cross-platform backup / restore well, so better do not try cross-platform restores, e.g. restoring files onto a Windows machine that were previously backed up from a non-Windows one: backups stored on the TSM server by other OS platforms can become inaccessible from the Windows host system.

To restore your data to another machine you will need the TSM software installed on the target machine. Entries in the Tivoli configuration files dsm.sys and/or dsm.opt need to be edited if the node that you are restoring from does not reside on the same server; check the TSM documentation on configuration file locations for your operating system.
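
For orientation, a typical client stanza in dsm.sys looks roughly like the below; all values here (server stanza name, address, node name) are placeholders you need to adapt to your environment:

SErvername        TSMSERVER1
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm.example.com
   NODename          RESTORE.MACHINE
   PASSWORDAccess    generate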

To access files from another machine you should then start the TSM client as below:

 

# dsmc -virtualnodename=RESTORE.MACHINE      


You will then be prompted for the TSM password for this machine.

 

You will probably want to restore to a different destination than the original files, to prevent overwriting files on the local machine, as below:

 

  • Restore of D:\ Drive to D:\Restore ** Windows 

 

tsm> rest D:\*   D:\RESTORE\    -su=yes 
 

 

  • Restore user /home/* to /scratch on ** Mac, Unix/Linux

 

tsm> rest /home/* /scratch/     -su=yes  
 

 

  • Restoring Tivoli data on old netware

 

tsm> rest SOURCE-SERVER\USR:*  USR:restore/   -su=yes  ** Netware

 

12. Adding more directories for incremental backup / Check whether TSM backup was done correctly?

The easiest way is to check the produced dsmsched.log; if everything is okay there should be records in the log that the scheduled Tivoli backup completed successfully.
A normally produced scheduled backup in the log should look something like:

 

14-03-2020 23:03:04 --- SCHEDULEREC STATUS BEGIN
14-03-2020 23:03:04 Total number of objects inspected:   91,497
14-03-2020 23:03:04 Total number of objects backed up:      113
14-03-2020 23:03:04 Total number of objects updated:          0
14-03-2020 23:03:04 Total number of objects rebound:          0
14-03-2020 23:03:04 Total number of objects deleted:          0
14-03-2020 23:03:04 Total number of objects expired:         53
14-03-2020 23:03:04 Total number of objects failed:           6
14-03-2020 23:03:04 Total number of bytes transferred:    19.38 MB
14-03-2020 23:03:04 Data transfer time:                    1.54 sec
14-03-2020 23:03:04 Network data transfer rate:        12,821.52 KB/sec
14-03-2020 23:03:04 Aggregate data transfer rate:        114.39 KB/sec
14-03-2020 23:03:04 Objects compressed by:                    0%
14-03-2020 23:03:04 Elapsed processing time:           00:02:53
14-03-2020 23:03:04 --- SCHEDULEREC STATUS END
14-03-2020 23:03:04 --- SCHEDULEREC OBJECT END WEEKLY_23_00 14-12-2010 23:00:00
14-03-2020 23:03:04 Scheduled event 'WEEKLY_23_00' completed successfully.
14-03-2020 23:03:04 Sending results for scheduled event 'WEEKLY_23_00'.
14-03-2020 23:03:04 Results sent to server for scheduled event 'WEEKLY_23_00'.

 

In case of errors you should check dsmerror.log.
 

Conclusion


In this article I've briefly covered some basics of the commercial IBM Tivoli Storage Manager (TSM): how to list backups, check backup schedules, see which files are set to be excluded from a backup location, and most importantly how to check that backed up data is in good shape and accessible.
It was explained how backups can be restored on a local and a remote machine, as well as how to append new files to the backup set for the next scheduled incremental backup.
It was shown how the interactive pick cli interface can be used to restore files from a certain date back in time, how full partitions can be restored, and how a certain file can be retrieved from the TSM data copy.

Rsync copy files with root privileges between servers with root superuser account disabled

Tuesday, December 3rd, 2019

 

rsync-copy-files-between-two-servers-with-root-privileges-with-root-superuser-account-disabled

Sometimes on servers that follow high security standards, in companies following PCI (Payment Card Industry) Data Security Standards, it is necessary to have very weird configurations in order to be able to do trivial things, such as syncing files between servers with root privileges. This is the case, for example, if due to security policies you have disabled root user logins via the ssh server, and you still need to synchronize files in directories such as, let's say, /etc, /usr/local/etc/, /var/ with root:root user and group ownership.

Disabling root user logins in sshd is controlled by a variable in /etc/ssh/sshd_config that on most default Linux OS
installations is switched on, e.g. 

grep -i permitrootlogin /etc/ssh/sshd_config
PermitRootLogin yes


Many corporations use vulnerability scanners such as Qualys, which always have in their remote server scans for SSH port 22 a check that root login is disabled with:

 

PermitRootLogin no


In this article, I'll explain a scenario where we have synchronization between 2 or more servers (Server A / Server B, or whatever number of servers) that have already turned off this value, but still need to synchronize directories traditionally owned by and writable only for the root superuser; here are a few easy steps to achieve it.

 

1. Add rsyncuser to Source Server (Server A) and Destination (Server B)


a. Execute on Src Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files as root src_host' -d /home/rsyncuser -m rsyncuser

 

b. Execute on Dst Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files dst_host' -d /home/rsyncuser -m rsyncuser

 

2. Generate RSA SSH Key pair to be used for passwordless authentication


a. On Src Host
 

su - rsyncuser

ssh-keygen -t rsa -b 4096

 

b. Check the generated key pair in .ssh/ and make sure the directory content looks like below.

 

[rsyncuser@src-host .ssh]$ cd ~/.ssh/;  ls -1

id_rsa
id_rsa.pub
known_hosts


 

3. Copy id_rsa.pub to Destination host server under authorized_keys

 

scp ~/.ssh/id_rsa.pub  rsyncuser@dst-host:~/.ssh/authorized_keys

 

Next, fix the permissions of the authorized_keys file for rsyncuser, as anyone who has access to that file (and an account) on the system
could steal the key and use it to run rsync commands and remotely overwrite files, e.g. overwrite /etc/passwd and /etc/shadow with his custom crafted credentials
and hence hack you 🙂
 

Hence, on Destination Host Server B fix permissions with:
 

su - rsyncuser; chmod 0600 ~/.ssh/authorized_keys
[rsyncuser@dst-host ~]$


An alternative way for the lazy sysadmins is to use the ssh-copy-id command

 

$ ssh-copy-id rsyncuser@192.168.0.180
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.0.180's password: 
 

 

For improved security, to restrict rsyncuser to run only a specific command (such as one very specific script) instead of being able to run any command, it is good to use the little known command= option when creating the authorized_keys entry.
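
A sketch of such a restricted authorized_keys entry (the wrapper script path is hypothetical and the key itself is shortened) would look like:

command="/usr/local/bin/rsync_wrapper.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza… rsyncuser@src-host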

 

4. Test ssh passwordless authentication works correctly


For that, run a normal ssh as rsyncuser.

On Src Host

 

[rsyncuser@src-host ~]$ ssh rsyncuser@dst-host


Perhaps here is the place to say that, for those who think enabling passwordless authentication is not secure enough and prefer to authorize rsyncuser via a password read from a secured file, take a look at my prior article on how to login to a remote server with a password provided from command line as a script argument / running the same commands on many servers.

5. Enable rsync in sudoers to be able to execute as root superuser (copy files as root)

 


For this step you will need to have sudo package installed on the Linux server.

Then, execute once logged in as root on the Destination Server (Server B):

 

[root@dst-host ~]# grep -q 'rsyncuser ALL' /etc/sudoers || echo 'rsyncuser ALL=NOPASSWD:/usr/bin/rsync' >> /etc/sudoers
 

 

Note that using rsync with ALL=NOPASSWD in /etc/sudoers could pose a high security risk for the system, as anyone authorized to run as rsyncuser is able to overwrite and
respectively nullify important files on Destination Host Server B and hence easily mess up the system; even shell script bugs could produce a mess. Thus perhaps a better solution to the problem
of copying files with root privileges while the root account is disabled is to rsync as a normal user to somewhere on Dst_host, and use some kind of additional script running on Dst_host, let's say via a cron job, that
will copy the files gently on a selective basis.

Perhaps an even better solution would be, instead of granting ALL=NOPASSWD:/usr/bin/rsync in /etc/sudoers, to grant ALL=NOPASSWD:/usr/local/bin/some_copy_script.sh
that will get triggered once the files are copied with the regular rsyncuser acct.
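
As a sketch of that idea (the script path and file list are just examples), the sudoers line and the script it points to could look something like:

rsyncuser ALL=NOPASSWD:/usr/local/bin/some_copy_script.sh

#!/bin/bash
# /usr/local/bin/some_copy_script.sh (example): as root, gently move
# the files that rsyncuser has staged into its home to their real place
cp -a /home/rsyncuser/staging/etc/hosts.deny /etc/hosts.deny
cp -a /home/rsyncuser/staging/etc/sysctl.conf /etc/sysctl.conf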

 

6. Test rsync passwordless authentication copy with superuser works


Do some simple copy; let's say copy the encrypted tunnel configuration files located under /etc/stunnel on Server A to /etc/stunnel on Server B.

The general command to test is like so:
 

rsync -aPz -e 'ssh' '--rsync-path=sudo rsync' /var/log rsyncuser@$dst_host:/root/tmp/


This will copy the /var/log files to /root/tmp; you will get success messages for the copy, and the files will be at the destination folder if successful.

 

On Src_Host run:

 

[rsyncuser@src-host ~]$ dst=FQDN-DST-HOST; user=rsyncuser; src_dir=/etc/stunnel; dst_dir=/root/tmp;  rsync -aP -e 'ssh' '--rsync-path=sudo rsync' $src_dir  $user@$dst:$dst_dir;

 

7. Copying files with root credentials via script


Copying a bunch of predefined files is best handled by some shell script; the most simple version of it could look something like this.
 

#!/bin/bash
# On server1 (src) use something like this
# On server2 (dst server)
# add in /etc/sudoers:
# rsyncuser ALL=NOPASSWD:/usr/bin/rsync

user='rsyncuser';

dst_dir="/root/tmp";
# set dst_host to your real destination host
dst_host='dst-host';
src[1]="/etc/hosts.deny";
src[2]="/etc/sysctl.conf";
src[3]="/etc/samhainrc";
src[4]="/etc/pki/tls/";
src[5]="/usr/local/bin/";

# loop over the file list and copy each entry as root on the remote side
# (drop --dry-run once you have verified the file list is right)
for i in "${src[@]}"; do
rsync -aPvz --delete --dry-run -e 'ssh' '--rsync-path=sudo rsync' "$i" $user@$dst_host:$dst_dir"$i";
done


In the above script, as you can see, we define a bunch of files to be copied in a bash array and then run a loop to take each of them and copy it to the destination dir.
A sample version of the script: rsync_with_superuser-while-root_account_prohibited.sh
 

Conclusion


Let's do a short overview of what we have done here. First we created rsyncuser on SRC Server A and DST Server B, set up the key pair and copied the keys to make passwordless login possible,
set up rsync to be able to write as root on Dst_Host, tested the whole setup, and sketched a small script that can be used as a backbone to develop something more complex
to sync backups or keep system configurations identical (for example, if you have doubts that some user might change a config by mistake).
In short, the security downsides of using rsync NOPASSWD via /etc/sudoers were pointed out, and a few ideas were given that could be worked on if you target even higher
PCI standards.

 

Top Paying Google Adsense Keywords for 2013 – Increase blog earnings with high paying keywords

Wednesday, July 31st, 2013

Most-High-Paying-Google-AdSense-Keywords-for-year-2013-Increase-your-revenues-from-Google-with-these-keywords

If you're a blogger trying to earn some extra cash for your daily living via blogging, you already know how hard it is. It is not just hard, it is almost impossible, to earn enough from a blog or personal website to make it your primary source of income. With the crisis, the CPC (Cost Per Click) rate of advertisements and the number of people willing to pay big money per click dropped drastically… This means our earnings as bloggers decreased badly as well… However, there is still a bit of hope to boost Google Adsense revenues, by trying to write articles including certain high-paying keywords. Not surprisingly, these high-pay keywords vary seriously over the years. You might be shocked to know that there are some keywords for which Google pays over 100$ per CLICK!!! OMG, 100$ per click sounds unrealistic, but according to some rumors online it's a fact. I've found on the internet a list of 70 keywords said to make your blog money per click, from 179 dollars (highest clickable advertiser pay fee) down to 51 bucks at minimum. Actually, the reason to write this post was to test if these rumors of such high CPC are true. I will update the post later to tell you whether the top words really work or it's just a big fraud. Here is the list of the 70 top earning Google Adsense revenue keywords for 2013:

Keyword – CPC
MESOTHELIOMA LAW FIRM – $179.01
DONATE CAR TO CHARITY CALIFORNIA – $130.25
DONATE CAR FOR TAX CREDIT – $126.65
DONATE CARS IN MA – $125.58
DONATE YOUR CAR SACRAMENTO – $118.20
HOW TO DONATE A CAR IN CALIFORNIA – $111.21
SELL ANNUITY PAYMENT – $107.46
DONATE YOUR CAR FOR KIDS – $106.01
ASBESTOS LAWYERS – $105.84
STRUCTURED ANNUITY SETTLEMENT – $100.8
ANNUITY SETTLEMENTS – $100.72
CAR INSURANCE QUOTES COLORADO – $100.93
NUNAVUT CULTURE – $99.52
DAYTON FREIGHT LINES – $99.39
HARDDRIVE DATA RECOVERY SERVICES – $98.59
DONATE A CAR IN MARYLAND – $98.51
MOTOR REPLACEMENTS – $98.43
CHEAP DOMAIN REGISTRATION HOSTING – $98.39
DONATING A CAR IN MARYLAND – $98.20
DONATE CARS ILLINOIS – $98.13
CRIMINAL DEFENSE ATTORNEYS FLORIDA – $98.07
BEST CRIMINAL LAWYER IN ARIZONA – $97.93
CAR INSURANCE QUOTES UTAH – $97.92
LIFE INSURANCE CO LINCOLN – $97.07
HOLLAND MICHIGAN COLLEGE – $95.74
ONLINE MOTOR INSURANCE QUOTES – $95.73
ONLINE COLLEDGES – $95.65
PAPERPORT PROMOTIONAL CODE – $95.13
ONLINECLASSES – $95.06
WORLD TRADE CENTER FOOTAGE – $95.02
MASSAGE SCHOOL DALLAS TEXAS – $94.90
PSYCHIC FOR FREE – $94.61
DONATE OLD CARS TO CHARITY – $94.55
LOW CREDIT LINE CREDIT CARDS – $94.49
DALLAS MESOTHELIOMA ATTORNEYS – $94.33
CAR INSURANCE QUOTES MN – $94.29
DONATE YOUR CAR FOR MONEY – $94.01
CHEAP AUTO INSURANCE IN VA – $93.84
MET AUTO – $93.70
FORENSICS ONLINE COURSE – $93.51
HOME PHONE INTERNET BUNDLE – $93.32
DONATING USED CARS TO CHARITY – $93.17
PHD IN COUNSELING EDUCATION – $92.99
NEUSON – $92.89
CAR INSURANCE QUOTES PA – $92.88
ROYALTY FREE IMAGES STOCK – $92.76
CAR INSURANCE IN SOUTH DAKOTA – $92.72
EMAIL BULK SERVICE – $92.55
WEBEX COSTS – $92.38
CHEAP CAR INSURANCE FOR LADIES – $92.23
CHEAP CAR INSURANCE IN VIRGINIA – $92.03
REGISTER FREE DOMAINS – $92.03
BETTER CONFERENCING CALLS – $91.44
FUTURISTIC ARCHITECTURE – $91.44
MORTGAGE ADVISER – $91.29
CAR DONATE – $88.26
VIRTUAL DATA ROOMS – $83.18
AUTOMOBILE ACCIDENT ATTORNEY – $76.57
AUTO ACCIDENT ATTORNEY – $75.64
CAR ACCIDENT LAWYERS – $75.17
DATA RECOVERY RAID – $73.22
MOTOR INSURANCE QUOTES – $68.61
PERSONAL INJURY LAWYER – $66.53
CAR INSURANCE QUOTES – $61.03
ASBESTOS LUNG CANCER – $60.96
INJURY LAWYERS – $60.79
PERSONAL INJURY LAW FIRM – $60.56
ONLINE CRIMINAL JUSTICE DEGREE – $60.4
CAR INSURANCE COMPANIES – $58.66
BUSINESS VOIP SOLUTIONS – $51.9

Install simscan on Qmail for better mail server performance and get around missing suid perl in newer Linux Debian / Ubuntu servers

Tuesday, August 18th, 2015

qmail-fixing-clamdscan-errors-and-qq-errors-qmail-binary-migration-few-things-to-check-outclamav_logo-installing-clamav-antivirus-to-scan-periodically-debian-server-websites-for-viruses

I've been stuck with qmail-scanner-queue for a while on each and every new Qmail mail server installation I've done. This time it was no different, but as time goes by and Qmail and the Qmail-Scanner wrapper are not regularly updated, it is getting harder and harder to make a fully functional Qmail on newer Linux server distribution releases.

I know many would argue QMAIL is already obsolete, but I still have plenty of old servers running QMAIL whose migration might cause more trouble than just continuing to use QMAIL. Moreover, QMAIL, once set up, works like a charm.

I've recently been experiencing severe issues with clamdscan errors, and I tried to work around them by compiling and using a suid wrapper; however, the clamdscan errors continued, and as qmail-scanner is not actively developed and is much slower than simscan, I finally decided to give simscan a try as a means to fix the clamdscan errors, and thankfully this worked as a solution.

Here is roughly what I did to make simscan work on this install:
 

Make sure simscan is properly installed on Debian Linux 7 or Ubuntu servers (and probably, it should also work, on other Deb based Linuxes) by following the steps below:
 

a) Configure simscan with following compile time options as root (superuser)

./configure \
--enable-user=qscand \
--enable-clamav \
--enable-clamdscan=/usr/local/bin/clamdscan \
--enable-custom-smtp-reject=y \
--enable-per-domain=y \
--enable-attach=y \
--enable-dropmsg=n \
--enable-spam=y \
--enable-spam-hits=5 \
--enable-spam-passthru=y \
--enable-qmail-queue=/var/qmail/bin/qmail-queue \
--enable-ripmime=/usr/local/bin/ripmime \
--enable-sigtool-path=/usr/local/bin/sigtool \
--enable-received=y


b) Compile it

 

 make && make install-strip

c) Fix any wrong permissions of the simscan queue directory

chmod g+s /var/qmail/simscan/
chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/

d) Add some additional simscan options (telling simscan how to perform its scans)

To do that, run the below command (again as root):

echo ":clam=yes,spam=yes,spam_hits=8.5,attach=.vbs:.lnk:.scr:.wsh:.hta:.pif" > /var/qmail/control/simcontrol

 

e) Run /var/qmail/bin/simscanmk in order to convert /var/qmail/control/simcontrol into the /var/qmail/control/simcontrol.cdb database

/var/qmail/bin/simscanmk
/var/qmail/bin/simscanmk -g

f) Modify /service/qmail-smtpd/run to set simscan to be default Antivirus Wrapper Scanner

vim /service/qmail-smtpd/run

I'm using thibs's run script so I've uncommented the line there:

QMAILQUEUE="$VQ/bin/simscan"

Below two lines should stay commented as qmail-scanner is no longer used:

##QMAILQUEUE="$VQ/bin/qmail-scanner-queue"
##QMAILQUEUE="$VQ/bin/qmail-scanner-queue.pl"
export QMAILQUEUE

qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.

g) Test whether simscan is properly sending / receiving emails:

echo "Testing Email" >> /tmp/mailtest.txt
env QMAILQUEUE=/var/qmail/bin/simscan SIMSCAN_DEBUG=3 /var/qmail/bin/qmail-inject hipo@my-mailserver.com < /tmp/mailtest.txt

Besides that, as I'm using qscand:qscand as the user for my overall Qmail Thibs install, I had to also do:

chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/

 

It might be a good idea to also place those lines in /etc/rc.local to auto-change permissions on Linux boot, just in case something goes wrong with the permissions.

Yeah, I know 777 is insecure, but without these permissions I was still getting errors; plus the server doesn't have any accounts except the administrator's, so I don't worry that other system users might sniff on email 🙂

h) Test whether the Qmail mail server sends / receives fine with simscan

After that I've used the mail command from another mail server to test whether mail is received:
 

mail -s "testing email1234" hipo@new-configured-qmail-server.com
asdfadsf
.
Cc:

Then it is necessary to also install the latest clamav daemon from source, in my case on Debian GNU / Linux 7, because somehow the Debian-shipped binary version of clamav (0.98.5+dfsg-0+deb7u2) fails to scan any incoming or outgoing email with the error:
 

clamdscan: corrupt or unknown clamd scanner error or memory/resource/perms problem – exit status -1/72057594037927935

So to fix it you will have to install clamav on Debian Linux from source.


Voilà, that's all, finally it worked!

Check your Server Download / Upload Internet Speed from Console on Linux / BSD / Unix howto

Tuesday, March 17th, 2015

tux-check-internet-network-download-upload-speed-on-linux-console-terminal-linux-bsd-unix
If you've been given a new dedicated server or VPS with Linux from a dedicated-server provider and you were told that a certain download speed to the server is guaranteed, in order to verify the provider's claim about the server's connection to the Internet it is useful to run a simple measurement test from the console after logging in remotely to the server via SSH.

Testing the connection from the terminal is useful because, as you probably know, most Linux / UNIX servers don't have a GUI interface, and thus it is not possible to test Internet up / down bandwidth through speedtest.net.
 

1. Testing Download Internet Speed given by ISP / Dedi-Server Provider from Linux Console

For the download (internet) speed test, the historical approach was to just try downloading the Linux kernel source code from www.kernel.org with some text browser such as lynx or links, count the seconds for which the download completed, and then divide the kernel source archive size by the seconds to get an approximate bandwidth per second. However, as nowadays internet connection speeds are much higher, it is better to try to download some Linux distribution ISO file; you can still use the kernel tar archive, but it completes too fast to give you good (adequate) statistics on download bandwidth.

If it's a freshly installed Linux server you will probably not have the links / elinks / lynx text internet browsers installed, so install them depending on the deb / rpm distro:

If on a Deb Linux distro:

 

root@pcfreak:/root# apt-get install --yes links elinks lynx

 

On an RPM based Linux distro:
 

 

[root@fedora ~]# yum install -y lynx elinks links

 

Conduct an Internet download speed test with links:
root@pcfreak:/root# links https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.1.tar.xz

check_your_download_speed-from-console-linux-with-links-text-browser

(Note that the kernel link is the current latest stable kernel source code archive; in the future that might change, so try with the latest archive.)

You can also use non-interactive tools such as wget, curl or lftp to measure internet download speed.

To test download Internet speed with wget without saving anything to disk, set the output to go to /dev/null:

 

root@pcfreak:~# wget -O /dev/null https://www.pc-freak.net//~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip

 

check_bandwidth_download-internet-speed-with-wget-from-console-non-interactively-on-linux

You see the download speed is 104 Mbit/s; this is so because I'm conducting the download from my local 100Mbit network.

For the test you can use my mirrored version of Hirens BootCD
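
By the way, curl can do the seconds / bytes arithmetic for you: its -w option can print the average download speed (in bytes per second) once the transfer completes, e.g.:

root@pcfreak:~# curl -o /dev/null -s -w 'average download speed: %{speed_download} bytes/sec\n' https://www.pc-freak.net//~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip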

2. Testing Uplink Internet speed provided by ISP / Server Provider from Linux (SSH) Console

To test your uplink speed you will need lftp or iperf command tool.

 

root@pcfreak:~# apt-cache show lftp|grep -i descr -A 12
Description: Sophisticated command-line FTP/HTTP client programs
 Lftp is a file retrieving tool that supports FTP, HTTP, FISH, SFTP, HTTPS
 and FTPS protocols under both IPv4 and IPv6. Lftp has an amazing set of
 features, while preserving its interface as simple and easy as possible.
 .
 The main two advantages over other ftp clients are reliability and ability
 to perform tasks in background. It will reconnect and reget the file being
 transferred if the connection broke. You can start a transfer in background
 and continue browsing on the ftp site. It does this all in one process. When
 you have started background jobs and feel you are done, you can just exit
 lftp and it automatically moves to nohup mode and completes the transfers.
 It has also such nice features as reput and mirror. It can also download a
 file as soon as possible by using several connections at the same time.

 

root@pcfreak:/root# apt-cache show iperf|grep -i desc -A 2
Description: Internet Protocol bandwidth measuring tool
 Iperf is a modern alternative for measuring TCP and UDP bandwidth performance,
 allowing the tuning of various parameters and characteristics.

 

To test upload speed to the Internet, connect remotely and upload any file via FTP:

 

root@pcfreak:/root# lftp -u hipo www.pc-freak.net -e 'put Hirens.BootCD.15.0.zip; bye'

 

uploading-file-with-lftp-screenshot-test-upload-internet-speed-linux

On Debian Linux to install iperf:

 

root@pcfreak:/root# apt-get install --yes iperf

 

On latest CentOS 7 and Fedora (and other RPM based) Linux, you will need to add RPMForge repository and install with yum

 

[root@centos ~]# rpm -ivh  rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm

[root@centos ~]# yum -y install iperf

 

Once you have iperf on the server, the easiest way to test it currently is to use the
serverius.net speedtest server, located in the Serverius datacenters, AS50673, running on a 10GE connection with a 5GB cap.

 

root@pcfreak:/root# iperf -c speedtest.serverius.net -P 10
------------------------------------------------------------
Client connecting to speedtest.serverius.net, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 12] local 83.228.93.76 port 54258 connected with 178.21.16.76 port 5001
[  7] local 83.228.93.76 port 54252 connected with 178.21.16.76 port 5001
[  5] local 83.228.93.76 port 54253 connected with 178.21.16.76 port 5001
[  9] local 83.228.93.76 port 54251 connected with 178.21.16.76 port 5001
[  3] local 83.228.93.76 port 54249 connected with 178.21.16.76 port 5001
[  4] local 83.228.93.76 port 54250 connected with 178.21.16.76 port 5001
[ 10] local 83.228.93.76 port 54254 connected with 178.21.16.76 port 5001
[ 11] local 83.228.93.76 port 54255 connected with 178.21.16.76 port 5001
[  6] local 83.228.93.76 port 54256 connected with 178.21.16.76 port 5001
[  8] local 83.228.93.76 port 54257 connected with 178.21.16.76 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.2 sec  4.05 MBytes  3.33 Mbits/sec
[ 10]  0.0-10.2 sec  3.39 MBytes  2.78 Mbits/sec
[ 11]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec
[  4]  0.0-10.3 sec  3.43 MBytes  2.78 Mbits/sec
[ 12]  0.0-10.3 sec  3.92 MBytes  3.18 Mbits/sec
[  3]  0.0-10.4 sec  4.45 MBytes  3.58 Mbits/sec
[  5]  0.0-10.5 sec  4.06 MBytes  3.24 Mbits/sec
[  6]  0.0-10.5 sec  4.30 MBytes  3.42 Mbits/sec
[  8]  0.0-10.8 sec  3.92 MBytes  3.03 Mbits/sec
[  7]  0.0-10.9 sec  4.03 MBytes  3.11 Mbits/sec
[SUM]  0.0-10.9 sec  39.3 MBytes  30.3 Mbits/sec

 

You see, currently my home machine has an uplink of 30.3 Mbit/s; that's pretty nice, since I've ordered 100Mbit/s from my ISP (unguaranteed bandwidth connection speed), and as you might know it is standard practice for many Internet providers to give an uplink speed of 1/4 of the overall provided bandwidth. 1/4 would be 25Mbit/s, meaning my ISP (Bergon.NET) is doing pretty well, providing me with even more than the promised (ordered) bandwidth.

Iperf is probably the choice of most sysadmins who have to do regular bandwidth tests between 2 servers in local networks or test Internet bandwidth speed on heterogeneous networks with Linux / BSDs / AIX / HP-UX (UNIXes). On HP-UX, AIX and other UNIXes for which iperf doesn't have a port, you have to compile it yourself.
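
If you are testing between two of your own servers rather than against a public speedtest host, simply run iperf in server (listen) mode on one end and point the client at it (the hostname below is a placeholder):

server1:~# iperf -s

server2:~# iperf -c server1.example.com -P 10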

If you don't have root / admin permissions on the server and there is a python language interpreter installed, you can use the speedtest_cli.py script to test internet throughput connectivity.
speedtest_cli uses speedtest.net to test the server's up / down link; just in case the script is lost in the future, I've made a download mirror of speedtest_cli.py here.

Quickest way to test net speed with speedtest_cli.py:

 

$ lynx -dump https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py > speedtest_cli.py
$ chmod +x speedtest_cli.py
$ python speedtest_cli.py

speedtest_cli_pyhon_script_screenshot-on-gnu-linux-test-internet-network-speed-on-unix

Maximal protection against SSH attacks, if your server has to stay with the SSH (Secure Shell) port open to the world

Thursday, April 7th, 2011

Brute Force Attack SSH screen, Script kiddie attacking
If you're administrating a remote Linux or any other Unix based OS, you have definitely faced the security threat of many failed ssh logins, or as it's better known, a brute force attack.

During such attacks your /var/log/messages or /var/log/auth.log gets filled with various failed password log lines, for example:

Feb 3 20:25:50 linux sshd[32098]: Failed password for invalid user oracle from 95.154.249.193 port 51490 ssh2
Feb 3 20:28:30 linux sshd[32135]: Failed password for invalid user oracle1 from 95.154.249.193 port 42778 ssh2
Feb 3 20:28:55 linux sshd[32141]: Failed password for invalid user test1 from 95.154.249.193 port 51072 ssh2
Feb 3 20:30:15 linux sshd[32163]: Failed password for invalid user test from 95.154.249.193 port 47481 ssh2
Feb 3 20:33:20 linux sshd[32211]: Failed password for invalid user testuser from 95.154.249.193 port 51731 ssh2
Feb 3 20:35:32 linux sshd[32249]: Failed password for invalid user user from 95.154.249.193 port 38966 ssh2
Feb 3 20:35:59 linux sshd[32256]: Failed password for invalid user user1 from 95.154.249.193 port 55850 ssh2
Feb 3 20:36:25 linux sshd[32268]: Failed password for invalid user user3 from 95.154.249.193 port 36610 ssh2
Feb 3 20:36:52 linux sshd[32274]: Failed password for invalid user user4 from 95.154.249.193 port 45514 ssh2
Feb 3 20:37:19 linux sshd[32279]: Failed password for invalid user user5 from 95.154.249.193 port 54262 ssh2
Feb 3 20:37:45 linux sshd[32285]: Failed password for invalid user user2 from 95.154.249.193 port 34755 ssh2
Feb 3 20:38:11 linux sshd[32292]: Failed password for invalid user info from 95.154.249.193 port 43146 ssh2
Feb 3 20:40:50 linux sshd[32340]: Failed password for invalid user peter from 95.154.249.193 port 46411 ssh2
Feb 3 20:43:02 linux sshd[32372]: Failed password for invalid user amanda from 95.154.249.193 port 59414 ssh2
Feb 3 20:43:28 linux sshd[32378]: Failed password for invalid user postgres from 95.154.249.193 port 39228 ssh2
Feb 3 20:43:55 linux sshd[32384]: Failed password for invalid user ftpuser from 95.154.249.193 port 47118 ssh2
Feb 3 20:44:22 linux sshd[32391]: Failed password for invalid user fax from 95.154.249.193 port 54939 ssh2
Feb 3 20:44:48 linux sshd[32397]: Failed password for invalid user cyrus from 95.154.249.193 port 34567 ssh2
Feb 3 20:45:14 linux sshd[32405]: Failed password for invalid user toto from 95.154.249.193 port 42350 ssh2
Feb 3 20:45:42 linux sshd[32410]: Failed password for invalid user sophie from 95.154.249.193 port 50063 ssh2
Feb 3 20:46:08 linux sshd[32415]: Failed password for invalid user yves from 95.154.249.193 port 59818 ssh2
Feb 3 20:46:34 linux sshd[32424]: Failed password for invalid user trac from 95.154.249.193 port 39509 ssh2
Feb 3 20:47:00 linux sshd[32432]: Failed password for invalid user webmaster from 95.154.249.193 port 47424 ssh2
Feb 3 20:47:27 linux sshd[32437]: Failed password for invalid user postfix from 95.154.249.193 port 55615 ssh2
Feb 3 20:47:54 linux sshd[32442]: Failed password for www-data from 95.154.249.193 port 35554 ssh2
Feb 3 20:48:19 linux sshd[32448]: Failed password for invalid user temp from 95.154.249.193 port 43896 ssh2
Feb 3 20:48:46 linux sshd[32453]: Failed password for invalid user service from 95.154.249.193 port 52092 ssh2
Feb 3 20:49:13 linux sshd[32458]: Failed password for invalid user tomcat from 95.154.249.193 port 60261 ssh2
Feb 3 20:49:40 linux sshd[32464]: Failed password for invalid user upload from 95.154.249.193 port 40236 ssh2
Feb 3 20:50:06 linux sshd[32469]: Failed password for invalid user debian from 95.154.249.193 port 48295 ssh2
Feb 3 20:50:32 linux sshd[32479]: Failed password for invalid user apache from 95.154.249.193 port 56437 ssh2
Feb 3 20:51:00 linux sshd[32492]: Failed password for invalid user rds from 95.154.249.193 port 45540 ssh2
Feb 3 20:51:26 linux sshd[32501]: Failed password for invalid user exploit from 95.154.249.193 port 53751 ssh2
Feb 3 20:51:51 linux sshd[32506]: Failed password for invalid user exploit from 95.154.249.193 port 33543 ssh2
Feb 3 20:52:18 linux sshd[32512]: Failed password for invalid user postgres from 95.154.249.193 port 41350 ssh2
Feb 3 21:02:04 linux sshd[32652]: Failed password for invalid user shell from 95.154.249.193 port 54454 ssh2
Feb 3 21:02:30 linux sshd[32657]: Failed password for invalid user radio from 95.154.249.193 port 35462 ssh2
Feb 3 21:02:57 linux sshd[32663]: Failed password for invalid user anonymous from 95.154.249.193 port 44290 ssh2
Feb 3 21:03:23 linux sshd[32668]: Failed password for invalid user mark from 95.154.249.193 port 53285 ssh2
Feb 3 21:03:50 linux sshd[32673]: Failed password for invalid user majordomo from 95.154.249.193 port 34082 ssh2
Feb 3 21:04:43 linux sshd[32684]: Failed password for irc from 95.154.249.193 port 50918 ssh2
Feb 3 21:05:36 linux sshd[32695]: Failed password for root from 95.154.249.193 port 38577 ssh2
Feb 3 21:06:30 linux sshd[32705]: Failed password for bin from 95.154.249.193 port 53564 ssh2
Feb 3 21:06:56 linux sshd[32714]: Failed password for invalid user dev from 95.154.249.193 port 34568 ssh2
Feb 3 21:07:23 linux sshd[32720]: Failed password for root from 95.154.249.193 port 43799 ssh2
Feb 3 21:09:10 linux sshd[32755]: Failed password for invalid user bob from 95.154.249.193 port 50026 ssh2
Feb 3 21:09:36 linux sshd[32761]: Failed password for invalid user r00t from 95.154.249.193 port 58129 ssh2
Feb 3 21:11:50 linux sshd[537]: Failed password for root from 95.154.249.193 port 58358 ssh2

These brute force dictionary attacks often succeed where there is a user with a weak password, or some old forgotten test user account.
Just recently, on one of the servers I administer, I caught a malicious attacker originating from Romania, who was able to break into my system's test account, which had the weak password tset.

Thankfully the script kiddie was unable to get root access to my system, so all he did was start another SSH brute force scanner to crawl the net and look for other vulnerable hosts.

As you can read from my recent example, being immune against SSH brute force attacks is an essential security step that the administrator needs to take on a newly installed server.

The easiest ways to get rid of the brute force attacks without using some external brute force filtering software like fail2ban are:

1. Using an iptables filtering rule to block every IP which has failed to log in more than 5 times

To use this brute force prevention method you need the following iptables rules:

linux-host:~# /sbin/iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
linux-host:~# /sbin/iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP

These iptables rules will block SSH access from every IP address with more than 5 new connection attempts to port 22 within 60 seconds.
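To verify that the recent module is actually tracking offenders, you can inspect its state via procfs (a sketch; the list name defaults to DEFAULT since we didn't pass --name, and on older kernels the path is /proc/net/ipt_recent/DEFAULT instead):

linux-host:~# cat /proc/net/xt_recent/DEFAULT

To manually remove an IP that got blocked by mistake:

linux-host:~# echo -95.154.249.193 > /proc/net/xt_recent/DEFAULT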

2. Getting rid of brute force attacks through use of hosts.deny blacklists

sshbl – the SSH blacklist, updated every few minutes, contains IP addresses of hosts which tried to brute force their way into any of the currently 19 hosts (all running OpenBSD, FreeBSD or some Linux) using the SSH protocol. The hosts are located in Germany, the United States, the United Kingdom, France, Ukraine, China, Australia and the Czech Republic, and are set up to report and log those attempts to a central database. It is very similar to all the spam blacklists out there.

To use sshbl you will have to set up the following line in your root crontab (note that in the minutes field a step like */60 only ever matches minute 0, so an hourly job is written as below):

0 * * * * /usr/bin/wget -qO /etc/hosts.deny http://www.sshbl.org/lists/hosts.deny

To set it up from console issue:

linux-host:~# echo '0 * * * * /usr/bin/wget -qO /etc/hosts.deny http://www.sshbl.org/lists/hosts.deny' | crontab -u root -

This crontab entry will download and replace your system's default /etc/hosts.deny with the one regularly updated on sshbl.org; thus, the next time a reported brute force attacker connects, it will be filtered out as your Linux or Unix system finds its IP matching an entry in /etc/hosts.deny.
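Note that wget -qO /etc/hosts.deny truncates the file even when the download fails, which would leave you with an empty blacklist; a slightly safer variant is to fetch into a temporary file first and replace hosts.deny only on success (a minimal sketch; the script path is just a suggestion, point the crontab at wherever you place it):

#!/bin/sh
# /usr/local/sbin/update_hosts_deny.sh
# fetch the sshbl blacklist; replace /etc/hosts.deny only if the download produced a non-empty file
TMP=$(mktemp) || exit 1
if /usr/bin/wget -qO "$TMP" http://www.sshbl.org/lists/hosts.deny && [ -s "$TMP" ]; then
    mv "$TMP" /etc/hosts.deny
else
    rm -f "$TMP"
fi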

The /etc/hosts.deny filtering rules are written in such a way that publicly known brute forcer IPs are only filtered for the SSH service; other system services like Apache or a radio / TV streaming server will still be accessible for the brute forcer's IP.
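For reference, entries in /etc/hosts.deny follow the classic TCP wrappers daemon : client syntax, so the downloaded list consists of lines like these (addresses illustrative):

sshd : 95.154.249.193
sshd : 203.0.113.17

Keep in mind this only works for daemons linked against libwrap (OpenSSH's sshd traditionally is).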

It’s a good practice actually to use both of the methods 😉
Thanks to Static (Multics), a close friend of mine, for inspiring this article.

Best Windows tools to Test (Benchmark) Hard Drives, SSD Drives and RAID Storage Controllers

Wednesday, April 23rd, 2014

atto-windows-hard-disk-benchmark-freeware-tool-screenshot-check-hard-disk-speed-windows
Disk benchmarking is very useful for people involved in graphic design, 3D modelling and system administration, and for anyone willing to squeeze the maximum out of their PC hardware.

If you want to do some benchmarking on a newly built Windows server targeting hard disk performance, have just bought a new SSD (Solid State Drive) and want to test how well hard drive I/O operations behave, or you want to do regular HDD benchmarking of a group of MS Windows PCs and plan hardware optimization, check out ATTO Disk Benchmark.

So why exactly ATTO Benchmark? – Because it is one of the best free Windows benchmark tools on the Internet.

ATTO is a widely accepted disk benchmark freeware utility that helps measure storage system performance. Though freeware, ATTO is among the top tools utilized in the industry. It is very useful for comparing the speed of hard disks from different vendors, and for measuring Windows storage system performance with various transfer sizes and test lengths for reads and writes.

ATTO Disk Benchmark is used by manufacturers of hardware RAID controllers; it's a precious tool to test Windows storage controllers and host bus adapters (HBAs).

Here are the ATTO Benchmark tool specifications (quote from their website):
 

  • Transfer sizes from 512KB to 8MB
  • Transfer lengths from 64KB to 2GB
  • Support for overlapped I/O
  • Supports a variety of queue depths
  • I/O comparisons with various test patterns
  • Timed mode allows continuous testing
  • Non-destructive performance measurement on formatted drives

Here is a mirrored copy of the latest version of ATTO Disk Benchmark for download. Once you get your HDD statistics you will probably want to compare them to other people's results. On TomsHardware's world-famous hardware geek site there are plenty of hard drive performance charts.

Of course there are other GUI alternatives to ATTO Benchmark; one historically famous alternative is NBench.

NBench

nbench_benchmark_windows_hard-drive-cpu-and-memory

Nbench is a nice little benchmarking program for Windows NT. Nbench reports the following components of performance:

CPU speed: integer and floating operations/sec
L1 and L2 cache speeds: MB/sec
main memory speed: MB/sec
disk read and write speeds: MB/sec

SMP systems and multi-tasking OS efficiency can be tested using up to 20 separate threads of execution.

For Console Geeks or Windows server admins there are also some ports of famous *NIX Hard Disk Benchmarking tools:

NTiogen

The NTiogen benchmark was written by Symbios Logic; it's a Windows NT port of their popular UNIX benchmark IOGEN. NTIOGEN is the parent process that spawns the specified number of IOGEN processes that actually do the I/O.
The program will display as output the number of processes, the average response time, the number of I/O operations per second, and the number of KBytes per second. You can download a mirror copy of Ntiogen here


There are plenty of other GUI and console HDD benchmarking Windows tools, e.g.:

IOMeter (originally developed by Intel, now abandoned; available as open source on SourceForge)

iometer-benchmark-disk-storage-speed-windows
 

Bench32 – comprehensive benchmark that measures overall system performance under Windows NT or Windows 95; now obsolete, no longer developed and abandoned by the producing company.

ThreadMark32 – a capable benchmark (formerly developed and supported by ADAPTEC), but also already unsupported.

IOZone – filesystem benchmark tool. The benchmark generates and measures a variety of file operations. Iozone has been ported to many machines and runs under many operating systems.
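A typical IOZone run would look something like this (a sketch; the size cap is illustrative – -a runs the full automatic test suite, -g caps the maximum file size used in automatic mode, and each -i selects a test: 0 = write/rewrite, 1 = read/re-read):

iozone -a -g 2g -i 0 -i 1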
 

N.B.! An important note to make here is that the above suggested tools will provide you with more realistic results than the proprietary tools shipped by your hardware vendor. Using proprietary software produced by a single vendor makes it impossible to analyze and compare different hardware; the above HDD benchmarking tools are for "open systems", i.e. no matter who the hardware producer is, the produced results can be checked against each other.
Another thing to consider is that even if you use any of the above tools to test and compare two storage devices, the results will still be partially synthetic; it's always best to conduct tests in real working application environments. If you're planning to launch a new services infrastructure, always test it first and don't rely on preliminary synthetic benchmarks.

If you know some other useful benchmarking software I'm missing, please share.

Create local network between virtual machines in Virtualbox VM – Add local LAN between Linux Virtual Machines

Wednesday, June 11th, 2014

add-virtualbox-virtual-machines-inside-local-network-create-internal-LAN-local-net-linux-windows

I wanted to test MySQL Cluster following the MySQL Cluster Install Guide. For that purpose I installed 2 copies of CentOS 6.5 inside Virtualbox and wanted to make the 2 Linux hosts reachable inside a local LAN network. I consulted some colleagues who advised me to configure the two Linux hosts to use Bridged Adapter Virtualbox networking (network configuration in Virtualbox is done on a per Virtual Machine basis from):
 

Devices -> Network Settings

(Attached to: Bridged Adapter)

Note: by default Cable Connected (tick) is not selected, so when making changes to Network settings the tick should be set.
After specifying Attached to: Bridged Adapter, to make CentOS Linux refresh its network settings run in gnome-terminal:

[root@centos ~]# dhclient eth0

However, CentOS failed to grab itself a DHCP IP address.
Thus I tried to assign the IP addresses manually with ifconfig, hoping that at least this would work, e.g.:

on CentOS VM 1:

/sbin/ifconfig eth0 192.168.10.1 netmask 255.255.255.0

on CentOS VM 2:

/sbin/ifconfig eth1 192.168.10.2 netmask 255.255.255.0
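Keep in mind that addresses assigned with ifconfig are lost on reboot; to make such a static assignment persistent on CentOS you would put it into the interface configuration file (a sketch for VM 1; adjust the device name to your adapter):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.1
NETMASK=255.255.255.0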

To test whether there was a connection between the 2 VM hosts, I tried ping-ing 192.168.10.2 (from 192.168.10.1) and tested with telnet whether I could reach SSH remotely (port 22), from CentOS VM1 to CentOS VM2 and vice versa, i.e.:

[root@centos ~]# telnet 192.168.10.2 22

Trying 192.168.10.2…
telnet: connect to address 192.168.10.2: No route to host

Then, after checking other options, and already knowing that by using the VBox NAT network option I had access to the Internet, I tried to attach standard local IP addresses to both Linuxes as virtual interfaces (e.g. eth0:0), e.g.:

On Linux VM 1:

/sbin/ifconfig eth0:0 192.168.10.1 netmask 255.255.255.0

On Linux VM 2:

/sbin/ifconfig eth1:0 192.168.10.2 netmask 255.255.255.0

Then, to test again, I used telnet (again with no success):

[root@centos ~]# telnet 192.168.10.2 22

Then I found that Virtualbox has special Internal Networking support, available in the Attached to drop-down menu. According to the Internal Networking Virtualbox instructions, to put two Virtual Machine hosts inside an internal network they should both be set to an Internal network with an identical name.
P.S. It is explicitly stated that using Internal Network will enable access between the Guest Virtual Machine OSes, but the hosts will not have access to the Internet (which in my case didn't really matter, as I needed the two Linux VMs just as a testbed).

virtualbox-create-internal-local-network-between-guest-host-Linux-Windows1

I tried this option but for some reason it didn't work for me. After some time of research online on how to create a local LAN network between 2 Virtual Machines, I luckily decided to test all available Virtualbox networking choices and noticed the Host-only Adapter.

Selecting Host-only Adapter and using the terminal to re-fetch an IP address over DHCP:

virtualbox-connect-in-local-lan-network-linux-and-windows-servers-hosts-only-adapter

On CentOS VM1

dhclient eth0

On CentOS VM2

dhclient eth1

This assigned the two VMs adjoining IPs (192.168.56.101 and 192.168.56.102).

Connection between the 2 IPs 192.168.56.101 and 192.168.56.102 works over TCP, UDP and ICMP; now all that's left is to install MySQL Cluster on both nodes.
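For the record, the same Host-only setup can also be scripted from the host machine with VBoxManage instead of clicking through the GUI (a sketch; the VM names are illustrative, and vboxnet0 is the usual name of the first host-only interface):

# create a host-only interface (usually comes up as vboxnet0) and attach a DHCP server to it
VBoxManage hostonlyif create
VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.100 --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable
# attach the first NIC of each VM to the host-only network
VBoxManage modifyvm "CentOS-VM1" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "CentOS-VM2" --nic1 hostonly --hostonlyadapter1 vboxnet0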