Posts Tagged ‘connection’

Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019

proftpd-linux-logo

Earlier I've blogged about a problem I encountered that prevented Active mode FTP connections on CentOS.
As I'm working for a client building a brand new dedicated server purchased from the Contabo Dedi Host provider, on a freshly installed Debian 10 GNU / Linux, I had to configure a new FTP server. For some time now I prefer ProFTPD over VSFTPD because in my opinion it is more lightweight and hence a better choice for small UNIX server setups. During this setup I once again encountered the same problem of ACTIVE FTP not working from the FTP server to the FTP client host machine. But before shortly explaining the fix, I find it worthy to briefly explain what an ACTIVE / PASSIVE FTP connection is.

 

1. What is ACTIVE / PASSIVE FTP connection?
 

In active mode, the client specifies which client-side port the data channel has been opened on, and the server initiates the connection. In other words, the default FTP client communication, for historical reasons, is ACTIVE MODE. E.g.
the Client, once connected to the Server, tells the server to open an extra port (or ports) locally, via which the overall FTP data transfer will occur. In the early days of networking, when the FTP protocol was developed, security was not such a big concern: networks usually did not have firewalls at all, the FTP data transfer machine was running just a single FTP server and nothing more, and FTP was not even used over the Internet but only on local networks, so this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever increasing complexity of the Internet / networks, and the ever tightening firewalls due to viruses and worms trying to own and exploit networks, creating unnecessary bulk loads, this has changed …

active-passive-ftp-explained-diagram
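To make the difference concrete, here is roughly what the two modes look like on the FTP control channel (a hypothetical session sketch; the IP addresses and ports are made-up examples; per the FTP protocol the port is encoded as p1*256+p2):

# ACTIVE mode: the client tells the server where to connect back to
PORT 192,168,0,5,195,149      <- client listens on 192.168.0.5:50069 (195*256+149)
200 PORT command successful
LIST
150 Opening ASCII mode data connection

# PASSIVE mode: the client asks, the server answers with its own data port
PASV
227 Entering Passive Mode (10,0,0,1,234,96)   <- server listens on 10.0.0.1:60000 (234*256+96)
LIST
150 Opening ASCII mode data connection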
 

2. Installing and configuring the ProFTPD server public ServerName

I've installed the server with the common cmd:

 

apt --yes install proftpd

 

And the only configuration changed in the default configuration file /etc/proftpd/proftpd.conf was:
ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the multitude of FTP Vulnerability Scan script kiddie crawlers from learning the exact OS version of the server, so this was changed to:

 

ServerName "MyServerHostname"

 

Though this is the often frowned-upon security through obscurity, in practice doing so is a good idea.
 

3. Create iptables firewall rules to allow ACTIVE FTP mode


But anyways, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 from the incoming source port range 1024:65535 (to enable ACTIVE FTP), on firewall level with iptables INPUT and OUTPUT chain rules, like this:

 

iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT


Once the firewall allows FTP Active / Passive connections and the FTP server is listening, to check all is properly configured verify the iptables rules and the FTP listener:
 

/sbin/iptables -L INPUT |grep ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN    

 

4. Loading the nf_nat_ftp module and net.netfilter.nf_conntrack_helper (for backward compatibility)


The next step of course was to add the necessary modules nf_nat_ftp and nf_conntrack_sane, which make FTP properly forward ports with the respective firewall states on any of the above source ports, which are usually allowed by firewalls. Note that the given port range 1024:65535 might be too liberal for paranoid sysadmins, and in many cases the ports are not filtered anyway; if you are a security freak you can use some smaller range such as 60000:65535.

 

Here is the place to say that sysadmins who haven't recently had the task of configuring a new (unencrypted) File Transfer Server – as today Secure FTP is almost always used for file transfers, for the sake of security – might be puzzled to find out that the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP Active connections work, is substituted nowadays by nf_nat_ftp and nf_conntrack_sane.

To make the 2 modules permanently loaded on next boot on Debian Linux they have to be added to /etc/modules

Here is how sample /etc/modules that loads the modules on next system boot looks like

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
softdog
nf_nat_ftp
nf_conntrack_sane
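To load the two modules immediately, without waiting for the next reboot, and to verify they actually ended up in the running kernel, something like this should do (a quick sketch using the standard module tools):

modprobe nf_nat_ftp
modprobe nf_conntrack_sane
# confirm the modules are loaded
lsmod | grep -E 'nf_nat_ftp|nf_conntrack_sane'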


Next to say is that in newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_sane behaviour changed, so simply loading the modules does not work, and if you test it with some FTP client (I used gFTP / ncftp from my Linux desktop) you are about to get FTP No route to host errors like:

 

Cannot create a data connection: No route to host

 

cannot-create-a-data-connection-no-route-to-host-linux-error-howto-fix


Sometimes, instead of the No route to host error, the error the FTP client might return is:

 

227 entering passive mode FTP connect connection timed out error


Hence, to make the nf_nat_ftp module work on newer Linux kernels, you have to enable the backwards-compatibility kernel variable

 

 

/proc/sys/net/netfilter/nf_conntrack_helper

 

echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

 

To make it permanent, add the echo line to /etc/rc.local, if you have enabled the legacy single-file boot place /etc/rc.local as I do on servers – for how to enable rc.local on newer Linux distributions check here,

or alternatively add it to load via sysctl

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make the change permanent (i.e. loaded on next boot):

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf
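After appending the line, you can apply /etc/sysctl.conf without rebooting and double-check the value took effect (standard sysctl usage):

# reload settings from /etc/sysctl.conf
sysctl -p
# confirm the helper is enabled
sysctl net.netfilter.nf_conntrack_helper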

 

5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD


Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default) and below it add the following line:

 

PassivePorts 60000 65534

 

Just for information, if instead of ProFTPD you experience the error on PureFTPD, the configuration value to set in /etc/pure-ftpd.conf is:
 

PassivePortRange 30000 35000
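One detail worth remembering: if your INPUT chain policy is restrictive, whatever passive range you configure must also be open in the firewall, otherwise passive mode will break in a similar way. A sketch matching the ProFTPD range above:

iptables -A INPUT -p tcp --dport 60000:65534 -m state --state NEW,ESTABLISHED -j ACCEPT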


That's all folks. Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it: the FTP client should now be able to talk as expected to the remote server in ACTIVE FTP mode, the automatic fallback to passive mode will not be triggered anymore, and you won't get strange errors and connection failures in FTP clients such as gFTP.

Cheers 🙂

Mount remote Linux SSHFS Filesystem harddisk on Windows Explorer SWISH SSHFS file mounter and a short evaluation on what is available to copy files to SSHFS from Windows PC

Monday, February 22nd, 2016

swish-mount-and-copy-files-from-windows-to-linux-via-sshfs-mount

I'm forced to use Windows on my work notebook and I find it really irritating that I can't easily share files other than through DropBox, Google Drive, MS OneDrive, Amazon Storage or some other free remote cloud-storage service.
I don't want to use DropBox-like non-self-hosted data storage because I want to keep my data private, and therefore the only and best option for me was to make it possible to share my Linux harddisk storage directory remotely to the Windows notebook.

I didn't want to set up some complex environment such as a Samba Share Server (which used to be a common option for sharing files from a Linux server to Windows), neither did I want to bother with installing an FTP service and FTP clients, or with configuring some other complex stuff such as WebDav – which BTW is an accepted and heavily used solution across corporate clients for read / write access to files on remote Linux servers.
Hence, I made a quick research on what else could be used to easily share files and data from a Windows PC / notebook to a home-brew or professionally hosted Linux server.

It turned out there are a few pieces of software that give a similar possibility to users of a small home LAN hybrid Linux / Windows network; here are a few of the many:

  • SyncThing – Syncthing is an open-source file synchronization client/server application, written in Go, implementing its own, equally free Block Exchange Protocol. The source code's content-management repository is hosted on GitHub


    syncthing-logo

  • OwnCloud – ownCloud provides universal access to your files via the web, your computer or your mobile devices


    owncloud-logo

  • Seafile – Seafile is a file hosting software system. Files are stored on a central server and can be synchronized with personal computers and mobile devices via the Seafile client. Files can also be accessed via the server's web interface


seafile-client-in-browser

I've checked all of them and gave a quick try to Syncthing, which is really easy to start: just download the binary, launch it and configure it from a browser on the Linux server under the https://localhost:8385 URL.
Syncthing seemed a nice and easy-to-configure solution for syncing files between Server A (Windows) and Server B (Linux), and I guess many would enjoy it; if you want to give it a try you can follow this short install syncthing article.
However, what I disliked in SyncThing, OwnCloud, Seafile and all of the other file sync solutions was that they only support synchronization via web and don't seem to have Windows Explorer integration, and they require the server to run more services, posing another security hole in the system, as such third-party software is not easy to update and maintain.

Because of that, after rethinking other ways to copy files from the Linux server to a locally mounted sync directory, I finally decided to give SSHFS a try. Mounting SSHFS between two Linux / UNIX hosts is a quite easy task with the SSHFS tool.
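(For reference, on the Linux-to-Linux side the whole thing is a one-liner; a minimal sketch assuming a hypothetical user@remote-host and local mount point /mnt/remote:

# mount the remote home directory locally over SSH
sshfs user@remote-host:/home/user /mnt/remote -o reconnect
# ... work with the files as if they were local ...
# unmount when done
fusermount -u /mnt/remote
)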

On Windows, however, the only way I knew to transfer files to Linux via SSHFS was with the WinSCP client and other SCP clients, as well as the experimental ones, such as ExpandDrive, Netdrive, Dokan SSHFS (mirrored for download here).
I should say that I first tried copying a few dozen gigabytes of movies, texts, books etc. using a direct WinSCP connection, but after getting a couple of timeouts I got tired of WinSCP and decided to look for a better way to copy to the remote Linux SSHFS.
However, the best solution I found after a bit of extensive research turned out to be:

SWISH – Easy SFTP for Windows

Swish is very straightforward to configure compared to all of the above: you download the .exe (which as of time of writing is at version 0.8.0), install it on the PC, and right in My Computer you get a new device called Swish next to your local and remote drives C:/, D:/, USBs etc.

swish-new-device-to-appear-in-my-computer-to-mount-sshfs

As you see in the screenshot below, two new non-standard buttons appear in Windows Explorer that let you configure SWISH.

windows-mount-sshfs-swish-add-sftp-connection-button-screenshot

The next and final step before you have the remote Linux SSHFS filesystem visible on Windows XP / 7 / 8 / 10 is to fill in the remote Linux hostname address (or even better fill in the IP, to get rid of possible DNS issues), the UserName (UserID) and the Directory to mount.

swish-new-fill-in-dialog-to-make-new-linux-sshfs-mount-directory-possible-on-windows

Then you will see the SSHFS mounted:

swish-sshfs-mounted-on-windows

You will be asked to accept the SSH host key, as it is so far unknown to the machine:

swish-mount-sshfs-partition-on-windows-from-remote-linux-accept-key

That's it, now you will see the remote Linux SSHFS mounted straight in Windows Explorer:

remote-sshfs-linux-filesystem-mounted-in-windows-explorer-with-swish

Once you have set up a Swish connection, to copy files directly to it you can use the embedded Windows Send To dialog, as in the screenshot below:

swish-send-to-files-windows-screenshot

The only 3 problems with SWISH are:

1. It doesn't support saving the password, so on every Windows PC reboot, when you want to connect to the remote Linux SSHFS, you have to retype the remote login user password.
From a security standpoint this is not such a bad thing, but it is a bit irritating to type the password every time without an option to save it permanently.
The good thing here is you can use Launch Key Agent,
as visible in the above screenshot, and set your remote host SSH key in the PuTTY Key Agent, so passwordless login will work without any authentication at
all. However, this might open a security hole: if your Win PC gets infected by a virus, it might delete something on the remotely mounted SSHFS filesystem, so I personally prefer to retype the password on every boot.

2. It is a bit slow, so if you're planning to transfer large amounts of data, hundreds of megabytes or more, expect a very low transfer rate; even in a local 10Mbit network, transferring 20 – 30 GB of data took me about 2-3 hours or so.
SWISH is not actively supported and hasn't had a new release since the 20th of June 2013, but for the general work I need it for it is just perfect, as I don't tend to synchronize dozens of gigabytes all the time between my notebook PC and the Linux server.

3. If you don't use the established mounted connection for a while, or your computer goes into sleep mode, after recovery the connection to the remote Linux HDD, if opened in Windows File Explorer, will probably be dead and you will have to re-enable it.

Mac OS X users who want to mount / attach a remote directory from a Linux partition should look into Fugu – A Mac OS X SFTP, SCP and SSH Frontend.

I'll be glad to hear from people about other good ways to achieve the same results as with SWISH but with a better copy speed while using SSHFS.

Check your Server Download / Upload Internet Speed from Console on Linux / BSD / Unix howto

Tuesday, March 17th, 2015

tux-check-internet-network-download-upload-speed-on-linux-console-terminal-linux-bsd-unix
If you've been given a new dedicated server by a dedicated-server provider, or a VPS with Linux, and you were told that a certain download speed to the server is guaranteed, then in order to be sure the server's connection to the Internet is what the provider promised, it is useful to run a simple measurement console test after logging in remotely to the server via SSH.

Testing the connection from the terminal is useful because, as you probably know, most Linux / UNIX servers don't have a GUI interface and thus it is not possible to test Internet Up / Down bandwidth through speedtest.net.
 

1. Testing Download Internet Speed given by ISP / Dedi-Server Provider from Linux Console

For the download (internet) speed test, the historical approach was to just download the Linux kernel source code from www.kernel.org with some text browser such as lynx or links, count the seconds for which the download completes, and then divide the kernel source archive size by the seconds to get an approximate bandwidth per second. However, as nowadays internet connection speeds are much higher, it is better to download some Linux distribution ISO file; you can still use the kernel tar archive, but it completes too fast to give you good (adequate) statistics on download bandwidth.
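As a quick back-of-the-envelope illustration of that arithmetic (the numbers here are made up): if an 80 MB archive downloads in 8 seconds, that is 80 / 8 = 10 MB/s, i.e. roughly 10 * 8 = 80 Mbit/s of download bandwidth.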

If it is a freshly installed Linux server, you will probably not have the links / elinks / lynx text internet browsers installed, so install them depending on the deb / rpm distro:

If on a Deb-based Linux distro:

 

root@pcfreak:/root# apt-get install --yes links elinks lynx

 

On an RPM-based Linux distro:
 

 

[root@fedora ~]# yum install -y lynx elinks links

 

Conduct an Internet download speed test with links:
root@pcfreak:/root# links https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.1.tar.xz

check_your_download_speed-from-console-linux-with-links-text-browser

(Note that the kernel link is the current latest stable kernel source code archive; in the future that might change, so try with the latest archive.)

You can also use a non-interactive tool such as wget, curl or lftp to measure internet download speed.

To test download internet speed with wget without saving anything to disk, set output to go to /dev/null:

 

root@pcfreak:~# wget -O /dev/null http://pc-freak.net/~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip

 

check_bandwidth_download-internet-speed-with-wget-from-console-non-interactively-on-linux

You see the download speed is 104 Mbit/s; this is because I'm conducting the download from my local 100Mbit network.

For the test you can use my mirrored version of Hirens BootCD
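If you prefer curl, a similar no-save-to-disk measurement is possible with its built-in write-out variables (a sketch; the URL is just the same example mirror):

# download to /dev/null and print the average download speed in bytes/sec
curl -o /dev/null -s -w 'average speed: %{speed_download} bytes/sec\n' \
  http://pc-freak.net/~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip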

2. Testing Uplink Internet speed provided by ISP / Server Provider from Linux (SSH) Console

To test your uplink speed you will need the lftp or iperf command tools.

 

root@pcfreak:~# apt-cache show lftp|grep -i descr -A 12
Description: Sophisticated command-line FTP/HTTP client programs
 Lftp is a file retrieving tool that supports FTP, HTTP, FISH, SFTP, HTTPS
 and FTPS protocols under both IPv4 and IPv6. Lftp has an amazing set of
 features, while preserving its interface as simple and easy as possible.
 .
 The main two advantages over other ftp clients are reliability and ability
 to perform tasks in background. It will reconnect and reget the file being
 transferred if the connection broke. You can start a transfer in background
 and continue browsing on the ftp site. It does this all in one process. When
 you have started background jobs and feel you are done, you can just exit
 lftp and it automatically moves to nohup mode and completes the transfers.
 It has also such nice features as reput and mirror. It can also download a
 file as soon as possible by using several connections at the same time.

 

root@pcfreak:/root# apt-cache show iperf|grep -i desc -A 2
Description: Internet Protocol bandwidth measuring tool
 Iperf is a modern alternative for measuring TCP and UDP bandwidth performance,
 allowing the tuning of various parameters and characteristics.

 

To test upload speed to the Internet, connect to a remote FTP server and upload any file:

 

root@pcfreak:/root# lftp -u hipo pc-freak.net -e 'put Hirens.BootCD.15.0.zip; bye'

 

uploading-file-with-lftp-screenshot-test-upload-internet-speed-linux

On Debian Linux to install iperf:

 

root@pcfreak:/root# apt-get install --yes iperf

 

On the latest CentOS 7 and Fedora (and other RPM-based) Linux, you will need to add the RPMForge repository and install with yum:

 

[root@centos ~]# rpm -ivh  rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm

[root@centos ~]# yum -y install iperf

 

Once iperf is on the server, the easiest way currently to test is to use the
serverius.net speedtest server – located in the Serverius datacenters, AS50673, running on a 10GE connection with a 5GB cap.

 

root@pcfreak:/root# iperf -c speedtest.serverius.net -P 10
------------------------------------------------------------
Client connecting to speedtest.serverius.net, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 12] local 83.228.93.76 port 54258 connected with 178.21.16.76 port 5001
[  7] local 83.228.93.76 port 54252 connected with 178.21.16.76 port 5001
[  5] local 83.228.93.76 port 54253 connected with 178.21.16.76 port 5001
[  9] local 83.228.93.76 port 54251 connected with 178.21.16.76 port 5001
[  3] local 83.228.93.76 port 54249 connected with 178.21.16.76 port 5001
[  4] local 83.228.93.76 port 54250 connected with 178.21.16.76 port 5001
[ 10] local 83.228.93.76 port 54254 connected with 178.21.16.76 port 5001
[ 11] local 83.228.93.76 port 54255 connected with 178.21.16.76 port 5001
[  6] local 83.228.93.76 port 54256 connected with 178.21.16.76 port 5001
[  8] local 83.228.93.76 port 54257 connected with 178.21.16.76 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.2 sec  4.05 MBytes  3.33 Mbits/sec
[ 10]  0.0-10.2 sec  3.39 MBytes  2.78 Mbits/sec
[ 11]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec
[  4]  0.0-10.3 sec  3.43 MBytes  2.78 Mbits/sec
[ 12]  0.0-10.3 sec  3.92 MBytes  3.18 Mbits/sec
[  3]  0.0-10.4 sec  4.45 MBytes  3.58 Mbits/sec
[  5]  0.0-10.5 sec  4.06 MBytes  3.24 Mbits/sec
[  6]  0.0-10.5 sec  4.30 MBytes  3.42 Mbits/sec
[  8]  0.0-10.8 sec  3.92 MBytes  3.03 Mbits/sec
[  7]  0.0-10.9 sec  4.03 MBytes  3.11 Mbits/sec
[SUM]  0.0-10.9 sec  39.3 MBytes  30.3 Mbits/sec

 

You see my home machine currently has an uplink of 30.3 Mbit/s; that's pretty nice, since I've ordered 100Mbit/s from my ISP (unguaranteed bandwidth connection speed), and as you might know it is a standard practice for many internet providers to give an uplink speed of 1/4 of the overall provided bandwidth. 1/4 would be 25Mbit/s, meaning my ISP (Bergon.NET) is doing pretty well, providing me with even more than the promised (ordered) bandwidth.

Iperf is probably the choice of most sysadmins who have to do regular bandwidth tests between two servers in local networks, or test Internet bandwidth speed on a heterogeneous network with Linux / BSDs / AIX / HP-UX (UNIXes). On HP-UX and AIX and other UNIXes for which iperf doesn't have a port, you have to compile it yourself.
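To measure raw bandwidth between two of your own hosts instead of a public speedtest server, iperf's classic client/server mode applies (a sketch; the hostname is an example):

# on the receiving server
iperf -s

# on the measuring client, 10 parallel streams as above
iperf -c server.example.com -P 10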

If you don't have root / admin permissions on the server and there is a Python language interpreter installed, you can use the speedtest_cli.py script to test internet throughput connectivity.
speedtest_cli uses speedtest.net to test the server's up / down link; just in case the script is lost in the future, I've made a download mirror of speedtest_cli.py here.

Quickest way to test net speed with speedtest_cli.py:

 

$ lynx -dump https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py > speedtest_cli.py
$ chmod +x speedtest_cli.py
$ python speedtest_cli.py

speedtest_cli_pyhon_script_screenshot-on-gnu-linux-test-internet-network-speed-on-unix

Fix MySQL connection error – Host ” is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Wednesday, July 2nd, 2014

fix-mysql-too-many-connection-errors-explained

If you get a MySQL error like:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

This most likely means your PHP / Java / whatever programming language application connecting to MySQL is failing to authenticate with the existing (configured) credentials, or the application is opening connections to MySQL at a rate where the MySQL server can't serve all the requests.

Some common causes of too many connection errors are:
 

  • Networking Problem
  • Server itself could be down
  • Authentication Problems
  • Maximum connection errors limit reached.

The value of the max_connect_errors system variable determines how many successive interrupted connection requests are permitted to the MySQL server.
 

Well anyways if you get the:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

You can consider this a sure sign that application connections to MySQL are logging a lot of error connections, for some reason.
This error could also appear on very busy websites, where a high number of separate connections are used – I've seen the error occur on PHP websites where mysql_pconnect() is selected in favour of the proven-working mysql_connect().

The first thing to do before changing / increasing the default number of max connection errors is to check how many connection errors are currently allowed in MySQL.

For that, connect with the MySQL CLI and issue:
 

mysql> SHOW VARIABLES LIKE '%error%';


+--------------------+----------------------------+
| Variable_name      | Value                      |
+--------------------+----------------------------+
| error_count        | 0                          |
| log_error          | /var/log/mysql//mysqld.log |
| max_connect_errors | 10000                      |
| max_error_count    | 64                         |
| slave_skip_errors  | OFF                        |
+--------------------+----------------------------+


A very useful MySQL CLI command when debugging the max connection errors problem is:

mysql> SHOW PROCESSLIST;
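Also handy when hunting down where the connect errors come from are the server status counters for failed and aborted connections (standard MySQL status variables):

mysql> SHOW GLOBAL STATUS LIKE 'Aborted_connects';
mysql> SHOW GLOBAL STATUS LIKE 'Aborted_clients';

Aborted_connects counts failed attempts to connect (bad credentials, timeouts), while Aborted_clients counts connections that died without a proper close.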

 

To solve the error, try to tune in /etc/my.cnf, /etc/mysql/my.cnf or wherever my.cnf is located the [mysqld] section variables max_connect_errors and wait_timeout. Some reasonable values would be:

[mysqld]
max_connect_errors = 100000
wait_timeout = 60

If such an (anyway) high value is still not high enough, you can raise it further, e.g.:

max_connect_errors = 100000000

Also, if you want to raise the max_connect_errors variable without making it permanent (i.e. the setting will be lost after a MySQL service restart), set it from the MySQL CLI with:

SET GLOBAL max_connect_errors = 100000;


If you want to keep the default max_connect_errors setting and fix the problem temporarily, you can follow the error's

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

suggestion, and issue in a root console:

mysqladmin flush-hosts

The same can also be done from the MySQL CLI with the command:
 

FLUSH HOSTS;

Create local network between virtual machines in Virtualbox VM – Add local LAN between Linux Virtual Machines

Wednesday, June 11th, 2014

add-virtualbox-virtual-machines-inside-local-network-create-internal-LAN-local-net-linux-windows

I wanted to test MySQL Cluster following the MySQL Cluster Install Guide. For that purpose, I've installed two copies of CentOS 6.5 inside VirtualBox, and I wanted to make the two Linux hosts reachable inside a local LAN network. I consulted some colleagues who advised me to configure the two Linux hosts to use VirtualBox's Bridged Adapter networking (network configuration in VirtualBox is done on a per-Virtual-Machine basis from):
 

Devices -> Network Settings

(Attached to: Bridged Adapter)

Note!: by default the Cable Connected tick is not selected, so when imposing changes on the Network the tick should be set.
After specifying Attached to: Bridged Adapter, to make CentOS Linux refresh its network settings run in gnome-terminal:

[root@centos ~]# dhclient eth0

However, CentOS failed to obtain a DHCP IP address.
Thus I tried to assign IP addresses manually with ifconfig, hoping that at least this would work, e.g.:

on CentOS VM 1:

/sbin/ifconfig eth0 192.168.10.1 netmask 255.255.255.0

on CentOS VM 2:

/sbin/ifconfig eth1 192.168.10.2 netmask 255.255.255.0

To test whether there is a connection between the two VM hosts, I tried pinging 192.168.10.2 (from 192.168.10.1) and tested with telnet whether I can reach the remote SSH protocol port, from CentOS VM1 to CentOS VM2 and vice versa, i.e.:

[root@centos ~]# telnet 192.168.10.2 22

 

Trying 192.168.10.2…
telnet: connect to address 192.168.10.2: No route to host

Then, after checking other options, and already knowing that by using the VBox NAT network option I had access to the internet, I tried to attach standard local IP addresses to both Linuxes as virtual interfaces (e.g. eth0:0), e.g.:

On Linux VM 1:

/sbin/ifconfig eth0:0 192.168.10.1 netmask 255.255.255.0

On Linux VM 2:

/sbin/ifconfig eth1:0 192.168.10.2 netmask 255.255.255.0

Then, to test again, I used telnet:

[root@centos ~]# telnet 192.168.10.2 22

which failed just the same. Then I found VirtualBox has special Internal Networking support to choose from the Attached to drop-down menu. According to the Internal Networking VirtualBox instructions, to put two Virtual Machine hosts inside an internal network they should both be set to an Internal Network with an identical name.
P.S. It is explicitly stated that using an Internal Network will enable access between the Guest Virtual Machine OSes, but the hosts will not have access to the Internet (which in my case didn't really matter, as I needed the two Linux VMs just as a testbed).

virtualbox-create-internal-local-network-between-guest-host-Linux-Windows1

I tried this option but it didn't work for me for some reason. After some time of researching online how to create a local LAN network between two Virtual Machines, I luckily decided to test all available VirtualBox networking choices and noticed the Host-only Adapter.

Selecting Host-only Adapter and using the terminal to re-fetch an IP address over DHCP:

virtualbox-connect-in-local-lan-network-linux-and-windows-servers-hosts-only-adapter

On CentOS VM1

dhclient eth0

On CentOS VM2

dhclient eth1

assigned the two machines adjoining IPs (192.168.56.101 and 192.168.56.102).

Connection between the two IPs 192.168.56.101 and 192.168.56.102 over the TCP, UDP and ICMP protocols works; now all that is left is to install MySQL Cluster on both nodes.
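To avoid re-running dhclient by hand after every VM reboot, the interface can be set to DHCP at boot time; a minimal sketch for CentOS 6, assuming the interface is eth0:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes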

 

Start Stop Restart Microsoft IIS Webserver from command line and GUI

Thursday, April 17th, 2014

start-stop-restart-microsoft-iis-howto-iis-server-logo
For a decommissioning project just recently I had the task to stop Microsoft IIS on a Windows Server system.
If you have been into security for a while, you know well how many vulnerabilities Microsoft's IIS (Internet Information Server) webserver used to have. Nowadays things with IIS are better, but anyways it is better not to use it if possible …

No matter what the reason, if you need to make IIS stop serving web pages here is how to do it via the command line:

At Windows Command Prompt, type:

net stop WAS

If the command returns an error message, to stop it type:

net stop W3SVC

stop-microsoft-IIS-webservice
Just in case you have to start it again run:

net start W3SVC

start-restart-IIS-webserver-screenshot
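Depending on the IIS version, another common way to stop / start / restart the whole IIS stack from the command prompt is the iisreset utility, available on standard IIS installs:

iisreset /stop
iisreset /start
iisreset /restart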

For those who prefer to do it from the GUI, launch the services.msc command from Windows Run:

> services.msc

services-msc-stop-microsoft-iis-webserver

In the list of services look up
IIS Admin Service and HTTP SSL:
a) Click over it with the right mouse button -> Properties
b) Set Startup type to Manual
c) Click the Stop button

You're done, now IIS is stopped. To make sure it is stopped you can run from cmd.exe:

telnet localhost 80

When it is not working you should get 'Could not open connection to the host. on port 80: Connection failed', like shown in the screenshot above.

Fix temporary DNS problems on Windows – ipconfig /flushdns

Monday, March 17th, 2014

fix temporary dns issues ipconfig /fush
My internet connection comes through a strange Belarusian ADSL modem "Промсвязь" (Promsvyaz). This device serves as an ADSL modem and a wireless router.
promsviaz_belarusian_adsl_and_wifi_modem

Periodically I'm experiencing issues with DNS, where there is internet connectivity but DNS resolving stops working, even though in ipconfig /all I can see the DNS settings are proper:

C:\Users\hipo> ipconfig /all

...
Wireless LAN adapter Wireless Network Connection:

   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Intel(R) Centrino(R) Ultimate-N 6300 AGN
   Physical Address. . . . . . . . . : 3C-A9-F4-4C-E7-98
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::5d2f:97b8:9e1a:2b13%63(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.100.2(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : March 17, 2014 16:57:40 PM
   Lease Expires . . . . . . . . . . : March 18, 2014 16:57:40 PM
   Default Gateway . . . . . . . . . : 192.168.100.1
   DHCP Server . . . . . . . . . . . : 192.168.100.1
   DHCPv6 IAID . . . . . . . . . . . : 1094494708
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-19-CB-1A-5D-A4-5D-36-5A-EB-84

   DNS Servers . . . . . . . . . . . : 8.8.8.8
                                       192.168.100.1
                                       8.8.4.4

   NetBIOS over Tcpip. . . . . . . . : Enabled

 

The fix for the situation used to be to restart the Промсвязь router and restart my notebook. As this fix takes a lot of time, I found another workaround: make Windows flush its DNS resolver cache (forget the cached DNS records and look them up again):

C:\Users\hipo> ipconfig /flushdns

Windows IP Configuration Successfully flushed the DNS Resolver Cache.


 

Other useful commands to re-initiate the connection completely are:

ipconfig /renew

which renews all adapter settings (LAN, WiFi, PPP etc.) – re-assigns IPs / re-initiates connections, and

ipconfig /release

which releases the DHCP-assigned addresses of any established connection.
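Also useful before flushing is inspecting what is currently sitting in the resolver cache, with the standard ipconfig switch:

C:\Users\hipo> ipconfig /displaydns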

Sermon on how the Christian can live without Fear – Repent ! Pope Shenouda III Coptic Patriarch

Tuesday, April 9th, 2013

I'm a Christian, and even though I'm trying to live as a Christian it is not always working. I'm a human, and like all of us I have my fears. Thus, by God's grace, today while looking on Youtube at what Coptic Christians say on fear, I ended up with a sermon from His Holiness Pope Shenouda III. He explains very well the connection between fear and the lack of Christian repentance. Below is the video; I hope some Christian out there can gain by watching it.
 

HH pope Shenouda Sermon Old ''A Life Without Fear''

Fixing 127.0.0.1 – – “OPTIONS * HTTP/1.0” 200 136 “-” “Apache (internal dummy connection)” / ::1 – – [-.. :- .. +0200] “OPTIONS * HTTP/1.0” 200 Apache access.log junk records

Saturday, December 1st, 2012

If you're on Debian Linux and you have played with the mpm_prefork_module MinSpareServers and MaxSpareServers directives, it is very likely your Apache access.log ends up with plenty of junk messages like:

127.0.0.1 – – [25/Nov/2012:06:27:21 +0200] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"
127.0.0.1 – – [25/Nov/2012:06:27:21 +0200] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"
127.0.0.1 – – [25/Nov/2012:06:27:21 +0200] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"
… (the same line repeated dozens of times)

It was quite unexplainable to me what was causing all these messages. I've seen plenty of posts on the Internet discussing it, but most are somehow outdated, and the suggested solutions to the weirdly logged internal dummy connection messages did not work well for me.

I would not care so much about the message, if only it did not create a lot of bulk records in my logs, which, when later compressed, just take up useless disk space; besides that, it makes following the Apache log with:

# tail -f  /var/log/apache2/access.log

hardly readable.

  • One of the many posts suggested a solution with mod_rewrite rules. It claims adding the following rules to .htaccess or to the Apache config files (vhost confs when there are multiple vhost domains) helps:

RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
RewriteRule .* - [F,L]

You can read the full article here.
I've tested these rules, and though I might have been doing something wrong, they proved not to work for me. Besides that, even if they worked I would not apply such a fix, as it would create useless extra load on each incoming Apache connection.

 

As a second solution, as I found on stackoverflow's website, one can add in the Apache / vhost configs:

<Limit OPTIONS>
    Order allow,deny
    Deny from all
</Limit>

I tested this as well, but it does not work either. I've seen a bunch of other posts and none seemed to be working, until I finally came across Linux Guru's blog, which was discussing a similar issue and suggesting a fix. The post discusses Apache access.log being filled with messages like:

::1 – – [13/Mar/2008:09:05:13 +0200] "OPTIONS * HTTP/1.0" 200

which are almost the same, except instead of 127.0.0.1 they come from its IPv6 equivalent ::1. The blog-provided solution is to use:

SetEnvIf Remote_Addr "::1" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog

What this does is completely clear up all occurrences of ::1 in /var/log/apache2/access.log. It uses the Apache internal directive SetEnvIf Remote_Addr "::1" dontlog to "bind" ::1 to the dontlog variable, and then, after the usual log location definition – e.g. CustomLog /var/log/apache2/access.log combined – it instructs the environment not to log requests matching the dontlog variable, i.e. env=!dontlog.

Following the same logic, to get rid of the so annoying:

127.0.0.1 – – [25/Nov/2012:06:27:21 +0200] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"

I used as a solution adding:

SetEnvIf Remote_Addr "127.0.0.1" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog

to /etc/apache2/sites-available/000-default (the default virtualhost), with the CustomLog directive; for more domains and more VirtualHost CustomLog definitions it might be necessary to add it to all vhosts too.
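For a vhost that answers on both the IPv4 and IPv6 loopback, the two rules can simply be combined (a sketch based on the directives above; SetEnvIf Remote_Addr takes a regex, hence the escaped dots):

SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog
SetEnvIf Remote_Addr "::1" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog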

This solution to the server making requests to itself is also described on Apache's wiki – check the httpd wiki here.

As I've read further, it appeared the same Internal Dummy Connection flood is experienced on CentOS Linux too, and the SetEnvIf method works well there too – you can read the post here.

Another possible solution, though this didn't work for me, is to just play with the settings of MinSpareServers and MaxSpareServers in apache2.conf (or httpd.conf on RedHats and BSDs).

There are plenty of things written on the problem and it is really confusing to read about it, as most of the people writing about it were looking for a quick fix and thus just dropped a few lines on what worked for them, without much detail on the exact OS and Apache version.

The reason why:

127.0.0.1 – – [25/Nov/2012:06:27:21 +0200] "OPTIONS * HTTP/1.0" 200 136 "-" "Apache (internal dummy connection)"

appears in the log is that in the Apache 2.x series the developers changed the parent Apache controlling process to send periodic requests to its idling children, just to check that the children are still alive. This is done, in a somewhat inefficient method IMHO, by sending those dummy connection requests.

Maybe a better and more thorough explanation of what the Dummy Internal Connection is and what causes it can be found at the blog of another Bulgarian fellow, Valery Dachev – you can read his explanation there.

On a couple of occasions I've experienced very high server loads, like a load average of 180, etc. I have some suspicion that these super high loads are somehow caused by the Internal Dummy Connection thing too, though I'm not sure if my assumptions are correct. It could be I have messed up something with MaxSpareServers / MinSpareServers, or just that the hardware on the host is unable to process sudden traffic peaks. I've read online about other people who complain of similar overloads and also complain about the Internal Dummy Connection. But as far as my little research goes, I couldn't find anyone who knows anything certain about that. If some of the readers of this post have an idea on that, please drop a comment!

Well, that's it. I hope my little blog post sheds some more light on the topic, and let's hope in future Apache versions the developers will come up with a less resource-hungry method to do internal dummy checks, for example by sending a SIGUSR signal.

Fix QMAIL mail server – “warning: dropping connection, unable to SSL accept:protocol error” and why error occurs

Friday, August 24th, 2012

Every time I have to modify something on a productive QMAIL server install, I end up with some kind of unexplainable problems which create huge issues for the clients. One more time I ended up with errors after minor “innocent” modifications of a QMAIL that had been working for more than a year ….

After the last changes I made to a combined Thibs / QmailRocks qmail install, in the daemontools qmail start-up files:

/var/qmail/supervise/qmail-smtpd/run
/var/qmail/supervise/qmail-smtpdssl/run

In both files I changed the variable SSL=0 to SSL=1:

After making the change I restarted qmail, tested sending emails, and all looked well. Therefore I thought everything worked as usual, e.g. e-mails were properly sent and respectively received by the mail server….

So far so good, until just today, when I received an urgent phone call in which my employer reported severe problems with receiving emails.
Mails sent from Gmail or Yahoo to our mail server could not be received and bounced with delivery failure errors …
First I was a bit sceptical, hoping that maybe the errors were not caused by the mail server, but after trying to send an email to the mail server in question myself, the reported problem proved true.
As always when there are errors with QMail, I checked what the logs say about the problem, just to find in the /var/log/qmail-smtpd/current log the following err:

@4000000050374a0e1a5dc374 sslserver: warning: dropping connection, unable to SSL accept:protocol error

I’ve dug through almost all the primary forums, threads and blog posts online, but nowhere could I find anything meaningful as an explanation for the error, so I was forced to look for a solution myself.
Obviously there was an error with SSL, so my first thought was to check whether all is fine with the permissions of servercert.pem and clientcert.pem. The permissions of the two files were as follows:

ls -al /var/qmail/control/clientcert.pem
-rw-r----- 1 root qmail 2136 2011-10-10 13:23 /var/qmail/control/clientcert.pem
ls -al /var/qmail/control/servercert.pem
-rw-r--r-- 1 qmaild qmail 2311 2011-10-12 13:21 /var/qmail/control/servercert.pem

At first glimpse I was suspicious about the permissions of /var/qmail/control/clientcert.pem, but after checking other QMail servers which worked just fine, I was sure the problem was not rooted in the clientcert.pem permissions.

As you can guess, the other suspected failure point was the previous day's change of SSL=0 to SSL=1 in /var/qmail/supervise/qmail-smtpd/run and /var/qmail/supervise/qmail-smtpdssl/run. On that account, I immediately reverted the setting back to SSL=0 and then restarted QMAIL.

The usual way to restart QMAIL is via qmailctl, but since so often qmailctl does not reload qmail's current settings, I also had to refresh the running qmail binaries, both by stopping qmail with qmailctl stop and through /etc/inittab, by commenting out the line dealing with daemontools' svscanboot:

Hence I first stopped all running qmail processes via the control script:

# /usr/bin/qmailctl stop

Then commented out the line:

SV:123456:respawn:/usr/bin/svscanboot

to:

#SV:123456:respawn:/usr/bin/svscanboot

Onwards, reloaded inittab with the command:

# /sbin/init q

Right onwards, I uncommented the line back:

#SV:123456:respawn:/usr/bin/svscanboot

to:

SV:123456:respawn:/usr/bin/svscanboot

And loaded up daemontools (svscanboot) via inittab by issuing:

# /sbin/init q

Finally, I had to start the QMAIL processes:

# qmailctl restart
...

Reverting SSL=1 back to SSL=0 in the daemontools service scripts, however, created other problems for clients, because any existing clients which used encrypted connections to the SMTP server via SSL were rendered unable to send mails anymore, with error messages like:

Cannot establish SSL with SMTP server xx.xxx.xxx.xxx:465, SSL_connect error 336031996.

To work around this issue I had to once again switch SSL on (set SSL=1) in /var/qmail/supervise/qmail-smtpdssl/run and leave SSL switched off in /var/qmail/supervise/qmail-smtpd/run.

Even after doing these changes, for about 20 minutes, though I restarted QMAIL multiple times, qmail continued having issues receiving mails, logging the same shitty:

@4000000050374a0e1a5dc374 sslserver: warning: dropping connection, unable to SSL accept:protocol error

After multiple restarts, “magically” the stupid server figured out it should load my changed setting in qmail-smtpdssl/run (before it finally worked I probably had to restart it 20 times using qmailctl stop; qmailctl start ….)

I’ve figured out that, as a good practice, one should put a delay between the qmailctl stop and qmailctl start commands, so in restarts I used a little 3-second sleep in between, like so:

# qmailctl stop; killall -9 multilog; sleep 3; qmailctl start

Also, killing multilog (killall -9 multilog) is good practice, because often, despite the restarts, server logging is not refreshed …

Something else that might be important are the AUTH settings in qmail-smtpd/run and qmail-smtpdssl/run; in this finally working qmail they are:

AUTH=1
REQUIRE_AUTH=0
ALLOW_INSECURE_AUTH=0
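A quick way to verify whether the SMTPS listener really accepts SSL handshakes after such changes, without waiting for real clients to complain, is openssl's built-in client (the hostname here is just an example):

# should print the certificate chain and end with the SMTP banner
openssl s_client -connect mail.example.com:465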

Hope this post helps someone to solve the same crazy error …
Cheers