Posts Tagged ‘port’

How to do a port redirect to localhost service with socat or ncat commands to open temporary access to service not seen on the network

Friday, February 23rd, 2024

socat-simple-redirect-tcp-port-on-linux-bsd-logo

Sometimes it is necessary to easily and temporarily redirect network TCP ports, either to make a service accessible from an internal DMZ-ed network via some local network IP, or, if the computer system is Internet based and has an external "real" Internet Class A / B address, to make it reachable directly from the Internet via, let's say, a modern browser such as Mozilla Firefox or Google Chrome.

Such redirects are easy to do with iptables if you want the port redirect to be permanent, implemented as firewall rule changes on a Linux router.
One way to create a TCP port redirect with the firewall would include a few iptables rules, for example:

1. Redirect port traffic from external TCP port source to internal one

# iptables -t nat -I PREROUTING -p tcp --dport 10000 -j REDIRECT --to-ports 80
# iptables -t nat -I OUTPUT -p tcp -o lo --dport 10000 -j REDIRECT --to-ports 80
# iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.50:10000
# iptables -t nat -I OUTPUT --source 0/0 --destination 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 10000


Then you will have a 192.168.0.50:10000 listener (assuming that the IP is already configured on one of the host's network interfaces, plugged in to the network).

But messing with the firewall is not the best thing to do, especially if you only need to temporarily redirect an external listener port to a service configured on the server to run only on a TCP port on loopback address 127.0.0.1. In that case you can do it instead with another command or small script, for simplicity.

One simple way to do a port redirect on the fly on GNU / Linux or FreeBSD / OpenBSD is with the socat command.

Let's say you have a running statistics interface of a web server (Apache / Nginx / Haproxy frontend / backend stats) or whatever kind of web TCP service on port 80 on your server, and this interface is on purpose configured to be reachable only on the localhost interface, port 80. You can then either access it by creating an SSH tunnel towards the service on 127.0.0.1, or by redirecting the traffic towards another external TCP port, let's say 10000.

Here is how you can achieve the latter.

2. Redirect port 10000 on all configured server network interfaces to 127.0.0.1 TCP port 80 with socat

# socat tcp-l:10000,fork,reuseaddr tcp:127.0.0.1:80
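If you would rather expose the redirect only on one particular interface IP instead of on all of them, socat can also bind the listener to a specific address. A minimal sketch, assuming the example IP from the iptables section above is configured on the host:

# socat tcp-l:10000,bind=192.168.0.50,fork,reuseaddr tcp:127.0.0.1:80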

If you later need to access the redirected port in a browser, pick the machine's first configured IP and open it in the browser (assuming there is no firewall filter prohibiting access to the redirected port).

root@pcfreak:~# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 109.104.212.130  netmask 255.255.255.0  broadcast 109.104.212.255
        ether 91:f8:51:03:75:e5  txqueuelen 1000  (Ethernet)
        RX packets 652945510  bytes 598369753019 (557.2 GiB)
        RX errors 0  dropped 10541  overruns 0  frame 0
        TX packets 619726615  bytes 630209829226 (586.9 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Then in a browser open http://109.104.212.130:10000 or https://109.104.212.130:10000 (depending on whether the remote service has SSL encryption enabled or not) and you're done, the configured listener service should pop up on the screen.

3. Redirect IP traffic from an external IP to the localhost loopback interface with netcat ( ncat ), the swiss army knife tool of hackers and sysadmins

If you need to redirect, let's say, TCP port 8000 to a service locally bound on the server on TCP port 80 with ncat instead of socat (say, if socat is not pre-installed on the machine), you can do it by simply running these two commands:

[root@server ~]# mkfifo svr1_to_svr2
[root@server ~]# ncat -vk -l 8000 < svr1_to_svr2 | ncat 127.0.0.1 80 > svr1_to_svr2
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on 0.0.0.0:8000
Ncat: Connection from 10.10.258.39.
Ncat: Connection from 10.10.258.39:51813.
Ncat: Connection from 10.10.258.39.
Ncat: Connection from 10.10.258.39:23179.

 

If you don't care to log what is going on in the background of the connection and you simply want to background the process with a one-liner command, you can achieve that with:


[root@server /tmp]# cd /tmp; mkfifo svr1_to_svr2; (ncat -vk -l 8000 < svr1_to_svr2 | ncat 127.0.0.1 80 > svr1_to_svr2 &)
 

Then you can open the internal machine's port 80 TCP service on port 8000 in a browser as usual.
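As a side note, newer ncat builds can do the same without the named pipe, by spawning a second ncat per incoming connection via --sh-exec. A rough equivalent of the above, assuming your ncat version supports the option:

[root@server ~]# ncat -vk -l 8000 --sh-exec "ncat 127.0.0.1 80"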

For those who want a somewhat more sophisticated proxy-like script, I would suggest you take a look at using netcat and a few lines of shell script loop, which can simulate a raw and very primitive proxy; this is demonstrated in my previous article Create simple proxy server with netcat ( nc ) based utility.

Hope this article is helpful to someone. There are plenty of other ways to do a port redirect with, let's say, perl, python and perhaps other micro tools. If you know of one-liners or small scripts that do it, please share in the comments, so we can learn from each other!

Enjoy ! 🙂
 

How to disable haproxy log for certain frontend / backend or stop haproxy logging completely

Wednesday, September 14th, 2022

haproxy-disable-logging-for-single-frontend-or-backend-or-stop-message-logging-completely-globally

In my previous article I've shortly explained how it is possible to configure multiple haproxy instances to log into separate log files, as well as how to configure a specific frontend to log into a separate file. Sometimes it is simply unnecessary to keep any kind of log file for haproxy, either to spare disk space or for anonymity of the traffic. Hence in this tiny article I will explain how to disable haproxy logging globally and how logging for a certain frontend or backend can be stopped.

1. Disable globally logging of haproxy service
 

Disabling logging globally for haproxy, in case you don't need the log, is achieved by redirecting the log variable to the /dev/null handler. To also mute the recurring alert, notice and info messages that are produced during some extraordinary events such as start / stop of haproxy or missing backends, you can send those messages to the local0 and local1 facilities, which will then be discarded by the rsyslogd configuration. For example, this can be achieved with a configuration like:
 

global
    log /dev/log    local0 info alert
    log /dev/log    local1 notice alert

defaults
    log global
    mode http
    option httplog
    option dontlognull
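For completeness, the rsyslog side that actually throws those local0 / local1 messages away could look something like the snippet below; the file name is just an assumption, adapt it to how your rsyslog.d is organized (on older rsyslog versions the discard action is written as ~ instead of stop):

# /etc/rsyslog.d/49-haproxy-discard.conf (example file name)
local0.*    stop
local1.*    stop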

 

<level>    is optional and can be specified to filter outgoing messages. By
           default, all messages are sent. If a level is specified, only
           messages with a severity at least as important as this level
           will be sent. An optional minimum level can be specified. If it
           is set, logs emitted with a more severe level than this one will
           be capped to this level. This is used to avoid sending "emerg"
           messages on all terminals on some default syslog configurations.
           Eight levels are known :
             emerg  alert  crit   err    warning notice info  debug

         

By using the log level you can also tell haproxy to omit errors from the log, if for some reason haproxy receives a lot of errors and this is flooding your logs, like this:

    backend Backend_Interface
  http-request set-log-level err
  no log


But sometimes you might need to disable it for a single frontend only, and then comes the question:


2. How to disable logging for a single frontend interface?

I thought that might be more complex, but it is pretty easy with the option dontlog-normal haproxy.cfg directive:

Here is a sample configuration with a frontend and backend, showing how to instruct the haproxy frontend to stop logging normal (successful) traffic:
 

frontend ft_Frontend_Interface
#        log  127.0.0.1 local4 debug
        bind 10.44.192.142:12345
       
option dontlog-normal
        mode tcp
        option tcplog

              timeout client 350000
        log-format [%t]\ %ci:%cp\ %fi:%fp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        default_backend bk_WLP_echo_port_service

backend bk_Backend_Interface
                        timeout server 350000
                        timeout connect 35000
        server serverhost1 10.10.192.12:12345 weight 1 check port 12345
        server serverhost2 10.10.192.13:12345 weight 3 check port 12345

 


As you can see from this config, we have also enabled check port 12345, which is the application service port: if something goes wrong with the application and 12345 stops responding, the respective server gets excluded automatically by haproxy and only one of the machines will serve. The weight tells haproxy which server has preference to serve the traffic; with a 1:3 weight ratio, one request will end up on one machine and three requests on the other.


3. How to stop a single configured backend from logging anything but still keep a log for the frontend
 

Omit option dontlog-normal from the frontend; inside the backend just set no log:

backend bk_Backend_Interface
                       
 no log
                        timeout server 350000
                        timeout connect 35000
        server serverhost1 10.10.192.12:12345 weight 1 check port 12345
        server serverhost2 10.10.192.13:12345 weight 3 check port 12345

That's all. Reload the haproxy service on the machine and the backend will no longer log to your default configured log file via the respective local0 – local6 facility.

Fix weird double logging in haproxy.log file due to haproxy.cfg misconfiguration

Tuesday, March 8th, 2022

haproxy-logging-front-network-back-network-diagram

While we were building a new machine that will serve as a Haproxy proxy frontend to tunnel some traffic to a number of backends, we came across a weird oddity. The call requests sent to haproxy and redirected to the backend servers were being written in the log twice.

Since we have two backend Application servers that are serving the requests, my first guess was that this is caused by haproxy trying to connect to both nodes on each sent request, or that the double logging is caused by rsyslogd doing something strange on each received query. The rsyslog configuration, set to receive the haproxy messages via the local6 facility, is like this:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
#2022/02/02: HAProxy logs to local6, save the messages
local6.*                                                /var/log/haproxy.log


The haproxy basic global, defaults and frontend section config is like this:
 

global
    log          127.0.0.1 local6 debug
    chroot       /var/lib/haproxy
    pidfile      /run/haproxy.pid
    stats socket /var/lib/haproxy/haproxy.sock mode 0600 level admin
    maxconn      4000
    user         haproxy
    group        haproxy
    daemon

 

defaults
    mode        tcp
    log         global
#    option      dontlognull
#    option      httpclose
#    option      httplog
#    option      forwardfor
    option      redispatch
    option      log-health-checks
    timeout connect 10000 # default 10 second time out if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3
 

listen FRONTEND1
        bind 10.71.80.5:63750
        mode tcp
        option tcplog
        log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server backend-host1 10.80.50.1:13750 weight 1 check port 15000
        server backend-host2 10.80.50.2:13750 weight 2 check port 15000

After quick research online on why this odd double logging of requests ends up in /var/log/haproxy.log, it turns out it is caused by log global being defined twice, once under the defaults section and once in the frontend itself. Hence, to resolve it, I simply had to comment out the log global line in the frontend, so it looks like this:

listen FRONTEND1
        bind 10.71.80.5:63750
        mode tcp
        option tcplog
  #      log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server backend-host1 10.80.50.1:13750 weight 1 check port 15000
        server backend-host2 10.80.50.2:13750 weight 2 check port 15000

 


Next, I just reloaded haproxy and each request started leaving only one trail inside haproxy.log, as expected 🙂
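For reference, on a systemd managed machine a safe way to apply such a haproxy.cfg change is to validate the file first and only then reload, something like (the path is the usual default, adjust if yours differs):

# haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy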
 

Disable VNC on KVM Virtual Machine without VM restart / How to Change VNC listen address

Monday, February 28th, 2022

disable-vnc-port-listener-on-a-KVM-ran-virtual-machine-virsh-libvirt-libvirt-architecture-design

Say you have recently set up a new KVM Virtual Machine and have connected via VNC on, let's say, the default TCP port 5900 to install a brand new Linux OS, using a VNC client such as:
TightVNC / RealVNC if connecting from a Windows client machine, or vncviewer / Remmina if connecting from Linux / BSD. Now
you want to turn off the VM's VNC listener, either for security reasons, to make sure some random script kiddie scanner does not manage to connect and take control over your VM, or simply because you will only be using the newly configured VM via SSH console sessions, running it as what is nowadays fashionably called a headless server (a server machine connected to the network without a physical monitor attached to it).


The question then is: how can the KVM VNC listener on TCP port 5900 be completely disabled?

One way of course is to filter out port 5900 completely with a firewall, either on switch level (let's say on a Cisco Catalyst in front of the machine) or, as the worst solution, to filter it locally directly on the server with firewalld or iptables chain rules.
 

1. Disable KVM VNC Port listener via VIRSH VM XML edit

The better way of course is to completely disable VNC in KVM itself, which is possible through the virsh command interface,
by editing the Virtual Machine XML configuration and finding the line about the vnc configuration with:

root@server:/kvm/disk# virsh edit pcfreakweb
Domain pcfreakweb XML configuration not changed.

like:

<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>


and set value to undefined:

port='-1'


virsh-KVM-disable-VNC-port-listener-virsh-xml-edit-screenshot

Modifying the XML, however, will require you to reboot the Virtual Machine whose XML was edited. This might not be possible
if you have a running production server already configured with Apache / Proxy / PostgreSQL / Mail or any other public Internet service.

2. Disable VNC KVM TCP port 5900 to a dynamic running VM without a machine reboot


Thus, if you want to remove the KVM VNC port listener on 5900 without a VM shutdown / reboot, you can do it via KVM's virsh client interface.

root@server:/kvm/disk# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # qemu-monitor-command pcfreakweb --hmp change vnc none
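To double check that the listener is really gone, you can query the same QEMU monitor again; the domain name is the one from the example above and the exact output will vary per QEMU version:

virsh # qemu-monitor-command pcfreakweb --hmp info vnc

Alternatively, simply verify that nothing is listening on TCP 5900 anymore on the hypervisor, e.g. with netstat -tlpn | grep 5900.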

 

The virsh management interface can do many more real time VM changes. It is really useful if you run KVM Hypervisor hosts with 10+ Virtual Machines and have to deal with KVM machines daily: change how VM networks are configured, get information on HV hardware, configure / reconfigure storage volumes of VMs etc. Take some time to play with it 🙂

Hack: Using ssh / curl or wget to test TCP port connection state to remote SSH, DNS, SMTP, MySQL or any other listening service in PCI environment servers

Wednesday, December 30th, 2020

using-curl-ssh-wget-to-test-tcp-port-opened-or-closed-for-web-mysql-smtp-or-any-other-linstener-in-pci-linux-logo

If you work on PCI high security environment servers in isolated local networks, where every package installed on the Linux / Unix system is of importance, it is pretty common that some basic tools are missing; in most cases it is considered a security hole to even have a simple telnet installed on the system. I do have experience with such environments myself, and it is pretty daunting stuff, so in the best case you can use something like a simple ssh client, if you're lucky and the CentOS / Redhat / Suse Linux or whatever distro has the openssh-client package installed.
If you're lucky enough to have ssh on board, you can use it in the same manner as telnet, netcat or the swiss army knife nmap network mapper tool, to test whether a remote TCP service / port is opened or not. This is often useful if you don't have access to the Cisco / Juniper or other network / firewall equipment which sets the boundaries and security port restrictions between networks and servers.

Below is an example of how to use the ssh client to test port connectivity to, let's say, the Internet, i.e. the Google / Yahoo search engines.
 

[root@pciserver: /home ]# ssh -oConnectTimeout=3 -v google.com -p 23
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 23.
debug1: connect to address 172.217.169.206 port 23: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 23.
debug1: connect to address 2a00:1450:4017:80b::200e port 23: Cannot assign requested address
ssh: connect to host google.com port 23: Cannot assign requested address
root@pcfreak:/var/www/images# ssh -oConnectTimeout=3 -v google.com -p 80
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:807::200e] port 80.
debug1: connect to address 2a00:1450:4017:807::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80
ssh_exchange_identification: Connection closed by remote host
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=3
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 80.
debug1: connect to address 2a00:1450:4017:80b::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [142.250.184.142] port 80.
debug1: connect to address 142.250.184.142 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80c::200e] port 80.
debug1: connect to address 2a00:1450:4017:80c::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type 0
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: ssh_exchange_identification: HTTP/1.0 400 Bad Request
debug1: ssh_exchange_identification: Content-Type: text/html; charset=UTF-8
debug1: ssh_exchange_identification: Referrer-Policy: no-referrer
debug1: ssh_exchange_identification: Content-Length: 1555
debug1: ssh_exchange_identification: Date: Wed, 30 Dec 2020 14:13:25 GMT
debug1: ssh_exchange_identification:
debug1: ssh_exchange_identification: <!DOCTYPE html>
debug1: ssh_exchange_identification: <html lang=en>
debug1: ssh_exchange_identification:   <meta charset=utf-8>
debug1: ssh_exchange_identification:   <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
debug1: ssh_exchange_identification:   <title>Error 400 (Bad Request)!!1</title>
debug1: ssh_exchange_identification:   <style>
debug1: ssh_exchange_identification:     *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 10
debug1: ssh_exchange_identification: 0% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.g
debug1: ssh_exchange_identification: oogle.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0
debug1: ssh_exchange_identification: % 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_
debug1: ssh_exchange_identification: color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
debug1: ssh_exchange_identification:   </style>
debug1: ssh_exchange_identification:   <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
debug1: ssh_exchange_identification:   <p><b>400.</b> <ins>That\342\200\231s an error.</ins>
debug1: ssh_exchange_identification:   <p>Your client has issued a malformed or illegal request.  <ins>That\342\200\231s all we know.</ins>
ssh_exchange_identification: Connection closed by remote host

 

Here is another example of how to test whether a certain service, such as DNS (bind) or telnetd, is enabled and listening on a remote local network IP, with ssh:

[root@pciserver: /home ]# ssh 192.168.1.200 -p 53 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 53.
debug1: connect to address 192.168.1.200 port 53: Connection timed out
ssh: connect to host 192.168.1.200 port 53: Connection timed out

[root@server: /home ]# ssh 192.168.1.200 -p 23 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 23.
debug1: connect to address 192.168.1.200 port 23: Connection timed out
ssh: connect to host 192.168.1.200 port 23: Connection timed out


But what if the Linux server you have to work on is so paranoid that even the ssh client is absent? Well, you can use anything else capable of connecting to a remote port, such as wget or curl. Web servers or application servers usually have wget or curl, as they are an integral part of various local shell scripts needed for proper service functioning, or are simply used to test local or remote listener services. If that's the case, we can use curl to connect and get the output of a remote service, simulating a normal telnet connection, like this:

host:~# curl -vv 'telnet://remote-server-host5:22'
* About to connect() to remote-server-host5 port 22 (#0)
*   Trying 10.52.67.21… connected
* Connected to aflpvz625 (10.52.67.21) port 22 (#0)
SSH-2.0-OpenSSH_5.3

Now let's test whether we can connect remotely to a local network IP's Qmail mail server with curl's telnet simulation mode:

host:~#  curl -vv 'telnet://192.168.0.200:25'
* Expire in 0 ms for 6 (transfer 0x56066e5ab900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x56066e5ab900)
* Connected to 192.168.0.200 (192.168.0.200) port 25 (#0)
220 This is Mail Pc-Freak.NET ESMTP

Fine, it works. Let's now test whether a remote server which has a MySQL listener service on the standard MySQL TCP port 3306 is reachable with curl:

host:~#  curl -vv 'telnet://192.168.0.200:3306'
* Expire in 0 ms for 6 (transfer 0x5601fafae900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5601fafae900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 107)
* Closing connection 0

As you can see, the remote connection returns binary data, which is unknown to a standard telnet terminal, so to get the received output we need to pass the arguments curl suggests.

host:~#  curl -vv 'telnet://192.168.0.200:3306' --output -
* Expire in 0 ms for 6 (transfer 0x55b205c02900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55b205c02900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
g


The same curl trick can be used to troubleshoot a remote port on a remote host from a Windows OS machine, which does not have telnet installed by default but does have curl.

Also, when troubleshooting vSphere Replication, it is often necessary to check port connectivity while the common Windows utilities are not available, and curl is available in the VMware vCenter Server Appliance command line interface.

On servers where curl is not present but wget is installed, you can use wget to test a remote port as well:

 

# wget -vv -O /dev/null http://google.com:554 --timeout=5
--2020-12-30 16:54:22--  http://google.com:554/
Resolving google.com (google.com)… 172.217.169.206, 2a00:1450:4017:80b::200e
Connecting to google.com (google.com)|172.217.169.206|:554… failed: Connection timed out.
Connecting to google.com (google.com)|2a00:1450:4017:80b::200e|:554… failed: Cannot assign requested address.
Retrying.

--2020-12-30 16:54:28--  (try: 2)  http://google.com:554/
Connecting to google.com (google.com)|172.217.169.206|:554… ^C

As evident from the output, port 554 is filtered towards Google, which is pretty normal.

If neither curl nor wget is there either, as a final alternative you can use some perl, ruby, python or bash script that opens a remote socket to the remote IP.
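For example, bash itself can open a TCP connection through its /dev/tcp pseudo-device (a bash feature, not available in plain sh), so a rough port probe needs nothing but bash and coreutils. The IP and port below are only illustrative values reused from the examples above:

# timeout 5 bash -c ': > /dev/tcp/192.168.1.200/25' && echo "port 25 open" || echo "port 25 closed or filtered"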

Maximal protection against SSH attacks, if your server has to stay with its SSH (Secure Shell) port open to the world

Thursday, April 7th, 2011

Brute Force Attack SSH screen, Script kiddie attacking
If you’re a a remote Linux many other Unix based OSes, you have defitenily faced the security threat of many failed ssh logins or as it’s better known a brute force attack

During such attacks your /var/log/messages or /var/log/auth.log gets filled with various failed password logs, like for example:

Feb 3 20:25:50 linux sshd[32098]: Failed password for invalid user oracle from 95.154.249.193 port 51490 ssh2
Feb 3 20:28:30 linux sshd[32135]: Failed password for invalid user oracle1 from 95.154.249.193 port 42778 ssh2
Feb 3 20:28:55 linux sshd[32141]: Failed password for invalid user test1 from 95.154.249.193 port 51072 ssh2
Feb 3 20:30:15 linux sshd[32163]: Failed password for invalid user test from 95.154.249.193 port 47481 ssh2
Feb 3 20:33:20 linux sshd[32211]: Failed password for invalid user testuser from 95.154.249.193 port 51731 ssh2
Feb 3 20:35:32 linux sshd[32249]: Failed password for invalid user user from 95.154.249.193 port 38966 ssh2
Feb 3 20:35:59 linux sshd[32256]: Failed password for invalid user user1 from 95.154.249.193 port 55850 ssh2
Feb 3 20:36:25 linux sshd[32268]: Failed password for invalid user user3 from 95.154.249.193 port 36610 ssh2
Feb 3 20:36:52 linux sshd[32274]: Failed password for invalid user user4 from 95.154.249.193 port 45514 ssh2
Feb 3 20:37:19 linux sshd[32279]: Failed password for invalid user user5 from 95.154.249.193 port 54262 ssh2
Feb 3 20:37:45 linux sshd[32285]: Failed password for invalid user user2 from 95.154.249.193 port 34755 ssh2
Feb 3 20:38:11 linux sshd[32292]: Failed password for invalid user info from 95.154.249.193 port 43146 ssh2
Feb 3 20:40:50 linux sshd[32340]: Failed password for invalid user peter from 95.154.249.193 port 46411 ssh2
Feb 3 20:43:02 linux sshd[32372]: Failed password for invalid user amanda from 95.154.249.193 port 59414 ssh2
Feb 3 20:43:28 linux sshd[32378]: Failed password for invalid user postgres from 95.154.249.193 port 39228 ssh2
Feb 3 20:43:55 linux sshd[32384]: Failed password for invalid user ftpuser from 95.154.249.193 port 47118 ssh2
Feb 3 20:44:22 linux sshd[32391]: Failed password for invalid user fax from 95.154.249.193 port 54939 ssh2
Feb 3 20:44:48 linux sshd[32397]: Failed password for invalid user cyrus from 95.154.249.193 port 34567 ssh2
Feb 3 20:45:14 linux sshd[32405]: Failed password for invalid user toto from 95.154.249.193 port 42350 ssh2
Feb 3 20:45:42 linux sshd[32410]: Failed password for invalid user sophie from 95.154.249.193 port 50063 ssh2
Feb 3 20:46:08 linux sshd[32415]: Failed password for invalid user yves from 95.154.249.193 port 59818 ssh2
Feb 3 20:46:34 linux sshd[32424]: Failed password for invalid user trac from 95.154.249.193 port 39509 ssh2
Feb 3 20:47:00 linux sshd[32432]: Failed password for invalid user webmaster from 95.154.249.193 port 47424 ssh2
Feb 3 20:47:27 linux sshd[32437]: Failed password for invalid user postfix from 95.154.249.193 port 55615 ssh2
Feb 3 20:47:54 linux sshd[32442]: Failed password for www-data from 95.154.249.193 port 35554 ssh2
Feb 3 20:48:19 linux sshd[32448]: Failed password for invalid user temp from 95.154.249.193 port 43896 ssh2
Feb 3 20:48:46 linux sshd[32453]: Failed password for invalid user service from 95.154.249.193 port 52092 ssh2
Feb 3 20:49:13 linux sshd[32458]: Failed password for invalid user tomcat from 95.154.249.193 port 60261 ssh2
Feb 3 20:49:40 linux sshd[32464]: Failed password for invalid user upload from 95.154.249.193 port 40236 ssh2
Feb 3 20:50:06 linux sshd[32469]: Failed password for invalid user debian from 95.154.249.193 port 48295 ssh2
Feb 3 20:50:32 linux sshd[32479]: Failed password for invalid user apache from 95.154.249.193 port 56437 ssh2
Feb 3 20:51:00 linux sshd[32492]: Failed password for invalid user rds from 95.154.249.193 port 45540 ssh2
Feb 3 20:51:26 linux sshd[32501]: Failed password for invalid user exploit from 95.154.249.193 port 53751 ssh2
Feb 3 20:51:51 linux sshd[32506]: Failed password for invalid user exploit from 95.154.249.193 port 33543 ssh2
Feb 3 20:52:18 linux sshd[32512]: Failed password for invalid user postgres from 95.154.249.193 port 41350 ssh2
Feb 3 21:02:04 linux sshd[32652]: Failed password for invalid user shell from 95.154.249.193 port 54454 ssh2
Feb 3 21:02:30 linux sshd[32657]: Failed password for invalid user radio from 95.154.249.193 port 35462 ssh2
Feb 3 21:02:57 linux sshd[32663]: Failed password for invalid user anonymous from 95.154.249.193 port 44290 ssh2
Feb 3 21:03:23 linux sshd[32668]: Failed password for invalid user mark from 95.154.249.193 port 53285 ssh2
Feb 3 21:03:50 linux sshd[32673]: Failed password for invalid user majordomo from 95.154.249.193 port 34082 ssh2
Feb 3 21:04:43 linux sshd[32684]: Failed password for irc from 95.154.249.193 port 50918 ssh2
Feb 3 21:05:36 linux sshd[32695]: Failed password for root from 95.154.249.193 port 38577 ssh2
Feb 3 21:06:30 linux sshd[32705]: Failed password for bin from 95.154.249.193 port 53564 ssh2
Feb 3 21:06:56 linux sshd[32714]: Failed password for invalid user dev from 95.154.249.193 port 34568 ssh2
Feb 3 21:07:23 linux sshd[32720]: Failed password for root from 95.154.249.193 port 43799 ssh2
Feb 3 21:09:10 linux sshd[32755]: Failed password for invalid user bob from 95.154.249.193 port 50026 ssh2
Feb 3 21:09:36 linux sshd[32761]: Failed password for invalid user r00t from 95.154.249.193 port 58129 ssh2
Feb 3 21:11:50 linux sshd[537]: Failed password for root from 95.154.249.193 port 58358 ssh2

These brute force dictionary attacks often succeed where there is a user with a weak password, or some old forgotten test user account.
Just recently, on one of the servers I administer, I caught a malicious attacker originating from Romania, who was able to break into my system's test account with the weak password tset.

Thankfully the script kiddie was unable to get root access to my system, so what he did was just start another ssh brute force scanner, to crawl the net and look for other vulnerable hosts.

As you can see from my recent example, becoming immune against SSH brute force attacks is an essential security step the administrator needs to take on a newly installed server.

The easiest ways to get rid of the brute force attacks without using some external brute force filtering software like fail2ban are:

1. By using an iptables filtering rule to filter every IP which has failed to log in more than 5 times

To use this brute force prevention method you need to use the following iptables rules:
linux-host:~# /sbin/iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
linux-host:~# /sbin/iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 5 -j DROP

These iptables rules will block SSH access on port 22 for every IP address that opens more than 5 new connections within 60 seconds.

2. Getting rid of brute force attacks through use of hosts.deny blacklists

sshbl – The SSH blacklist, updated every few minutes, contains IP addresses of hosts which tried to brute force their way into any of currently 19 hosts (all running OpenBSD, FreeBSD or some Linux) using the SSH protocol. The hosts are located in Germany, the United States, the United Kingdom, France, England, Ukraine, China, Australia and the Czech Republic, and are set up to report and log those attempts to a central database, very similar to all the spam blacklists out there.

To use sshbl you will have to set up in your root crontab the following line:

*/60 * * * * /usr/bin/wget -qO /etc/hosts.deny http://www.sshbl.org/lists/hosts.deny

To set it up from console issue:

linux-host:~# echo '*/60 * * * * /usr/bin/wget -qO /etc/hosts.deny http://www.sshbl.org/lists/hosts.deny' | crontab -u root -

This crontab entry will download and replace your system's default /etc/hosts.deny with the one regularly updated on sshbl.org, so the next time a reported brute force attacker hits you, it will be filtered out as soon as your Linux or Unix system finds its IP matching an entry in /etc/hosts.deny.

The /etc/hosts.deny filtering rules are written in such a way that the publicly known brute forcer IPs are only filtered for the SSH service, therefore other system services like Apache or a radio / TV streaming server will still be accessible to the brute forcer's IP.
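Just to illustrate, entries in such a downloaded hosts.deny typically restrict only the sshd daemon, one attacker IP per line (the addresses below are only examples, the first one taken from the log above):

sshd: 95.154.249.193
sshd: 203.0.113.45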

It’s a good practice actually to use both of the methods 😉
Thanks to Static (Multics) a close friend of mine for inspiring this article.

mod_rewrite redirect rule 80 to 443 on Apache webserver

Wednesday, April 2nd, 2014

A classic sysadmin scenario is to configure a new Apache webserver with the requirement to have an SSL certificate installed and working on port 443, and all requests coming on port 80 redirected to https://.
On Apache this is done with simple mod_rewrite rule:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
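If you want the redirect to be returned as a permanent (301) redirect and rule processing to stop there, the same rule is commonly written with explicit flags:

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]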

Before applying the rule, don't forget to have Apache mod_rewrite enabled; it is usually not enabled by default on most Linux distributions.
On shared hostings, if you don't have access to modify the Apache configuration directly but have .htaccess enabled, you can also add the above rules to .htaccess.
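On Debian / Ubuntu systems enabling the module is typically just (assuming the stock apache2 packaging):

# a2enmod rewrite
# service apache2 restart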

Add the rewrite rules to the respective VirtualHost configuration and restart Apache, and that's it. If for some reason it is not working after configuring it, debug the mod_rewrite issues by enabling mod_rewrite's rewrite.log.

Another useful Apache mod_rewrite redirect rule is to redirect a single landing page from HTTP to HTTPS:

RewriteEngine On
RewriteRule ^apache-redirect-http-to-https.html$ https://www.site-url.com/apache-redirect-http-to-https.html [R=301,L]

!Note! that in cases where performance is a key requirement for a website, it might be better to use the standard way to redirect HTTP to HTTPS in Apache:

ServerName www.site-url.com
Redirect / https://www.site-url.com/

To learn more about mod_rewrite redirecting, check out the official documentation on Apache's site.

Linux / BSD: Check if Apache web server is listening on port 80 and 443

Tuesday, June 3rd, 2014

apache_check_if_web_server_running_port-80-and-port-443-logo-linux-and-bsd-check-apache-running
If you're configuring a new webserver or adding a new VirtualHost to an existing Apache configuration, you will need to restart Apache (with or without the graceful option). Once Apache is restarted, to make sure Apache is actually running on the server, issue (depending on the Linux distribution):

1. On Debian Linux / Ubuntu servers

# ps axuwf|grep -i apache|grep -v grep

root 23280 0.0 0.2 388744 16812 ? Ss May29 0:13 /usr/sbin/apache2 -k start
www-data 10815 0.0 0.0 559560 3616 ? S May30 2:25 _ /usr/sbin/apache2 -k start
www-data 10829 0.0 0.0 561340 3600 ? S May30 2:31 _ /usr/sbin/apache2 -k start
www-data 10906 0.0 0.0 554256 3580 ? S May30 0:20 _ /usr/sbin/apache2 -k start
www-data 10913 0.0 0.0 562488 3612 ? S May30 2:32 _ /usr/sbin/apache2 -k start
www-data 10915 0.0 0.0 555524 3588 ? S May30 0:19 _ /usr/sbin/apache2 -k start
www-data 10935 0.0 0.0 553760 3588 ? S May30 0:29 _ /usr/sbin/apache2 -k start

 


2. On CentOS, Fedora, RHEL and SuSE Linux and FreeBSD

ps ax | grep httpd | grep -v grep

 

7661 ? Ss 0:00 /usr/sbin/httpd
7664 ? S 0:00 /usr/sbin/httpd
7665 ? S 0:00 /usr/sbin/httpd
7666 ? S 0:00 /usr/sbin/httpd
7667 ? S 0:00 /usr/sbin/httpd
7668 ? S 0:00 /usr/sbin/httpd
7669 ? S 0:00 /usr/sbin/httpd
7670 ? S 0:00 /usr/sbin/httpd
7671 ? S 0:00 /usr/sbin/httpd

 

If new Apache IP based VirtualHosts are added to an already existing Apache and you have added new

Listen 1.1.1.1:80
Listen 1.1.1.1:443

directives, then after Apache is restarted, check whether Apache is listening on ports :80 and :443:
 

netstat -ln | grep -E ':80|443'

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:443            0.0.0.0:*               LISTEN


The meaning of 0.0.0.0 is that Apache is configured to listen on any VirtualHost IP and interface. This output is usually returned when in the Apache config (httpd.conf / apache2.conf) the webserver is configured with the directive:

Listen *:80
 

If in the netstat output some specific IP pops up, for example "192.168.1.1:http", this means that only connections to the "192.168.1.1" IP address will be accepted by Apache.

Another way to look for Apache in netstat (in case Apache is configured to listen on some non-standard port number) is with:

netstat -l |grep -E 'http|www'

tcp        0      0 *:www                   *:*                     LISTEN


As it is sometimes possible that Apache is listening but its processes are in defunct (zombie) state, it is always a good idea to also check whether pages served by Apache open in a browser (check it with elinks, lynx or curl).
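A quick way to do that last check from the shell of the server itself, assuming Apache listens on the standard ports of the local machine, is with curl; -I fetches only the response headers and -k skips certificate validation, which is handy for self-signed certificates:

# curl -I http://127.0.0.1/
# curl -Ik https://127.0.0.1/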

To get more thorough information on Apache's listening ports, protocol and the user Apache is running as, no matter the Linux distribution, use the lsof command:
 

/usr/bin/lsof -i|grep -E 'httpd|http|www'

httpd     6982 nobody    3u  IPv4  29388359      0t0  TCP www.pc-freak.net:https (LISTEN)
httpd    18071 nobody    3u  IPv4 702790659      0t0  TCP www.pc-freak.net:http (LISTEN)
httpd    18071 nobody    4u  IPv4 702790661      0t0  TCP www.pc-freak.net.net:https (LISTEN)


If Apache is not showing up even though it was restarted, check what is going wrong in the error logs:

– on Debian the standard error log is /var/log/apache2/error.log
– on RHEL, CentOS, SuSE the standard error log is /var/log/httpd/error.log
– on FreeBSD it is /var/log/httpd-error.log

 

How to create ssh tunnels / ssh tunneling on Linux and FreeBSD with openssh

Saturday, November 26th, 2011

ssh-tunnels-port-forwarding-windows-linux-bypassing-firewall-diagram
SSH tunneling
allows you to send and receive traffic through a dedicated port. Tunneling traffic over ssh can have many purposes; the most common ones are to protect the traffic from a host to a remote server, or to access port numbers which are otherwise blocked by a firewall (i.e. to get around firewall filtering).
SSH tunneling works only with TCP traffic. The way to make an ssh tunnel is with commands like:

host:/root# ssh -L localport:desthost:destport username@remote-server.net
host:/root# ssh -R remoteport:desthost:localport username@remote-server.net
host:/root# ssh -X username@remote-server.net

The first command makes ssh bind a port on the localhost of the host:/root# machine and forward it to desthost:destport (destination host : destination port). It is important to say that desthost is the destination host as visible from remote-server.net, therefore if the connection should terminate on remote-server.net itself, desthost will be localhost.
Multiple ssh tunnels to multiple ports are possible using the above example commands. Here is one example of ssh tunneling.
Let's say it is necessary to access an FTP port (21) and an HTTP port (80) listening on remote-server.net. In that case desthost will be localhost; locally we can use port 8080 instead of 80, so it is not necessary to make that part of the ssh tunnel as root (admin privileges). After the ssh session is opened, both services will be accessible on the local ports.

host:/home/user$ ssh -L 21:localhost:21 -L 8080:localhost:80 user@remote-server.net
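For completeness, a concrete reverse (-R) tunnel looks like this; it publishes the local machine's web server on port 8080 of remote-server.net (bound to the remote loopback unless GatewayPorts is enabled in the remote sshd_config):

host:/home/user$ ssh -R 8080:localhost:80 user@remote-server.net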

That’s all enjoy 😉

Redirect http URL folder to https e.g. redirect (http:// to https://) with mod_rewrite – redirect port 80 to port 443 Rewrite rule

Saturday, July 17th, 2010

There is a quick way to achieve a full URL redirect from a normal unencrypted HTTP request to SSL encrypted HTTPS.

This is achieved using the Redirect and RedirectMatch directives.

For instance let’s say we’d like to redirect https://www.pc-freak.net/blog to https://www.pc-freak.net/blog.
We simply put in our .htacess file the following rule:

Redirect permanent /blog https://www.pc-freak.net/blog

Of course this rule assumes that the current working directory where the .htaccess file is stored is the main domain directory, e.g. / .
However this kind of redirect is quite inflexible, so for more complex redirects you might want to take a look at the RedirectMatch directive.

For instance, if you intend to redirect all URLs containing the string /blog/something/asdf/etc. (e.g. http://www.pc-freak.net/blog/something/asdf/etc.) to their https:// equivalents, then you might use an .htaccess RedirectMatch rule like:

RedirectMatch permanent ^/blog/(.*)$ https://www.pc-freak.net$1
or
RedirectMatch permanent ^/blog/(.*)$ https://www.pc-freak.net/$1

With that, your redirect from the http protocol to the https protocol should be complete.
Also consider that the plain Redirect directive is cheaper to process, so wherever you can, I recommend using it instead of mod_rewrite based rules, which will probably be noticeably slower.