Posts Tagged ‘configuration file’

How to log multiple haproxy / apache / mysql instances via haproxy log-tagging / Segregating log management for multiple HAProxy instances using rsyslog

Tuesday, May 23rd, 2023

rsyslog-logo-picture-use-programname-and-haproxy-log-tag-directives-together-to-log-as-many-process-streams-as-you-like

 

Introduction

This article provides a guide on refining the haproxy logging mechanism by leveraging the `programname` property in rsyslog, coupled with the `log-tag` directive in haproxy.
This approach creates a granular logging setup that separates logs according to their originating services and specific custom tags, enhancing overall log readability.

Though the article is written concretely for logging multiple log streams from haproxy, the same approach can be successfully applied
to any other Linux service to log as many log-tagged data streams as you prefer.

Scope

The guide focuses on tailoring the logging mechanisms for two haproxy  instances named `haproxy` and `haproxyssl`, utilizing the `programname` property in rsyslog and the `log-tag` directive in haproxy for precise log management.

The haproxy and haproxyssl instances are two separate instances prepared via their own systemd service files.
The haproxy instance is a plain haproxy proxying TCP traffic in non-encrypted form, whereas haproxyssl is a special instance
prepared to tunnel the incoming HTTP traffic over SSL. Both instances of haproxy run as separate processes on the server.

Here is the systemd configuration of haproxy systemd service file:

# cat /usr/lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $OPTIONS
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
SuccessExitStatus=143
KillMode=mixed
Type=notify

[Install]
WantedBy=multi-user.target


As well as the systemd service configuration for haproxyssl:
 

# cat /usr/lib/systemd/system/haproxyssl.service
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
Environment="CONFIG=/etc/haproxy/haproxy_ssl_prod.cfg" "PIDFILE=/run/haproxy_ssl_prod.pid"
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/sbin/haproxyssl -f $CONFIG -c -q $OPTIONS
ExecStart=/usr/sbin/haproxyssl -Ws -f $CONFIG -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxyssl -f $CONFIG -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
SuccessExitStatus=143
KillMode=mixed
Type=notify

[Install]
WantedBy=multi-user.target

 

Step 1: Configuring HAProxy instances with `log-tag`
 

To distinguish between logs from the two HAProxy instances, the `log-tag` directive is used to add tags to the logs. These tags are then used to filter the logs in rsyslog.
Modify the respective HAProxy configuration files (`/etc/haproxy/haproxy.cfg` and `/etc/haproxy/haproxy_ssl_prod.cfg`):

HAProxy Instance 1 (haproxy)
 

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
      log          127.0.0.1 local6 debug
      log-tag      haproxy

HAProxy Instance 2 (haproxyssl)


#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log          127.0.0.1 local5 debug
    log-tag      haproxyssl

 

Step 2: Implementing rsyslog configuration for haproxy logs
 

Next, create a new rsyslog configuration file, stored in /etc/rsyslog.d/. Ensure the new configuration file ends in `.conf`

HAProxy Instance 1 (haproxy)

Now add rsyslog rules to filter logs based on the `programname` and the custom log tag:
 

# vi /etc/rsyslog.d/55_haproxy.conf
if $programname == 'haproxy' then /var/log/haproxy.log
& stop

HAProxy Instance 2 (haproxyssl)
# vi /etc/rsyslog.d/51_haproxy_ssl.conf
if $programname == 'haproxyssl' then /var/log/haproxy_ssl.log
& stop


These rules filter logs that originate from haproxy and carry the respective tag haproxy or haproxyssl (the values set via log-tag above), directing them to their respective log files. The `& stop` directive ensures that rsyslog stops processing the log once a match is found, preventing duplication.
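
The very same pattern extends to any other syslog-producing service mentioned in the title, as long as its messages carry a distinct tag. A minimal sketch (assuming the daemon logs with the syslog tag mysqld; the config file name here is just an example):

# vi /etc/rsyslog.d/56_mysqld.conf
if $programname == 'mysqld' then /var/log/mysqld.log
& stop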

Finally, restart both the haproxy and rsyslog services for the changes to take effect:

# systemctl restart haproxy
# systemctl restart haproxyssl
# systemctl restart rsyslog
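
To quickly verify the filters do what is expected, you can emit a test message carrying one of the configured tags with the logger tool and check that it lands in the right file:

# logger -t haproxyssl "rsyslog tag routing test"
# tail -n 1 /var/log/haproxy_ssl.log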


Reading References

haproxy:   log-tag directive

rsyslog:    rsyslogd documentation

This is a guest article originally written by Dimitar Paskalev; guest blogging with good, interesting articles is always welcome.

Enable PSK encryption on Zabbix Agent (client) to send encrypted monitored data to Zabbix server

Friday, April 7th, 2023

zabbix-client-server-encryption-public-key-exchange

Those concerned with security who pass their Zabbix collected agent data over the internet or via some kind of untrusted network
will definitely not enjoy the fact that zabbix-agent sends its collected data to the server in plain text. Clear text data allows any network sniffer to possibly collect your
monitored server and hardware device data, and exposes everything sent over the network to the same problems the old
unencrypted SMTP protocol had in the past.

To mitigate this large security hole, for the paranoid sysadmin it is rather easy to enable PSK (Pre Shared Key) based encryption.
To generate a Pre Shared Key you need two important values present:

1. PSK Identity
2. PSK Secret

The PSK secret should be a minimum of 128 bits (16-byte PSK, entered as 32 hexadecimal digits), and up to
2048 bits (256-byte PSK, entered as 512 hexadecimal digits) is supported.

Usually something like a 256 bit PSK secret on the machine should be strong enough, and it is simply generated by running:

# openssl rand -hex 32

1. Agent to zabbix server or proxy connection config

In /etc/zabbix/zabbix_agentd.conf, for an active agent setup (ServerActive, i.e. the agent actively sends its collected data to the server),
the machine running zabbix-agent should have a configuration similar to:

# cat /etc/zabbix/zabbix_agentd.conf

PidFile=/var/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0

# IP of the machine
SourceIP=10.10.10.30
# turn it on if you need to execute to remote machine commands
EnableRemoteCommands=0

# IP of the server
Server=10.30.50.80
ListenPort=10080

# IP of the machine
ListenIP=10.30.30.31

# IP of the server
ServerActive=10.30.50.80

HostMetadataItem=system.uname
BufferSize=5400
MaxLinesPerSecond=5
Timeout=10
AllowRoot=0
StartAgents=5
LogRemoteCommands=0


# Machine hostname
Hostname=fqdn-of-zabbix-data-collect-server.com
Include=/etc/zabbix/zabbix_agentd.d/*.conf

# Encryption
TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=PSK to Zabbix Server5
TLSPSKFile=/etc/zabbix/zabbix_agentd_key.psk


! Important security note

!!! The TLSPSKIdentity value you decide on will not be encrypted in transport, so don't use anything sensitive.

Once you include the TLS config, proceed to generate the key file the configuration points to.

2. Generate / Create Zabbix Agent Key

Generate the key with pseudo-random bytes inside /etc/zabbix/zabbix_agentd_key.psk:

# cd /etc/zabbix
# openssl rand -hex 32 > zabbix_agentd_key.psk
# chown zabbix:zabbix zabbix_agentd_key.psk
# chmod 600 zabbix_agentd_key.psk
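
Once the key file is in place and referenced from the config, restart the agent so it rereads its configuration (assuming the standard systemd unit name zabbix-agent):

# systemctl restart zabbix-agent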

3. Configure PSK encryption in Zabbix Server Web User interface

Go to Zabbix Server User interface in browser and configure the PSK encryption options for the host.

Select the:

'Connections to host' = PSK

'Connections from host' = PSK

'PSK Identity' = [public-value-configured-in-Zabbix-agent-config]

'PSK' = [paste the long hex string generated from the OpenSSL command above]


Within some seconds, up to a minute or two, the Zabbix Server and Agent will successfully communicate using PSK encryption,
making the monitored data unreadable as plain text for malicious sniffers sitting on equipment in the middle between the zabbix-agent and zabbix-server hosts.
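
You can also test the encrypted channel by hand with zabbix_get, which supports the same TLS PSK options; the identity, a local copy of the PSK file, and the listen address / port below are the ones from the example agent config above:

# zabbix_get -s 10.30.30.31 -p 10080 -k system.uname \
  --tls-connect=psk \
  --tls-psk-identity="PSK to Zabbix Server5" \
  --tls-psk-file=/etc/zabbix/zabbix_agentd_key.psk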

4. PSK encryption behind a Proxy

Many companies nowadays use zabbix proxy to improve their network infrastructure. For example, it is used to offload the zabbix-server when multiple zabbix-agents have to report various data, or to monitor servers and devices that are physically in separate networks or data centers (passing through paranoidly built firewalls), or where monitored locations have unreliable communications between each other.
 

To enable PSK for communications between your Zabbix Server and Zabbix Proxy:

1. Create a new secret, and add the PSK Identity and Secret to

Administration ⇾ Proxies ⇾ [Your proxy] ⇾ Encryption

2. Adjust the settings inside the zabbix proxy's configuration file at /etc/zabbix/zabbix_proxy.conf, as sketched below.
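
A minimal sketch of the relevant zabbix_proxy.conf settings (the identity and file name here are example values; use the ones you entered in the UI):

TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=PSK to Zabbix Proxy
TLSPSKFile=/etc/zabbix/zabbix_proxy.psk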


If setting up PSK encryption for agents behind a Zabbix proxy, ensure you have

Zabbix Server ⇽⇾ Proxy PSK enabled
first in Zabbix Server UI.

This is because, when you start the Proxy, or do some testing by sending some key value to the Zabbix server via the proxy with commands such as:

# zabbix_get -s 127.0.0.1 -k system.hostname
# zabbix_server -R config_cache_reload


on config_cache_reload, the Proxy will download all its host settings from the server, and this also includes the server's copy of the secret.

The proxy needs to know the secret since it is now managing the communications on behalf of the server.

3. To add PSK encryption for any Agents behind a proxy, you then continue to set up the Agents as normal by creating a new secret, editing the

Configuration ⇾ Hosts ⇾ [Your Host] ⇾ Encryption page

and also editing /etc/zabbix/zabbix_agentd.conf.

Remember that, since your Agent's Host configuration in the Zabbix UI will be set as Monitored by Proxy, the PSK settings will apply to communications happening between the Zabbix Proxy and the Agent that it is monitoring, not between the Zabbix Server and the Agent behind the proxy.

You can also add PSK Encryption between your Zabbix Proxy and its own local Agent if you want.
You would set its PSK settings in the Proxy Agents host configuration at

Configuration ⇾ Hosts ⇾ [Your proxy] ⇾ Encryption

and modifying the settings in the agent's own configuration file at /etc/zabbix/zabbix_agentd.conf.
Keep in mind, this is only applicable to communications between the Zabbix Proxy, and its own Agent process.

When setting up PSK encryption for the Zabbix Server, Proxy and Agents, you may see an error in the Proxy logs:

cannot send proxy data to server at "zabbix.your-domain.tld": connection of type "TLS with PSK" is not allowed for proxy "your-proxy".

If you hit this, check that your

Zabbix Server ⇽⇾ Proxy PSK settings

are correct first.

Don't get confused between the Proxy's own optional agent process and its main Proxy process, which is required.

How to update expiring OpenSSL certificates without downtime on haproxy Pacemaker / Corosync PCS Cluster

Tuesday, July 19th, 2022

pcm-active-passive-scheme-corosync-pacemaker-openssl-renew-fix-certificate

Let's say you have a running PCS Haproxy cluster with 2 nodes, you already have a configuration in haproxy with a running VIP IP, and these proxies
are tunneling traffic to a webserver such as Apache or directly to an Application, and you end up in the situation where the configured certificates
are about to expire soon. As you can guess, having the cluster online makes replacing the old expiring SSL certificate with a new one a relatively easy
task. But still there are a couple of steps to follow, which seem easy, but systemizing them and typing them down takes some time and effort.
In short you need to check the current certificates installed on the haproxy inside the Haproxy configuration files;
in my case the haproxy cluster was running 2 haproxy configs, haproxyprod.cfg and haproxyqa.cfg, and the configured certificates are placed inside this
configuration.

Hence to do the certificate update, I had to follow a few steps:

A. Find the old certificate key or generate a new one that will be used later, together with the CSR (Certificate Request File), to generate the new Secure Socket Layer
certificate pair.
B. Either use the old .CSR (this is usually placed inside the old .CRT certificate file) or generate a new one
C. Copy the .CSR file to the Copy / Paste buffer and place it in the Website field on the step to fill in a CSR for the new certificate at the Domain registrar
such as NameCheap / GoDaddy / BlueHost / Entrust etc.
D. The registrar should then be able to generate files like the new ServerCertificate.crt, Public Key Root Certificate Authority etc.
E. You should copy and store these files for future use, perhaps inside some database such as an .xdb;
for example you can use the X – Certificate and Key management tool xca (google for xca download).
F. Copy this certificate and place it on top of the old .crt file that is configured on the haproxies for each domain for which you have configured it on node2
G. Standby node1 so the cluster sends the haproxy traffic to node2 (where you should already have the new configured certificate)
H. Prepare the .crt file used by haproxy by including the new ServerCertificate.crt content on top of the file on node1 as well
I. Unstandby node1
J. Check in a browser, by accessing the URL, that the certificate is the new one, based on the new expiry date that should be extended into the future
K. Check the status of haproxy
L. If necessary check /var/log/haproxy.log on both cluster nodes to check all works as expected

haserver_cluster_sample

Below are the overall commands used to complete the above steps.

Old extracted keys and crt files are located under /home/username/new-certs

1. Check certificate expiry start / end dates


[root@haproxy-serv01 certs]# openssl s_client -connect 10.40.18.88:443 2>/dev/null| openssl x509 -noout -enddate
notAfter=Aug 12 12:00:00 2022 GMT

2. Find Certificate location taken from /etc/haproxy/haproxyprod.cfg / /etc/haproxy/haproxyqa.cfg

# from Prod .cfg
   bind 10.40.18.88:443 ssl crt /etc/haproxy/certs/www.your-domain.com.crt ca-file /etc/haproxy/certs/ccnr-ca-prod.crt 
 

# from QA .cfg

    bind 10.50.18.87:443 ssl crt /etc/haproxy/certs/test.your-domain.com.crt ca-file /etc/haproxy/certs

3. Check  CRT cert expiry


# for haproxy-serv02 qa :443 listeners

[root@haproxy-serv01 certs]# openssl s_client -connect 10.50.18.87:443 2>/dev/null| openssl x509 -noout -enddate 
notAfter=Dec  9 13:24:00 2029 GMT

 

[root@haproxy-serv01 certs]# openssl x509 -enddate -noout -in /etc/haproxy/certs/www.your-domain.com.crt
notAfter=Aug 12 12:00:00 2022 GMT

[root@haproxy-serv01 certs]# openssl x509 -noout -dates -in /etc/haproxy/certs/www.your-domain.com.crt 
notBefore=May 13 00:00:00 2020 GMT
notAfter=Aug 12 12:00:00 2022 GMT


[root@haproxy-serv01 certs]# openssl x509 -noout -dates -in /etc/haproxy/certs/other-domain.your-domain.com.crt 
notBefore=Dec  6 13:52:00 2019 GMT
notAfter=Dec  9 13:52:00 2022 GMT

4. Check public website cert expiry in a Chrome / Firefox or Opera browser

In a Chrome browser go to updated URLs:

https://www.your-domain/login

https://test.your-domain/login

https://other-domain.your-domain/login

and check the certs

5. Login to one of haproxy nodes haproxy-serv02 or haproxy-serv01

Check what crm_mon (the cluster resource manager) reports about the consistency of the cluster and the belonging members;
you should get some output similar to below:

[root@haproxy-serv01 certs]# crm_mon
Stack: corosync
Current DC: haproxy-serv01 (version 1.1.23-1.el7_9.1-9acf116022) – partition with quorum
Last updated: Fri Jul 15 16:39:17 2022
Last change: Thu Jul 14 17:36:17 2022 by root via cibadmin on haproxy-serv01

2 nodes configured
6 resource instances configured

Online: [ haproxy-serv01 haproxy-serv02 ]

Active resources:

 ccnrprodlbvip  (ocf::heartbeat:IPaddr2):       Started haproxy-serv01
 ccnrqalbvip    (ocf::heartbeat:IPaddr2):       Started haproxy-serv01
 Clone Set: haproxyqa-clone [haproxyqa]
     Started: [ haproxy-serv01 haproxy-serv02 ]
 Clone Set: haproxyprod-clone [haproxyprod]
     Started: [ haproxy-serv01 haproxy-serv02 ]


6. Create a backup of the existing certificates before proceeding to regenerate the expiring ones
On both haproxy-serv01 / haproxy-serv02 run:

 

# cp -vrpf /etc/haproxy/certs/ /home/username/etc-haproxy-certs_bak_$(date +%d_%y_%m)/


7. Find the .key file and extract it from the latest version of the file CCNR-Certificates-DB.xdb

Extract the passes from the XCA cert manager (if you're already using XCA; if not, take the certificate from KeePass or wherever you have stored it).

+ For XCA cert manager ccnrlb pass
Find the location of the certificate inside the .xdb file etc.

+++++ www.your-domain.com.key file +++++

-----BEGIN PUBLIC KEY-----

-----END PUBLIC KEY-----


# Extracted from old file /etc/haproxy/certs/www.your-domain.com.crt
 

-----BEGIN RSA PRIVATE KEY-----

-----END RSA PRIVATE KEY-----


+++++

8. Regenerate the CSR out of the RSA private key and .CRT

[root@haproxy-serv01 certs]# openssl x509 -noout -fingerprint -sha256 -inform pem -in www.your-domain.com.crt
SHA256 Fingerprint=24:F2:04:F0:3D:00:17:84:BE:EC:BB:54:85:52:B7:AC:63:FD:E4:1E:17:6B:43:DF:19:EA:F4:99:L3:18:A6:CD

# for haproxy-serv01 prod :443 listeners

[root@haproxy-serv02 certs]# openssl x509 -x509toreq -in www.your-domain.com.crt -out www.your-domain.com.csr -signkey www.your-domain.com.key
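
Alternatively, if you prefer to generate a brand-new private key instead of reusing the old one (step A above), a minimal openssl sketch (file names and the subject line are example values):

# openssl req -new -newkey rsa:2048 -nodes \
  -keyout www.your-domain.com.key \
  -out www.your-domain.com.csr \
  -subj "/C=BG/O=Example Org/CN=www.your-domain.com"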


9. Move (Standby) traffic from haproxy-serv01 to haproxy-serv02 to test the cert works fine

[root@haproxy-serv01 certs]# pcs cluster standby haproxy-serv01


10. Proceed with the same steps on haproxy-serv01 and, if OK, unstandby

[root@haproxy-serv01 certs]# pcs cluster unstandby haproxy-serv01


11. Check all is fine with openssl client with new certificate


Check Root-Chain certificates:

# openssl verify -verbose -x509_strict -CAfile /etc/haproxy/certs/ccnr-ca-prod.crt /etc/haproxy/certs/other-domain.your-domain.com.crt
/etc/haproxy/certs/other-domain.your-domain.com.crt: OK

# openssl verify -verbose -x509_strict -CAfile /etc/haproxy/certs/thawte-ca.crt /etc/haproxy/certs/www.your-domain.com.crt
/etc/haproxy/certs/www.your-domain.com.crt: OK

################# For other-domain.your-domain.com.crt ##############
Do the same

12. Check cert expiry on /etc/haproxy/certs/other-domain.your-domain.com.crt

# for haproxy-serv02 qa :15443 listeners
[root@haproxy-serv01 certs]# openssl s_client -connect 10.40.18.88:15443 2>/dev/null| openssl x509 -noout -enddate
notAfter=Dec  9 13:52:00 2022 GMT

[root@haproxy-serv01 certs]#  openssl x509 -enddate -noout -in /etc/haproxy/certs/other-domain.your-domain.com.crt 
notAfter=Dec  9 13:52:00 2022 GMT


Check also for
+++++ other-domain.your-domain.com.key file +++++
 

-----BEGIN PUBLIC KEY-----

-----END PUBLIC KEY-----

 


# Extracted from /etc/haproxy/certs/other-domain.your-domain.com.crt
 

-----BEGIN RSA PRIVATE KEY-----

-----END RSA PRIVATE KEY-----


+++++

13. Standby haproxy-serv01 node 1

[root@haproxy-serv01 certs]# pcs cluster standby haproxy-serv01

14. Regenerate the CSR out of the RSA private key and .CRT for the second domain other-domain.your-domain.com

# for haproxy-serv01 prod :443 renew listeners
[root@haproxy-serv02 certs]# openssl x509 -x509toreq -in other-domain.your-domain.com.crt  -out domain-certificate.com.csr -signkey domain-certificate.com.key


And repeat the same steps, e.g. fill in the CSR at the domain registrar, get the certificate and move it to the proxy, and check the fingerprint if necessary:
 

[root@haproxy-serv01 certs]# openssl x509 -noout -fingerprint -sha256 -inform pem -in other-domain.your-domain.com.crt
SHA256 Fingerprint=60:B5:F0:14:38:F0:1C:51:7D:FD:4D:C1:72:EA:ED:E7:74:CA:53:A9:00:C6:F1:EB:B9:5A:A6:86:73:0A:32:8D


15. Check the private key's SHA256 checksum matches the certificate's

# openssl pkey -in terminals-priv.KEY -pubout -outform pem | sha256sum
# openssl x509 -in other-domain.your-domain.com.crt -pubkey -noout -outform pem | sha256sum

# openssl pkey -in www.your-domain.com.crt-priv-KEY -pubout -outform pem | sha256sum
# openssl x509 -in www.your-domain.com.crt -pubkey -noout -outform pem | sha256sum

The two sums in each pair must be identical; that proves the private key really corresponds to the certificate's public key.


16. Check haproxy config is okay before reloading with the new cert


# haproxy -c -V -f /etc/haproxy/haproxyprod.cfg
Configuration file is valid


# haproxy -c -V -f /etc/haproxy/haproxyqa.cfg
Configuration file is valid

Good, so next we can check the status of the certificate.

17. Check the old certificates are reachable via the VIP IP address

Considering that the cluster VIP Address is, let's say, 10.40.18.88, run something like the following from one of the two cluster nodes to check it:
 

# curl -vvI https://10.40.18.88:443 2>&1 | grep -Ei 'start date|expire date'


As output you should get the old certificate


18. Reload Haproxies for Prod and QA on node1 and node2

You can reload the haproxy cluster processes gracefully, something similar to kill -HUP but without losing most of the currently established connections, with the below cmds:

Login on node1 (haproxy-serv01) do:

# /usr/sbin/haproxy -f /etc/haproxy/haproxyprod.cfg -D -p /var/run/haproxyprod.pid  -sf $(cat /var/run/haproxyprod.pid)
# /usr/sbin/haproxy -f /etc/haproxy/haproxyqa.cfg -D -p /var/run/haproxyqa.pid  -sf $(cat /var/run/haproxyqa.pid)

Repeat the same commands on the haproxy-serv02 host.

19. Check the new certificates online and the haproxy logs

# curl -vvI https://10.50.18.88:443 2>&1 | grep -Ei 'start date|expire date'

*       start date: Jul 15 08:19:46 2022 GMT
*       expire date: Jul 15 08:19:46 2025 GMT


You should get the new certificate's issuing start date and expiry date.

On both nodes (if necessary) do:

# tail -f /var/log/haproxy.log

Defining multiple short Server Hostname aliases via SSH config files, defining multiple ssh options for them, and using passwordless authentication via public keys

Thursday, September 16th, 2021

using-ssh-host-acronym-aliases-ssh-client-explained-openssh-logo

In case you have to access multiple servers from your terminal client, such as gnome-terminal or kterminal (if on Linux), or something such as mobaxterm + cygwin (if on Windows), with an OpenSSH client (the ssh command), there is a nifty trick to save time and keyboard typing: create shortcut aliases by adding a few definitions inside your $HOME/.ssh/config ( ~/.ssh/config ) for your local non-root user, or even make the configuration system wide (for all existing local /etc/passwd users) via /etc/ssh/ssh_config.
By adding a pseudonym alias for each server, the sysadmin's life becomes much easier, as you don't have to type in each time the FQDN (Fully Qualified Domain Name) hostname of the remotely accessed Linux / Unix / BSD / Mac OS or even Windows sshd-ready hosts accessible via remote TCP/IP port 22.


1. Adding local user remote server pointer aliases via ~/.ssh/config


The file ~/.ssh/config is read by the ssh client, part of the openssh-client (Linux OS package), on each invocation of the client. Besides defining pseudonyms for hosts, which saves you time when accessing remote hosts and hence increases your productivity, you can also define various other nice options through it to set specifics of the remote ssh session for each desired host, such as the remote host's default SSH port (for example if your OpenSSHD is configured to run on a non-standard SSH port, lets say 2022 instead of the default port TCP 22, for some reason, e.g. security through obscurity etc.).

 

The general syntax of the .ssh/config file is simple; it goes like this:
 

Host MACHINE_HOSTNAME

SSH_OPTION1 value1
SSH_OPTION1 value1 value2
SSH_OPTION2 value1 value2

 

Host MACHINE_HOSTNAME

SSH_OPTION value
SSH_OPTION1 value1 value2

  • An alternative syntax, if you prefer not to have empty whitespace, is to use an equals sign ( = )
    between the parameter name and values.

Host MACHINE_HOSTNAME
SSH_config=value
SSH_config1=value1 value2

  • All empty lines and lines starting with the hash sign ( # ) will be ignored.
  • All values are case-sensitive, but parameter names are not.

If you have never used $HOME/.ssh/config so far, you have to create the file and set the proper permissions on it like so:

mkdir -p $HOME/.ssh
chmod 0700 $HOME/.ssh
touch $HOME/.ssh/config
chmod 0600 $HOME/.ssh/config


Below are examples taken from my .ssh/config configuration for all subdomains of my pcfreak.org domain:

 

# Ask for password for every subdomain under pc-freak.net for security
Host *.pcfreak.org
user hipopo
passwordauthentication yes
StrictHostKeyChecking no

# ssh public Key authentication automatic login
Host www1.pc-freak.net
user hipopo
Port 22
passwordauthentication no
StrictHostKeyChecking no

UserKnownHostsFile /dev/null

Host haproxy2
    Hostname 213.91.190.233
    User root
    Port 2218
    PubkeyAuthentication yes
    IdentityFile ~/.ssh/haproxy2.pub    
    StrictHostKeyChecking no
    LogLevel INFO     

Host pcfrxenweb
    Hostname 83.228.93.76
    User root
    Port 2218

    PubkeyAuthentication yes
    IdentityFile ~/.ssh/pcfrxenweb.key    
    StrictHostKeyChecking no

Host pcfreak-sf
    Hostname 91.92.15.51
    User root
    Port 2209
    PreferredAuthentications password
    StrictHostKeyChecking no

    Compression yes


As you can see from the above configuration, the Hostname directive can refer either to an IP address or to a hostname.

Now to connect to defined IP 91.92.15.51 you can simply refer to its alias

$ ssh pcfreak-sf -v

and you end up on the machine's ssh on port 2209, where you will be prompted for a password.

$ ssh pcfrxenweb -v


would lead to IP 83.228.93.76, SSH on Port 2218, and will use the defined public key for a passwordless login, saving you the password typing each time.

The above ssh command is a short alias you can further use instead of typing every time:

$ ssh -i ~/.ssh/pcfrxenweb.key -p 2218 root@83.228.93.76
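
If the public key is not yet deployed on the remote machine, a quick sketch to generate and install it (the key file name matches the config above; adjust to your case):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/pcfrxenweb.key
$ ssh-copy-id -i ~/.ssh/pcfrxenweb.key.pub -p 2218 root@83.228.93.76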

There is another nifty trick worth mentioning: if you have a defined hostname, such as haproxy2 in the above config, set to use certain variables, but you would like to override some option, for example you don't want to connect by default with User root but some other local account, lets say ssh as devuser@haproxy2, you can type:

$ ssh -o "User=devuser" haproxy2

StrictHostKeyChecking no

– this variable will instruct ssh not to check if the fingerprint of the remote host has changed. Usually this fingerprint checksum changes if, for example, the opensshd gets updated or the default /etc/ssh/ssh_host_* key files have changed for some reason.
Of course you should use this option only if you tend to access your remote host via a secured VPN or a local network; otherwise a Host Key change could be an indicator that someone is trying to intercept your ssh session.

 

Compression yes

– this variable enables compression of the connection; it saved a few bits over the old modem telephone lines, and still could save you a few bits today.

It is also possible to define a full range of IP addresses to be accessed with one single public rsa / dsa key.

Below is a sample .ssh/config:
 

Host 192.168.2.*
     User admin
     IdentityFile ~/.ssh/id_ed25519


This would instruct each host attempted to be reached in the IP range of 192.168.2.1-254 to be automatically reachable by default with the ssh client, with the admin user and the respective ed25519 key (note that IdentityFile points at the private key), e.g.:

$ ssh 192.168.2.18 -v

 

2. Adding ssh client options system wide for all existing local or remote LDAP login users


The way to add any Host block is absolutely the same as for a single user, except you need to add the configuration to /etc/ssh/ssh_config. Here is the configuration from my latest Debian Linux:

$ cat /etc/ssh/ssh_config

# This is the ssh client system-wide configuration file.  See
# ssh_config(5) for more information.  This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options.  For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

Host *
#   ForwardAgent no
#   ForwardX11 no
#   ForwardX11Trusted yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no
#   GSSAPIKeyExchange no
#   GSSAPITrustDNS no
#   BatchMode no
#   CheckHostIP yes
#   AddressFamily any
#   ConnectTimeout 0
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   IdentityFile ~/.ssh/id_ecdsa
#   IdentityFile ~/.ssh/id_ed25519
#   Port 22
#   Protocol 2
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
#   MACs hmac-md5,hmac-sha1,umac-64@openssh.com
#   EscapeChar ~
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#   ProxyCommand ssh -q -W %h:%p gateway.example.com
#   RekeyLimit 1G 1h
    SendEnv LANG LC_*
    HashKnownHosts yes
    GSSAPIAuthentication yes

As you can see, pretty much everything can be enabled by default, such as forwarding of the Authentication agent (the -A option), necessary in some company server environments. So if you always have to connect to remote hosts with Agent Forwarding enabled, instead of typing each time:

ssh -A user@remotehostname

simply uncomment the ForwardAgent no line above and set it to yes.

Similarly, to enable X11 Forwarding by default instead of running:

ssh -X user@remotehostname

uncomment and set to yes:

ForwardX11 yes
ForwardX11Trusted yes

As you can see, ssh can do pretty much anything; you can configure and enable SSH Tunneling or run via a Proxy with the ProxyCommand (if it is the first time you hear about ProxyCommand, I warmly recommend you check my previous article – How to pass SSH traffic through a secured Corporate Proxy Server with corkscrew).

Sometimes for a defined hostname, due to changes in the remote server's ssh configuration, SSH encryption type, or a host key removal, you might end up with issues connecting. To override all the previously defined options inside .ssh/config, ignore the configuration with -F /dev/null:

$ ssh -F /dev/null user@freak -v


What did we learn?

To sum it up, in this article we have learned how to ease the stressed sysadmin's life by adding aliases with certain port numbers and configurations for different remotely SSH-administered Linux / Unix hosts via the local ~/.ssh/config or the system-wide /etc/ssh/ssh_config configuration options, as well as how the already applied configuration from ~/.ssh/config, affecting each user's ssh command execution, can be overridden.

How to check version of most used mail servers Postfix / Qmail / Exim / Sendmail

Wednesday, October 14th, 2020

How do you check the version of a Linux host's installed Mail server?

The most used mail servers are Postfix / Qmail / Exim / Sendmail, and usually you would run dpkg -l / rpm -qa or whatever the package manager offers to get the package version. But sometimes the package is built with a different naming convention from the actual installed MTA.

As recently I had to check on a Linux host which version of which SMTP server was installed and in use, below is how to find the concrete versions of Postfix / Qmail / Exim / Sendmail.
If none of the 4 is installed and something more cryptic like ssmtp is installed instead, perhaps the best way is to check with the lsof -i :25 command and see what process has bound to and listens on TCP port 25, as shown below.

mail-server-lsof-linux-screenshot-qmail-vpopmail
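
One concrete lsof invocation to see which daemon is bound on the SMTP port (as in the screenshot above):

# lsof -iTCP:25 -sTCP:LISTEN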

 

 

1. How to check Postfix exact mail server version

mail-server-exim-check-lsof-screenshot

Once you find that Postfix is the network-listening MTA, you might think you can simply use postfix -v, but no…
Unlike many other applications, Postfix has no -v or --version switch. But you can get the version information easily by using the postconf command as shown below:

root@server :~# postconf mail_version

postfix-show-version-postconf-linux

Another approach is to dump all postfix configuration settings (this is useful to get more info on how postfix is configured) and explicitly grep for the version:

root@server :~# postconf -d | grep mail_version

 

2. How to check Exim MTA running version ?

root@exim-mail :/ # exim -bV
Exim version 4.72 #1 built 13-Jul-2010 21:54:55
Copyright (c) University of Cambridge, 1995 – 2007
Berkeley DB: Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)
Support for: crypteq iconv() Perl OpenSSL move_frozen_messages Content_Scanning DKIM Old_Demime
Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz
Authenticators: cram_md5 plaintext spa
Routers: accept dnslookup ipliteral manualroute queryprogram redirect
Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
Size of off_t: 8
OpenSSL compile-time version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL runtime version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Configuration file is /etc/exim.conf

how-to-get-exim-version-on-gnu-linux-screenshot


3. How to check Sendmail Mail Transport Agent exact Mail version ?

Though sendmail is rarely used these days and usually runs mostly on obsolete old scrap hosts,
or in some old-fashioned conservative organizations such as Banks and Payment service providers, you might still need to inspect it. Just like its m4 configuration format's complexity with its annoying macros, getting the version is also not straightforward:

# sendmail -d0.4 -bv root | grep Version
Version 8.14.4

Above commands should be working on most Linux distributions such as Debian / Ubuntu / Fedora / CentOS / SuSE and other Linux derivatives
 

4. How to check Qmail MTA version?

This is a bit of a complicated question, as Qmail's base has not been significantly changed for years.
The latest published qmail package is qmail-1.03.tar.gz; 1.03 was released in 1998. Qmail is famous for its unbreakable security. The author of qmail, Daniel J. Bernstein, is famous for writing Qmail to make the installation and configuration of SMTP simple; at the time of writing, sendmail was the de facto standard, and sendmail was hard to configure.
Also sendmail was famous for a set of security holes that got a lot of Sendmail MTAs on the Net hacked. Thus qmail was written as a more security-aware mail transport agent.
In contrast to sendmail, qmail has a modular architecture composed of mutually untrusting components; for instance, the SMTP listener component of qmail runs with different credentials from the queue manager or the SMTP sender. qmail was also implemented with a security-aware replacement to the C standard library, and as a result has not been vulnerable to stack and heap overflows, format string attacks, or temporary file race conditions.

The core qmail package has not been updated for many years. New features were initially provided by third party patches, from which the most important at the time were brought together in a single meta-patch set called netqmail.

The current version of netqmail is 1.06 (netqmail-1.06.tar.gz) as of year 2020.

One possible way to get some info about the installed qmail or its components is to use the documentation lookup command apropos:

qmail:~# apropos qmail


or check the manual, or at worst check the installation source files that the person who installed qmail used 🙂

A fun fact about qmail few might know: D. Bernstein offered in 1997 a US$500 reward for the first person to publish a verifiable security hole in the latest version of the software. For many years, till 2005, no hole was found; then security researcher Georgi Guninski found an integer overflow in qmail. On 64-bit platforms, in default configurations with sufficient virtual memory, the delivery of huge amounts of data to certain qmail components may allow remote code execution. Bernstein disputes that this is a practical attack, arguing that no real-world deployment of qmail would be susceptible. Configuration of resource limits for qmail components mitigates the vulnerability.

On November 1, 2007, Bernstein raised the reward to US$1000. At a slide presentation the following day, Bernstein stated that there were 4 "known bugs" in the ten-year-old qmail-1.03, none of which were "security holes." He characterized the bug found by Guninski as a "potential overflow of an unchecked counter. Fortunately, counter growth was limited by memory and thus by configuration, but this was pure luck."

5. Quick way to check the type of Mail server installed on Debian based Linux that doesn't have telnet installed


As you know, a simple telnet localhost 25 or a simple ps -ef can reveal general information about the installed server most of the time. However, there is another way to do it using the package manager combined with the bash built-in type command, like so:

# type -p sendmail | xargs dpkg -S

type-x-bash-command-to-find-out-email-server-version-on-linux

Another hacky way to check whether an exim, postfix or sendmail SMTP is installed is with:

hipo@freak:~$ echo $(man sendmail)| grep "exim"|wc -l
1
hipo@freak:~$ echo $(man sendmail)| grep "postfix"|wc -l
0
hipo@freak:~$ echo $(man sendmail)| grep "sendmail"|wc -l
0

I guess there are more nice hacks and ways to get the versions, so if you're aware of any, please share with me.
Enjoy !

Apache Benchmarking

Monday, January 14th, 2008

There are a few tools out there which are most commonly used to do benchmarking and stress tests on webservers. One of them, "the most common one", is called "ab" or apache benchmark – Check it out here. Another very common tool is called flood – Check it here. Flood seems to be the newer and more accurate tool to use for stress testing; unfortunately it has one weakness: it only works with a configuration file, which is in xml format. So every time before you start it you have to generate a new xml file to suit your needs. Also, a tool recommended to me in #apache on the irc.freenode.net network is called "jmeter", it's located here. I personally didn't test it because it uses Java as a back end. While googling around I also stumbled on this interesting project, PHPSPEED; although I wasn't able to test it, it looks like a promising test suite.
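
For reference, a typical ab run looks something like this (the URL and the request / concurrency numbers are just example values) – it fires 1000 requests, 10 concurrently, and prints requests per second, latency percentiles and failed request counts:

ab -n 1000 -c 10 http://www.example.com/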

Improve Website Apache Webserver SEO without Website source code modifications with Google PageSpeed module on Debian, Ubuntu, CentOS, Fedora and SuSE Linux

Thursday, December 18th, 2014

Improve-website-apache-webserver-seo-without-website-source-code-modifications-with-Google-PageSpeed-Apache-module

For hosting companies and even personal websites, speed performance becomes an increasingly important factor that carries higher and higher weight in overall PageRank, and is one of the key things for successful Site Search Engine Optimization (positioning) in Search Engines for a website not specially crafted to be SEO friendly.

Virtually all Search Engines – Google / Yahoo / Bing etc. – give better pagerank to websites which load faster and have little or no downtime, for the reason that a faster loading time of website pages means better user experience and is an indicator that the website is well maintained.

Often websites deployed for the purpose of a business, or community CMS / Blog Website Open Source systems such as Joomla, Drupal and WordPress, are by default not made to provide fantastic speed right after deploy without installing custom plugins and doing website tuning, i.e.:

  • Content size optimization (gzipping)
  • More efficient way to deliver CSS / Javascript (minify JS / CSS files into single ones)
  • HTML optimization
  • Stripping (useful) page Comments
  • Adding <head> if missing on pages etc.

Therefore, as I said in many of my previous LAMP Optimization articles, page (opening) speed can make a really bad User / Client experience when the site grows too big or is badly optimized, giving degraded page load times (often the page takes 20 / 30 seconds to load!). Having pages lagging on big information sites or EShops both ruins the company's image on the market and quickly convinces the user to use another service from the thousands already available, and thus drives out (potential) customers.

As programming code maintenance and improvement is usually very costly, for companies that want to save money or can't afford it (because of the shrinking budgets dictated by the global economic crisis), the best thing to do is to ask your sysadmin to squeeze the best out of the WebService and Servers without major (Backend Code) infrastructural changes.

To speed up Apache and create proper page caching without installing external PHP caching modules on the server (such as Eaccelerator / PHP APC caching) and without
extra CMS modules
(such as, let's say, WordPress W3 Total Cache), there is a Google-developed Apache Webserver external module – PageSpeed.

Here is Google Pagespeed Module overview :
 

PageSpeed speeds up your site and reduces page load time. This open-source webserver module automatically applies web performance best practices to pages and associated assets (CSS, JavaScript, images) without requiring that you modify your existing content or workflow.


What does Apache Google PageSpeed actually do?
 

  • Automatic website and asset optimization
  • Latest web optimization techniques
  • 40+ configurable optimization filters
  • Free, open-source, and frequently updated
  • Deployed by individual sites, hosting providers, CDNs


1. Install PageSpeed on Debian / Ubuntu (deb derivatives) Linux

a) Download and install module 

On 64 bit deb based Linux:

cd /usr/local/src
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_amd64.deb 
dpkg -i mod-pagespeed-stable_current_amd64.deb
apt-get -f install


On 32 bit Linux:

cd /usr/local/src
wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.deb
dpkg -i mod-pagespeed-stable_current_i386.deb
apt-get -f install


b) Restart Apache
 

sudo /etc/init.d/apache2 restart

Important files and folders placed on server by deb installer are:

/usr/bin/pagespeed_js_minify – binary that does Javascript minification
/etc/apache2/mods-available/pagespeed.conf – Pagespeed config
/etc/apache2/mods-available/pagespeed.load – Load module directives in Apache
/etc/cron.daily/mod-pagespeed – mod_pagespeed cron script for checking and installing latest updates.
/var/cache/mod_pagespeed – Mod Pagespeed caching folder (useful to install memcached to increase caching performance even further)
/var/log/pagespeed – Directory to store pagespeed log files

 

2. Install PageSpeed on (RPM based CentOS, Fedora, RHEL / SuSE Linux)


RPM 64 bit package install:
 

rpm -Uvh https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_x86_64.rpm

 


32 bit pack version:
 

rpm -Uvh https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-stable_current_i386.rpm


Modify pagespeed mod config 

Restart Apache

sudo /etc/init.d/httpd restart


Important config files and folders created during RPM install are:

  • /etc/cron.daily/mod-pagespeed : mod_pagespeed cron script for checking and installing latest updates.
  • /etc/httpd/conf.d/pagespeed.conf : The main configuration file for Apache.
  • /usr/lib/httpd/modules/mod_pagespeed.so : mod_pagespeed module for Apache.
  • /var/www/mod_pagespeed/cache : File caching directory for web sites.
  • /var/www/mod_pagespeed/files : File generate prefix for web sites.

3. Configuring Google PageSpeed module

 

To configure PageSpeed you can either edit the bundled pagespeed.conf installed by the package (/etc/apache2/mods-available/pagespeed.conf, /etc/httpd/conf.d/pagespeed.conf), or insert configuration items inside the Apache VirtualHosts config files, or even, if you need flexibility and you don't have direct access to the Apache config files (on shared hosting servers where the module is available), through .htaccess.
Anyway, try to avoid adding pagespeed directives to .htaccess, as it will be too slow and inefficient.

Configuration is managed by setting different so-called "Rewrite Levels". The default behavior is to use the "CoreFilters" level, a set of filters (module behavior configs) which according to Google is safe for use. PageSpeed Filters are a set of actions applied to web-delivered files.

Default config setting is hence:
 

ModPagespeedRewriteLevel CoreFilters

Disabling default set of filters is done with:
 

ModPagespeedRewriteLevel PassThrough

The "CoreFilters" default filter set as of the time of writing this article:
 

add_head
combine_css
convert_jpeg_to_progressive
convert_meta_tags
extend_cache
flatten_css_imports
inline_css
inline_import_to_link
inline_javascript
rewrite_css
rewrite_images
rewrite_javascript
rewrite_style_attributes_with_url

Complete documentation on Configuring PageSpeed Filters is here.

If caching is turned on, default PageSpeed caching is configured in /var/cache/mod_pagespeed/
Enabling some of the non-CoreFilters that are sometimes useful for SEO (reduction of the served / returned page size):
 

ModPagespeedEnableFilters pedantic,remove_comments

By default pagespeed does some things (such as inline_css, inline_javascript and rewrite_images – optimizing and removing excess pixels). My little experience with pagespeed shows that in some cases this could break websites, so I found it useful in my case to disable some of the filters:

 

vim /etc/apache2/mods-available/pagespeed.conf

 

ModPagespeedDisableFilters rewrite_images,convert_jpeg_to_progressive,inline_css,inline_javascript

 

4. Testing if PageSpeed is enabled via pagespeed_admin

By default PageSpeed has an Admin interface which is only allowed to be accessed from the server's localhost (127.0.0.1). To get basic statistics either install a text browser like lynx / elinks, or add more access IPs in the pagespeed config / vhosts pagespeed.conf by including more Allow lines like below:

 

    <Location /pagespeed_admin>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1
        Allow from 192.168.1.1
        Allow from xxx.xxx.xxx.xxx

        #Allow from All
        SetHandler pagespeed_admin
    </Location>
    <Location /pagespeed_global_admin>
        Order allow,deny
        Allow from localhost
        Allow from 127.0.0.1

        Allow from 192.168.1.1
        Allow from xxx.xxx.xxx.xxx
        SetHandler pagespeed_global_admin
    </Location>

 

Once pagespeed_admin access is configured, open it with your favourite browser at:

http://127.0.0.1/pagespeed_admin
http://127.0.0.1/pagespeed_global_admin

improve-website-apache-webserver-seo-without-source-code-modifications-google-pagespeed_admin_panel

Another way to test whether it is enabled is by creating a php file with the good old <? phpinfo(); ?> – PHP stats enabled / disabled features code:

pagespeed-in-phpinfo-x-mod-pagespeed-output-screenshot-apache-webserver
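
Another quick check from the shell is to look for the module's response header, X-Mod-Pagespeed, which the Apache module adds to the pages it serves:

# curl -sI http://localhost/ | grep -i x-mod-pagespeed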

I also tested the pagespeed unstable release, but experienced some segmentation faults in both error.log and access.log, so finally decided to keep using the stable release.

PageSpeed is a great way to boost your server sites' performance, however it comes at a certain cost: expect your server CPU load to jump drastically (in my case it jumped more than twice). There are Linux servers where enabling the module could totally stone the server, so before implementing the module in a production system environment, always first test thoroughly with pagespeed loaded on a UAT (testing) environment with AB or Siege (Apache Benchmarking Tools).

Strained day

Saturday, March 31st, 2007

Yesterday the day was quite strained. We were preparing for a few weeks to host the new website of pozvanete.bg, created by our firm Design.BG, so yesterday at 9:40 our project manager called and said pozvanete.bg's DNS record is already changed to point to our server, but there is a problem: while http://www.pozvanete.bg opens normally, http://pozvanete.bg opens DBG's 404 error page. I remembered that this is due to a configuration of the server, cause there was some SEO stuff in the past on the server, so I was able to fix the problem quickly. The problems started to come after that. The machine where we hosted the site (and it was the only site there) was a 1.6ghz AMD with 1 giga of RAM. Unfortunately 30 minutes after it started to open from our server I observed the machine's cpu stays idle 0.0 all the time and the site responds very slowly to browser requests. I tried to tinker with it, changing things in the webserver configuration file, with no luck. I spoke with my boss and explained him the situation, so he decided we'll move the site to another machine ( 3.0 Ghz Intel ), and the next week we'll move the site again to a rack machine colocated in Sofia in Evo Link. It took a lot of conversations over the phone and talk with Vladi before we moved the site completely to the new machine; before that I had to recompile the machine's current httpd and php to match the requirements of the site, but Praise the Lord, everything went smoothly and we were able to move the site completely to the new location. I spoke with Pozvanete's administrator to change the DNS records to point to the new machine, and at 6:00 o'clock the site could be seen from the new server. In the meantime Bobb bought an IBM rack, quickly packed it and sent it to Sofia. Among all this a lot of colleagues from the office found me urgent work; I got a complaint about a problem with the mails of propertyinvestld: the guy claimed our webmail sent the .doc files as winmail.dat, which as I suspected was not true. But Praise the Lord everything went smoothly in the end.

At 8:00 o'clock we went out with Nomen and decided to go to Mino's coffee to see Sami, cause he has come back from Sofia. Mino's coffee was a lot fuller than usual, and it was very smoky; Tsetso spoke a lot about art and history as usual, I was bored as usual etc. etc. After that we had the idea to watch a film at Nomen's home, but my Aunt called and said if I have time it would be good to see my grandma, cause she is not feeling well (they did an eye surgery on her 3 days ago). I went to her home and stayed with her; it's awful, she is such a nice lady and she's suffering so much. She said how bad she felt that nobody went to the hospital to see her for 3 days ( First I was angry at my mother .. then I calmed down ). I realized all the world is in birth pains, as written in the Bible, so I prayed a lot to the Creator to have mercy over my grandma. Then I tried reading The Bible for some time, but I was too sleepy and I went to bed.