Posts Tagged ‘linux’

Install Zabbix Agent client on CentOS 9 Stream Linux, Disable Selinux and Firewalld on CentOS9 to make zabbix-agentd send data to server

Thursday, April 14th, 2022

https://pc-freak.net/images/zabbix_agent_active_passive-zabbix-agent-centos-9-install-howto

Installing Zabbix is usually trivial stuff: you either use the distribution's own packages if such are available, or you fetch the right zabbix-release repository RPM that configures the official Zabbix repo on the system, point the agent to the Zabbix Server or Proxy (if such is used) inside /etc/zabbix/zabbix_agentd.conf and start the client. I expected it to be just as simple and straightforward on a freshly installed CentOS 9 Linux, since placing a zabbix-agent for monitoring is a trivial task, however the install ended up with an error:

Key import failed (code 2). Failing package is: zabbix-agent-6.0.3-1.el8.x86_64

 

This is what I've done

1. Download and install zabbix-release-6.0-1.el8.noarch.rpm directly from zabbix

I've followed the official documentation from zabbix.com and ran:
 

[root@centos9 /root ]# rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/8/x86_64/zabbix-release-6.0-1.el8.noarch.rpm


2. Install the zabbix-agent RPM package from the repository

[root@centos9 rpm-gpg]# yum install zabbix-agent -y
Last metadata expiration check: 0:02:46 ago on Tue 12 Apr 2022 08:49:34 AM EDT.
Dependencies resolved.
=============================================
 Package                               Architecture                Version                              Repository                      Size
=============================================
Installing:
 zabbix-agent                          x86_64                      6.0.3-1.el8                          zabbix                         526 k
Installing dependencies:
 compat-openssl11                      x86_64                      1:1.1.1k-3.el9                       appstream                      1.5 M
 openldap-compat                       x86_64                      2.4.59-4.el9                         baseos                          14 k

Transaction Summary
==============================================
Install  3 Packages

Total size: 2.0 M
Installed size: 6.1 M
Downloading Packages:
[SKIPPED] openldap-compat-2.4.59-4.el9.x86_64.rpm: Already downloaded
[SKIPPED] compat-openssl11-1.1.1k-3.el9.x86_64.rpm: Already downloaded
[SKIPPED] zabbix-agent-6.0.3-1.el8.x86_64.rpm: Already downloaded
Zabbix Official Repository – x86_64                                                                          1.6 MB/s | 1.7 kB     00:00
Importing GPG key 0xA14FE591:
 Userid     : "Zabbix LLC <packager@zabbix.com>"
 Fingerprint: A184 8F53 52D0 22B9 471D 83D0 082A B56B A14F E591
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
Key import failed (code 2). Failing package is: zabbix-agent-6.0.3-1.el8.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED


3. Work-around to skip the GPG check and install zabbix-agent 6 on CentOS 9

With Linux everything becomes more and more of a hack …
The logical thing was to first check and assure that the supposedly missing RPM GPG key is in place:

[root@centos9 rpm-gpg]# ls -al  /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
-rw-r--r-- 1 root root 1719 Feb 11 16:29 /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591

Strangely the key was in place.

Hence to have the key loaded I've tried to import the gpg key manually with gpg command:

[root@centos9 rpm-gpg]# gpg --import /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591


And attempted to install zabbix-agent once again:
 

[root@centos9 rpm-gpg]# yum install zabbix-agent -y
Last metadata expiration check: 0:02:46 ago on Tue 12 Apr 2022 08:49:34 AM EDT.
Dependencies resolved.
==============================================
 Package                               Architecture                Version                              Repository                      Size
==============================================
Installing:
 zabbix-agent                          x86_64                      6.0.3-1.el8                          zabbix                         526 k
Installing dependencies:
 compat-openssl11                      x86_64                      1:1.1.1k-3.el9                       appstream                      1.5 M
 openldap-compat                       x86_64                      2.4.59-4.el9                         baseos                          14 k

Transaction Summary
==============================================
Install  3 Packages

Total size: 2.0 M
Installed size: 6.1 M
Downloading Packages:
[SKIPPED] openldap-compat-2.4.59-4.el9.x86_64.rpm: Already downloaded
[SKIPPED] compat-openssl11-1.1.1k-3.el9.x86_64.rpm: Already downloaded
[SKIPPED] zabbix-agent-6.0.3-1.el8.x86_64.rpm: Already downloaded
Zabbix Official Repository – x86_64                                                                          1.6 MB/s | 1.7 kB     00:00
Importing GPG key 0xA14FE591:
 Userid     : "Zabbix LLC <packager@zabbix.com>"
 Fingerprint: A184 8F53 52D0 22B9 471D 83D0 082A B56B A14F E591
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
Key import failed (code 2). Failing package is: zabbix-agent-6.0.3-1.el8.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'yum clean packages'.
Error: GPG check FAILED


Unfortunately that was not a go either, so totally pissed off I've decided to disable the gpgcheck for the packages completely, a very raw, bad and unrecommended work-around, just to eventually get the zabbix-agent installed. (In hindsight, the import most likely fails because the el8 repository key is signed with SHA-1, which CentOS 9's default crypto policy no longer accepts; note it is the el8 package being installed on a CentOS 9 host.)

Usually RPM GPG key check failures can be worked around in dnf with the --nogpgcheck option, so I've tried that one first, without success.

[root@centos9 rpm-gpg]# dnf update --nogpgcheck
Total                                                                                                        181 kB/s | 526 kB     00:02
Zabbix Official Repository – x86_64                                                                          1.6 MB/s | 1.7 kB     00:00
Importing GPG key 0xA14FE591:
 Userid     : "Zabbix LLC <packager@zabbix.com>"
 Fingerprint: A184 8F53 52D0 22B9 471D 83D0 082A B56B A14F E591
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
Is this ok [y/N]: y
Key import failed (code 2). Failing package is: zabbix-agent-6.0.3-1.el8.x86_64
 GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: GPG check FAILED

Further on I tried to use the --nogpgcheck option, which according to the dnf man page:

--nogpgcheck
Skip checking GPG signatures on packages (if RPM policy allows).

In yum the --nogpgcheck option, according to man yum, does exactly the same thing:


[root@centos9 rpm-gpg]# yum install zabbix-agent --nogpgcheck -y
 

Dependencies resolved.
===============================================
 Package                             Architecture                  Version                               Repository                     Size
===============================================
Installing:
 zabbix-agent                        x86_64                        6.0.3-1.el8                           zabbix                        526 k

Transaction Summary
===============================================

Total size: 526 k
Installed size: 2.3 M
Is this ok [y/N]: y
Downloading Packages:

Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                     1/1
  Running scriptlet: zabbix-agent-6.0.3-1.el8.x86_64                                                                                     1/2
  Reinstalling     : zabbix-agent-6.0.3-1.el8.x86_64                                                                                     1/2
  Running scriptlet: zabbix-agent-6.0.3-1.el8.x86_64                                                                                     1/2
  Running scriptlet: zabbix-agent-6.0.3-1.el8.x86_64                                                                                     2/2
  Cleanup          : zabbix-agent-6.0.3-1.el8.x86_64                                                                                     2/2
  Running scriptlet: zabbix-agent-6.0.3-1.el8.x86_64                                                                                     2/2
  Verifying        : zabbix-agent-6.0.3-1.el8.x86_64                                                                                     1/2
  Verifying        : zabbix-agent-6.0.3-1.el8.x86_64                                                                                     2/2

Installed:
  zabbix-agent-6.0.3-1.el8.x86_64

Complete!
[root@centos9 ~]#

Voila! zabbix-agentd on CentOS 9 Install succeeded!

Yes, I know disabling the GPG check is not really secure and is an ugly solution, but since I'm short of time at the moment and this is just an experimental install of zabbix-agent on CentOS,
plus we already trusted the Zabbix package repository anyways, I guess it doesn't matter much.
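
If you'd rather not remember to pass --nogpgcheck on every future update, a slightly less brutal variant is to disable the GPG check only for the Zabbix repository itself. A small sketch, assuming the zabbix-release package dropped its repo definition into /etc/yum.repos.d/zabbix.repo:

[root@centos9 ~]# sed -i 's/^gpgcheck=1/gpgcheck=0/' /etc/yum.repos.d/zabbix.repo

That keeps signature verification active for the base CentOS repositories and skips it just for the problematic Zabbix packages.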

4. Configure Zabbix-agent on the machine

Once you choose in which mode the zabbix-agent should send the data to the zabbix-server (e.g. Active or Passive), the minimum set of configuration you should
have in place should be something like mine:

[root@centos9 ~]# grep -v '\#' /etc/zabbix/zabbix_agentd.conf | sed /^$/d
PidFile=/var/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0
Server=192.168.1.70,127.0.0.1
ServerActive=192.168.1.70,127.0.0.1
Hostname=centos9
Include=/etc/zabbix/zabbix_agentd.d/*.conf
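
Before even starting the service you can sanity-check that the config parses and the agent resolves a built-in item, using the agent's single-item test mode:

[root@centos9 ~]# zabbix_agentd -c /etc/zabbix/zabbix_agentd.conf -t agent.ping

If all is well it should print the agent.ping key together with the value 1; a config typo will instead be reported right away.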

5. Start and Enable zabbix-agent client

To have it up and running

[root@centos9 ~]# systemctl start zabbix-agent
[root@centos9 ~]# systemctl enable zabbix-agent

6. Disable SELinux to prevent it interfering with zabbix-agentd

Another amazement was that even though I now had configured Active checks, a Server, and an otherwise correct configuration, the Zabbix Server could not reach the zabbix-agent for some weird reason.
I suspected it might be SELinux, checked it, and indeed on the freshly installed CentOS 9 Linux SELinux comes enabled by default.

After stopping it I made sure it was the culprit: SELinux will, for security reasons, block the client connectivity to the zabbix-server until you either allow a zabbix exception in SELinux or completely disable it.
 

[root@centos9 ~]# sestatus

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31

To temporarily change the mode from its default enforcing to permissive mode:

[root@centos9 ~]# setenforce 0

[root@centos9 ~]# sestatus

SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31


That would work for the current session, but won't take effect on next reboot, thus it is much better to disable SELinux permanently for the next boot:

[root@centos9 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing – SELinux security policy is enforced.
#     permissive – SELinux prints warnings instead of enforcing.
#     disabled – No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
#     targeted – Targeted processes are protected,
#     minimum – Modification of targeted policy. Only selected processes are protected. 
#     mls – Multi Level Security protection.
SELINUXTYPE=targeted

 

To disable SELinux change:

SELINUX=disabled

[root@centos9 ~]# grep -v \# /etc/selinux/config

SELINUX=disabled
SELINUXTYPE=targeted


To make the OS boot with SELinux disabled and verify it is disabled, you will have to reboot:

[root@centos9 ~]# reboot


Check its status again, it should be:

[root@centos9 ~]# sestatus
SELinux status:                 disabled
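
If you'd rather keep SELinux enforcing than disable it completely, the cleaner alternative is to generate a local policy module from the logged denials. This is a rough sketch, assuming the agent's denials are recorded in /var/log/audit/audit.log and the audit2allow tool (from the policycoreutils-python-utils package) is installed:

[root@centos9 ~]# grep zabbix /var/log/audit/audit.log | audit2allow -M zabbix_agent_local
[root@centos9 ~]# semodule -i zabbix_agent_local.pp

This builds and loads a module permitting exactly the denied operations; review the generated zabbix_agent_local.te file before loading it on anything serious.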


7. Enable zabbix-agent through firewall or disable firewalld service completely

By default CentOS 9 also has firewalld enabled, so you either have to allow zabbix to communicate to the remote server host through it, or disable firewalld altogether.

To enable access from and to zabbix-agentd in both Active / Passive modes:

#firewall settings:
[root@centos9 rpm-gpg]# firewall-cmd --permanent --add-port=10050/tcp
[root@centos9 rpm-gpg]# firewall-cmd --permanent --add-port=10051/tcp
[root@centos9 rpm-gpg]# firewall-cmd --reload
[root@centos9 rpm-gpg]# systemctl restart firewalld
[root@centos9 rpm-gpg]# systemctl restart zabbix-agent
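
To double-check that the rules landed as expected, you can list the ports firewalld currently permits:

[root@centos9 rpm-gpg]# firewall-cmd --list-ports
10050/tcp 10051/tcp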


If the machine is in a local DMZ-ed network with tightly configured firewall router in front of it, you could completely disable firewalld.

[root@centos9 rpm-gpg]# systemctl stop firewalld
[root@centos9 rpm-gpg]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

 

Next, login to the Zabbix Server web interface as administrator and from Configuration -> Hosts -> Create host add the centos9 hostname and assign it a template of choice. The data from the added machine should appear shortly after another zabbix-agent restart:

[root@centos9 rpm-gpg]#  systemctl restart zabbix-agent


8. Tracking other oddities with the zabbix-agent through log

If zabbix anyways still has issues connecting to the remote node, increase the debug level in the config:
 

[root@centos9 rpm-gpg]# vim /etc/zabbix/zabbix_agentd.conf
DebugLevel=5

### Option: DebugLevel
#       Specifies debug level:
#       0 – basic information about starting and stopping of Zabbix processes
#       1 – critical information
#       2 – error information
#       3 – warnings
#       4 – for debugging (produces lots of information)
#       5 – extended debugging (produces even more information)
#
# Mandatory: no
# Range: 0-5
# Default:
# DebugLevel=3

[root@centos9 rpm-gpg]# systemctl restart zabbix-agent

Keep in mind that this debug level is very verbose, so once the machine is seen in zabbix, don't forget to comment the line out again and restart the agent to turn it off.

9. Testing zabbix-agent: How to send data to a specific item key

Usually when writing userparameter scripts, the data collected from the scripts is sent to the zabbix server via Item keys.
Thus, once zabbix-agent is configured on the machine, one way to check that the zabbix-agent -> zabbix server data sending works fine is to manually push some data to a key.

In this case we will use something like ApplicationSupport-Item as an item.
 

[root@centos9 rpm-gpg]# /usr/bin/zabbix_sender -c "/etc/zabbix/zabbix_agentd.conf" -k "ApplicationSupport-Item" -o "here is the message"

Assuming you have created the newly prepared zabbix-agent host in the Zabbix Server, you should shortly be able to see the data come in under Latest data.

Create Linux High Availability Load Balancer Cluster with Keepalived and Haproxy on Linux

Tuesday, March 15th, 2022

keepalived-logo-linux

Configuring Linux HA (High Availability) for an application with Haproxy is already in use across many websites on the Internet, and serious corporations with crucial infrastructure have long since adopted keepalived to provide High Availability application level clustering.
Usually companies choose to build HA Clusters with Haproxy together with the Pacemaker and Corosync cluster tools.
However, a commonly used alternative exists if you don't have the opportunity to bring up a High Availability cluster with Pacemaker / Corosync / pcs (Pacemaker Configuration System), because the machines you need to configure the cluster on are not physical but VMWare Virtual Machines, on which you cannot configure the separate Admin LANs and Heartbeat LAN as we usually do on a Pacemaker Cluster, due to the fact that the 5 Ethernet LAN Card Interfaces of the VMWare Hypervisor hosts are configured as a BOND (e.g. all the incoming traffic to the VMWare vSphere HV is received on one Virtual Bond interface).

I assume you have 2 separate vSphere Hypervisor Physical Machines in separate Racks and separate switches hosting the two VMs.
For the article, I'll call the two Virtual Machines, brought up brand new with some installation automation software such as Terraform or Ansible, vm-server1 and vm-server2; both would have some recent version of Linux configured.

In that scenario, to have High Availability for the VMs on the application level and to assure that at least one of the two is available at any time if the other gets broken, due to a malfunction of the HV, a network connectivity issue, or because the VM OS has crashed,
one relatively easy solution is to use keepalived and configure a single High Availability Virtual IP (VIP) Address, i.e. 10.10.10.1, which would float between the two VMs using keepalived, so that at any time at least one of the two VMs is reachable on the network.

haproxy_keepalived-vip-ip-diagram-linux

Having a VIP IP is quite a common solution in the corporate world, as it makes it pretty easy to add an F5 Load Balancer in front of the keepalived cluster setup, to get 3 levels of security isolation, which usually consist of:

1. Physical (access to the hardware or Virtualization hosts)
2. System Access (the mechanism to access the system: login credentials, users / passes, proxies, entry servers leading to the DMZ-ed network)
3. Application Level (access to different programs behind L2, and to data, based on the specific identity of the individual user:
special secondary UserID, factor authentication, biometrics etc.)

 

1. Install keepalived and haproxy on machines

Depending on the type of Linux OS:

On both machines
 

[root@server1:~]# yum install -y keepalived haproxy

If you have to install keepalived / haproxy on Debian / Ubuntu and other Deb based Linux distros

[root@server1:~]# apt install keepalived haproxy --yes

2. Configure haproxy (haproxy.cfg) on both server1 and server2

 

Create some /etc/haproxy/haproxy.cfg configuration

 

[root@server1:~]# vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log          127.0.0.1 local6 debug
    chroot       /var/lib/haproxy
    pidfile      /run/haproxy.pid
    stats socket /var/lib/haproxy/haproxy.sock mode 0600 level admin 
    maxconn      4000
    user         haproxy
    group        haproxy
    daemon
    #debug
    #quiet

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        tcp
    log         global
#    option      dontlognull
#    option      httpclose
#    option      httplog
#    option      forwardfor
    option      redispatch
    option      log-health-checks
    timeout connect 10000 # default 10 second time out if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

listen FRONTEND_APPNAME1
        bind 10.10.10.1:15000
        mode tcp
        option tcplog
#        #log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server app-server1 10.10.10.55:30000 weight 1 check port 6888
        server app-server2 10.10.10.55:30000 weight 2 check port 6888

listen FRONTEND_APPNAME2
        bind 10.10.10.1:15001 # must differ from FRONTEND_APPNAME1's bind port
        mode tcp
        option tcplog
        #log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server app-server1 10.10.10.55:30000 weight 5
        server app-server2 10.10.10.55:30000 weight 5 

 

You can get a copy of above haproxy.cfg configuration here.
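
Before rolling the config on, it is worth letting haproxy validate the file itself; the -c flag only checks the configuration and exits without starting anything:

[root@server1:~]# haproxy -c -f /etc/haproxy/haproxy.cfg

If the file parses cleanly haproxy reports the configuration as valid, otherwise it prints the offending line numbers.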
Once configured roll it on.

[root@server1:~]#  systemctl start haproxy
 
[root@server1:~]# ps -ef|grep -i hapro
root      285047       1  0 Mar07 ?        00:00:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy   285050  285047  0 Mar07 ?        00:00:26 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

Bring up haproxy also on the server2 machine, by placing the same configuration there and starting up the proxy:
 

[root@server2:~]# vim /etc/haproxy/haproxy.cfg


 

3. Configure keepalived on both servers

We'll be configuring 2 nodes with keepalived, even though if necessary this can be easily extended with more nodes.
First we make a copy of the original or existing keepalived.conf server configuration (just in case we need it later on, or in case something else was already configured manually by someone, as could be the case on servers inherited from another sysadmin):
 

[root@server1:~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
[root@server2:~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig

a. Configure keepalived to serve as a MASTER Node

 

[root@server1:~]# vim /etc/keepalived/keepalived.conf

# Master node keepalived.conf
global_defs {
  router_id server1-fqdn # The hostname of this host.
  
  enable_script_security
  # Synchro of the state of the connections between the LBs on the eth0 interface
   lvs_sync_daemon eth0
 
notification_email {
        linuxadmin@notify-domain.com     # Email address for notifications 
    }
 notification_email_from keepalived@server1-fqdn        # The from address for the notifications
    smtp_server 127.0.0.1                       # SMTP server address
    smtp_connect_timeout 15
}

vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
  user root
}

vrrp_instance LB_VIP_QA {
  virtual_router_id 50
  advert_int 1
  priority 51

  state MASTER
  interface eth0
  smtp_alert          # Enable Notifications Via Email
  
  authentication {
              auth_type PASS
              auth_pass testp141

    }
### Commented because running on VM on VMWare
##    unicast_src_ip 10.44.192.134 # Private IP address of master
##    unicast_peer {
##        10.44.192.135           # Private IP address of the backup haproxy
##   }

#        }
# master node with higher priority preferred node for Virtual IP if both keepalived up
###  priority 51
###  state MASTER
###  interface eth0
  virtual_ipaddress {
     10.10.10.1 dev eth0 # The virtual IP address that will be shared between MASTER and BACKUP
  }
  track_script {
      haproxy
  }
}

 

To download a copy of the Master keepalived.conf configuration click here

Below are a few interesting configuration variables worth mentioning a few words on; most of them are obvious by their names, but for more clarity here is a list with a short description of each:

 

  • vrrp_instance – defines an individual instance of the VRRP protocol running on an interface.
  • state – defines the initial state that the instance should start in (i.e. MASTER / BACKUP).
  • interface – defines the interface that VRRP runs on.
  • virtual_router_id – should be the same value on the nodes of one VRRP instance and unique per instance on the network segment (otherwise master / slave negotiation won't function properly)
  • priority – the advertised priority, the higher the priority the more important the respective configured keepalived node is.
  • advert_int – specifies the frequency that advertisements are sent at (1 second, in this case).
  • authentication – specifies the information necessary for servers participating in VRRP to authenticate with each other. In this case, a simple password is defined.
    An important thing to note is that only the first eight (8) characters will be used, as described in man keepalived.conf (the keepalived.conf variables documentation). !!! Nota Bene !!! The password set on each node should match for the nodes to be able to authenticate!
  • virtual_ipaddress – defines the IP addresses (there can be multiple) that VRRP is responsible for.
  • notification_email – the notification email to which alerts will be sent in case keepalived on one node is stopped (e.g. the MASTER role switches from host 1 to 2)
  • notification_email_from – the sender email address from which the emails will originate
    ! NB ! In order for notification_email to work you need to have a configured MTA or Mail Relay (the local MTA relaying to another SMTP), e.g. something like Postfix or Qmail
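
A word on the vrrp_script health check used in the config above: killall -0 haproxy sends signal 0, which delivers nothing to the process but succeeds only if a haproxy process exists, so its exit code doubles as a cheap process-alive check. You can verify the behaviour by hand (on a test box; start haproxy again afterwards):

[root@server1:~]# killall -0 haproxy; echo $?
0
[root@server1:~]# systemctl stop haproxy; killall -0 haproxy; echo $?
haproxy: no process found
1
[root@server1:~]# systemctl start haproxy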

b. Configure keepalived to serve as a SLAVE Node

[root@server2:~]# vim /etc/keepalived/keepalived.conf
 

#Slave keepalived
global_defs {
  router_id server2-fqdn # The hostname of this host!

  enable_script_security
  # Synchro of the state of the connections between the LBs on the eth0 interface
  lvs_sync_daemon eth0
 
notification_email {
        linuxadmin@notify-host.com     # Email address for notifications
    }
 notification_email_from keepalived@server2-fqdn        # The from address for the notifications
    smtp_server 127.0.0.1                       # SMTP server address
    smtp_connect_timeout 15
}

vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
  user root
}

vrrp_instance LB_VIP_QA {
  virtual_router_id 50
  advert_int 1
  priority 50

  state BACKUP
  interface eth0
  smtp_alert          # Enable Notifications Via Email

authentication {
              auth_type PASS
              auth_pass testp141
}
### Commented because running on VM on VMWare    
##    unicast_src_ip 10.10.192.135 # Private IP address of master
##    unicast_peer {
##        10.10.192.134         # Private IP address of the backup haproxy
##   }

###  priority 50
###  state BACKUP
###  interface eth0
  virtual_ipaddress {
     10.10.10.1 dev eth0 # The virtual IP address that will be shared between MASTER and BACKUP.
  }
  track_script {
    haproxy
  }
}

 

Download the keepalived.conf slave config here

 

c. Set required sysctl parameters for haproxy to work as expected
 

[root@server1:~]# vim /etc/sysctl.conf
#Haproxy config
# haproxy
net.core.somaxconn=65535
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
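
The new values won't be live until the file is re-read, so apply them on both machines without a reboot:

[root@server1:~]# sysctl -p

Of these, net.ipv4.ip_nonlocal_bind = 1 is the important one for this setup, as it permits haproxy to bind to the VIP address even on the node that does not currently hold it.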

4. Test Keepalived keepalived.conf configuration syntax is OK

 

[root@server1:~]# keepalived --config-test
(/etc/keepalived/keepalived.conf: Line 7) Unknown keyword 'lvs_sync_daemon_interface'
(/etc/keepalived/keepalived.conf: Line 21) Unable to set default user for vrrp script haproxy – removing
(/etc/keepalived/keepalived.conf: Line 31) (LB_VIP_QA) Specifying lvs_sync_daemon_interface against a vrrp is deprecated.
(/etc/keepalived/keepalived.conf: Line 31)              Please use global lvs_sync_daemon
(/etc/keepalived/keepalived.conf: Line 35) Truncating auth_pass to 8 characters
(/etc/keepalived/keepalived.conf: Line 50) (LB_VIP_QA) track script haproxy not found, ignoring…

I've experienced these errors because the first time I configured keepalived I did not specify the user with which the vrrp script haproxy should run.
In prior versions of keepalived, leaving that field empty automatically assumed the script runs as root; as of RHEL's keepalived-2.1.5-6.el8.x86_64 (the version I've been using) this is no longer so, and thus in the configuration above, as you can see, I've explicitly set the user in the respective section to root.
The error Unknown keyword 'lvs_sync_daemon_interface' is also easily fixable, by just substituting lvs_sync_daemon_interface with lvs_sync_daemon and reloading keepalived.
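
If the deprecated keyword is still present somewhere in your configs, a quick in-place substitution on both nodes does the job; a small sketch (the -i.bak keeps a backup copy; re-run the config test afterwards):

[root@server1:~]# sed -i.bak 's/lvs_sync_daemon_interface/lvs_sync_daemon/' /etc/keepalived/keepalived.conf
[root@server1:~]# keepalived --config-test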

Once keepalived is started, you should see the process running on both machines in the process list:

[root@server1:~]# ps -ef |grep -i keepalived
root     1190884       1  0 18:50 ?        00:00:00 /usr/sbin/keepalived -D
root     1190885 1190884  0 18:50 ?        00:00:00 /usr/sbin/keepalived -D

Next step is to check the keepalived statuses as well as /var/log/keepalived.log

If everything is configured as expected on both keepalived nodes, you should see that one node is master and the other slave, either in the status output or in the log:

[root@server1:~]#systemctl restart keepalived

 

[root@server1:~]# systemctl status keepalived|grep -i state
Mar 14 18:59:02 server1-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) Entering MASTER STATE

[root@server1:~]# systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2022-03-14 18:15:51 CET; 32min ago
  Process: 1187587 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1187589 (code=exited, status=0/SUCCESS)

Mar 14 18:15:04 server1lb-fqdn Keepalived_vrrp[1187590]: Sending gratuitous ARP on eth0 for 10.44.192.142
Mar 14 18:15:50 server1lb-fqdn systemd[1]: Stopping LVS and VRRP High Availability Monitor…
Mar 14 18:15:50 server1lb-fqdn Keepalived[1187589]: Stopping
Mar 14 18:15:50 server1lb-fqdn Keepalived_vrrp[1187590]: (LB_VIP_QA) sent 0 priority
Mar 14 18:15:50 server1lb-fqdn Keepalived_vrrp[1187590]: (LB_VIP_QA) removing VIPs.
Mar 14 18:15:51 server1lb-fqdn Keepalived_vrrp[1187590]: Stopped – used 0.002007 user time, 0.016303 system time
Mar 14 18:15:51 server1lb-fqdn Keepalived[1187589]: CPU usage (self/children) user: 0.000000/0.038715 system: 0.001061/0.166434
Mar 14 18:15:51 server1lb-fqdn Keepalived[1187589]: Stopped Keepalived v2.1.5 (07/13,2020)
Mar 14 18:15:51 server1lb-fqdn systemd[1]: keepalived.service: Succeeded.
Mar 14 18:15:51 server1lb-fqdn systemd[1]: Stopped LVS and VRRP High Availability Monitor

[root@server2:~]# systemctl status keepalived|grep -i state
Mar 14 18:59:02 server2-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Entering BACKUP STATE

[root@server1:~]# grep -i state /var/log/keepalived.log
Mar 14 18:59:02 server1lb-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Entering MASTER STATE
 

a. Fix Keepalived SECURITY VIOLATION – scripts are being executed but script_security not enabled.
 

When configuring keepalived for the first time, we faced the following strange error in the keepalived status and inside keepalived.log:
 

Feb 23 14:28:41 server1 Keepalived_vrrp[945478]: SECURITY VIOLATION – scripts are being executed but script_security not enabled.

 

To fix the keepalived SECURITY VIOLATION error:

Add inside the global_defs {} section of /etc/keepalived/keepalived.conf on both keepalived node hosts the line:

enable_script_security

right after the chunk:

# Synchro of the state of the connections between the LBs on the eth0 interface
  lvs_sync_daemon eth0

 

5. Prepare rsyslog configuration and include additional keepalived options
to force keepalived to log into /var/log/keepalived.log

To force keepalived to log into /var/log/keepalived.log on RHEL 8 / CentOS and other Redhat Package Manager (RPM) Linux distributions:

[root@server1:~]# vim /etc/rsyslog.d/48_keepalived.conf

#2022/02/02: keepalived logs to local7 (see the -S 7 option below), save the messages
local7.*                                                /var/log/keepalived.log
if ($programname == 'Keepalived') then -/var/log/keepalived.log
if ($programname == 'Keepalived_vrrp') then -/var/log/keepalived.log
& stop

[root@server:~]# touch /var/log/keepalived.log

Reload rsyslog to load new config
 

[root@server:~]# systemctl restart rsyslog
[root@server:~]# systemctl status rsyslog

 

rsyslog.service – System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/rsyslog.service.d
           └─rsyslog-service.conf
   Active: active (running) since Mon 2022-03-07 13:34:38 CET; 1 weeks 0 days ago
     Docs: man:rsyslogd(8)

           https://www.rsyslog.com/doc/
 Main PID: 269574 (rsyslogd)
    Tasks: 6 (limit: 100914)
   Memory: 5.1M
   CGroup: /system.slice/rsyslog.service
           └─269574 /usr/sbin/rsyslogd -n

Mar 15 08:15:16 server1lb-fqdn rsyslogd[269574]: — MARK —
Mar 15 08:35:16 server1lb-fqdn rsyslogd[269574]: — MARK —
Mar 15 08:55:16 server1lb-fqdn rsyslogd[269574]: — MARK —

 

If keepalived is loaded but you still see no log written to /var/log/keepalived.log, instruct the daemon to log to syslog facility local7 (the -S 7 option), matching the rsyslog rule above:

[root@server1:~]# vim /etc/sysconfig/keepalived
 KEEPALIVED_OPTIONS="-D -S 7"

[root@server2:~]# vim /etc/sysconfig/keepalived
 KEEPALIVED_OPTIONS="-D -S 7"

[root@server1:~]# systemctl restart keepalived.service
[root@server1:~]#  systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-02-24 12:12:20 CET; 2 weeks 4 days ago
 Main PID: 1030501 (keepalived)
    Tasks: 2 (limit: 100914)
   Memory: 1.8M
   CGroup: /system.slice/keepalived.service
           ├─1030501 /usr/sbin/keepalived -D
           └─1030502 /usr/sbin/keepalived -D

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

[root@server2:~]# systemctl restart keepalived.service
[root@server2:~]# systemctl status keepalived

6. Monitoring VRRP traffic of the two keepaliveds with tcpdump
 

Once both keepaliveds are up and running, a good thing is to check that the VRRP protocol traffic flows fluently between both machines.
Keepalived communicates using VRRP, which is IP protocol number 112 (not a TCP port), thus you can simply snoop the traffic by protocol number:
 

[root@server1:~]# tcpdump proto 112

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:08:07.356187 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:08.356297 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:09.356408 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:10.356511 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:11.356655 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20

[root@server2:~]# tcpdump proto 112

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:08:07.356187 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:08.356297 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:09.356408 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:10.356511 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:11.356655 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20

As you can see, the VRRP traffic on the network originates only from server1lb-fqdn; this is because host server1lb-fqdn is the configured keepalived master node.

As the verbose tcpdump output below shows, the configured authentication password travels in plain text, so it is possible to sniff and spoof it. Thus, if you're bringing up a keepalived service cluster, make sure your security is tight; at best the machines should be in a special local LAN / DMZ, and do not run the VRRP traffic over the internet !!! 🙂 Or, if you eventually decide to configure keepalived between remote hosts, make sure you somehow use encrypted VPN or SSH tunnels to tunnel the VRRP traffic.

[root@server1:~]# tcpdump proto 112 -vv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:36:25.530772 IP (tos 0xc0, ttl 255, id 59838, offset 0, flags [none], proto VRRP (112), length 40)
    server1lb-fqdn > vrrp.mcast.net: vrrp server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20, addrs: VIPIP_QA auth "testp431"
11:36:26.530874 IP (tos 0xc0, ttl 255, id 59839, offset 0, flags [none], proto VRRP (112), length 40)
    server1lb-fqdn > vrrp.mcast.net: vrrp server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20, addrs: VIPIP_QA auth "testp431"

Let's also check which floating IP is configured on the machines:

[root@server1:~]# ip -brief address show
lo               UNKNOWN        127.0.0.1/8 
eth0             UP             10.10.10.5/26 10.10.10.1/32 

The 10.10.10.5 IP is the main IP set on LAN interface eth0; 10.10.10.1 is the floating IP, which as you can see is currently brought up by keepalived on the first node.

[root@server2:~]# ip -brief address show |grep -i 10.10.10.1

An empty output is returned, as the floating IP is currently configured on server1.

To doubly assure ourselves the IP is assigned on the correct machine, let's ping it and check which machine the MAC behind the IP currently belongs to.
 

[root@server2:~]# ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.526 ms
^C
— 10.10.10.1 ping statistics —
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms

[root@server2:~]# arp -an |grep -i 10.10.10.1
? (10.10.10.1) at 00:48:54:91:83:7d [ether] on eth0
[root@server2:~]# ip a s|grep -i 00:48:54:91:83:7d
[root@server2:~]# 

As you can see from the empty output, the MAC is not among the configured IPs on server2.
 

[root@server1-fqdn:~]# /sbin/ip a s|grep -i 00:48:54:91:83:7d -B1 -A1
 eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:48:54:91:83:7d brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/26 brd 10.10.1.191 scope global noprefixroute eth0

Pretty much as expected, the MAC is on keepalived node server1.

 

7. Testing that the keepalived VIP floating IP really works on the server1 and server2 machines
 

To test the overall configuration just created, you should stop keepalived on the Master node and in the meantime keep an eye on the Slave node (server2), to see whether it figures out that the Master node is gone and switches its state from BACKUP to MASTER. With the secondary (Slave) keepalived changing to master, the floating IP 10.10.10.1 will be brought up by keepalived on server2.

Let's assume that something went wrong with the server1 VM host, for example the machine crashed due to service overload, DDoS or simply a kernel bug or whatever reason.
To simulate that we simply have to stop keepalived; the advertisements broadcast over VRRP (IP protocol 112) will no longer be sent, and keepalived on node server2, once unable to hear server1, should change itself to state MASTER.

[root@server1:~]# systemctl stop keepalived
[root@server1:~]# systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Tue 2022-03-15 12:11:33 CET; 3s ago
  Process: 1192001 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1192002 (code=exited, status=0/SUCCESS)

Mar 14 18:59:07 server1lb-fqdn Keepalived_vrrp[1192003]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:32 server1lb-fqdn systemd[1]: Stopping LVS and VRRP High Availability Monitor…
Mar 15 12:11:32 server1lb-fqdn Keepalived[1192002]: Stopping
Mar 15 12:11:32 server1lb-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) sent 0 priority
Mar 15 12:11:32 server1lb-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) removing VIPs.
Mar 15 12:11:33 server1lb-fqdn Keepalived_vrrp[1192003]: Stopped – used 2.145252 user time, 15.513454 system time
Mar 15 12:11:33 server1lb-fqdn Keepalived[1192002]: CPU usage (self/children) user: 0.000000/44.555362 system: 0.001151/170.118126
Mar 15 12:11:33 server1lb-fqdn Keepalived[1192002]: Stopped Keepalived v2.1.5 (07/13,2020)
Mar 15 12:11:33 server1lb-fqdn systemd[1]: keepalived.service: Succeeded.
Mar 15 12:11:33 server1lb-fqdn systemd[1]: Stopped LVS and VRRP High Availability Monitor.

 

With keepalived off, you will also get a notification Email to the recipient Email configured in keepalived.conf, sent from the still working keepalived node, with a simple message like:

=> VRRP Instance is no longer owning VRRP VIPs <=

Once keepalived is back up you will get another notification like:

=> VRRP Instance is now owning VRRP VIPs <=

[root@server2:~]# systemctl status keepalived
● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-14 18:13:52 CET; 17h ago
  Process: 297366 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 297367 (keepalived)
    Tasks: 2 (limit: 100914)
   Memory: 2.1M
   CGroup: /system.slice/keepalived.service
           ├─297367 /usr/sbin/keepalived -D -S 7
           └─297368 /usr/sbin/keepalived -D -S 7

Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Remote SMTP server [127.0.0.1]:25 connected.
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: SMTP alert successfully sent.
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Sending/queueing gratuitous ARPs on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1

[root@server2:~]#  ip addr show|grep -i 10.10.10.1
    inet 10.10.10.1/32 scope global eth0
    

As you see the VIP is now set on server2, just like expected – that's OK, everything works as expected. If the IP did not move double check the keepalived.conf on both nodes for errors or misconfigurations.

To recover the initial order of things, so server1 is the MASTER and server2 the SLAVE host, we just have to switch keepalived back on on the server1 machine.

[root@server1:~]# systemctl start keepalived

The automatic change of server1 back to MASTER node, and the respective move of the VIP IP, happens because of the higher priority (importance) we previously configured for server1 in keepalived.conf.
 

What we learned?
 

So what did we learn in this article?
We have seen how to easily install and configure a High Availability Load balancer with Keepalived, with a single floating VIP IP address, 1 MASTER and 1 SLAVE host, and a Haproxy example config with a few frontends / App backends. We have seen how the config can be tested for potential errors, how we can monitor whether the VRRPv2 network traffic flows between the nodes, and how to potentially debug it further if necessary.
Further on, I roughly explained some of the keepalived configuration options, but as keepalived can do pretty much more, anyone seriously willing to deal with keepalived on a daily basis, or just to fine-tune some already existing setups, had better read closely its manual page "man keepalived.conf" as well as the official Redhat Linux documentation page on setting up a Linux cluster with Keepalived (be prepared for a small nightmare, as that documentation seems to be a bit chaotic, I would even say partly missing or opening questions on what the developers really meant; not strange considering the havoc that is pretty much everywhere these days).

Finally, once the keepalived hosts were prepared, it was shown how to test that the keepalived application cluster and the Floating IP do move between the nodes in case one of the 2 keepalived nodes is inaccessible.

The same logic can be repeated multiple times, and if necessary you can set multiple VIPs to expand the HA reachable IPs solution.

high-availability-with-two-vips-example-diagram

The presented idea uses haproxy as a forwarding Proxy server to proxy requests towards the Application backends (service machines); however, if you need to put another set of servers in the flow to process HTML / XHTML / PHP / Perl / Python programming code with some common Webserver setup (Nginx / Apache / Tomcat / JBOSS) and enable an SSL Secure certificate with, let's say, Letsencrypt, this can be done relatively easily. If you want to implement letsencrypt and a webserver, check this redundant SSL Load Balancing with haproxy & keepalived article.

That's all folks, hope you enjoyed.
If you need to configure keepalived Cluster or a consultancy write your query here 🙂

Configure own Media streaming minidlna Linux server to access data from your Smart TV

Friday, February 18th, 2022

dlna-media-minidlna-server-linux-logo

If you happen to buy, already own, or just have to install a Smart TV connected over a LAN network to a Linux based custom built NAS (Network Attached Storage) server, you might benefit from the Smart TV to share and watch the disk storage Pictures, Music and Video files from the NAS on the Smart TV using the Media Server protocol.

You have certainly already come across Media Servers in many locations in stores and Mall buildings, because virtually any recurring advertisements, movies projected on the TVs, kids' entertainment, or floor / building room location schedules and timeline promotion schedules have been streamed using the Media Server protocol for many years now. Thus having a brief idea about the Media Server protocol's existence is fundamental stuff for sysadmins and programmers to be aware of.

Shortly about DLNA UPnP Media Streaming Protocol

Assuming that your Smart TV has already been connected to your Wireless Router's 2.4GHz or 5GHz WiFi, one would think that the easiest way to share the files with the Smart TV is via something like a simple Samba Linux server via the smb:// / cifs:// protocols, or via the good old NFS server. However, most Samsung Smart TVs, and many others in year 2022, do not have embedded support for the Samba SMB / CIFS protocol, but instead support DLNA (Digital Living Network Alliance) streaming. DLNA is part of the UPnP (Universal Plug and Play) protocols; UPnP is also known to those familiar with the Windows Operating Systems realm simply as UPnP AV Media server or Windows Media server.
Windows Media server, for those who have never heard of it or used it, allows you to build Playlists with media files (Video and Audio data files), which can then later be played remotely via the Local LAN or even long distance over a TCP / IP connected remote Internet network.
 

1. Set up and Stream data via Media server on  Windows PC / notebook with integrated Windows Media server 

Windows Media server on Windows 7, 10 and 11 is relatively easy to configure via:

Network and Sharing Center -> Media Streaming Options -> Turn on Media Streaming 


Then you have to define the name of the Media Library, configure whether the Media server should show up on the Local Network for other connected devices, and Allow or Block access from the other devices present on the network.


 2. Using a more advanced Media Server to get rid of the limitation of the DLNA set of supported file codecs
 

The Windows default embedded DLNA server is the easiest and fastest one to set up, but it’s not necessarily the best option.
Due to the way DLNA works, you can only stream the types of media codecs supported by the server. If you have other types of media, not supported and defined by default in the DLNA win server, it just won't work.

Thankfully, other DLNA servers were developed that improve on this by offering real-time transcoding.
If you try to play an unsupported file, they'll transcode it on-the-fly, streaming the video in a supported format to your DLNA device.
Just to name a few of the DLNA Media Streaming servers that have support for a larger set of MPG Video, MP3 / MP4 and other Audio format encodings:
you can try Plex or the Universal Media Server, both of which are free to use under a freeware license and have versions for Linux and Mac OS.


Universal_media_server-windows-screenshot-stream-media-data-on-network

 

3. Setting up a free as in freedom DLNA server MiniDLNA (ReadyMedia) on GNU / Linux


ReadyMedia (formerly known as MiniDLNA) is a simple media server software, with the aim of being fully compliant with DLNA/UPnP-AV clients. It was originally developed by a NETGEAR employee for the ReadyNAS product line.

The MiniDLNA daemon serves media files (music, pictures, and video) to clients on a network. There are multiple Linux Media server clients you can use to test or scan your network for existing Media servers; perhaps the most famous ones are applications such as Totem (the GNOME video player) and Kodi.
The devices that can be used with minidlna are devices such as portable media players (iPod), Smartphones, Televisions, Tablets, and gaming systems (such as the PS3 and Xbox 360), etc.
 

ReadyMedia is simple and lightweight; the downside of it is that it does not have a web interface for administration and must be configured by editing a text file. But for simple Video streaming it does a great job in most cases.


3.1 Install the minidlna software package 

Minidlna is available out of the box on most linux distributions (Fedora / CentOS / Debian / Ubuntu etc.) as of year 2022.

  • Install on Debian Linux (Deb based distro)

media-server:~# apt install minidlna --yes

  • Install on Fedora / CentOS (other RPM based distro)

media-server:~# yum install -y minidlna


3.2 Configure minidlna

– /etc/minidlna.conf – main config file
Open it with a text editor and set the user=, media_dir=, port=, friendly_name=, network_interface= variables as a minimum.
To make minidlnad support symlinks that point to external file locations, also set wide_links=yes.

media-server:~# vim /etc/minidlna.conf

#user=minidlna
user=root
media_dir=/var/www/owncloud/data
network_interface=eth0,eth1

# Port number for HTTP traffic (descriptions, SOAP, media transfer).
# This option is mandatory (or it must be specified on the command-line using
# "-p").
port=8200
# Name that the DLNA server presents to clients.
# Defaults to "hostname: username".
#friendly_name=
friendly_name=DLNAServer Linux
# set this to yes to allow symlinks that point outside user-defined media_dirs.
wide_links=yes
# Automatic discovery of new files in the media_dir directory.
#inotify=yes

Keep in mind that it is supported to provide separate media_dir entries pointing to different USB / External Hard Drive or SD Card sources, each restricted to a content type, be it Audio, Video or Pictures, short-named in the config as (A,V,P):

media_dir=P,/media/usb/photos
media_dir=V,/media/external-disk/videos
media_dir=A,/media/sd-card/music

You might want to disable / enable inotify depending on your liking; if you don't plan to place new files on the NAS in an automated way, and don't care for them to get indexed and streamed from the Media server automatically, you can disable it with inotify=no, otherwise keep it on.

– /etc/default/minidlna – additional startup config to set minidlnad (daemon) options, such as setting it up to run as the admin superuser root:root
(usually it is safe to leave it empty and set user=root, where needed, straight from /etc/minidlna.conf).
That's all; now go on and launch minidlna and enable it to automatically start on Linux boot:

media-server:~# systemctl start minidlna
media-server:~# systemctl enable minidlna
media-server:~# systemctl status minidlna

 

3.3 Rebuild the minidlna database of indexed files

If you need to re-generate minidlna's database, first stop the minidlna server with:

media-server:~# systemctl stop minidlna

then issue the following command (both commands should be run as root):

media-server:~# minidlna -R

Since this command might keep running in the background and leave the minidlna server running with incorrect flags, after a minute or two kill the minidlna process and relaunch the server via systemctl.

media-server:~#  killall -9 minidlna
media-server:~#  systemctl start minidlna

 

3.4 Permission Issues / Scanning issues

If you plan to place files in the /home directory, you'd better have a separate partition or folder *outside* your "home" directory devoted to your media. The default user with which minidlna runs is minidlna; this could prevent some files owned by root or other users from being read. So either run the minidlna daemon as root, or as another user to whom all media files are accessible.
If the service runs as root:root and you are still getting some scanning issues, check the permissions on your files and remove special characters from the file names.
 

media-server:~# tail -10 /var/log/minidlna/minidlna.log 
[2022/02/17 22:51:36] scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/Videos/Family-Videos/FILE006.MPG
[2022/02/17 22:52:08] scanner.c:819: warn: Scanning /var/www/owncloud/data finished (10637 files)!
[2022/02/17 22:52:08] playlist.c:135: warn: Parsing playlists…
[2022/02/17 22:52:08] playlist.c:269: warn: Finished parsing playlists.
minidlna.c:1126: warn: Starting MiniDLNA version 1.3.0.
minidlna.c:1186: warn: HTTP listening on port 8200
scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/admin/files/origin/External SD card/media/Viber Images/IMG-4477de7b1eee273d5e6ae25236c5c223-V.jpg
scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/Videos/Family-Video/FILE006.MPG
playlist.c:135: warn: Parsing playlists…
playlist.c:269: warn: Finished parsing playlists.

 

3.5. Fix minidlna Inotify errors

In /etc/sysctl.conf

add:

fs.inotify.max_user_watches=65536

on a new line at the end of the file, and apply it with:

media-server:~# sysctl -p

3.6 Debugging minidlna problems, index errors, warnings etc.

minidlna writes by default to /var/log/minidlna/minidlna.log; inspect the log closely and most of the time you should be able to tell what is wrong with it.
Note that some files might not get indexed, because minidlna doesn't support some of the stranger file codecs such as SWF encoding; if you have some important files to stream that are not indexed by minidlna, then install and try one of the more sophisticated free software Media Servers for Linux:

plex-media-streaming-server-screenshot

Note that, as my quick research shows, MediaTomb is the preferred advanced-features Open Source Linux Media Server of choice for most Linux users.

mediatomb-linux-media-streaming-server-picture.jpg.webp
 

 

4. Test the minidlna Linux server works, getting information about other DLNA Servers on the network

media-server:~# lynx -dump  http://127.0.0.1:8200
MiniDLNA status

  Media library

   Audio files 0
   Video files 455
   Image files 10182

  Connected clients

   ID Type                   IP Address    HW Address        Connections
   0  Samsung Series [CDEFJ] 192.168.1.11  7C:0A:3D:88:A6:FA 0
   1  Generic DLNA 1.5       192.168.0.241 00:16:4E:1D:48:05 0
   2  Generic DLNA 1.5       192.168.1.18  00:16:3F:0D:45:05 0
   3  Unknown                127.0.0.1     FF:FF:FF:FF:FF:FF 0

   -1 connections currently open
 

Note that there is -1 connections (no active connections) currently to the server. 
The 2 Generic DLNA 1.5 IPs are another DLNA servers provided by a OpenXEN hosted Windows 7 Virtual machines, that are also broadcasting their existence in the network. The Samsung Series [CDEFJ] is the DLNA client on the Samsung TV found, used to detect and stream data from the just configured Linux dlna server.

As you can see, the DLNA Protocol enabled devices on a network are quite easy to access: querying localhost on port 8200 dumps what minidlna knows, and the other connecting IPs should not be able to receive this info. But since minidlna has no special security layer of its own, the only way to restrict access is to filter port 8200, so it is a very good idea to put a good iptables firewall on the machine that allows only the devices that should have access to the data.
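A minimal sketch of such filtering, assuming eth0 is the LAN facing interface and 192.168.1.11 is the TV that should be allowed (adjust the interface, IP and policy to your own topology):

media-server:~# iptables -A INPUT -i eth0 -p tcp -s 192.168.1.11 --dport 8200 -j ACCEPT
media-server:~# iptables -A INPUT -i eth0 -p tcp --dport 8200 -j DROP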

Furthermore, if you happen to need to access the media files on Linux from a GUI, you might use a client such as the aforementioned totem, VLC, or, if you need something more feature rich, the Java based eezUPnP.

eeZUPnP-screenshot-java-client-for-media-server

That's all folks !
Enjoy your media on the TV 🙂

How to filter dhcp traffic between two networks running separate DHCP servers to prevent IP assignment issues and MAC duplicate addresses

Tuesday, February 8th, 2022

how-to-filter-dhcp-traffic-2-networks-running-2-separate-dhcpd-servers-to-prevent-ip-assignment-conflicts-linux
Tracking the Problem of MAC duplicates on Linux routers
 

If you have two networks that see each other and are not separated into VLANs, but share a common netmask, let's say 255.255.254.0 or 255.255.252.0, it might happen that there are 2 DHCP servers, for example isc-dhcp-server running on 192.168.1.1 and dhcpd running on 192.168.0.1, both broadcasting their services to both LANs 192.168.0.0/24 and 192.168.1.0/24. The result of this is that some devices might pick up their IP address via DHCP from the wrong DHCP server.

Normally, if you have a fully controlled small or mid-sized home or office network (10 – 15 electronic device nodes) connecting to the LAN in a mixed way, some via Wi-Fi on one of the networks to 192.168.1.0/22, others cabled and using static IP addresses, with traffic routed among two ISPs and each network able to see the other, there is always a possibility for things to go wrong. This is what happened to me, and this is how this post was born.

The best practice from my experience so far is to define each and every computer / phone / laptop host joining the network, and hence later easily monitor what is going on in the network with something like iptraf-ng / nethogs / iperf (described earlier in how to check internet speed from console and in check server internet connectivity speed with speedtest-cli), iftop / nload, or for more complex stuff wireshark or even a simple tcpdump. No matter the tools, network monitoring is only part of solving network issues. A real must-have in a controlled network infrastructure is defining every machine that is part of it, so it can easily be watched later with the monitoring tools. Defining each and every host on a hybrid computer network makes administering the network a much easier task, and tracking irregularities on time much more likely.

Since I have such a hybrid network here, hosting a couple of XEN virtual machines with Linux, Windows 7 and Windows 10, together with Mac OS X laptops and MacBook Air notebooks, I have followed this route and tried to define each and every host, based on its MAC address, so it picks up its IP from the correct server: DHCP1 on 192.168.1.1 (distributing IPs for Internet Provider 1 (ISP 1), mostly to a few computers attached with UTP LAN cables via a LiteWave LS105G Gigabit Switch), and DHCP2 (used only to assign IPs to the servers and to a single Wi-Fi access point configured to route incoming clients via the 192.168.0.1 Linux NAT gateway server).

To stop the DHCPD from propagating unwanted IPs, I've so far used a little trick: deny the DHCP MAC address of unwanted clients, so no IP offer is sent to them.
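For reference, such a deny rule in /etc/dhcp/dhcpd.conf looks along the lines of the sketch below (the host label and MAC address are hypothetical; put the real client MAC there and restart the DHCP daemon afterwards):

host unwanted-client {
    hardware ethernet 00:11:22:33:44:55;
    deny booting;
}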

To give you more understanding, I have to clear up that I don't want automatic IP assignments to cross from DHCP2 / LAN2 to DHCP1 / LAN1, because I don't want machines on DHCP1 to end up with an IP like 192.168.0.50, or machines on DHCP2 to get 192.168.1.80, as such wrong IP delegation could potentially lead to duplicate MAC / IP conflicts. As those who have been part of administrating large ISP network infrastructures know, duplicate MAC / wrong IP assignments make network communication unstable for no apparent reason, with nodes partially unreachable at times or even full time …

However, it seems that in the 21st century, the century of strangeness / computer madness that 2022 is part of, technology has advanced so much that it has massively started to break some good old, well-known sysadmin standards documented in the RFCs of my youth, such as that every piece of manufactured electronic equipment should have a Vendor Assigned Hardware MAC Address bound to it that never changes (after all, that was the idea of MAC addresses, wasn't it!).
Many mobile devices nowadays, however, in the developers' attempt to write more sophisticated software and increase anonymity and security on the net, use a technique called MAC Address randomization (once mostly used by hackers / script kiddies in the early days of computing) on their OS / driver controlled Wi-Fi network adapters, the so-called Private Wi-Fi Addresses. If a sysadmin had seen that 10-15 years ago, he would probably have resigned from the profession and turned to farming or agricultural plant growing, but in the age of digitalization and "cloud computing" this breakage of commonly developed network standards is becoming the 'new normal'.

I did not suspect there might be MAC address oddities, since I spend very little time administering the network. That was so until recently, when I accidentally checked the arp table with:

Hypervisor:~# arp -an
192.168.1.99     5c:89:b5:f2:e8:d8      (Unknown)
192.168.1.99    00:15:3e:d3:8f:76       (Unknown)

..


and consequently did a network MAC Address ARP scan with arp-scan (if you have never used this nifty little hacker tool, I warmly recommend it !!!).
If you don't have it installed, it is available on Debian based Linuxes from the default repos:

Hypervisor:~# apt-get install --yes arp-scan


It is also available on CentOS / Fedora / Redhat and other RPM distros via:

Hypervisor:~# yum install -y arp-scan

 

 

Hypervisor:~# arp-scan --interface=eth1 192.168.1.0/24

192.168.1.19    00:16:3e:0f:48:05       Xensource, Inc.
192.168.1.22    00:16:3e:04:11:1c       Xensource, Inc.
192.168.1.31    00:15:3e:bb:45:45       Xensource, Inc.
192.168.1.38    00:15:3e:59:96:8e       Xensource, Inc.
192.168.1.34    00:15:3e:d3:8f:77       Xensource, Inc.
192.168.1.60    8c:89:b5:f2:e8:d8       Micro-Star INT'L CO., LTD
192.168.1.99     5c:89:b5:f2:e8:d8      (Unknown)
192.168.1.99    00:15:3e:d3:8f:76       (Unknown)

192.168.x.91     02:a0:xx:xx:d6:64        (Unknown)
192.168.x.91     02:a0:xx:xx:d6:64        (Unknown)  (DUP: 2)

N.B.! I found it helpful to run the scan on all available interfaces of my Linux NAT router host.
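A small loop sketch to repeat the scan over every interface except the loopback (assuming each interface has an address configured, so --localnet can derive the network to scan):

Hypervisor:~# for iface in $(ls /sys/class/net/ | grep -v '^lo$'); do arp-scan --interface="$iface" --localnet; done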

As you see, the scan revealed a whole bunch of duplicated MAC address mess hanging around, destroying my network topology every now and then.
So far so good; the MAC duplicates and the strangely lingering MAC addresses issue was solved relatively easily by enabling the below set of sysctl kernel variables.
 

1. Fixing common well known Linux ARP problems by tuning the arp_announce / arp_ignore / send_redirects kernel variables

 

Linux answers ARP requests on wrong and unassociated interfaces by default. This leads to the following two problems:

ARP requests for the loopback alias address are answered on the HW interfaces (even if NOARP on lo0:1 is set). Since loopback aliases are required for DSR (Direct Server Return) setups this problem is very common (but easy to fix fortunately).

If the machine is connected twice to the same switch (e.g. with eth0 and eth1), eth0 may answer ARP requests for the address on eth1 and vice versa, in a race condition manner (confusing almost everything).

This can be prevented by specific arp kernel settings. Take a look here for additional information about the nature of the problem (and other solutions): ARP flux.

To fix that generally (and reboot-safe) we include the following lines into /etc/sysctl.conf:

 

Hypervisor:~# cp -rpf /etc/sysctl.conf /etc/sysctl.conf_bak_07-feb-2022
Hypervisor:~# cat >> /etc/sysctl.conf

# LVS tuning
net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2

net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.eth0.send_redirects=0
net.ipv4.conf.eth1.send_redirects=0
net.ipv4.conf.default.send_redirects=0

Press CTRL + D to write out the pasted vars.
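To load the appended variables immediately without a reboot and double-check a couple of them:

Hypervisor:~# sysctl -p
Hypervisor:~# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce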


To read more on Load Balancing using direct routing, and on LVS and the ARP problem, look here.


2. Digging further into the IP conflict / duplicate MAC problems

Even after these arp tunings (needed because I do have the Hypervisor's 2 LAN interfaces connected to 1 switch), the issues were not resolved, and my wireless devices connected via network 192.168.1.1/24 (ISP2) were still randomly assigned IPs from the wrong range 192.168.0.XXX/24, as well as the wrong gateway 192.168.0.1 (ISP1).
After thinking thoroughly for hours and checking the network status with various tools, and thanks to the fact that my wife has a MacBook Air that kept complaining that the IP it tried to take from the DHCP was already in use, I've realized something was wrong with the DHCP assignment.
Since she also owns an iPhone 10 with iOS, and the two devices are from the same vendor, Apple Inc., whose products in my experience have had strange DHCP assignment issues for quite some time, I initially thought the problems were caused by software on the Apple devices.
I turned out to be partially right after inspecting the logs of the DHCP server on the Linux host (ISP1) and finding that my wife's phone takes an IP in 192.168.0.XXX, instead of an IP from 192.168.1.1 (which is a combined Nokia router with 2.4Ghz and 5Ghz Wi-Fi and LAN, provided by ISP2, in that case Vivacom). That was really puzzling, since to me it was completely logical that the iDevices must ask for a DHCP address directly on the network of the router they're connecting to. Guess my surprise when I realized that instead, the iDevices listen on a wide network range for any reachable DHCP server, based on the advertised (I assume via broadcast) traffic, and take the IP from whichever DHCP responds faster !!!! Of course the Chinese-produced Nokia router from Vivacom responded to and advertised DHCP requests much slower than my Linux NAT gateway on ISP1, and because of that the iPhone with iOS, and even the freshest versions of Android devices, take the IP from the DHCP that responds faster, even if that router is not on their C class network (that's invasive, isn't it??). What was even more puzzling was the automatic MAC randomization of the Wi-Fi devices trying to connect to my ISP1 configured DHCPD, which of course bypassed any static MAC address filtering I had already established there.

Anyway, there was also a good thing out of that intermixed exercise 🙂 While playing around with the Vivacom gigabit network router, I found a cozy feature: SCHEDULED TURNING OFF and ON of the WIFI ACCESS POINT – a very useful feature to adopt. To stop wasting extra energy and lower the radiation a bit, you can set the Wi-Fi AP to switch off from 12:30 – 06:30, which are the common sleeping hours, or something like that.
 

3. What is MAC Randomization and where and how it is configured across different main operating systems as of year 2022?

Depending on the operating system of your device, MAC randomization will either be enabled by default, as on most modern mobile OSes, or available to be switched on:

  • Android Q: Enabled by default 
  • Android P: Available as a developer option, disabled by default
  • iOS 14: Available as a user option, disabled by default
  • Windows 10: Available as an option in two ways – random for all networks or random for a specific network

Lately I don't have much time to play around with mobile devices, and I don't own a luxury mobile phone, so the fact that the new Androids have this MAC randomization was unknown to me until I ended up in a small mess, caused by my poorly configured networks due to my tight time constraints nowadays.

After finding out about the new MAC randomization security feature, I disabled it ASAP on all Android based phones around (my mother's Nokia smartphone and my dad's phone):


4. Disable MAC Wi-Fi Ethernet device Randomization on Android

MAC Randomization creates a random MAC address when joining a Wi-Fi network for the first time or after “forgetting” and rejoining a Wi-Fi network. It generates a new random MAC address after 24 hours since the last connection.

Disabling MAC randomization on your devices is done on a per-SSID basis, so you can turn off the randomization for your home network but allow it to function for hotspots outside of your home.

  1. Open the Settings app
  2. Select Network and Internet
  3. Select WiFi
  4. Connect to your home wireless network
  5. Tap the gear icon next to the current WiFi connection
  6. Select Advanced
  7. Select Privacy
  8. Select "Use device MAC"
     

5. Disabling MAC Randomization on iOS devices: iPhone, iPad, iPod

To Disable MAC Randomization on iOS Devices:

Open the Settings on your iPhone, iPad, or iPod, then tap Wi-Fi or WLAN

 

  1. Tap the information button next to your network
  2. Turn off Private Address
  3. Re-join the network


Of course, next I've collected the Wi-Fi adapter MAC addresses of their phones and made sure the corresponding DHCP MAC deny rules in /etc/dhcp/dhcpd.conf are in place.

The effect of the MAC randomization on my network was terrible: constant and strange issues with my routing and networks, which I had always attributed to OpenXEN hypervisor virtualization VM bugs, etc.

That continued for some months, and the weird thing was that the issues always started when I tried to update the operating system to the latest package set and do a reboot to load up the new pieces of software / libraries; plus it happened only occasionally, and there was no obvious reason for it.

 

6. How to completely filter DHCP traffic between two network router hosts (192.168.0.1 / 192.168.1.1) to stop 2 or more configured DHCP servers on separate networks from seeing each other

To prevent the IP mess on the DHCP2 server side (which btw is an ISC DHCP server running on Debian 11 Linux, taking care of IP assignment only for the servers on the network), I further had to filter out all DHCP UDP traffic with iptables completely.
This prevents incorrect route assignments, assuming you have 2 networks and 2 routers configured to do Network Address Translation (NAT)-ing: Router 1: 192.168.0.1, Router 2: 192.168.1.1.

You have to filter out UDP protocol data on ports 67 and 68 from the respective source and destination addresses.

In the firewall rules configuration file on your Linux you need to have rules such as:

# filter outgoing dhcp traffic from 192.168.1.1 to 192.168.0.1
-A INPUT -p udp -m udp --dport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP
-A OUTPUT -p udp -m udp --dport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP
-A FORWARD -p udp -m udp --dport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP

-A INPUT -p udp -m udp --dport 67:68 -s 192.168.0.1 -d 192.168.1.1 -j DROP
-A OUTPUT -p udp -m udp --dport 67:68 -s 192.168.0.1 -d 192.168.1.1 -j DROP
-A FORWARD -p udp -m udp --dport 67:68 -s 192.168.0.1 -d 192.168.1.1 -j DROP

-A INPUT -p udp -m udp --sport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP
-A OUTPUT -p udp -m udp --sport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP
-A FORWARD -p udp -m udp --sport 67:68 -s 192.168.1.1 -d 192.168.0.1 -j DROP


You can also download filter_dhcp_traffic.sh with the above rules from here


After applying these rules, any DHCP traffic between the 2 routers is prohibited, and devices from net 192.168.1.1-255 will no longer wrongly get assigned IP addresses from the network range 192.168.0.1-255, as happened to me.
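Once the rules are loaded, a quick way to confirm stray DHCP packets are really being dropped is to watch the packet counters of the port 67:68 rules grow:

dchp1-server:~# iptables -L -n -v | grep ':67'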


7. Filter out DHCP traffic based on MAC completely on Linux with arptables

If even after disabling MAC randomization on all devices on the network (and you know all the physically connected devices on it) you still see some weird MAC addresses, originating from a wrongly configured ISP traffic router host or whatever, then it is time to just filter them out with arptables.

## drop traffic prevent mac duplicates due to vivacom and bergon placed in same network – 255.255.255.252
dchp1-server:~# arptables -A INPUT --source-mac 70:e2:83:12:44:11 -j DROP


To list the arptables rules configured on the Linux host:

dchp1-server:~# arptables --list -n


If you want to be a paranoid sysadmin, you can implement MAC address protection with arptables by only allowing a specific set of MAC addresses / IPs and dropping the rest.

dchp1-server:~# arptables -A INPUT --source-mac 70:e2:84:13:45:11 -j ACCEPT
dchp1-server:~# arptables -A INPUT --source-mac 70:e2:84:13:45:12 -j ACCEPT


dchp1-server:~# arptables -L --line-numbers
Chain INPUT (policy ACCEPT)
1 -j ACCEPT --src-mac 70:e2:84:13:45:11
2 -j ACCEPT --src-mac 70:e2:84:13:45:12

Once the MACs you like are accepted, you can set the INPUT chain policy to DROP like so:

dchp1-server:~# arptables -P INPUT DROP


If you later need to temporarily clean up the arptables rules on any of the filtered hosts, flush all rules inside the INPUT chain like this:
 

dchp1-server:~#  arptables -F INPUT

Install and configure rkhunter for improved security on a PCI DSS Linux / BSD servers with no access to Internet

Wednesday, November 10th, 2021

install-and-configure-rkhunter-with-tightened-security-variables-rkhunter-logo

rkhunter or Rootkit Hunter scans systems for known and unknown rootkits. The tool is not new, and most system administrators that have to maintain well secured servers perhaps already use it in their daily sysadmin tasks.

It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, suspicious strings in kernel modules, common backdoors, sniffers and exploits, as well as other special tests, mostly for Linux and FreeBSD, though ports for other UNIX operating systems like Solaris etc. are perhaps available. rkhunter is notable due to its inclusion in popular mainstream FOSS operating systems (CentOS, Fedora, Debian, Ubuntu etc.).

Even though rkhunter has not improved rapidly over the last 3 years (its last official version was released on the 20th of February 2018), it is a good tool that helps to strengthen security even further, and it is often a requirement for Unix server systems that have to follow the PCI DSS standards (Payment Card Industry Data Security Standards).

Configuring rkhunter is pretty straightforward if you don't have too many requirements, but I decided to write this article because there are a few interesting options you might want to adopt in the configuration, to whitelist files that are reported as Warnings, as well as to set a configuration with stricter security checks than the installation defaults.

1. Install rkhunter .deb / .rpm package depending on the Linux distro or BSD

  • If you have to place it on a Redhat based distro CentOS / Redhat / Fedora

[root@Centos ~]# yum install -y rkhunter

 

  • On Debian based distros the package name is the same, so install it as usual:

root@debian:~# apt install --yes rkhunter

  • On FreeBSD / NetBSD or other BSD forks you can install it from the BSD "World" ports system or install it from a precompiled binary.

freebsd# pkg install rkhunter

One important note to make here: to have fully functional alarming from rkhunter, you need a fully configured postfix / exim / qmail or other mail server relaying via an official SMTP, so the Warning alarm emails can reach your preferred alarm email address. If you haven't installed and configured postfix, for example, you might do:

– On RPM based distros

[root@Centos ~]# yum install postfix


– On Deb based distros

root@debian:~# apt-get install --yes postfix


and as minimum, further on configure some functional Email Relay server within /etc/postfix/main.cf
 

# vi /etc/postfix/main.cf
relayhost = [relay.smtp-server.com]

2. Prepare rkhunter.conf initial configuration


Depending on what kind of files are present on the filesystem, for some reasons certain standard package binaries may have to be excluded from verification, because they possess unusual permissions due to manual sysadmin modification; this is done with the rkhunter variable PKGMGR_NO_VRFY.

If remote logging is configured on the system via something like rsyslog, you will want to specifically tell rkhunter about it, so that this check is skipped as a possible security issue, via ALLOW_SYSLOG_REMOTE_LOGGING=1.

In case remote root login via the SSH protocol is disabled via the /etc/ssh/sshd_config
PermitRootLogin no variable, the rkhunter variable matching it is ALLOW_SSH_ROOT_USER=no.

It is also useful to strengthen the hash checking algorithm: instead of the default SHA256 you might want to change to SHA512; this is done via the rkhunter.conf var HASH_CMD=SHA512.

Email warnings also have to be configured, so that you receive new mails at a preconfigured mailbox of your choice, via the variable
MAIL-ON-WARNING=SetMailAddress

 

# vi /etc/rkhunter.conf

PKGMGR_NO_VRFY=/usr/bin/su

PKGMGR_NO_VRFY=/usr/bin/passwd

ALLOW_SYSLOG_REMOTE_LOGGING=1

# Needed for corosync/pacemaker since update 19.11.2020

ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# enabled ssh root access skip

ALLOW_SSH_ROOT_USER=no

HASH_CMD=SHA512

# Email address to sent alert in case of Warnings

MAIL-ON-WARNING=Your-Customer@Your-Email-Server-Destination-Address.com

MAIL-ON-WARNING=Your-Second-Peronsl-Email-Address@SMTP-Server.com

DISABLE_TESTS=os_specific


Optionally, if you're using something specific such as a corosync / pacemaker High Availability cluster, or other software that creates /dev/ files identified as potential risks, you might want to add more rkhunter.conf options like:
 

# Allow PCS/Pacemaker/Corosync
ALLOWDEVFILE=/dev/shm/qb-attrd-*
ALLOWDEVFILE=/dev/shm/qb-cfg-*
ALLOWDEVFILE=/dev/shm/qb-cib_rw-*
ALLOWDEVFILE=/dev/shm/qb-cib_shm-*
ALLOWDEVFILE=/dev/shm/qb-corosync-*
ALLOWDEVFILE=/dev/shm/qb-cpg-*
ALLOWDEVFILE=/dev/shm/qb-lrmd-*
ALLOWDEVFILE=/dev/shm/qb-pengine-*
ALLOWDEVFILE=/dev/shm/qb-quorum-*
ALLOWDEVFILE=/dev/shm/qb-stonith-*
ALLOWDEVFILE=/dev/shm/pulse-shm-*
ALLOWDEVFILE=/dev/md/md-device-map
# Needed for corosync/pacemaker since update 19.11.2020
ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# tomboy creates this one
ALLOWDEVFILE="/dev/shm/mono.*"
# created by libv4l
ALLOWDEVFILE="/dev/shm/libv4l-*"
# created by spice video
ALLOWDEVFILE="/dev/shm/spice.*"
# created by mdadm
ALLOWDEVFILE="/dev/md/autorebuild.pid"
# 389 Directory Server
ALLOWDEVFILE=/dev/shm/sem.slapd-*.stats
# squid proxy
ALLOWDEVFILE=/dev/shm/squid-cf*
# squid ssl cache
ALLOWDEVFILE=/dev/shm/squid-ssl_session_cache.shm
# Allow podman
ALLOWDEVFILE=/dev/shm/libpod*lock*

 

3. Set the proper mirror database URL location to internal network repository

 

Usually the file /var/lib/rkhunter/db/mirrors.dat contains the Internet server address from where the latest version of mirrors.dat can be fetched; below is how it looks by default on Debian 10 Linux.

root@debian:/var/lib/rkhunter/db# cat mirrors.dat 
Version:2007060601
mirror=http://rkhunter.sourceforge.net
mirror=http://rkhunter.sourceforge.net

As you can guess, a machine in a paranoid Demilitarized Zone (DMZ) network with many firewalls may have no access to the Internet, neither directly, nor via some kind of secure proxy. What you can do then is set up another mirror server (Apache / Nginx) within the local PCI-secured LAN that regularly gets the database from the official one at http://rkhunter.sourceforge.net/ (by installing rkhunter and running the rkhunter --update command on the Mirror WebServer, and copying the data under some directory structure on that LAN-accessible server). To keep the DB up to date, you might want to set up a cron job that periodically copies the latest available rkhunter database towards the http://mirror-url/path-folder/.
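As an illustration, a weekly cron job on the internal Mirror WebServer could look like the sketch below (the DB path matches the Debian default; the DocumentRoot path is hypothetical and must match what your web server actually serves):

# /etc/cron.d/rkhunter-mirror-sync (on the Mirror WebServer)
0 3 * * 0 root rkhunter --update && rsync -a /var/lib/rkhunter/db/ /var/www/html/rkhunter/1.4/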

# vi /var/lib/rkhunter/db/mirrors.dat

local=http://rkhunter-url-mirror-server-url.com/rkhunter/1.4/


A mirror copy of entire db files from Debian 10.8 ( Buster ) ready for download are here.

Update entire file property db and check for rkhunter db updates

 

# rkhunter --update && rkhunter --propupd

[ Rootkit Hunter version 1.4.6 ]

Checking rkhunter data files…
  Checking file mirrors.dat                                  [ Skipped ]
  Checking file programs_bad.dat                             [ No update ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ No update ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ No update ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ No update ]
  Checking file i18n/tr.utf8                                 [ No update ]
  Checking file i18n/zh                                      [ No update ]
  Checking file i18n/zh.utf8                                 [ No update ]
  Checking file i18n/ja                                      [ No update ]

 

rkhunter-update-propupdate-screenshot-centos-linux


4. Initiate a first time check and see whether something is not triggering Warnings

# rkhunter –check

rkhunter-checking-for-rootkits-linux-screenshot

As you might have to run rkhunter multiple times, there is an annoying Press Enter prompt between checks. The idea of it is that you're able to inspect what went on, but since inspecting /var/log/rkhunter/rkhunter.log is usually much easier, I prefer to skip it with the --skip-keypress option.

# rkhunter --check --skip-keypress
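For periodic unattended scans, rkhunter also has a --cronjob option, so a cron entry in the spirit of the sketch below can run the check daily (the schedule is arbitrary, and mail delivery depends on your MTA and the MAIL-ON-WARNING setting):

# /etc/cron.d/rkhunter-check (a sketch)
30 2 * * * root /usr/bin/rkhunter --cronjob --report-warnings-only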


5. Whitelist additional files and dev triggering false warnings alerts


You have to keep in mind that many files considered not officially PCI compatible and potentially dangerous, such as the lynx browser, curl, telnet etc., might trigger a Warning. After checking them thoroughly with some antivirus software such as ClamAV, and comparing their MD5 checksums against a cleanly installed .deb / .rpm package on another rootkit / virus / spyware free system (be it a virtual machine or a Testing / Staging machine), you might want to simply whitelist the files which are incorrectly detected as dangerous for the system security.

Again this can be achieved with

PKGMGR_NO_VRFY=

Some cluster software that prepares its own temporary /dev/ files, such as Pacemaker / Corosync, might also trigger alarms, so you might want to suppress those as well with ALLOWDEVFILE:

ALLOWDEVFILE=/dev/shm/qb-*/qb-*


If Warnings are found, check what the issue is and, if necessary, whitelist the files with incorrect permissions in /etc/rkhunter.conf.

rkhunter-warnings-found-screenshot

Re-run the check until all appears clean as in below screenshot.

rkhunter-clean-report-linux-screenshot

Fixing Checking for a system logging configuration file [ Warning ]

If you happen to get a message like the below, which appears when rkhunter -C is run on legacy CentOS release 6.10 (Final) servers:

[13:45:29] Checking for a system logging configuration file [ Warning ]
[13:45:29] Warning: The 'systemd-journald' daemon is running, but no configuration file can be found.
[13:45:29] Checking if syslog remote logging is allowed [ Allowed ]

To fix it, you will have to disable the SYSLOG_CONFIG_FILE check entirely.
 

SYSLOG_CONFIG_FILE=NONE

How to move transfer binary files encoded with base64 on Linux with Copy Paste of text ASCII encoded string

Monday, October 25th, 2021

base64-encode-decode-binary-files-to-transfer-between-servers-base64-artistic-logo

Sometimes you have to work on servers in protected environments accessed via multiple VPNs, jump hosts or Web Citrix, with no means to copy binary files to or from your computer, because all kinds of FTP / SFTP or other data copy clients are disabled on the remote jump host or Citrix server side, and you are still looking for a way to copy files between your PC and the remote side.
Or, for example, you have 2 or more servers in special Demilitarized Network Zones ( DMZ ), and the machines have no SFTP / FTP / WebServer or other copy protocol service that could be used to move files between the hosts, and you still need to copy some files between the 2 or more machines in a slow but still functional way. Then you might not know one old school hackers' trick you can employ to complete the copy of files, say between DMZ-ed Server Host A with IP address (192.168.50.5) -> Server Host B (192.168.30.7). The way to complete the binary file copy is to encode the binary on Server Host A, use the cat command to display the encoded string, copy the whole cat output to your local PC buffer (from where you access the remote side via SSH, the Citrix, or the jump host), paste it into a file on Host B and decode it there with an encoding tool such as base64 or uuencode. In this article, I'll show how this is done with base64 and uuencode. The base64 binary is pretty standard on most Linux / Unix OSes today; on most Linux distributions it is part of the coreutils package.
The main use of base64 encoding is to encode non-text attachment files for electronic mail, but for our case it fits perfectly.
Keep in mind that this hack to copy the binary from Machine A to Machine B of course depends on the Copy / Paste buffer being enabled both on the remote jump host or Citrix from where you reach the servers, as well as on your own PC laptop from where you access the remote side.

base64-character-encoding-string-table

Base64 Encoding and Decoding text strings legend

The file copy process to the highly secured PCI host goes like this:
 

1. On Server Host A take the file's checksum with the md5sum command

[root@serverA ~]:# md5sum -b /tmp/inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

 

As you see, one good location to produce the encoded file would be /tmp, as this is a temporary home, or alternatively you can use your HOME dir,
but you have to be quite careful not to run out of space wherever you produce it 🙂

 

2. Encode the binary file with base64 encoding

 [root@serverA ~]:# base64 -w0 inputbinfile-to-encode > outputbin-file.base64

The -w0 option is given to disable line wrapping, which makes the output easier to select and copy-paste in one piece.

base64-encoded-binary-file-text-string-linux-screenshot

Base64 Encoded string chunk with line wrapping

For a complete list of possible accepted arguments check here.

3. Cat the outputbin-file.base64 just generated, to display the text encoded file in your SecureCRT / Putty / SuperPutty etc. remote ssh access client

[root@serverA ~]:# cat /tmp/outputbin-file.base64
f0VMRgIBAQAAAAAAAAAAAAMAPgABAAAAMGEAAAAAAABAAAAAAAAAACgXAgAAAAAAAAAAA
EAAOAALAEAAHQAcAAYAAAAEAAA ……………………………………………………………… cTD6lC+ViQfUCPn9bs

 

4. Select the cat-ted string and copy it to your PC Copy / Paste buffer


If the bin file is not a few kilobytes but a few megabytes, copying the file might be tricky, as the string produced by the cat command will be really long, so make sure the SSH client you're using is configured with a large enough scroll-back buffer to be able to select the whole encoded string up to the end of the cat output and copy it to the Copy / Paste buffer.
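If selecting the whole string at once is impractical, one workaround sketch is to split the encoded file on Host A into smaller numbered chunks, copy / paste them one by one, and concatenate them back together on Host B (the file names below are just examples):

[root@serverA ~]:# split -b 100k -d outputbin-file.base64 outputbin-file.base64.part.

[root@serverB ~]:# cat outputbin-file.base64.part.* > outputbin-file.base64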

 

5. On Server Host B paste the base64 encoded binary inside a newly created file

Open with a text editor vim / mc or whatever is available

[root@serverB ~]:# vi outputbin-file.base64

Some very paranoid Linux / UNIX systems might not even have a normal text editor like 'vi'; if you happen to need to copy files on such a system, a useful trick is to use a simple cat on the remote side to open a new file descriptor buffer, like this:

[root@serverB ~]:# cat >> outputbin-file.base64 <<'EOF'
Paste the string here
EOF

 

6. Decode the encoded binary with base64 cmd again

[root@serverB ~]:# base64 --decode outputbin-file.base64 > inputbinfile-to-encode

 

7. Set proper file permissions (the same as on Host A)

[root@serverB ~]:#  chmod +x inputbinfile-to-encode

 

8. Check again that the binary file checksum on Host B is identical to the one on Host A

[root@serverB ~]:# md5sum -b inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

As you can see, the md5sums match on both sides, so the file should be OK.

 

9. Encoding and decoding files with uuencode


If you are lucky and uuencode is installed (i.e. the sharutils package is present on the remote machine), to encode let's say an archived set of binary files in .tar.gz format do:

Prepare the archive of all the files you want to copy with tar on Host A:

[root@Machine1 ~]:#  tar -czvf archived-binaries-and-configs.tar.gz /bin/whatever /usr/local/bin/htop /usr/local/bin/samhain /etc/hosts

[root@Machine1 ~]:# uuencode archived-binaries-and-configs.tar.gz archived-binaries-and-configs.tar.gz > archived-binaries-and-configs.uu

Cat / Copy / paste the encoded content as usual to a file on Host B:

Then on Machine 2 decode:

[root@Machine2 ~]:# uudecode archived-binaries-and-configs.uu

 

Conclusion


In this short article I've shown you a hack that is often used by script kiddies to copy files between pwn3d machines, a method which however is very precious and useful for sysadmins like me who have to administer paranoidly secured servers placed in very hard to access environments.

With the same method you can encode or decode not only binary files but also any standard input/output file content. base64 encoding is also quite useful in bash or perl scripts where you want the script to carry a file along in plain text format. Data is encoded and decoded to make the transmission and storing process easier. Keep in mind that encoding and decoding are not the same as encryption and decryption: encryption adds a real security layer to the data, while encoded data can easily be revealed by decoding. So if you need to copy very sensitive data between servers, like SSL certificates or private RSA / DSA keys, this command line utility is better not used for the copying.

 

 

Adding proxy to yum repository on Redhat / Fedora / CentOS and other RPM based Linux distributions, Listing and enabling new RPM repositories

Tuesday, September 7th, 2021

yum-add-proxy-host-for-redhat-linux-centos-list-rpm-repositories-enable-disable-repositories

Sometimes, if you work in a company that follows the PCI standards with very tight security, you might need to use custom company prepared RPM repositories that are accessible only via specific custom maintained repository hosts, or alternatively you might need a proxy node to access an external Internet repository from the DMZ-ed firewalled zone where the servers lay.
Hence, to still be able to keep the RPM based servers up2date with the latest security patches and install software with yum, one very useful feature of the yum package manager is to use a proxy host through which you will reach your Redhat Package Manager files.

1. The http_proxy and https_proxy shell variables 

To set a proxy host you need to define there the IP / hostname or the Fully Qualified Domain Name (FQDN) of the proxy.

By default http_proxy and https_proxy are empty. As you can guess, https_proxy is used if the communication channel is encrypted with a Secure Socket Layer (SSL) certificate (e.g. you have an https:// URL).

[root@rhel: ~]# echo $http_proxy
[root@rhel: ~]#

2. Setting passwordless or password protected proxy host via http_proxy, https_proxy variables

There is a one-time, very straightforward way to configure proxying of traffic via a specific remote server, using variables the Bourne Again Shell (BASH) understands:
 

a.) Set password free open proxy to shell environment.

[root@centos: ~]# export https_proxy="https://remote-proxy-server:8080"


Now use yum as usual to update the available installable package list, or simply upgrade to the latest packages with, let's say:

[root@rhel: ~]# yum check-update && yum update

b.) Configuring password protected proxy for yum

If your proxy is password protected, for even tighter security, you can provide the password on the command line as well.

[root@centos: ~]# export http_proxy="http://username:pAssW0rd@server:port/"

Note that if the password contains some special characters, you will have to pass the string inside single quotes or escape them, to make sure the password is properly passed to the server; before trying out the proxy with yum, echo the variable.

[root@centos: ~]# export http_proxy='http://username:p@s#w:E@192.168.0.1:3128/'
  [root@centos: ~]# echo $http_proxy
http://username:p@s#w:E@192.168.0.1:3128/

Then do whatever with yum:

[root@centos: ~]# yum check-update && yum search sharutils


If something is wrong and the proxy is not properly connected, try to reach the repository manually with curl or wget:

[root@centos: ~]# curl -ilk http://download.fedoraproject.org/pub/epel/7/SRPMS/
HTTP/1.1 302 Found
Date: Tue, 07 Sep 2021 16:49:59 GMT
Server: Apache
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Location: http://mirror.telepoint.bg/epel/7/SRPMS/
Content-Type: text/plain
Content-Length: 0
AppTime: D=2264
X-Fedora-ProxyServer: proxy01.iad2.fedoraproject.org
X-Fedora-RequestID: YTeYOE3mQPHH_rxD0sdlGAAAA80
X-Cache: MISS from pcfreak
X-Cache-Lookup: MISS from pcfreak:3128
Via: 1.1 pcfreak (squid/4.6)
Connection: keep-alive


Or if you need, you can test the user, password protected proxy with wget as so:

[root@centos: ~]# wget --proxy-user=USERNAME --proxy-password=PASSWORD http://your-proxy-domain.com/optional-rpms/


If you have lynx installed on the machine, you can do the remote proxy authentication check with even less typing:

[root@centos: ~]# lynx -pauth=USER:PASSWORD http://proxy-domain.com/optional-rpm/

 

3. Making yum proxy connection permanent via /etc/yum.conf

 

Perhaps the easiest and quickest way is to store the configured http_proxy / https_proxy variables so they automatically load on each server ssh login of your admin user (root), in /root/.bashrc or /root/.bash_profile, or globally in /etc/profile or /etc/profile.d/custom.sh etc.

However, if you don't want hacks and prefer more cleanliness on the systems, the recommended "Redhat way", so to say, is to store the configuration inside /etc/yum.conf.

To do it via /etc/yum.conf you have to have some records there like:

# The proxy server – proxy server:port number 
proxy=http://mycache.mydomain.com:3128 
# The account details for yum connections 
proxy_username=yum-user 
proxy_password=qwerty-secret-pass
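Besides the global [main] section, yum honours the same proxy options per repository as well, so a single repo can be pointed through the cache; an illustrative (hypothetical) /etc/yum.repos.d/custom-internal.repo:

[custom-internal]
name=Custom internal RPM repo
baseurl=http://repo.internal.example.com/el7/x86_64/
enabled=1
gpgcheck=1
proxy=http://mycache.mydomain.com:3128
proxy_username=yum-user
proxy_password=qwerty-secret-pass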

4. Listing RPM repositories and their state

As I had to install the sharutils RPM package (which contains the file /bin/uuencode, provided on CentOS 7.9 Linux from Repo: base/7/x86_64) on the server, I had to check whether that repository was enabled on the server.

To get a list of all available yum repositories:

[root@centos:/etc/yum.repos.d]# yum repolist all
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.telecoms.bg
 * epel: mirrors.netix.net
 * extras: centos.telecoms.bg
 * remi: mirrors.netix.net
 * remi-php74: mirrors.netix.net
 * remi-safe: mirrors.netix.net
 * updates: centos.telecoms.bg
repo id                                repo name                                                                         status
base/7/x86_64                          CentOS-7 – Base                                                                   enabled: 10,072
base-debuginfo/x86_64                  CentOS-7 – Debuginfo                                                              disabled
base-source/7                          CentOS-7 – Base Sources                                                           disabled
c7-media                               CentOS-7 – Media                                                                  disabled
centos-kernel/7/x86_64                 CentOS LTS Kernels for x86_64                                                     disabled
centos-kernel-experimental/7/x86_64    CentOS Experimental Kernels for x86_64                                            disabled
centosplus/7/x86_64                    CentOS-7 – Plus                                                                   disabled
centosplus-source/7                    CentOS-7 – Plus Sources                                                           disabled
cr/7/x86_64                            CentOS-7 – cr                                                                     disabled
epel/x86_64                            Extra Packages for Enterprise Linux 7 – x86_64                                    enabled: 13,667
epel-debuginfo/x86_64                  Extra Packages for Enterprise Linux 7 – x86_64 – Debug                            disabled
epel-source/x86_64                     Extra Packages for Enterprise Linux 7 – x86_64 – Source                           disabled
epel-testing/x86_64                    Extra Packages for Enterprise Linux 7 – Testing – x86_64                          disabled
epel-testing-debuginfo/x86_64          Extra Packages for Enterprise Linux 7 – Testing – x86_64 – Debug                  disabled
epel-testing-source/x86_64             Extra Packages for Enterprise Linux 7 – Testing – x86_64 – Source                 disabled
extras/7/x86_64                        CentOS-7 – Extras                                                                 enabled:    500
extras-source/7                        CentOS-7 – Extras Sources                                                         disabled
fasttrack/7/x86_64                     CentOS-7 – fasttrack                                                              disabled
remi                                   Remi's RPM repository for Enterprise Linux 7 – x86_64                             enabled:  7,229
remi-debuginfo/x86_64                  Remi's RPM repository for Enterprise Linux 7 – x86_64 – debuginfo                 disabled
remi-glpi91                            Remi's GLPI 9.1 RPM repository for Enterprise Linux 7 – x86_64                    disabled
remi-glpi92                            Remi's GLPI 9.2 RPM repository for Enterprise Linux 7 – x86_64                    disabled
remi-glpi93                            Remi's GLPI 9.3 RPM repository for Enterprise Linux 7 – x86_64                    disabled
remi-glpi94                            Remi's GLPI 9.4 RPM repository for Enterprise Linux 7 – x86_64                    disabled
remi-modular                           Remi's Modular repository for Enterprise Linux 7 – x86_64                         disabled
remi-modular-test                      Remi's Modular testing repository for Enterprise Linux 7 – x86_64                 disabled
remi-php54                             Remi's PHP 5.4 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php55                             Remi's PHP 5.5 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php55-debuginfo/x86_64            Remi's PHP 5.5 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
!remi-php56                            Remi's PHP 5.6 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php56-debuginfo/x86_64            Remi's PHP 5.6 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php70                             Remi's PHP 7.0 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php70-debuginfo/x86_64            Remi's PHP 7.0 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php70-test                        Remi's PHP 7.0 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php70-test-debuginfo/x86_64       Remi's PHP 7.0 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
remi-php71                             Remi's PHP 7.1 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php71-debuginfo/x86_64            Remi's PHP 7.1 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php71-test                        Remi's PHP 7.1 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php71-test-debuginfo/x86_64       Remi's PHP 7.1 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
!remi-php72                            Remi's PHP 7.2 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php72-debuginfo/x86_64            Remi's PHP 7.2 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php72-test                        Remi's PHP 7.2 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php72-test-debuginfo/x86_64       Remi's PHP 7.2 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
remi-php73                             Remi's PHP 7.3 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php73-debuginfo/x86_64            Remi's PHP 7.3 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php73-test                        Remi's PHP 7.3 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php73-test-debuginfo/x86_64       Remi's PHP 7.3 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
remi-php74                             Remi's PHP 7.4 RPM repository for Enterprise Linux 7 – x86_64                     enabled:    423
remi-php74-debuginfo/x86_64            Remi's PHP 7.4 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php74-test                        Remi's PHP 7.4 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php74-test-debuginfo/x86_64       Remi's PHP 7.4 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
remi-php80                             Remi's PHP 8.0 RPM repository for Enterprise Linux 7 – x86_64                     disabled
remi-php80-debuginfo/x86_64            Remi's PHP 8.0 RPM repository for Enterprise Linux 7 – x86_64 – debuginfo         disabled
remi-php80-test                        Remi's PHP 8.0 test RPM repository for Enterprise Linux 7 – x86_64                disabled
remi-php80-test-debuginfo/x86_64       Remi's PHP 8.0 test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo    disabled
remi-safe                              Safe Remi's RPM repository for Enterprise Linux 7 – x86_64                        enabled:  4,549
remi-safe-debuginfo/x86_64             Remi's RPM repository for Enterprise Linux 7 – x86_64 – debuginfo                 disabled
remi-test                              Remi's test RPM repository for Enterprise Linux 7 – x86_64                        disabled
remi-test-debuginfo/x86_64             Remi's test RPM repository for Enterprise Linux 7 – x86_64 – debuginfo            disabled
updates/7/x86_64                       CentOS-7 – Updates                                                                enabled:  2,741
updates-source/7                       CentOS-7 – Updates Sources                                                        disabled
zabbix/x86_64                          Zabbix Official Repository – x86_64                                               enabled:    178
zabbix-debuginfo/x86_64                Zabbix Official Repository debuginfo – x86_64                                     disabled
zabbix-frontend/x86_64                 Zabbix Official Repository frontend – x86_64                                      disabled
zabbix-non-supported/x86_64            Zabbix Official Repository non-supported – x86_64                                 enabled:      5
repolist: 39,364

[root@centos:/etc/yum.repos.d]# yum repolist all|grep -i 'base/7/x86_64'
base/7/x86_64                       CentOS-7 – Base              enabled: 10,072

 

As you can see, on CentOS 7 sharutils is available from the default enabled repositories; however, this is not the case on Redhat 7.9, hence to install sharutils there you can enable the needed RPM repository one time:

[root@centos:/etc/yum.repos.d]# yum --enablerepo=rhel-7-server-optional-rpms install sharutils

To install zabbix-agent on the same Redhat server, without having to know precisely which RPM repository provides the zabbix agent (in that case it was Repo: 3party/7Server/x86_64), I had to:

[root@centos:/etc/yum.repos.d]# yum --enablerepo \* install zabbix-agent zabbix-sender


Permanently enabling repositories is of course possible by manually editing or creating a fresh new configuration file on CentOS / Fedora under the directory /etc/yum.repos.d/.
On Redhat Enterprise Linux servers it is easier to use the subscription-manager command instead, like this:
 

[root@rhel:/root]# subscription-manager repos --disable=epel/7Server/x86_64

[root@rhel:/root]# subscription-manager repos --enable=rhel-6-server-optional-rpms

Get dmesg command kernel log report with human date / time timestamp on older Linux distributions

Friday, June 18th, 2021

how-to-get-dmesg-human-readable-timestamp-kernel-log-command-linux-logo

If you're a sysadmin, you surely love to take a look at the dmesg kernel log output. Usually on many Linux distributions there is a setup whereby dmesg keeps logging to the log files /var/log/dmesg or /var/log/kern.log. But if you inherit some old Linux servers, it is quite possible that the previous machine maintainer did not enable the syslog output to get logged in /var/log/{dmesg,kern.log,kernel.log}, or even disabled the kernel log for some reason. In the dmesg output you might find interesting events reporting hard drives on their way to failure, bad reads, system processes crashing, or other information that could help you prevent severe server downtimes or catch problems earlier. But on an old Linux distribution, let's say Redhat 5 / Debian 6 or an old CentOS / Fedora, the shipped version of the dmesg command does not support the '-T' option that is present in the util-linux package shipped with newer versions of Redhat 7.X / 8.X / SuSEs etc.

 -T, --ctime
              Print human readable timestamps.  The timestamp could be inaccurate!

To illustrate better what I mean, here is an example of the non-human readable timestamps provided by the older dmesg command:

root@web-server~:# dmesg |tail -n 5
[4505913.361095] hid-generic 0003:1C4F:0002.000E: input,hidraw1: USB HID v1.10 Device [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input1
[4558251.034024] Process accounting resumed
[4615396.191090] r8169 0000:03:00.0 eth1: Link is Down
[4615397.856950] r8169 0000:03:00.0 eth1: Link is Up – 100Mbps/Full – flow control rx/tx
[4644650.095723] Process accounting resumed

Thankfully, using the below few lines of shell or perl script, the dmesg -T functionality can be added to the system, so you can easily get the proper timestamp out of the obscure default one, in the same manner as on newer distros.

Here is how to do it with a bash script:

#!/bin/bash
# paste in .bashrc (or save as a standalone script) and use dmesgt to get human readable timestamps
dmesg_with_human_timestamps () {
    FORMAT="%a %b %d %H:%M:%S %Y"

    # current wall-clock time and kernel clock, used to translate the uptime-based stamps
    now=$(date +%s)
    cputime_line=$(grep -m1 "\.clock" /proc/sched_debug)

    if [[ $cputime_line =~ [^0-9]*([0-9]*).* ]]; then
        cputime=$((BASH_REMATCH[1] / 1000))
    fi

    dmesg | while IFS= read -r line; do
        if [[ $line =~ ^\[\ *([0-9]+)\.[0-9]+\]\ (.*) ]]; then
            stamp=$((now-cputime+BASH_REMATCH[1]))
            echo "[$(date +"${FORMAT}" --date=@${stamp})] ${BASH_REMATCH[2]}"
        else
            echo "$line"
        fi
    done
}

# invoke the function when the file is executed as a script
dmesg_with_human_timestamps "$@"

Copy the script somewhere under, let's say, /usr/local/bin or wherever you like on the server, make it executable, and add into your HOME ~/.bashrc an alias like:


alias dmesgt=/usr/local/bin/dmesg_with_timestamp.sh


You can get a copy dmesg_with_timestamp.sh of the script from here

Or you can use the below few-lines perl script to get the proper dmesg kernel date / time:

 

#!/bin/bash
# on old Linux distros CentOS 6.0 etc. with dmesg (part of util-linux-ng-2.17.2-12.28.el6_9.2.x86_64) dmesg -T is not available
# the workaround is the little shell + perl function below
dmesg_with_human_timestamps () {
    $(type -P dmesg) "$@" | perl -w -e 'use strict;
        my ($uptime) = do { local @ARGV="/proc/uptime";<>}; ($uptime) = ($uptime =~ /^(\d+)\./);
        foreach my $line (<>) {
            printf( ($line=~/^\[\s*(\d+)\.\d+\](.+)/) ? ( "[%s]%s\n", scalar localtime(time - $uptime + $1), $2 ) : $line )
        }'
}


Again, to make use of the script, put it under /usr/local/bin/check_dmesg_timestamp.pl, source it from your ~/.bashrc and add the alias:

alias dmesgt=dmesg_with_human_timestamps

root@web-server:~# dmesgt | tail -n 20

[Sun Jun 13 15:51:49 2021] usb 2-1.3: USB disconnect, device number 9
[Sun Jun 13 15:51:50 2021] usb 2-1.3: new low-speed USB device number 10 using ehci-pci
[Sun Jun 13 15:51:50 2021] usb 2-1.3: New USB device found, idVendor=1c4f, idProduct=0002, bcdDevice= 1.10
[Sun Jun 13 15:51:50 2021] usb 2-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[Sun Jun 13 15:51:50 2021] usb 2-1.3: Product: USB Keyboard
[Sun Jun 13 15:51:50 2021] usb 2-1.3: Manufacturer: SIGMACHIP
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.0/0003:1C4F:0002.000D/input/input25
[Sun Jun 13 15:51:50 2021] hid-generic 0003:1C4F:0002.000D: input,hidraw0: USB HID v1.10 Keyboard [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input0
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard Consumer Control as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:1C4F:0002.000E/input/input26
[Sun Jun 13 15:51:50 2021] input: SIGMACHIP USB Keyboard System Control as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.3/2-1.3:1.1/0003:1C4F:0002.000E/input/input27
[Sun Jun 13 15:51:50 2021] hid-generic 0003:1C4F:0002.000E: input,hidraw1: USB HID v1.10 Device [SIGMACHIP USB Keyboard] on usb-0000:00:1d.0-1.3/input1
[Mon Jun 14 06:24:08 2021] Process accounting resumed
[Mon Jun 14 22:16:33 2021] r8169 0000:03:00.0 eth1: Link is Down
[Mon Jun 14 22:16:34 2021] r8169 0000:03:00.0 eth1: Link is Up – 100Mbps/Full – flow control rx/tx

How to Install and Play old Arcade Multiple Arcade Machine Emulator Games on Linux in 2021 with xmame and GXMame GUI Frontend

Friday, May 14th, 2021

mame-multiple-arcade-machine-emulator
I've earlier blogged on how to install and play old arcade games with xmame compiled from source on the now old Debian 7 Linux in the article Install xmame from source on Debian Linux 7.0 (Wheezy) to play for better MAME (Arcade Games Emulation),
on using the newer MAME emulator instead of xmame in the article Playing Arcade old school games on Debian Linux, and on how to make the MAME emulator work with a joystick; see my previous How to configure Joystick ( Gamepad ) on Debian, Ubuntu, Mint GNU / Linux easily.

Since I have preinstalled my notebook with a fresh Debian 10 Buster, for a long time I did not have the time or the desire to play the favourite games of my youth; to name a few: Xain'd Sleena / Cadillacs and Dinosaurs (1993) / The Punisher (1993) / Captain Commando (1991) / Super Mario / Contra / Final Fight etc., and the rest of the SEGA Mega Drive / GameBoy / Nintendo / Terminator (a fake clone of Nintendo) games and other killer Arcade Classics of the late 90s and early 2000s, which we played in public houses on game cabinets with joysticks.
Hence I tried to reproduce some of my articles as of 2021, to see whether we can still get a nice playable MAME emulator on Linux with a graphical GUI for Mate or GNOME. It turned out that mame straight out of the standard Debian repositories did not work with some of the more sophisticated ROM .zip files, such as Punisher or XSLEENA.zip, and this is how this article got born, in an attempt to give such old school game freaks as me a way to play their favorite games of youth.

Below are the few steps, adapted mostly from the above articles, with some head banging and multiple lost hours of wondering, until I finally got a working XMAME ROM emulator with the simple but working GXMame graphical frontend for M.A.M.E.

To compile xmame with joystick support, be it analog or whatever joystick you have, you need to use a Makefile like the one below:

https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick

Copy it to your PC and rename it to Makefile in the xmame-0.103 source dir:

# cd /usr/local/src/xmame-0.103
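If you don't have the xmame source unpacked yet, a minimal sketch to get it in place, using the xmame-0.103.tar.bz2 archive linked further down in this article:

# cd /usr/local/src
# wget https://www.pc-freak.net/files/xmame-0.103.tar.bz2
# tar -xjf xmame-0.103.tar.bz2
# cd xmame-0.103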


I had to install all the necessary package dependencies (development header files), some of which are mentioned at the beginning of the linked articles and some of which I had to install manually, such as the Debian meta-package build-essential (see also my article Things to install on newly installed GNU / Linux (My favourite must have Linux text and GUI programs missing in fresh Linux installs)). Just to mention a few I remember having to install based on compilation errors:

# apt-get install build-essential
# apt-get install --yes zlib1g-dev

# apt-get install --yes libexpat1-dev
# apt-get install --yes libghc-x11-dev
# apt-get install --yes x11proto-video-dev
# apt-get install --yes libxv-dev
# apt-get install libxext6
# apt-get install libxext-dev
# apt-get install libjpeg62-turbo-dev
# apt-get install libxinerama-dev
# apt-get install libgtk-3-dev
# apt-get install syslog-ng-dev
# apt-get install libgtk2.0-dev


If some necessary package is missing here, you will have to find it yourself based on the *.h file reported in the compile error; you can look it up with a command like:

# apt-file search glib2.h


And install it further.
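Note that apt-file itself is usually not installed on a fresh system; assuming the standard Debian repositories, this will get it working:

# apt-get install --yes apt-file
# apt-file update
# apt-file search glib2.h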

You will need to edit the Makefile yourself, or straight use (or adapt if necessary) the Makefile I have already prepared for the purpose:
 

# wget https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick
# mv xmame-0.103-Makefile-for-joystick Makefile
# make && make install


Some .zip ROMs do not work properly with the newer mame, so you need to use xmame instead.

Below is the version I use on Debian 10 as of May 2021.

hipo@jeremiah:/usr/local/src$ xmame –version
GLINFO: loaded OpenGL library libGL.so!
GLINFO: loaded GLU    library libGLU.so!
xmame (x11) version 0.103 (May 13 2021)
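Once compiled, you can quickly check that a ROM loads straight from the command line before bothering with a frontend. A minimal sketch, assuming your ROM .zip files sit under ~/Games/roms (the path used later in this article) and taking XSLEENA.zip as an example; see xmame -help for the exact option names if it complains:

$ xmame -rompath ~/Games/roms xsleena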


To make the joystick work in xmame you will need a set of kernel modules loaded; for my old Genius joystick, this is what works:

hipo@jeremiah:~/Games$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
snd-seq
3c59x
snd-emu10k1
snd-pcm-oss
snd-mixer-oss
snd-seq-oss
joydev
ns558
sidewinder
gameport
analog
adi
pcigame
iforce
evdev
usbhid


Fill in your joystick module there and make sure you manually load each of the modules with the modprobe command.
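A quick way to load them all in one go, a sketch using the module names from my /etc/modules above (adapt the list to your own joystick hardware):

# for m in joydev ns558 sidewinder gameport analog adi pcigame iforce evdev usbhid; do modprobe $m; done
# ls /dev/input/js*

The ls should show a js0 (or similar) device node once the joystick is recognized.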
Next, calibrate the joystick with a tool like jstest-gtk.
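jstest-gtk is packaged in Debian, so assuming the standard repositories it is just:

# apt-get install --yes jstest-gtk
$ jstest-gtk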

jktest-gtk-linux-screenshot
If you need a good xmame frontend for MATE / GNOME, try gxmame. It is a real pain in the ass to configure; note that the links below are the only combination I got working: a version of xmame with a good configuration and a gxmame with a prebuilt game list.
If you need the exact xmame version I'm using, the xmame-0.103.tar.bz2 archive is at:

https://www.pc-freak.net/files/xmame-0.103.tar.bz2
An old bundle of both gxmame and xmamerc configs is here

https://www.pc-freak.net/files/xmame.tar.gz

https://www.pc-freak.net/files/xmame-config-for-joystick-hipo.tar.gz
The xmame-config-for-joystick-hipo.tar.gz archive above is my old working ~/.xmame configuration.

My configuration of gxmame (even though it has a GUI for configuring) is below:

https://www.pc-freak.net/files/gxmame.tar.gz

xmamerc sample working file is here

https://www.pc-freak.net/files/xmamerc

My newest configs of xmame and gxmame, current as of Debian 10 Buster, are below:

https://www.pc-freak.net/files/gxmame-config-newest_may_2021.tar.gz

https://www.pc-freak.net/files/xmame-config-newest_may_2021.tar.gz

Note that in the newest configs my ROM files are configured to live under /home/hipo/Games/roms/; if your location differs, grep for it in the .xmame/* and .gxmame/* files and set the correct path locations.
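A sketch of how to find and mass-replace the path, assuming the configs are unpacked under your home directory (back them up first); /home/yourname/Games/roms is a placeholder, and the exact list of files to edit is whatever the grep reports:

$ grep -rl '/home/hipo/Games/roms' ~/.xmame ~/.gxmame
$ sed -i 's|/home/hipo/Games/roms|/home/yourname/Games/roms|g' ~/.xmame/xmamerc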

A VERY IMPORTANT NOTE: do not use the stable version of GXMame. Even though it worked fine back in 2003, the project is abandoned and unsupported as of 2021, and the latest stable gxmame-0.34b downloadable from its SourceForge website does not work correctly (even though it compiles fine).
You have to compile and use the newer version gxmame-0.35beta1 instead.
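A minimal sketch of unpacking and entering the source tree, assuming you have downloaded the gxmame-0.35beta1 tarball from the project's SourceForge page (the exact file name may differ):

# cd /usr/local/src
# tar -xzf gxmame-0.35beta1.tar.gz
# cd gxmame-0.35beta1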

If you want to use gxmame with a joystick you need to compile it with the respective option:
 

root@jeremiah:/usr/local/src/gxmame-0.35beta1# ./configure --enable-joystick

 

gxmame 0.35beta1

Print debugging messages…… : no
Joystick support………….. : yes

GXMame will be installed in /usr/local/bin.
Warning: You have an old copy of gxmame at /usr/local/bin/gxmame.

 

configure complete, now type 'make'

To install the compiled binary do the usual:
 

# make && make install

The gxmame binary should be installed under /usr/local/bin/gxmame; once launched, you should get the shiny gxmame GUI.
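A quick sanity check that the new binary, and not the old copy the configure script warned about, is the one on your PATH:

$ which gxmame
$ gxmame &

which should print /usr/local/bin/gxmame; if it shows another path, remove or rename the old copy first.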

gxmame-screenshot-debian-10-gnu-linux-mate-desktop

Perhaps there was other stuff I did in the process that I forgot to document here, so if you try to follow my guide and something does not work, please tell me what I'm missing, and if you can't handle it either, contact me.

The guide is for Debian Linux but should work on other .deb-based Linux distros such as Ubuntu / Linux Mint etc.

To enjoy my 4GB present of ROM files, containing many of the best-known M.A.M.E. ARCADE GAMES, check the archive here. Note that this collection was downloaded from the Internet and I do not hold any responsibility for the archive. If it contains files with any copyright infringement, using them is at your own risk.