How to Install Jitsi Meet on Debian Linux to have your own free software secure video conferencing server


April 24th, 2025

 

jitsi-meet-create-new-room-for-video-meetings-linux

 

Jitsi Meet is a free, open-source video conferencing platform that allows you to host secure and scalable video calls from a mobile phone, tablet, PC or any other device for which a Jitsi client is available. Jitsi Meet is the best free alternative one can get to Rakuten Viber / Facebook (Meta) / Zoom / Apple's FaceTime etc.
What makes Jitsi really worthy is the flexibility it gives you to keep your video communication a little more private and harder to capture than on the general video streaming platforms.
Jitsi is also very simple to use: it can be used either with a desktop client on Windows / Linux / Mac OS, on a smartphone running Android (Samsung / Huawei etc.) or iOS (iPhones) configured to use your Jitsi server, or directly via an SSL-encrypted web URL. The only thing I really don't like about Jitsi is that it uses Java and works in a rather cryptic way; it is pretty hard to debug or understand exactly how the software works, as when errors come up the usual crazy Java exceptions fill the Jitsi logs.

In the short guide below, I'll try to provide simple step-by-step instructions for installing Jitsi Meet on Debian-based systems, hoping that anyone can benefit from Jitsi by building his own server.

 

jitsi-meet-conference-free-open-source-video-streaming-viber-and-facebook-alternative


What you should have before you start building your new Jitsi Meet server

Before you begin, ensure that your system meets the following requirements:

  • A fresh installation of Debian 10 (Buster) or newer.

  • A non-root user with sudo privileges.

  • A fully updated system.

  • A domain name pointing to your server's IP address.

  • OpenJDK 11 installed.​

To get a better understanding of how Jitsi Meet works, it is worth taking a quick look at the Jitsi architectural diagram:

Jitsi-meet-video-conferencing-software-linux-windows-mac-Architectural-diagram
 

1. Update Your System

Start by updating your system's package list and upgrading existing packages:​

# apt update && apt upgrade -y

2. Install Required Dependencies

Install the necessary packages for adding repositories and managing keys:​

# apt install apt-transport-https curl gnupg2 -y

3. Add Jitsi Repository

Add the Jitsi repository key to your system:

# curl https://download.jitsi.org/jitsi-key.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/jitsi-keyring.gpg

Then, add the Jitsi repository:​

# echo "deb [signed-by=/usr/share/keyrings/jitsi-keyring.gpg] https://download.jitsi.org stable/" | sudo tee /etc/apt/sources.list.d/jitsi-stable.list > /dev/null

Update your package list so the Jitsi repository is included in the apt database:

 # apt update

4. Install Jitsi Meet

Install the Jitsi Meet package:​  

# apt install jitsi-meet -y

During installation, you'll be prompted to:​

  • Enter the hostname: Provide your domain name (e.g., meet.example.com ).

  • Choose SSL certificate: Select "Generate a new self-signed certificate" or "Obtain a Let's Encrypt certificate" if you have a valid domain.

If you opt for Let's Encrypt, ensure that ports 80 and 443 are open on your firewall.​

5. Configure Firewall openings

If you already have a firewall configured to filter traffic, open the necessary ports to allow traffic to your Jitsi Meet server, both on your router / entry firewall device and on the Linux host itself:

Allow access to SSH server

# ufw allow 22/tcp


Allow HTTP (unencrypted) and HTTPS access to the Jitsi Meet server

# ufw allow 80/tcp
# ufw allow 443/tcp


Allow the ports necessary for proper operation of the Jitsi Videobridge (UDP port range 10000 to 20000)
 

# ufw allow 10000:20000/udp
# ufw enable

 

Verify the firewall status is okay:

# ufw status

6. Access Jitsi Meet in a browser

Open a web browser and navigate to your server's domain or IP address:​

https://meet.your-custom-domain-or-IP.com

Hopefully all is okay and you should see the Jitsi Meet interface, where you can start or join a meeting.

7. Secure Conference Creation (Optional extra)

By default, anyone can create a conference. To restrict this:​

  1. Install and configure Prosody for authentication.
    For those who don't know, Prosody is a modern XMPP communication server.

  2. Set up secure domains and configure authentication settings.​

For detailed instructions, refer to the Jitsi DevOps Guide. ​
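As a quick illustration, here is a minimal sketch of the classic way this is done (an assumption to verify against your Jitsi version's docs, since newer releases moved some jicofo settings into /etc/jitsi/jicofo/jicofo.conf; the domain meet.example.com is a placeholder):

# In /etc/prosody/conf.avail/meet.example.com.cfg.lua switch the main VirtualHost from
#   authentication = "anonymous"   to   authentication = "internal_hashed"
# and add a guest VirtualHost so invited participants can still join without a password:
#   VirtualHost "guest.meet.example.com"
#       authentication = "anonymous"
#       c2s_require_encryption = false

# Point jicofo at the authenticated XMPP domain (classic sip-communicator.properties method):
# echo 'org.jitsi.jicofo.auth.URL=XMPP:meet.example.com' >> /etc/jitsi/jicofo/sip-communicator.properties

# Register a user that is allowed to open conferences, then restart the stack:
# prosodyctl register confadmin meet.example.com 'Some-Strong-Pass'
# systemctl restart prosody jicofo jitsi-videobridge2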
 

Conclusion

You should now have successfully installed Jitsi Meet on your Debian server.
Installing on Ubuntu should be nearly identical, and on Red Hat based OSes such as Fedora or RHEL-derived distros the procedure should not be much different from this guide, except you have to use
the correct RPM repositories.

Now you can host secure video conferences using your own infrastructure, enjoy increased privacy, and perhaps be a bit calmer that the CIA, Mossad, MI6 or FSB might not be spying on your video conference talks (unless they already do it on an OS level, which is most likely the case, but that doesn't matter :).

For advanced configurations and features, consult the Jitsi Handbook and the Jitsi DevOps Guide.

That's all folks. Enjoy!

How to Install and Set Up an NFS Server with network shares on Linux to ease data transfer across multiple hosts


April 7th, 2025

How to Configure NFS Server in Redhat,CentOS,RHEL,Debian,Ubuntu and Oracle Linux

Network File System (NFS) is a protocol that allows one system to share directories and files with others over a network. It's commonly used in Linux environments for file sharing between systems. In this guide, we'll walk you through the steps to install and set up an NFS server on a Linux system.

Prerequisites

Before you start, make sure you have:

  • A Linux system of any distro (e.g., Ubuntu, CentOS, Debian, etc.)
  • Root or sudo privileges on the system.
  • A network connection between the server (NFS server) and clients (machines that will access the shared directories).
     

1. Install NFS Server Package on Ubuntu / Debian based Linux systems

a. First, update the package list 

# apt update

b. Install the NFS server package
 

# apt install nfs-kernel-server

2. Install NFS Server Package on CentOS / RHEL-based systems

Install the NFS server package:
 

      # yum install nfs-utils 

Once the package is installed, ensure that the necessary services are enabled.

 3. Create Shared Directory for file sharing

Decide which directory you want to share over NFS. If the directory doesn't exist, you can create one. For example:

# mkdir -p /nfs_srv_dir/nfs_share

Make sure the directory has the appropriate permissions so that the NFS clients can access it.

# chown nobody:nogroup /nfs_srv_dir/nfs_share 
# chmod 755 /nfs_srv_dir/nfs_share

4. Configure NFS Exports ( /etc/exports file)

The NFS exports file (/etc/exports) is perhaps the most important file you will have to create and deal with regularly to define the exported shares; this file contains the configuration settings for directories you want to share with other systems.

       a. Open the /etc/exports file for editing:

# vi /etc/exports

Add an entry for the directory you want to share. For example, if you're sharing /nfs_srv_dir/nfs_share and allowing access to all systems on the 192.168.1.0/24 network, add the following line:
 

/nfs_srv_dir/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)


Here’s what each option means:

  • rw: Read and write access.
  • sync: Ensures that changes are written to disk before responding to the client.
  • no_subtree_check: Disables subtree checking, which improves reliability when an exported subdirectory is renamed, at a slight theoretical security cost.

 

Here are a few example lines from the working /etc/exports on my home NFS server:

/var/www 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/jordan 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/mobfiles 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.200/32(rw,no_root_squash,async,subtree_check)
/home/hipo/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/alex/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/necroleak/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/bashscripts 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/backups/Family-Videos 192.168.0.200/32(ro,no_root_squash,async,subtree_check)

 

5. Export the NFS Shares with exportfs command

Once the exports file is configured, you need to tell the NFS server to start sharing the directory:
 

# exportfs -a


The -a flag makes exportfs export all the directories listed in /etc/exports.
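A small practical hint: whenever you later edit /etc/exports, you don't need to restart anything; re-exporting in place re-reads the file and syncs the kernel's export table:

# exportfs -ra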

6. Start and Enable NFS Services

You need to start and enable the NFS server so it will run on system boot.

On Ubuntu / Debian Linux run the following commands:
 

# systemctl start nfs-kernel-server 
# systemctl enable nfs-kernel-server


On CentOS / RHEL Linux:
 

# systemctl start nfs-server
# systemctl enable nfs-server


7. Allow NFS Traffic Through the Firewall

If your server has a firewall configured / enabled, you will need to allow NFS-related ports through the firewall.
These ports include TCP port 2049 (NFS) and TCP and UDP port 111 (rpcbind), plus some additional ports.

On Ubuntu / Debian (assuming you are using ufw [Uncomplicated Firewall]):

# ufw allow from 192.168.1.0/24 to any port nfs
# ufw reload

On CentOS / RHEL Linux:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --reload

8. Verify NFS Server is Running

To ensure the NFS server is running properly, use the following command:
 

# systemctl status nfs-kernel-server

or

# systemctl status nfs-server

You should see output indicating that the service is active and running.
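You can additionally list what the server currently exports with the showmount tool (part of the nfs-utils / nfs-common packages); the output should reflect your /etc/exports entries, e.g.:

# showmount -e localhost
Export list for localhost:
/nfs_srv_dir/nfs_share 192.168.1.0/24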

 

9. Test the NFS Share (Client-Side)

To test the NFS share, you will need to mount it on a client machine. Here's how to mount it:

On the client machine, install the NFS client utilities:

Ubuntu / Debian Linux

# apt install nfs-common

For CentOS / RHEL Linux

# yum install nfs-utils


Create a mount point (no matter the distro):
 

# mkdir -p /mnt/nfs_share


Mount the NFS share:

# mount -t nfs <nfs_server_ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share

Replace <nfs_server_ip> with the IP address of the NFS server or DNS host alias if you have one defined in /etc/hosts file.

Verify that the share is mounted:

# df -h

You should see the NFS share listed under the mounted file systems.

10. Configure Auto-Mount at Boot (Optional)

To have the NFS share automatically mounted at boot, you can add an entry to the /etc/fstab file on the client machine.

Open /etc/fstab for editing:

# vi /etc/fstab

Add the following line: 

<server-ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults 0 0

Save and close the file.

The NFS share will now be automatically mounted whenever the system reboots.
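An optional tweak worth considering: adding the nofail and x-systemd.automount mount options keeps the client bootable even if the NFS server happens to be down, and mounts the share lazily on first access instead of at boot:

<server-ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults,nofail,x-systemd.automount 0 0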

Debug NFS configuration issues (basics)

 

You can continue to modify the /etc/exports file to share more directories or set specific access restrictions depending on your needs.

If you encounter any issues, checking the server logs or running the exportfs -v command can help greatly in troubleshooting the NFS configuration:

# exportfs -v
/var/www          192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data      192.168.0.205/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda1/
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda2/info
        192.168.0.200/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/mobfiles    192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data/public_html
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/public
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/neon/data
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/scripts      192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/backups/data-limited
        192.168.0.200/32(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/disk/filetransfer
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/public_shared/data
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)


Of course there is much more to be said on the topic: you can, for example, check /var/log/messages, /var/log/syslog and other logs that can give you hints about issues, as well as manually try to mount / unmount a stuck NFS share to learn more about what is going on, but for a start that should be enough.

Sum it up: what did we learn?

We learned how to set up a basic NFS server and mount its shared directory on a client machine.
This is a great solution for centralized file sharing and collaboration on Linux systems. Even though many companies try to avoid it due to its lack of connection encryption, for historical reasons NFS has been widely used over the years and has helped dramatically in turning the Internet into the World Wide Web of today. Thus, for a well-secured network and perhaps a non-critical files infrastructure, NFS is still a key player for sharing multitudes of gigabytes or terabytes / petabytes of data among your personal computers, servers, phones, tablets and generally all kinds of digital equipment over heterogeneous networks.

Protect application servers against SQL injections, redirection handling and clickjacking with the HAProxy load balancer


April 1st, 2025

 

Let's say you are a system administrator who has to manage HAProxy load balancers for High Availability that are throwing traffic to a set of 4 application servers, and you only do round-robin traffic load balancing, seamlessly, without modifying the sent traffic. The haproxies are used only to send the frontend traffic towards the application machines, and the traffic is then returned back via another set of haproxies.


As it is crucial for incoming requests to the application frontend to be secure, in this article I'll give a few options that can be turned on in HAProxy to strengthen the security of the backend application (against "hackers" / script kiddies).

Here is a sample chunk of HAProxy frontend / backend configuration you can use in the haproxy.cfg config file for the purpose.


  frontend Incoming_Frontend
           bind 10.10.150.8:80 ssl crt /etc/haproxy/certs/your-domain-cert.net_haproxy.pem ca-file /etc/haproxy/certs/CustomCompanyCA.crt verify optional
           mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog
              timeout client 600s
              log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r

             default_backend bk_Incoming_Frontend

    backend bk_Incoming_Frontend
           mode http
           balance roundrobin
           timeout server 330s
           timeout connect 4s
           server bk_AppServer_01 10.10.250.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer02 10.40.251.30:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer03 10.50.252.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer04 10.80.253.50:8088 weight 1 check port 8088 on-marked-down shutdown-sessions

 

The configuration directives that improve backend security are these:

  mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog

The meaning of the above HAProxy config, explained, is as follows:

This HAProxy configuration is set up for handling HTTP traffic with some specific request and response modifications.

Let's go through each directive:

Breakdown of the Configuration:

  1. mode http

    • This tells HAProxy to operate in HTTP mode, meaning it understands and processes HTTP-specific directives (e.g., modifying headers, logging, etc.).

  2. http-request del-header max-forwards

    • This removes the Max-Forwards header from incoming HTTP requests.

    • The Max-Forwards header is used in TRACE or OPTIONS requests to limit the number of hops a request can take.

    • Removing it may help prevent some types of request-loop abuse or simplify routing.

  3. http-response set-header X-Frame-Options sameorigin

    • This sets the X-Frame-Options header in HTTP responses to sameorigin .

    • Purpose: Prevents clickjacking attacks by ensuring that the page can only be embedded in a frame if it’s from the same origin (not by third-party sites).

      For those who don't know, clickjacking is the malicious practice of manipulating a website user's activity by concealing hyperlinks beneath legitimate clickable content, thereby causing the user to perform actions of which they are unaware. For example, you click a payment button on a decoy website, but instead of paying the real target site, your money is sent to a malicious user's bank account.

  4. http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1

    • This modifies the Location header in HTTP responses.

    • It strips out the scheme ( http:// or https:// ), domain, and port, leaving only the path.

    • Example:

      • Before: Location: https://example.com:8080/path/to/resource

      • After: Location: /path/to/resource

    • This ensures that redirects remain relative instead of absolute, which can help in reverse proxy setups.

  5. option httplog

    • Enables detailed logging for HTTP traffic.

    • Logs will include request method, URL, response status, and other useful details for debugging and monitoring.


Purpose of This Configuration:

  • Security:

    • Removing Max-Forwards helps mitigate abuse.

    • X-Frame-Options: sameorigin prevents clickjacking.

  • Redirection Handling:

    • Ensures the backend does not expose internal hostnames or ports in redirects.

  • Logging:

    • Enables HTTP-specific logging for better monitoring and debugging.

This setup is typical for a reverse proxy scenario where HAProxy is fronting backend services while enforcing security measures and keeping responses clean.
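To quickly check that the headers are really set and rewritten after you reload haproxy, a plain curl against the frontend is enough (the hostname below is a placeholder); you should see output along these lines:

# curl -skI https://your-frontend.example.net/ | egrep -i 'x-frame-options|location'
x-frame-options: sameorigin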

What did we learn?

In this short article, we've learned how to improve application security with a simple HAProxy load balancer: by removing Max-Forwards (a limit on the max hops traffic can take until reaching its destination), by setting X-Frame-Options to prevent clickjacking, and by using redirection handling to make sure the backend does not expose internal hostnames or ports in redirects.

Any other meaningful protection options and hints for proxying traffic with HAProxy are most welcome to hear about in the comments section. If you know such, help others learn by sharing.

How to convert .p12 ssl certificate to .pem with openssl command


March 21st, 2025

In cryptography, PKCS #12 defines an archive file format for storing many cryptography objects as a single file. It is commonly used to bundle a private key with its X.509 certificate or to bundle all the members of a chain of trust.

A PKCS #12 file may be encrypted and signed. The internal storage containers, called "SafeBags", may also be encrypted and signed. A few SafeBags are predefined to store certificates, private keys and CRLs. Another SafeBag is provided to store any other data at individual implementer's choice.

PKCS #12 is one of the family of standards called Public-Key Cryptography Standards (PKCS) published by RSA Laboratories.

Privacy-Enhanced Mail (PEM) is a de facto file format for storing and sending cryptographic keys, certificates, and other data, based on a set of 1993 IETF standards defining "privacy-enhanced mail." While the original standards were never broadly adopted and were supplanted by PGP and S/MIME, the textual encoding they defined became very popular. The PEM format was eventually formalized by the IETF in RFC 7468.

If you already have a password-protected .P12 certificate provided by someone and you need to convert it to a .PEM, this can be done like so.

To convert the .p12 certificate:

# Initialize variables
cert_p12_in=your-domain-name-cert.p12
cert_p12_pass='XXXZZZYYYPPPQQQ'
cert_pem_out=your-domain-name-cert.pem
 
 
# Extract the private key
openssl pkcs12 -in $cert_p12_in -nocerts -nodes -passin "pass:$cert_p12_pass" 2>/dev/null | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > $cert_pem_out
 
# Extract the certificate
openssl pkcs12 -in $cert_p12_in -clcerts -nokeys -passin "pass:$cert_p12_pass" 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> $cert_pem_out
 
# Extract the Chain certificate, potentially nothing
openssl pkcs12 -in $cert_p12_in -cacerts -nokeys -chain  -passin "pass:$cert_p12_pass" 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' >> $cert_pem_out
 
# Display the result
cat $cert_pem_out
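To double-check the conversion went fine, you can print the main certificate fields from the freshly produced .pem (openssl skips the key block and reads the first certificate it finds):

# Verify the converted certificate parses and show its subject / issuer / validity
openssl x509 -in $cert_pem_out -noout -subject -issuer -dates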

That's all, you should have the .p12 successfully converted to .pem.
Cheers ! 🙂

Howto Verify an SSL certificate and its private key do match


March 17th, 2025

Howto verify an SSL certificate and its private key do match?

ssl-verify-pem-and-key-certificate-howto

 

In this article I'll show you how you can verify that a generated SSL certificate matches its private key. This is mostly useful as sometimes, when installing signed SSL certificates, the key might get mismatched, and the result is an SSL mismatch that prevents the supposed encryption between the end user and the service from working as expected.
 
I assume you already have a properly issued and signed SSL certificate, the private key you used to issue the certificate, as well as the entire certificate chain (intermediate CA and root CA).

Requirements

You must have the following items:

  • the signed SSL certificate
  • the certificate's private key
  • the entire certification chain (intermediate CA and root CA)

1. Procedure to verify certificate .crt and .key file match

The following procedures can be used to ensure the given certificate/private key are valid.

Private key verification

  • compute the private key modulus

 

$ openssl rsa -in  certificate.key -modulus -noout | openssl md5

(stdin)= e5220727Acc5396139823018773d55db

 

  • compute the certificate modulus

 

$ openssl x509 -in certificate.crt -modulus -noout | openssl md5

(stdin)= e5220727Acc5396139823018773d55db

 

  • the private key and certificate modulus md5 must match
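A side note worth stating as an assumption: the modulus trick above only applies to RSA keys. For ECDSA (or any other key type) you can compare the hashed public keys instead, which works universally; the two md5 sums must be identical for the pair to match:

$ openssl pkey -in certificate.key -pubout | openssl md5
$ openssl x509 -in certificate.crt -pubkey -noout | openssl md5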


How to verify Private key verification (one liner command)

The following command should return 'OK'

 

$ [[  "$(openssl rsa -in your_company_private_key.key -modulus -noout | openssl md5)"   ==  "$(openssl x509 -in and_your_company_private_key.crt -modulus -noout | openssl md5)"   ]] && echo OK || echo NOK

 

2. CA (Certificate Authority)  chain verification

Execute the following command; the certificate_file.ca file should contain the entire CA chain (intermediate CA + root CA):

 

$ openssl verify -CAfile certificate_file.ca certificate.crt
certificate.crt: OK

 

3. Expiry date verification of SSL certificate

 

$ openssl x509 -in certificate_file.crt -noout -startdate -enddate

 

4. Verify the expiry date of a running web service online or in private net

 

$ openssl s_client -connect your-remote-service.com:443 2> /dev/null  | openssl x509 -noout -startdate -enddate

notBefore=Oct 5 00:15:00 2024 GMT
notAfter=Oct 18 23:59:59 2026 GMT

 

If the service provides several certificates with SNI, you should use this command to get back the right certificate. You have to set the server name of the certificate you want returned:

 

$ openssl s_client -connect www.your-remote-service.com:443 -servername srv.your-remote-service.com 2> /dev/null | openssl x509 -noout -startdate -enddate

notBefore=Oct 5 00:15:00 2024 GMT
notAfter=Oct 18 23:59:59 2026 GMT

 

Sum it up: what did we learn?

In this short article we learned how to verify a .crt and a .key file do match, how to do a chain verification of an SSL cert, how to check the expiry date of a certificate, as well as how to use the openssl command to verify whether the certificate installed on a web service is set up and working.

How to Deploy a Docker Container with Apache on Debian Linux and assign container static IP address


February 14th, 2025

deploy-docker-container-with-static-ip-on-debian-linux-howto-logo

Deploying a Docker container with Apache on Debian Linux is an efficient way to manage web servers in isolated environments. Docker provides a convenient way to package and run applications, and when combined with Apache, it can be used for hosting websites or web applications. In this guide, we’ll walk through the necessary steps to set up and run an Apache web server inside a Docker container on a Debian Linux machine.

Prerequisites

Before starting, ensure that you have the following prerequisites in place:

  • A Debian-based Linux system (e.g., Debian 10, Debian 11).
  • Docker installed on your system. If you don’t have Docker installed, follow the installation steps below.
  • Basic knowledge of Linux commands and Docker concepts.

Step 1: Install Docker on Debian

First, you need to install Docker if it is not already installed on your Debian machine. Here’s how to install Docker on Debian:

  1. Update the package database:
     

    # apt update

  2. Install the required dependencies:

    # apt install apt-transport-https ca-certificates curl gnupg lsb-release

  3. Add Docker’s official GPG key:

    # curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

  4. Set up the stable Docker repository:
     

    # echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
    https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
    | tee /etc/apt/sources.list.d/docker.list > /dev/null 
    

     

  5. Install Docker Engine:
     

    # apt update
    # apt install docker-ce docker-ce-cli containerd.io

     

  6. Start Docker and enable it to run on boot:
     

    # systemctl start docker
    # systemctl enable docker

  7. Verify Docker installation:
     

    # docker --version

    This should display the installed Docker version, confirming that Docker is installed successfully.
     

Step 2: Pull Apache Docker Image or whatever docker image you want to have installed

Now that Docker is set up, you need to pull the official Apache image from Docker Hub. The Apache image is maintained by the Docker team and comes pre-configured for running Apache in a container.
 

  1. Pull the Apache HTTP Server image:

    # docker pull httpd

    This will download the official Apache HTTP server image ( httpd ) from Docker Hub.

Step 3: Run Apache Container

Once the Apache image is pulled, you can start a Docker container running Apache.

  1. Run the Docker container:

    # docker run -d --name apache-container -p 80:80 httpd

    Here’s what the options mean:

    • -d : Runs the container in detached mode (in the background).
    • --name apache-container : Names the container apache-container .
    • -p 80:80 : Maps port 80 on the host to port 80 in the container (so you can access the Apache web server through port 80).
    • httpd : The name of the image you want to run (the Apache HTTP server).
  2. Verify the container is running:

    # docker ps

    This will show a list of running containers. You should see the apache-container running.

  3. Test the Apache server:

    Open a web browser and go to http://<your-server-ip> . You should see the default Apache welcome page, indicating that Apache is running successfully in the Docker container.

Step 4: Customize Apache Configuration (Optional)

You may want to customize the Apache configuration or serve your own website inside the container. Here’s how to do it:

 

Run the Apache Docker Container with a Specific IP Address

To bind the container to a specific host IP address, use the --publish (-p) flag while running the container.

  • If you want to bind Apache to a specific IP address on the host (for example, 192.168.1.100 ), use the --publish option:

# docker run -d --name apache-container -p 192.168.1.100:80:80 httpd


This command tells Docker to bind port 80 in the container to port 80 on the host's IP address 192.168.1.100 . Replace 192.168.1.100 with the desired IP address of your system.
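Since the article title also promises a static IP for the container itself, here is a minimal sketch of that approach (the network name and subnet below are made-up values, choose your own): create a user-defined bridge network and pin the container's address with --ip:

# docker network create --subnet=172.25.0.0/16 apache-net
# docker run -d --name apache-container --net apache-net --ip 172.25.0.10 -p 80:80 httpd

Note the --ip flag only works on user-defined networks with an explicit subnet, not on the default bridge.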

  1. Create a directory for your custom website:

    # mkdir -p /home/user/my-website

  2. Add an index.html file or whatever PHP / Perl files will be served:

    Create a simple HTML file in the directory:
     

    # echo '<html><body><h1>Hello, Apache on Docker!</h1></body></html>' > /home/user/my-website/index.html

  3. Stop the running Apache container:

    # docker stop apache-container

  4. Remove the stopped container:

    # docker rm apache-container

  5. Run a new container with your custom website:

    Now, you can mount your custom directory into the container as a volume:

    # docker run -d --name apache-container -p 80:80 -v /home/user/my-website:/usr/local/apache2/htdocs/ httpd

    The -v option mounts the local directory /home/user/my-website to the Apache server’s default document root directory ( /usr/local/apache2/htdocs/ ).

  6. Verify the custom website:

    Reload the web page in your browser. Now, you should see the "Hello, Apache on Docker!" message, confirming that your custom website is being served from the Docker container.

Step 5: Manage Docker Containers

You can manage the running Apache container with the following commands:

  • Stop the container:

    # docker stop apache-container

  • Start the container:

    # docker start apache-container

  • Remove the container (if needed):

    # docker rm apache-container

  • View logs for troubleshooting:

    # docker logs apache-container

Step 6: Automating Docker Container Deployment (Optional step)

If you want the Apache container to restart automatically after a system reboot, you can add the --restart flag to the docker run command.

For example, to make the container restart automatically unless it is manually stopped, use:
 

# docker run -d --name apache-container -p 80:80 --restart unless-stopped \
-v /home/user/my-website:/usr/local/apache2/htdocs/ httpd 

Conclusion

By following these steps, you can easily deploy Apache inside a Docker container on a Debian Linux machine. Docker allows you to run your Apache web server, or whatever docker app you need to have, in a lightweight and isolated environment, which is useful for development, testing, and production environments. You can further customize this setup by adding additional configurations, integrating with databases, or automating deployments with Docker Compose or Kubernetes.

Enjoy your new Dockerized Apache setup!

How to prevent /etc/resolv.conf from being overwritten on every Linux boot. Make /etc/resolv.conf DNS records permanent forever


February 4th, 2025

how-to-make-prevent-etc-resolv.conf-to-ovewrite-on-every-linux-boot-make-etc-resolv-conf-permanent-forever

Have you recently been seriously bothered, after one of the updates from an older to a newer Debian / Ubuntu / CentOS or other Linux distribution, by the fact that /etc/resolv.conf has become a dynamic file that, pretty much in the spirit of cloud technologies, is being regenerated and overwritten on each and every system (server) OS update / reboot, and due to that you start getting some wrong, inappropriate DNS records in /etc/resolv.conf, causing harm to the server infrastructure?

Managing my set of server infra, I have faced that oddity for some years now, and I guess every system administrator out there has suffered at some point from having to migrate an older Linux release to a newer one, where something gets messed up with DNS resolving due to this new Linux OS feature of /etc/resolv.conf not being really static any more.

The dynamic resolv.conf file for the glibc resolver used to be regenerated by the resolvconf command, and consequently can also be tampered with by dhcpd and the systemd-resolved service, as well as perhaps other mechanisms, depending on how the different Linux distribution architects make it behave …

There is more than one way to stop the annoying /etc/resolv.conf overwrite behavior

1. Using dhcpd to stop /etc/resolv.conf being overwritten

Using dhcpd, either a small null stub script or a separate hook script can be used.

The null script would look like this:

root@pcfreak:/root# vim /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate

#!/bin/sh
make_resolv_conf() {
    :
}

root@pcfreak:/root# chmod +x /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate

 

This script overrides an internal function called make_resolv_conf() that would normally overwrite resolv.conf, and instead does nothing.

On older Ubuntu and Debian versions this should work.


An alternative method is to use a small dhcp hook script like this:

root@pcfreak:/root# echo 'make_resolv_conf() { :; }' > /etc/dhcp/dhclient-enter-hooks.d/leave_my_resolv_conf_alone
chmod 755 /etc/dhcp/dhclient-enter-hooks.d/leave_my_resolv_conf_alone


On next boot, when dhclient runs, or when you manually run sudo ifdown -a ; sudo ifup -a,
it loads this nodnsupdate script or the hook script, and hopefully your manually configured values in /etc/resolv.conf will not be messed up anymore.

2. Use chattr to set the immutable flag attribute on /etc/resolv.conf to prevent reboots from overwriting it

Anyways, the universal and simple "hack" many prefer to use instead of dhcp tweaks (especially as not everyone runs dhcp on a server) to prevent /etc/resolv.conf from being overwritten is to recreate the file and make it immutable with chattr (assuming chattr is supported by the filesystem you use on the Linux host, i.e. EXT3 / EXT4 / XFS).

You might need to check the filesystem type, before using chattr.

root@pcfreak:/root# blkid  | awk '{print $1 ,$3, $4}'
/dev/xvda1: TYPE="xfs"
/dev/xvda2: TYPE="LVM2_member"
/dev/mapper/centos-root: TYPE="xfs"
/dev/mapper/centos-swap: TYPE="swap"
/dev/loop0:
/dev/loop1:
/dev/loop2:

 

Normally EXT filesystems and XFS support it; note that this is not going to be the case with a network filesystem like NFS.

If you have some weird filesystem type and you try to chattr, you will get an error like:

chattr: Inappropriate ioctl for device while reading flags on /etc/resolv.conf

To make the /etc/resolv.conf file unchangeable on next boot by dhcpd or systemd-resolved
(a systemd service that provides network name resolution to local applications via a D-Bus interface, the resolve NSS service (nss-resolve)):
 

root@pcfreak:/root# rm -f /etc/resolv.conf  
{ echo "nameserver 1.1.1.1";
echo "nameserver 1.0.0.1;
echo "search mydomain.com"; } >  /etc/resolv.conf
chattr +i  /etc/resolv.conf
reboot  


Also, if you don't want unexpected results after some update caused by systemd-resolved doing something strange, it is a good thing to rename the following file to /etc/systemd/resolved.conf.dpkg-bak or completely remove it:

/etc/systemd/resolved.conf

To prevent dhcpd from overwriting the server's /etc/resolv.conf with something automatically taken from a preconfigured central DNS inside the network configuration made in /etc/network/interfaces, such as:

        dns-nameservers 127.0.0.1 8.8.8.8 8.8.4.4 207.67.222.222 208.67.220.220


You need to change the DHCP client configuration file named dhclient.conf and use the supersede option.
To do so, edit /etc/dhcp/dhclient.conf.

Look for lines like these:

#supersede domain-name "fugue.com home.vix.com";
#prepend domain-name-servers 127.0.0.1;

Remove the preceding “#” comment and set the domain-name and/or domain-name-servers you want (your DNS FQDN). Save, and hopefully the DNS-related overwrites of /etc/resolv.conf will be stopped, i.e. changes done manually inside /etc/resolv.conf should stay permanent.
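For illustration, the uncommented lines could end up looking like this (the domain and resolver IPs are placeholders, substitute your own):

supersede domain-name "mydomain.com";
supersede domain-name-servers 127.0.0.1, 8.8.8.8;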

Also it is a good practice to disable the ddns-update-style directive inside /etc/dhcp/dhcpd.conf:

root@pcfreak:/root# vim /etc/dhcp/dhcpd.conf
##ddns-update-style none;

However, on many newer Debian Linux releases as of 2025 and their .deb based derivative distros, you have to consider that /etc/resolv.conf is a symlink to another file, /etc/resolvconf/run/resolv.conf.

If that is the case with you, then you'll have to set the immutable chattr attribute flag like so:

root@pcfreak:~# chattr -V +i /etc/resolvconf/run/resolv.conf
chattr 1.47.0 (5-Feb-2023)
Flags of /etc/resolvconf/run/resolv.conf set as ----i---------------

root@pcfreak:/root# lsattr /etc/resolvconf/run/resolv.conf
----i--------------- /etc/resolvconf/run/resolv.conf

3. Make /etc/resolv.conf permanent with a simple custom rc.local boot-triggered resolv.conf overwrite from a resolv.conf_actual template file

Consider that due to the increasing complexity of how Linux-based OS-es behave, and the fact that Linux is more and more written to fit integration into the Cloud and to be as easy as possible to containerize or orchestrate (with, let's say, docker or some cloud PODs) and other multitudes of OS virtualization modernities, /etc/resolv.conf might still get overwritten! 🙂

Thus I've come up with my very own unique and KISS (Keep It Simple Stupid) method to make sure /etc/resolv.conf is kept permanent by being overwritten with known-good content on every boot. For that "hack" trick you only need to have the good old /etc/rc.local enabled – I have written a short article on how it can be enabled on newer Debian / Ubuntu / Fedora / CentOS Linux here.

Prepare your permanent and static /etc/resolv.conf file containing your preferred server DNSes under a file /etc/resolv.conf_actual.

Here is an example of one of my /etc/resolv.conf template files, from which the real file gets overwritten on each boot.

root@pcfreak:/root# cat /etc/resolv.conf_actual
domain pc-freak.net
search pc-freak.net
#nameserver 192.168.0.1

nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 212.39.90.42
nameserver 212.39.90.43
nameserver 208.67.222.222
nameserver 208.67.220.220
options timeout:2 rotate


And in /etc/rc.local, before the exit directive inside the file, simply place a copy command from the template over the original /etc/resolv.conf file's real location.

Before proceeding to add it, assure yourself the /etc/rc.local file is actually executed by the OS:
 

root@pcfreak:/etc/dhcp# systemctl status rc-local
● rc-local.service – /etc/rc.local Compatibility
     Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/rc-local.service.d
             └─debian.conf
     Active: active (exited) since Sun 2024-12-08 21:59:01 EET; 1 month 27 days ago
       Docs: man:systemd-rc-local-generator(8)
    Process: 1417 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
        CPU: 302ms

Notice: journal has been rotated since unit was started, output may be incomplete.

root@pcfreak:/root# vim /etc/rc.local

 

cp -rpf /etc/resolv.conf_actual /etc/resolvconf/run/resolv.conf


NB! Make sure this line is placed before any exit 0 command in /etc/rc.local, otherwise it won't work.

That's it folks 🙂 
Using this simple trick you should no longer be bothered by a mysterious /etc/resolv.conf overwrite on the next server reboot or system update (via puppet / ansible or some other centralized update automation stuff) causing you a service or infrastructure outage.

Enjoy !

How to log multiple haproxy server instance processes on a single server in separate files with rsyslog filters


February 3rd, 2025

haproxy-log-frontend-backend-and-transferred-connections-in-separate-log-files-on-linux-server-logo

Let's say you want to have several separate instances of haproxy and log their output to separate files; how can this be achieved?

In this article, I'll tell in a few easy steps how to enable multiple haproxy server instances created on the same Linux server / VPS or docker container to run and log their served content in separate log files, without using separate "local" facility logging handlers.
The task might be helpful for people who are involved with DevOps and have to route separate proxy traffic on the same Linux machine.
 

Let's say you have the following haproxy process instances running with separate haproxy configs:
 

1. haproxy
2. haproxy_worker2
3. haproxy_worker3

 

The list of processes on the Linux host would look like this:

[root@linux-server rsyslog.d]# ps -ef|grep -i hap
root     1151275 1147138  0 11:58 pts/2    00:00:00 grep --color=auto -i hap
root     1835200       1  0 Jan30 ?        00:00:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy  1835203 1835200  0 Jan30 ?        00:10:41 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
root     1835216       1  0 Jan30 ?        00:00:00 /usr/sbin/haproxy_worker2 -Ws -f /etc/haproxy/haproxy_worker2.cfg -p /run/haproxy_worker2.pid
haproxy  1835219 1835216  0 Jan30 ?        00:02:46 /usr/sbin/haproxy_worker2 -Ws -f /etc/haproxy/haproxy_worker2.cfg -p /run/haproxy_worker2.pid
root     1835216       1  0 Jan30 ?        00:00:00 /usr/sbin/haproxy_worker3 -Ws -f /etc/haproxy/haproxy_worker3.cfg -p /run/haproxy_worker3.pid
haproxy  1835219 1835216  0 Jan30 ?        00:02:46 /usr/sbin/haproxy_worker3 -Ws -f /etc/haproxy/haproxy_worker3.cfg -p /run/haproxy_worker3.pid

The question is how to log the connection IPs and frontend / backend outputs passed through the 3 configured haproxies to the separate files

 /var/log/haproxy.log , /var/log/haproxy_worker2.log and /var/log/haproxy_worker3.log


To achieve the task, you will need to set up 3 rsyslog config files. Name them according to your preferences, and make sure no other rsyslog
file with haproxy related configuration messes up these configs (e.g. none having a config start number NUMBER_file.conf that sorts before the files created below).

Then create, let's say, 48_haproxy.conf, 50_haproxy_worker2.conf and 51_haproxy_worker3.conf:

[root@linux-server rsyslog.d]# cat 48_haproxy.conf
#$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
#2022/02/02: HAProxy logs to local6, save the messages
# Template to include only the timestamp in HAProxy logs
template(name="HaproxyTimestampOnly" type="string" string="%timegenerated% %msg:::drop-last-lf%\n")
local6.*                /var/log/haproxy.log;HaproxyTimestampOnly
# Apply the template to HAProxy prod port mapping logs
#if $programname startswith 'haproxy[' then /var/log/haproxy.log;HaproxyTimestampOnly
& stop

[root@linux-server rsyslog.d]# cat 50_haproxy_worker2.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Template to include only the timestamp in HAProxy logs
template(name="HaproxyTimestampOnly" type="string" string="%timegenerated% %msg:::drop-last-lf%\n")

# Apply the template to HAProxy prod port mapping logs
if $programname startswith 'haproxy_worker2' then /var/log/haproxy_worker2.log;HaproxyTimestampOnly

 

[root@linux-server rsyslog.d]# cat 51_haproxy_worker3.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# Template to include only the timestamp in HAProxy logs
template(name="HaproxyTimestampOnly" type="string" string="%timegenerated% %msg:::drop-last-lf%\n")

# Apply the template to HAProxy prod port mapping logs
if $programname startswith 'haproxy_worker3' then /var/log/haproxy_worker3.log;HaproxyTimestampOnly
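
For the rsyslog filters above to have anything to match, each haproxy instance must of course send its logs to the local rsyslog UDP socket in the first place. Here is a minimal sketch of the relevant part of each instance's haproxy config (the local6 facility matches the 48_haproxy.conf rule above):

global
    log 127.0.0.1:514 local6

defaults
    log global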

Those rsyslog configs' permissions have to be as follows:

[root@linux-server home]# ls -al /etc/rsyslog.d/48_haproxy.conf
-rw-r--r-- 1 root root 488 Jan 30 12:44 /etc/rsyslog.d/48_haproxy.conf
[root@linux-server home]# ls -al /etc/rsyslog.d/50_haproxy_worker2.conf
-rw-r--r-- 1 root root 379 Jan 30 12:45 /etc/rsyslog.d/50_haproxy_worker2.conf
[root@linux-server home]# ls -al /etc/rsyslog.d/51_haproxy_worker3.conf
-rw-r--r-- 1 root root 379 Jan 30 12:45 /etc/rsyslog.d/51_haproxy_worker3.conf

 

The permissions for the haproxy log files have to be as so:

[root@linux-server home]# ls -al /var/log/haproxy.log
-rw-r----- 1 haproxy haproxy 5014349 Feb  3 12:11 /var/log/haproxy.log
[root@linux-server home]# ls -al /var/log/haproxy_worker2.log
-rw-r----- 1 root root 728139 Feb  3 12:11 /var/log/haproxy_worker2.log
[root@linux-server home]# ls -al /var/log/haproxy_worker3.log
-rw-r----- 1 root root 728139 Feb  3 12:11 /var/log/haproxy_worker3.log

To make the changes take effect, restart rsyslog first and then the 3 haproxy instances consecutively:

[root@linux-server home]# systemctl restart rsyslog
[root@linux-server home]# systemctl restart haproxy
[root@linux-server home]# systemctl restart haproxy_worker2
[root@linux-server home]# systemctl restart haproxy_worker3

Go on and check the logs to verify everything from the haproxies running on the same server comes into the separate files:

[root@linux-server home]# tail -f /var/log/haproxy.log /var/log/haproxy_worker2.log /var/log/haproxy_worker3.log

Hope this has helped someone out there looking to solve how to log multiple haproxy instances on the same server into separate files.

That's all folks. Enjoy!

Enable automatic updates on CentOS 8 , CentOS 9 Stream Linux with dnf-automatic and Cockpit Web GUI package management tool


January 15th, 2025

centos-8-and-centos-9-linux-enable-automatic-rpm-yum-updates-with-dnf-automatic-logo

Security for any OS is critical nowadays. Thus, whether you are a legacy CentOS system admin at work or are using the CentOS Stream releases 8 and 9 that are to be around for the coming years, enabling automatic updates is well worth considering.

CentOS 8 and CentOS 9 Stream Lifecycle


CentOS Stream follows the same lifecycle as Red Hat Enterprise Linux. From version 8 onward this means every version is supported for 10 years, split into 5 years of Full Support and 5 years of maintenance support. Users also have the option to purchase an additional 3 years of Extended Life Cycle Support (ELS) as an add-on.

Version    General Availability    Full Support Ends    Maintenance Support Ends    Extended Life Cycle Support (ELS) Ends
8          May 7, 2019             May 31, 2024         May 31, 2029                May 31, 2032
9          May 18, 2022            May 31, 2027         May 31, 2032                May 31, 2035


In this article, you are going to learn how to enable automatic software updates on CentOS 8 and CentOS 9 ( Stream ) Linux OS-es. I'll show how to set up your system to download and apply  security and other updates without user intervention.

It is really useful to turn on the CentOS automatic updates OS capability instead of typing yum update && yum upgrade all the time (and wasting time observing the process, which usually takes some 5 to 10 minutes); letting the OS automatically install updates in the background and notify you once all is done means you only have to periodically check what the dnf-automatic update tool has done, which in most successful cases would save you at least a few minutes per host. Automatic updates are critical especially if you have to maintain an infrastructure of CentOS virtual servers at version 8 or 9.

Those who use CentOS heavily might have already enabled and used dnf-automatic, but I guess, just like me until recently, most people using CentOS 8 don't know how to enable and apply CentOS Linux updates automatically, and this article might be helpful to them.
 

1. Enable Automatic CentOS 8 / 9 Updates Using DNF Automatic RPM Package


Install the dnf-automatic RPM package; it provides a DNF component that starts the update process automatically.
To install it on both CentOS 8 / 9:

[root@centos ~]# yum install dnf-automatic
CentOS Stream 9 – BaseOS                                                                                                                                   78 kB/s |  14 kB     00:00
CentOS Stream 9 – AppStream                                                                                                                                28 kB/s |  15 kB     00:00
CentOS Stream 9 – Extras packages                                                                                                                          81 kB/s |  18 kB     00:00
Dependencies resolved.
======================================================
 Package                                         Architecture                             Version                                          Repository                                Size
======================================================
Installing:
 dnf-automatic                                   noarch                                   4.14.0-23.el9                                    baseos                                    33 k
Upgrading:
 dnf                                             noarch                                   4.14.0-23.el9                                    baseos                                   478 k
 dnf-data                                        noarch                                   4.14.0-23.el9                                    baseos                                    37 k
 python3-dnf                                     noarch                                   4.14.0-23.el9                                    baseos                                   461 k
 yum                                             noarch                                   4.14.0-23.el9                                    baseos                                    88 k

Transaction Summary
=======================================================
Install  1 Package
Upgrade  4 Packages

Total download size: 1.1 M
Is this ok [y/N]: y
Downloading Packages:
(1/5): dnf-data-4.14.0-23.el9.noarch.rpm                                                                                                                  556 kB/s |  37 kB     00:00
(2/5): dnf-automatic-4.14.0-23.el9.noarch.rpm                                                                                                             406 kB/s |  33 kB     00:00
(3/5): yum-4.14.0-23.el9.noarch.rpm                                                                                                                       1.4 MB/s |  88 kB     00:00
(4/5): python3-dnf-4.14.0-23.el9.noarch.rpm                                                                                                               4.9 MB/s | 461 kB     00:00
(5/5): dnf-4.14.0-23.el9.noarch.rpm                                                                                                                       2.6 MB/s | 478 kB     00:00
——————————————————————————————————
Total                                                                                                                                                     1.1 MB/s | 1.1 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                  1/1
  Upgrading        : dnf-data-4.14.0-23.el9.noarch                                                                                                                                    1/9
  Upgrading        : python3-dnf-4.14.0-23.el9.noarch                                                                                                                                 2/9
  Upgrading        : dnf-4.14.0-23.el9.noarch                                                                                                                                         3/9
  Running scriptlet: dnf-4.14.0-23.el9.noarch                                                                                                                                         3/9
  Installing       : dnf-automatic-4.14.0-23.el9.noarch                                                                                                                               4/9
  Running scriptlet: dnf-automatic-4.14.0-23.el9.noarch                                                                                                                               4/9
  Upgrading        : yum-4.14.0-23.el9.noarch                                                                                                                                         5/9
  Cleanup          : yum-4.14.0-9.el9.noarch                                                                                                                                          6/9
  Running scriptlet: dnf-4.14.0-9.el9.noarch                                                                                                                                          7/9
  Cleanup          : dnf-4.14.0-9.el9.noarch                                                                                                                                          7/9
  Running scriptlet: dnf-4.14.0-9.el9.noarch                                                                                                                                          7/9
  Cleanup          : python3-dnf-4.14.0-9.el9.noarch                                                                                                                                  8/9
  Cleanup          : dnf-data-4.14.0-9.el9.noarch                                                                                                                                     9/9
  Running scriptlet: dnf-data-4.14.0-9.el9.noarch                                                                                                                                     9/9
  Verifying        : dnf-automatic-4.14.0-23.el9.noarch                                                                                                                               1/9
  Verifying        : dnf-4.14.0-23.el9.noarch                                                                                                                                         2/9
  Verifying        : dnf-4.14.0-9.el9.noarch                                                                                                                                          3/9
  Verifying        : dnf-data-4.14.0-23.el9.noarch                                                                                                                                    4/9
  Verifying        : dnf-data-4.14.0-9.el9.noarch                                                                                                                                     5/9
  Verifying        : python3-dnf-4.14.0-23.el9.noarch                                                                                                                                 6/9
  Verifying        : python3-dnf-4.14.0-9.el9.noarch                                                                                                                                  7/9
  Verifying        : yum-4.14.0-23.el9.noarch                                                                                                                                         8/9
  Verifying        : yum-4.14.0-9.el9.noarch                                                                                                                                          9/9

Upgraded:
  dnf-4.14.0-23.el9.noarch                   dnf-data-4.14.0-23.el9.noarch                   python3-dnf-4.14.0-23.el9.noarch                   yum-4.14.0-23.el9.noarch
Installed:
  dnf-automatic-4.14.0-23.el9.noarch

Complete!
[root@centos ~]#

Here is info on what dnf-automatic package will do: 

[root@centos ~]# rpm -qi dnf-automatic
Name        : dnf-automatic
Version     : 4.14.0
Release     : 23.el9
Architecture: noarch
Install Date: Wed 15 Jan 2025 08:00:47 AM -03
Group       : Unspecified
Size        : 57937
License     : GPLv2+
Signature   : RSA/SHA256, Thu 02 Jan 2025 01:19:43 PM -03, Key ID 05b555b38483c65d
Source RPM  : dnf-4.14.0-23.el9.src.rpm
Build Date  : Thu 12 Dec 2024 07:30:24 AM -03
Build Host  : s390-08.stream.rdu2.redhat.com
Packager    : builder@centos.org
Vendor      : CentOS
URL         : https://github.com/rpm-software-management/dnf
Summary     : Package manager – automated upgrades
Description :
Systemd units that can periodically download package upgrades and apply them.


Next up is configuring the dnf-automatic updates. The configuration file is located at /etc/dnf/automatic.conf. Once you have opened the file, set the required values to fit your software requirements.
The values you might want to modify are these:

 

[root@centos ~]# grep -v \# /etc/dnf/automatic.conf|sed '/^$/d'
[commands]
upgrade_type = default
random_sleep = 0
network_online_timeout = 60
download_updates = yes
apply_updates = no
reboot = never
reboot_command = "shutdown -r +5 'Rebooting after applying package updates'"
[emitters]
emit_via = stdio
[email]
email_from = root@example.com
email_to = root
email_host = localhost
[command]
[command_email]
email_from = root@example.com
email_to = root
[base]
debuglevel = 1
[root@centos ~]#

 

The most important things you need to tune in automatic.conf are:

[root@centos ~]# vim /etc/dnf/automatic.conf

apply_updates = no


should be changed to yes:

apply_updates = yes

so that the dnf-automatic service actually applies the downloaded updates.

It is also a good idea to set the email server configuration values, along with email_from and email_to, as well as the way reports are emitted: emit_via = stdio is the default (check the commented lines in the file for the other available options).
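A minimal sketch of how the changed sections could look, assuming you want updates applied automatically and reports delivered by email (the addresses below are placeholders, adjust them for your setup):

[commands]
apply_updates = yes

[emitters]
emit_via = email

[email]
email_from = dnf-automatic@centos.example.com
email_to = admin@example.com
email_host = localhost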

Finally, you can now enable dnf-automatic; execute the following command to schedule DNF automatic updates on your CentOS machine.

[root@centos ~]# systemctl enable --now dnf-automatic.timer


The command above enables and starts the systemd timer. To check the status of the dnf-automatic timer, run the following.

[root@centos ~]#  systemctl list-timers *dnf-*
NEXT                        LEFT       LAST                        PASSED      UNIT                ACTIVATES
Wed 2025-01-15 09:31:52 -03 13min left -                           -           dnf-makecache.timer dnf-makecache.service
Thu 2025-01-16 06:21:20 -03 21h left   Wed 2025-01-15 08:09:20 -03 1h 8min ago dnf-automatic.timer dnf-automatic.service

2 timers listed.
Pass --all to see loaded but inactive timers, too.

[root@centos ~]#
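To see what dnf-automatic actually did on its last run, you can also check the service journal:

[root@centos ~]# journalctl -u dnf-automatic.service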

 

Enable and Manage Automatic updates with Cockpit GUI web interface


Sooner or later even hard-core sysadmins have to enter the 21st century and start using web interfaces for server or desktop Linux management, to free their heads for more important stuff.
Cockpit is a great tool to help you manage and update your servers with almost no need to use the Linux console.

Cockpit is a very powerful tool for managing updates remotely through a web interface. It is very handy for system admins, as it gives you an overview of pending updates, supports automatic updates and lets you run RPM package management tasks through a web-based console.
Cockpit also allows updates across multiple servers, which makes it a kind of server orchestration tool for updating many machines running the same OS version.


If Cockpit isn't already pre-installed on your CentOS 8 / 9 (this depends on the type of install you have done), you might need to install it.

To install Cockpit:

[root@centos ~]# yum install cockpit -y

To make the web service accessible in a browser, you'll have to start it with the commands:

[root@centos ~]# systemctl start cockpit
[root@centos ~]# systemctl status cockpit
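Cockpit is socket-activated, so to also have it come up automatically at boot, enable the socket unit as well:

[root@centos ~]# systemctl enable --now cockpit.socket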

To access Cockpit, open https://localhost:9090 if you are on the machine itself, or https://SERVER_IP:9090/ if you access it over the network.
Note that, of course, firewalld will have to allow traffic to port 9090 on SERVER_IP.
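If firewalld is running, the opening is a one-liner, since a 'cockpit' service definition ships with firewalld on CentOS:

[root@centos ~]# firewall-cmd --permanent --add-service=cockpit
[root@centos ~]# firewall-cmd --reload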

 

centos-steam-cockpit-web-gui-autoupdate-tool-linux-screenshot1

By default Cockpit runs with a self-signed certificate; if you need to, you can set up a certbot (Let's Encrypt) certificate or regenerate the self-signed one to better manage the security risk. The first time, if you haven't changed the certificate, simply add a browser security exception and log in to Cockpit.

Once logged in you can check the available updates.

 

centos-steam-cockpit-web-gui-autoupdate-tool-linux-screenshot0

By default you will have to log in with a non-root account, preferably one that is authorized to become root via sudo.
To elevate to administrative privileges while in Cockpit, click on 'Administrative access' and grant Cockpit your superuser privileges.

centos-steam-cockpit-web-gui-autoupdate-tool-linux-screenshot2

Once authorized, you can run the upgrade and enjoy a coffee or beer in the meantime 🙂

centos-steam-cockpit-web-gui-autoupdate-tool-linux-screenshot-update-ongoing

Among the useful Cockpit options is also the Terminal, through which you can run commands just like over a normal web SSH service.

The 'Logs' section is also very useful, as it shows you clearly synthesized information on failed services and modules since the last OS boot.

 

centos-steam-cockpit-web-gui-autoupdate-tool-linux-screenshot3

To add and manage updates for multiple hosts, use the 'Add new host' menu, which is an expansion of the main machine on which Cockpit runs.


centos-steam-cockpit-web-gui-autoupdate-tool-linux-automatic-updates-settings

In the next window, turn automatic updates ON. You can then select the type of updates you want (Apply All Updates or Apply Security Updates), as well as the day and time you want the updates applied and the server rebooted.

CentOS 9's Cockpit even has support for the innovative kernel live patching, so the machine's kernel can be updated live, saving you the reboot after a complete patching of the OS, including the kernel.

centos-steam-cockpit-web-gui-autoupdate-tool-linux-kernel-live-patching-menu

Note that Cockpit cannot set up automatic updates without rebooting the system. Therefore, make sure your server can be rebooted at the time you've selected for the updates.

Sum it up


In this post, we have learned how to set up automatic updates on CentOS 8 / 9 Linux. There are two mainstream ways you can do it.
1. By using the DNF automatic updates tool.
With DNF automatic updates enabled on CentOS, the machine is updated faster, more seamlessly and more frequently than with manual updates.

This better protects the OS from crackers' cyber-attacks. Secondly, for lazier admins, or for better structuring of updates when they have to be executed on multiple hosts, the Cockpit web console is available.

With Cockpit, it's much easier to enable automatic updates, as the GUI is self-explanatory, as opposed to DNF automatic updates, which will cost you more time on the CLI (shell).
 

Enable Debian Linux automatic updates to keep latest OS Patches / Security Up to Date


January 13th, 2025


Debian: Understand Its Importance for the GNU/Linux World

I'm not a big fan of automatism on GNU / Linux, as automatic updates can often totally mess things up, especially on a complex and somewhat chaotic OS like Linux is nowadays.
Nevertheless, as security is becoming more and more of a problem, especially browser security, having a scheduled way to apply updates as an option, like every normal modern Windows and Mac OS has, is becoming essential for a fully manageable operating system.

As I use Debian GNU / Linux on the desktop of my own personal computer, and I already run a lot of Debian servers whose OS minor-level and package version maintenance takes up too big a chunk of my time (time I could dedicate to more useful activities), I found it worthwhile in some cases to turn on Debian's way of keeping the OS and its security at a current level, the so-called Debian "unattended upgrades".

In this article, I'll explain how to install and enable automatic ("unattended") updates on Debian, with the hope that other Debian users might start benefiting from them.
 

Pros of enabling automatic updates:

  • The Debian OS stays secure without constant monitoring.
  • You save a lot of time by letting your system handle updates.
  • Presumably, you enjoy more peace of mind, knowing your system is better protected.

Cons of enabling automatic updates:

  • Some exotic and badly maintained packages might break after an update.
  • Customizations made to the OS in /etc/sysctl.conf or any other very custom server configs might disappear or stop working after an update.
  • In the worst-case scenario (a very rare but possible case), the OS might fail to boot after an update 🙂

Regular security updates patch vulnerabilities that could otherwise be exploited by attackers, which is especially important for servers and systems exposed to the internet, where threats evolve constantly.

1. Update Debian System to latest

Before making any changes, run apt to update the package lists and upgrade any outdated packages, so the automatic updates configuration starts from a fully up-to-date system.

# apt update && apt upgrade -y

2. Install the Unattended-Upgrades deb Package 

# apt install unattended-upgrades -y

Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
The following additional packages will be installed:
  distro-info-data gir1.2-glib-2.0 iso-codes libgirepository-1.0-1 lsb-release python-apt-common python3-apt python3-dbus python3-distro-info python3-gi
Suggested packages:
  isoquery python-apt-doc python-dbus-doc needrestart powermgmt-base
The following NEW packages will be installed:
  distro-info-data gir1.2-glib-2.0 iso-codes libgirepository-1.0-1 lsb-release python-apt-common python3-apt python3-dbus python3-distro-info python3-gi unattended-upgrades
0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
Need to get 3,786 kB of archives.
After this operation, 24.4 MB of additional disk space will be used.
Do you want to continue? [Y/n]

 

 

Additionally, install apt-listchanges, which can show or mail you the changelog entries of packages as they get upgraded:

# apt install apt-listchanges
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
The following package was automatically installed and is no longer required:
  linux-image-5.10.0-30-amd64
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  python3-debconf
The following NEW packages will be installed:
  apt-listchanges python3-debconf
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 137 kB of archives.
After this operation, 452 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://deb.debian.org/debian bookworm/main amd64 python3-debconf all 1.5.82 [3,980 B]
Get:2 http://deb.debian.org/debian bookworm/main amd64 apt-listchanges all 3.24 [133 kB]
Fetched 137 kB in 0s (292 kB/s)
Preconfiguring packages …
Deferring configuration of apt-listchanges until /usr/bin/python3
and python's debconf module are available
Selecting previously unselected package python3-debconf.
(Reading database … 84582 files and directories currently installed.)
Preparing to unpack …/python3-debconf_1.5.82_all.deb …
Unpacking python3-debconf (1.5.82) …
Selecting previously unselected package apt-listchanges.
Preparing to unpack …/apt-listchanges_3.24_all.deb …
Unpacking apt-listchanges (3.24) …
Setting up python3-debconf (1.5.82) …
Setting up apt-listchanges (3.24) …

Creating config file /etc/apt/listchanges.conf with new version

 

An example config for apt-listchanges would look like this:

# vim /etc/apt/listchanges.conf
[apt]
frontend=pager
email_address=root
confirm=0
save_seen=/var/lib/apt/listchanges.db
which=both

3. Enable Automatic unattended upgrades

Once installed, enable automatic updates with the following command. It will prompt you, asking whether you want to enable automatic updates; select Yes and press Enter, which confirms that the unattended-upgrades service is active and ready to manage updates for you.

# dpkg-reconfigure unattended-upgrades

Configure-Unattended-Upgrades-on-Debian_Linux-dpkg-reconfigure-screenshot

Or non-interactively, by running the commands:

# echo unattended-upgrades unattended-upgrades/enable_auto_updates boolean true | debconf-set-selections
dpkg-reconfigure -f noninteractive unattended-upgrades
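Either way, enabling it writes the APT periodic switches to /etc/apt/apt.conf.d/20auto-upgrades; you can verify the result, which should look similar to:

# cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";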


4. Set the Schedule for Automatic Updates on Debian

By default, unattended-upgrades runs daily. To verify or modify the schedule, check the systemd timers:

# systemctl status apt-daily.timer
# systemctl status apt-daily-upgrade.timer
# systemctl edit apt-daily-upgrade.timer

The current apt-daily.timer config as of Debian 12 (bookworm) is as follows:

root@haproxy2:/etc/apt/apt.conf.d# cat  /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities

[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=timers.target
root@haproxy2:/etc/apt/apt.conf.d#


 

# systemctl edit apt-daily-upgrade.timer

[Timer]
OnCalendar=
OnCalendar=03:00
RandomizedDelaySec=0

 

Line 2 above (the empty OnCalendar=) is needed to reset the default value, the one shown on line 5 of the stock timer unit earlier.
Line 4 (RandomizedDelaySec=0) is needed to prevent any random delays coming from the defaults.


Now both timers should be active; if not, activate them with:

# systemctl enable --now apt-daily.timer
# systemctl enable --now apt-daily-upgrade.timer


These timers ensure that updates are checked and applied regularly, without manual intervention.
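To confirm both timers are scheduled as expected, list them (the exact times will differ on your machine):

# systemctl list-timers apt-daily*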

5. Test that Automatic Updates on Debian work

To ensure everything is working, simulate an unattended upgrade with a dry run:

# unattended-upgrade --dry-run
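For a much more verbose trace of what would be upgraded and why, combine it with the --debug flag:

# unattended-upgrade --dry-run --debug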

 

You can monitor automatic updates by checking the logs.

# less /var/log/unattended-upgrades/unattended-upgrades.log

The log shows details of installed updates and any issues that occurred. Reviewing the logs periodically can help you ensure that updates are being applied correctly and troubleshoot any problems.
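The raw dpkg output of the actual package installations lands in a separate file, which is handy when a particular package upgrade fails:

# less /var/log/unattended-upgrades/unattended-upgrades-dpkg.log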

6. Advanced Configuration Options

If you’re a power user or managing multiple systems, you might want to explore these additional settings in the configuration file:

# vim /etc/apt/apt.conf.d/50unattended-upgrades


Configure unattended-upgrades to send you an email whenever updates are installed.

Unattended-Upgrade::Mail "your-email-address@email-address.com";


Enable automatic reboots after kernel updates
by adding the line:

Unattended-Upgrade::Automatic-Reboot "true";

To schedule the reboot at a specific time after package upgrades are applied:

Unattended-Upgrade::Automatic-Reboot-Time "02:00";

Specify packages you don't want to be updated by editing the Unattended-Upgrade::Package-Blacklist section in the configuration file, as in the sketch below.
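Putting the options above together, a minimal sketch of the relevant part of /etc/apt/apt.conf.d/50unattended-upgrades could look like this (the e-mail address and the blacklisted package names are placeholders; blacklist entries are treated as regular expressions):

Unattended-Upgrade::Mail "your-email-address@email-address.com";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
Unattended-Upgrade::Package-Blacklist {
    // example entries: never auto-upgrade these
    "libc6$";
    "postgresql-";
};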

 

Here is an alternative way to configure the unattended upgrades, by using the APT periodic configuration options:

# vim /etc/apt/apt.conf.d/02periodic

// Control parameters for cron jobs by /etc/cron.daily/apt-compat //


// Enable the update/upgrade script (0=disable)
APT::Periodic::Enable "1";


// Do "apt-get update" automatically every n-days (0=disable)
APT::Periodic::Update-Package-Lists "1";


// Do "apt-get upgrade –download-only" every n-days (0=disable)
APT::Periodic::Download-Upgradeable-Packages "1";


// Run the "unattended-upgrade" security upgrade script
// every n-days (0=disabled)
// Requires the package "unattended-upgrades" and will write
// a log in /var/log/unattended-upgrades
APT::Periodic::Unattended-Upgrade "1";


// Do "apt-get autoclean" every n-days (0=disable)
APT::Periodic::AutocleanInterval "21";


// Send report mail to root
//     0:  no report             (or null string)
//     1:  progress report       (actually any string)
//     2:  + command outputs     (remove -qq, remove 2>/dev/null, add -d)
//     3:  + trace on
APT::Periodic::Verbose "2";

If you have to update multiple machines simultaneously over a limited connection line, you can also configure download (bandwidth) limits by setting APT options in /etc/apt/apt.conf.d/20auto-upgrades, as sketched below.
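A minimal sketch of such a limit; Acquire::http::Dl-Limit is a standard APT option, the value is in kilobytes per second and 250 here is just an example figure:

# vim /etc/apt/apt.conf.d/20auto-upgrades

Acquire::http::Dl-Limit "250";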

7. Stop Automatic Unattended Upgrade

If under some circumstances the unattended upgrades are no longer required and you want to revert back to manual package updates, you have to stop the unattended-upgrades service to disable the updates:

# systemctl stop unattended-upgrades


8. Stop an ongoing set of apt deb package updates being applied on a Debian server

Perhaps not often, but it might happen that an automated upgrade has broken a server system or a service, and for that reason you would like to immediately stop the upcoming upgrades (some of which might have already started on other servers). The easiest way to do so (not always safe, though) is to kill the unattended-upgrades daemon.
 

# pkill --signal SIGKILL unattended-upgrades


Note that this is a very brutal way to kill it, and it might leave behind a broken package update that you have to fix manually later.
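If a killed upgrade leaves dpkg in a half-configured state, the usual first-aid commands to finish or repair the interrupted installation are:

# dpkg --configure -a
# apt install -f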

If the unattended-upgrade process is running backgrounded in the OS process list and you want to stop the on-the-fly upgrade in a way that is safer for the system, you can stop and cancel the ongoing apt upgrade through the ncurses prompt interface of dpkg-reconfigure:

# dpkg-reconfigure unattended-upgrades


Then just select No and press Enter. In my case, this promptly stopped an ongoing unattended upgrade that seemed blocked (at least as promptly as the hardware allowed 🙂 ).

If you want to disable it for the future as well, so it doesn't automatically get re-enabled on the next manual update by some update script, disable the service too.
 

# systemctl disable unattended-upgrades

 

Close up

That's all! Now your Debian system will automatically handle security updates, keeping itself secure without you having to do a thing.
The same guide should be good for most deb-based distributions such as Ubuntu / Mint and the rest of the Debian derivative OS-es.
You've now set up a reliable way to ensure your system stays protected from vulnerabilities, but it is anyway a good practice to always log in and check what the update has done to the system; otherwise, expect the unexpected.