Posts Tagged ‘sudo’

How to Build a Linux Port Scan Honeypot with iptables and test with nmap

Wednesday, October 8th, 2025


Let’s build a simple, practical Linux port‑scan honeypot: iptables captures and redirects incoming connection attempts, lightweight listeners log the interaction, and nmap verifies the setup. I’ll give runnable commands, a small Python honeypot listener, iptables rules with rate‑limited logging, nmap test commands, and notes on analysis, hardening, and safety.

Important safety notes (read first)

  • Only deploy this on machines/networks you own or are explicitly authorized to test. Scanning other people’s systems or letting an exposed honeypot be used to attack others is illegal and unethical.
  • Run the honeypot inside an isolated VM or protected network segment (use separate host or VLAN), and don't forward traffic from other networks unless you understand the risks.
  • Keep logs rotated and watch disk usage — honeypots can attract high traffic.

Overview

  1. Prepare an isolated Linux VM.
  2. Add iptables rules to log and optionally redirect connection attempts to local listeners.
  3. Run simple service emulators (Python) to accept connections and log data.
  4. Use nmap to test and validate behavior.
  5. Collect, parse, and analyze logs; optionally integrate with syslog/ELK.

Setup assumptions

  • Ubuntu/Debian or similar Linux.
  • Root (sudo) access.
  • iptables, python3, nmap, and socat/netcat available (I’ll show alternatives).

1) Create a safe environment

  • Use a VM (VirtualBox/QEMU/KVM) or container with no sensitive data.
  • Give it a fixed private IP on an isolated network.
  • Optionally place the VM behind another firewall (so you can control exposure).

2) Simple iptables logging rules (filter table)

We want to log connection attempts but avoid log flooding by using limit. Put these commands in a root shell:

# create a honeypot chain

# iptables -N HONEY

# rate-limited LOG then return so traffic continues to normal behavior

# iptables -A HONEY -m limit --limit 5/min --limit-burst 10 -j LOG \
--log-prefix "HPOT_CONN: " --log-level 4

# record raw packets to kernel log (optional NFLOG/ulogd is better for high volume)

# allow the packet to continue (or drop if you want it to appear closed)

# iptables -A HONEY -j ACCEPT

 

# Apply HONEY chain to incoming TCP attempts to all ports (or a subset)

# iptables -A INPUT -p tcp -j HONEY

Notes:

  • This logs packets hitting INPUT. If your VM is behind NAT or you want to catch forwarded packets, apply to FORWARD too.
  • Using -j LOG writes to kernel log, often /var/log/kern.log or via rsyslog to /var/log/messages.
  • Use NFLOG (-j NFLOG) with ulogd if you want structured logging and higher throughput.

If you prefer to log only SYNs (new connection attempts) to reduce noise:

# iptables -A HONEY -p tcp --syn -m limit --limit 5/min --limit-burst 10 \
-j LOG --log-prefix "HPOT_SYN: " --log-level 4

If you want to redirect certain destination ports into local port listeners (so local fake services can accept connections), use the nat table PREROUTING:

# Example redirect incoming port 80 to local port 8080

# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

# For ports other than 80/443, add other rules

# iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222

Notes:

  • -t nat -A PREROUTING catches traffic before routing. Use the OUTPUT chain if the traffic originates on the host itself.
  • REDIRECT only works for local listeners.

3) Simple Python multi‑port honeypot listener

This script will bind multiple ports and log incoming connections and first bytes (banner or payload). Save as simple_honeypot.py:

#!/usr/bin/env python3

# simple_honeypot.py

import socket, threading, logging, time

 

LISTEN_PORTS = [22, 80, 443, 8080, 2222]   # customize

BANNER = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"

 

logging.basicConfig(

    filename="/var/log/honeypot.log",

    level=logging.INFO,

    format="%(asctime)s %(message)s"

)

 

def handle_client(conn, addr, port):

    try:

        conn.settimeout(5.0)

        data = conn.recv(4096)

        logged = data[:100].decode('utf-8', errors='replace')

        logging.info("CONNECT %s:%s -> port=%d first=%s", addr[0], addr[1], port, logged)

        # Send a small believable banner depending on port

        if port in (80, 8080):

            conn.sendall(BANNER)

        elif port in (22, 2222):

            conn.sendall(b"SSH-2.0-OpenSSH_7.4p1\r\n")

        else:

            conn.sendall(b"220 smtp.example.com ESMTP\r\n")

    except Exception as e:

        logging.info("ERROR %s", e)

    finally:

        try:

            conn.close()

        except:

            pass

 

def start_listener(port):

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

    s.bind(('0.0.0.0', port))

    s.listen(50)

    logging.info("Listening on %d", port)

    while True:

        conn, addr = s.accept()

        t = threading.Thread(target=handle_client, args=(conn, addr, port), daemon=True)

        t.start()

 

if __name__ == "__main__":

    for p in LISTEN_PORTS:

        t = threading.Thread(target=start_listener, args=(p,), daemon=True)

        t.start()

    # keep alive

    while True:

        time.sleep(3600)

Run it as root (or with capability binding) so it can listen on low ports:
 

# python3 simple_honeypot.py


# check logs

# tail -F /var/log/honeypot.log

Alternatives:

  • Use socat to spawn a logger for individual ports.
  • Use cowrie, glastopf, honeyd, sshpot, etc. if you want full-featured honeypots.

4) Using nmap to test the honeypot

Only scan your honeypot IP. Example commands:

Basic TCP port scan:

# nmap -sS -p- -T4 -oN scan_ports.txt <HONEYPOT_IP>

Service/version detection:

# nmap -sV -sC -p 22,80,443,2222 -oN scan_svcs.txt <HONEYPOT_IP>

Scan while avoiding ping (if your VM blocks ICMP):

# nmap -Pn -sS -p1-2000 <HONEYPOT_IP>

Aggressive scan + scripts (use on your own host only):

# nmap -A -T4 <HONEYPOT_IP>

Watch your honeypot logs while scanning to confirm entries.
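Before a full nmap run, a quick connect() sweep from the scanner machine confirms the listeners are reachable. A minimal Python sketch (host and port list are examples; adjust to your honeypot):

```python
#!/usr/bin/env python3
# probe_ports.py - quick TCP connect() sweep to sanity-check honeypot listeners
import socket

def probe_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # create_connection does the full TCP handshake, so each hit
            # also shows up in the honeypot and iptables logs
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

Usage: probe_ports("192.168.56.101", [22, 80, 2222, 8080]) should return exactly the ports your listeners bound.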

5) Log collection & analysis

  • Kernel log: sudo tail -F /var/log/kern.log or sudo journalctl -f
  • Python listener log: /var/log/honeypot.log
  • For structured logging and higher throughput: configure ulogd (NFLOG) or forward logs to syslog/rsyslog and ELK/Graylog.
  • Example simple grep: grep HPOT_CONN /var/log/kern.log | tail -n 200
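To go a step beyond grep, the HPOT_ kernel-log entries can be tallied per source and destination port. A small sketch (SRC= and DPT= are standard iptables LOG fields; the log path varies by distribution):

```python
#!/usr/bin/env python3
# hpot_summary.py - tally HPOT_* iptables LOG entries by (source IP, dest port)
import re
from collections import Counter

LINE_RE = re.compile(r"HPOT_\w+: .*?SRC=(\S+) .*?DPT=(\d+)")

def summarize(lines):
    """Count (src_ip, dst_port) pairs in iptables LOG output lines."""
    hits = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            hits[(m.group(1), int(m.group(2)))] += 1
    return hits

# usage: summarize(open("/var/log/kern.log"))
```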

6) Hardening & operational tips

  • Isolate honeypot inside a VM and snapshot it so you can revert.
  • Use rate limits in iptables to avoid log flooding: -m limit --limit 5/min.
  • Rotate logs: configure logrotate for /var/log/honeypot.log.
  • Do not allow outbound traffic from the honeypot unless needed – attackers may use it to pivot. Use egress firewall rules to restrict outbound connections.
  • Consider running the honeypot under an unprivileged user wherever possible.
  • If expecting a lot of traffic, use NFLOG + ulogd or suricata/zeek for more scalable capture.
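For the rotation tip above, a minimal logrotate policy for the listener log might look like this (save as /etc/logrotate.d/honeypot; a sketch, the retention values are examples):

```
/var/log/honeypot.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate matters here because the Python listener keeps the file open; plain rotation would leave it writing to the renamed file.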

7) Optional: Richer visibility with NFLOG / ulogd

If you anticipate higher volume or want structured logs, use:

# example: match all incoming TCP and send to NFLOG group 1

# iptables -I INPUT -p tcp -m limit --limit 10/min -j NFLOG --nflog-group 1

# run ulogd to write NFLOG to a file or DB (configure /etc/ulogd.conf)

8) Example scenario — bring it all together

  1. Start VM and ensure IP 192.168.56.101.
  2. Add iptables logging and redirect HTTP->8080.
  3. Run simple_honeypot.py with ports [22,80,2222,8080].
  4. From your scanner machine: nmap -sS -sV -p22,80,2222,8080 192.168.56.101
  5. Watch /var/log/honeypot.log and kernel logs for HPOT_* prefixes to see connections and payloads.

Quick reference for commands used:

  • Create honeypot chain and log SYNs:

# iptables -N HONEY

# iptables -A HONEY -p tcp --syn -m limit --limit 5/min --limit-burst 10 \
-j LOG --log-prefix "HPOT_SYN: "

# iptables -A INPUT -p tcp -j HONEY

  • Redirect port 80 -> 8080

# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

  • Run listener:

# python3 simple_honeypot.py

  • Test with nmap:

# nmap -sS -sV -p22,80,2222,8080 -Pn <IP>

Conclusion

Building a simple Linux port‑scan honeypot with iptables and lightweight listeners gives you practical, immediate visibility into who’s probing your network and how they probe. With a few well‑scoped iptables rules you can capture and rate‑limit connection attempts, redirect selected ports to local emulators, and keep tidy, analysable logs. The small Python listener shown is enough to collect banners and initial payloads; for higher volume or more fidelity you can step up to NFLOG / ulogd, Zeek / Suricata, or production honeypots like Cowrie.

Remember the two rules that keep a honeypot useful and safe: isolate it (VMs, VLANs, strict egress rules) and log thoughtfully (rate limits, rotation, structured formats). That protects your environment and makes the data you collect far more valuable. Over time, turn raw logs into indicators (IP reputations, patterns of ports/probes, common payloads) and feed them into alerts or dashboards to turn passive observation into active defense.

Optimizing Linux Server Performance Through Digital Minimalism: Reviewing Running Services and System Cleanup

Friday, October 3rd, 2025


In today’s landscape of bloated software stacks, automated dependency chains, and background services that consume memory and CPU without notice, Linux system administrators and enthusiasts alike benefit greatly from embracing digital minimalism: reducing what is set up and running on a server to the absolute minimum.

Digital minimalism in the context of Linux servers means removing what you don't need, disabling what you don't use, and optimizing what remains — all with the goal of increasing performance, improving security, and simplifying further maintenance.
In this article, we’ll walk through practical steps to declutter your Linux server, optimize resources, and regain control over what’s running and why.

1. Identify and Remove Unnecessary Packages

Over time, many systems accumulate unused packages — either from experiments, dependency installations, or unnecessary defaults.

On Debian/Ubuntu

Find orphaned packages:
 

# apt autoremove --dry-run


Remove unnecessary packages:
 

# apt autoremove
# apt purge <package-name>


List large installed packages:

# dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -n 20


On RHEL/CentOS/AlmaLinux:

Remove orphaned packages:

# dnf autoremove

List packages sorted by size:

# rpm -qa --qf '%{SIZE}\t%{NAME}\n' | sort -n | tail -n 20
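Either listing can also be post-processed programmatically. A sketch that parses the dpkg-query format shown above (installed size in KiB, tab, package name) and returns the largest packages:

```python
#!/usr/bin/env python3
# pkg_sizes.py - find the largest installed packages from
# `dpkg-query -Wf '${Installed-Size}\t${Package}\n'` output (KiB, tab, name)
def largest_packages(dpkg_output, top=20):
    """Return [(size_kib, package), ...] sorted largest-first."""
    rows = []
    for line in dpkg_output.splitlines():
        size, _, name = line.partition("\t")
        if size.strip().isdigit() and name:
            rows.append((int(size.strip()), name.strip()))
    return sorted(rows, reverse=True)[:top]
```

Note the rpm variant above prints sizes in bytes rather than KiB, so adjust accordingly.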


2. Audit and Disable Unused Services
 

Every running service consumes memory, CPU cycles, and opens potential attack surfaces.

List enabled services:
 

# systemctl list-unit-files --type=service --state=enabled

See currently running services:

# systemctl --type=service --state=running

Review these carefully and disable anything unnecessary.

 

Disable unneeded services:

# systemctl disable --now bluetooth.service
# systemctl disable --now cups.service
# systemctl disable --now ModemManager.service

And so on

Useful services to disable (if unused):
 

Service              Purpose               When to Disable
cups.service         Printer daemon        On headless servers
bluetooth.service    Bluetooth stack       On servers without Bluetooth
avahi-daemon         mDNS/Zeroconf         Not needed on most servers
ModemManager         Modem management      If not using 3G/4G cards
NetworkManager       Dynamic net config    Prefer systemd-networkd for static setups


Simple Shell Script to List & Review Services
 

#!/bin/bash
echo "Enabled services:"
systemctl list-unit-files --state=enabled | grep service
echo ""
echo "Running services:"
systemctl --type=service --state=running

3. Optimize Startup and Boot Time

Analyze system boot performance:

# systemd-analyze

View which services take the longest:

# systemd-analyze blame

min 25.852s certbot.service
5min 20.466s logrotate.service
1min 29.748s plocate-updatedb.service
     54.595s php5.6-fpm.service
     43.445s systemd-logind.service
     42.837s e2scrub_reap.service
     37.915s apt-daily.service
     35.604s mariadb.service
     31.509s man-db.service
     27.405s systemd-journal-flush.service
     18.357s ifupdown-pre.service
     14.672s dev-xvda2.device
     13.523s rc-local.service
     11.024s dpkg-db-backup.service
      9.871s systemd-sysusers.service

...

 

Disable or mask long-running services that are not essential.
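If you want to rank the blame output programmatically (for example, to alert on units slower than a threshold), here is a sketch that parses the "Xmin Y.YYYs unit" lines shown above; sub-second "ms" entries are skipped on purpose:

```python
#!/usr/bin/env python3
# blame_top.py - rank `systemd-analyze blame` output by startup time
import re

# matches "5min 20.466s logrotate.service" and "54.595s foo.service";
# millisecond-only lines deliberately do not match
TIME_RE = re.compile(r"(?:(\d+)min\s*)?([\d.]+)s\s+(\S+)")

def slowest(blame_output, top=5):
    """Return [(seconds, unit), ...] sorted slowest-first."""
    entries = []
    for line in blame_output.splitlines():
        m = TIME_RE.search(line)
        if m:
            minutes = int(m.group(1) or 0)
            entries.append((minutes * 60 + float(m.group(2)), m.group(3)))
    return sorted(entries, reverse=True)[:top]
```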


Why is masking services important?

Because after certain updates, an unwanted service daemon can be re-enabled and start again at system boot; masking prevents the unit from being started at all.

Example:
 

# systemctl mask lvm2-monitor.service


4. Reduce Memory Usage (Especially on Low-RAM VPS)
 

Monitor memory usage:

# free -h
# top
# htop

Use lightweight alternatives:

Service        Heavy          Lightweight Alternative
Web server     Apache         Nginx / Caddy / Lighttpd
Database       MySQL          MariaDB / SQLite (if local)
Syslog         rsyslog        busybox syslog / systemd journal
Shell          bash           dash / ash
File manager   GNOME Files    mc / ranger (CLI)


5. Configure Swap (Only If Needed)
 

Having too much or too little swap can affect performance.


Check if swap is active:

# swapon --show


Create swap file (if needed):

# fallocate -l 1G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile

Add to /etc/fstab for persistence:

/swapfile none swap sw 0 0
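For sizing, a common rule of thumb is roughly 2x RAM below 2 GB, 1x RAM up to 8 GB, and a fixed cap beyond that. A sketch that applies it to /proc/meminfo (the thresholds are an assumption; tune them to your workload):

```python
#!/usr/bin/env python3
# swap_hint.py - rough swap-size suggestion from /proc/meminfo
def suggest_swap_mb(meminfo_text):
    """Parse /proc/meminfo content and return a suggested swap size in MB."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            ram_mb = int(line.split()[1]) // 1024  # MemTotal is in kB
            break
    else:
        raise ValueError("MemTotal not found")
    if ram_mb < 2048:
        return ram_mb * 2          # small hosts: 2x RAM
    if ram_mb <= 8192:
        return ram_mb              # mid-size: match RAM
    return 8192                    # cap for large-RAM hosts

# usage: suggest_swap_mb(open("/proc/meminfo").read())
```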

6. Clean Up Cron Jobs and Timers

Old scheduled tasks can silently run in the background and consume resources.

List user cron jobs:

crontab -l

Check system-wide cron jobs:

# ls /etc/cron.*

# ls -al /var/spool/cron/*


List systemd timers:

# systemctl list-timers


Disable any unneeded timers or outdated cron entries.

7. Optimize Logging and Log Rotation

Logs are essential but can grow large and fill up disk space quickly.

Check log size:

# du -sh /var/log/*

Force logrotate:
 

# logrotate -f /etc/logrotate.conf

Edit /etc/logrotate.conf or specific files in /etc/logrotate.d/* to reduce retention if needed.

8. Check for Zombie Processes and Old Users

Old user accounts and zombie processes can indicate neglected cleanup, or that the server has been compromised.

List users:
 

cat /etc/passwd | cut -d: -f1

Remove unused accounts:
 

# userdel -r username


Check for zombie processes:
 

# ps aux | awk '{ if ($8 == "Z") print $0; }'
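The same check can be done without ps by reading /proc directly (Linux-only; the stat-line format is documented in proc(5)). A sketch:

```python
#!/usr/bin/env python3
# find_zombies.py - list zombie processes by reading /proc/<pid>/stat
import os
import re

# pid, comm (may itself contain parentheses), single-letter state
STAT_RE = re.compile(r"^(\d+) \((.*)\) (\S)")

def parse_stat(stat_line):
    """Return (pid, comm, state) from a /proc/<pid>/stat line."""
    m = STAT_RE.match(stat_line)
    if not m:
        raise ValueError("unparsable stat line")
    return int(m.group(1)), m.group(2), m.group(3)

def find_zombies():
    zombies = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                pid, comm, state = parse_stat(f.read())
        except (OSError, ValueError):
            continue  # process exited between listdir and open
        if state == "Z":
            zombies.append((pid, comm))
    return zombies
```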

9. Disable IPv6 (if not used)

IPv6 can add unnecessary complexity and attack surface if you’re not using it.

To disable IPv6 temporarily:

# sysctl -w net.ipv6.conf.all.disable_ipv6=1
# sysctl -w net.ipv6.conf.default.disable_ipv6=1

To disable permanently, add to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1


10. Final Thoughts: Less Is More

Digital minimalism is not just a personal tech trend — it's a philosophy of clarity, performance, and security. Every running process is a potential vulnerability. Every megabyte of RAM consumed by a useless service is wasted capacity. Every package installed increases the system’s complexity.

By regularly auditing, pruning, and simplifying your Linux server, you not only improve its performance and reliability, but you also reduce future maintenance headaches.

Minimalism = Maintainability.

What is oddjobd and How to Use It Instead of sudo for Limited, Privileged Execution of Admin Scripts

Tuesday, September 30th, 2025


In Linux environments, managing privileged operations for unprivileged users is a critical task. Traditionally, tools like sudo have been used to allow users to execute specific commands with elevated privileges. However, in more secure or fine-tuned environments — such as enterprise networks or identity-managed systems — oddjobd offers a more controlled, D-Bus-driven alternative.

This article explains what oddjobd is, how it works, and when you might prefer it over sudo, complete with real-world examples.

What is oddjobd?

oddjobd is a system service (daemon) that runs in the background and allows limited, controlled execution of privileged tasks on behalf of unprivileged users.

Key Features:

  • Allows secure execution of predefined scripts or programs as root (or another user).
  • Communicates over D-Bus for fine-grained access control.
  • Uses Polkit (PolicyKit) to manage who can run which tasks.
  • Commonly used in FreeIPA, SSSD, and LDAP-based environments.
  • Configuration files live in: /etc/oddjobd.conf.d/

How It Works

  • System administrators define specific jobs (scripts or commands) in config files.
  • These jobs are exposed via D-Bus.
  • Unprivileged users (or applications) can request jobs to be executed.
  • Access is granted or denied by Polkit rules, not passwords.
  • No full shell or terminal access is granted — just the job.
 

oddjobd vs sudo

Feature                      sudo                       oddjobd
Control granularity          Medium (commands)          High (methods, scripts only)
Interactive shell            Yes                        No
Config complexity            Simple (/etc/sudoers)      Moderate (conf.d + Polkit)
Uses system user password    Yes                        Optional (can be passwordless via Polkit)
Security                     Medium                     High (no shell, strict policy control)
D-Bus compatible             No                         Yes
Ideal for                    Power users                Controlled environments (e.g., FreeIPA)

Typical Use Cases for oddjobd


1. Automatically Creating Home Directories


Problem: LDAP/FreeIPA users don’t have home directories created on login.

Solution: Enable oddjobd to create them via oddjob-mkhomedir.

# authconfig --enablemkhomedir --update

On login, PAM calls oddjobd, which creates the home directory as root.
 

2.  Restarting a Service without sudo

Let's say you want a user to restart Apache, but not give them full sudo rights.

a. Create a script

# /usr/local/bin/restart_apache.sh

#!/bin/bash

systemctl restart apache2

echo "Apache restarted by oddjob at $(date)"

chmod +x /usr/local/bin/restart_apache.sh

b. Create Oddjob config
 

# /etc/oddjobd.conf.d/restart_apache.conf

[restart_apache]

program = /usr/local/bin/restart_apache.sh

user = root

c. Polkit rule

 

// /etc/polkit-1/rules.d/60-restart-apache.rules

polkit.addRule(function(action, subject) {

    if (action.id == "org.freedesktop.oddjob.restart_apache" &&

        subject.isInGroup("apacheadmins")) {

        return polkit.Result.YES;

    }

});

 

d. Add user to group

# groupadd apacheadmins

# usermod -aG apacheadmins alice


e. Restart and test

# systemctl restart oddjobd


# As user "alice":

oddjob_request restart_apache


Only the defined method runs — no sudo shell access, no arbitrary commands.
 

3. GUI-friendly Device Control


Use Case: A user wants to reset a USB device via a button in a GUI app.

  • Define the method in oddjobd.
  • Use Polkit for GUI D-Bus permission.
  • The app can call the method securely, without sudo.

Advantages of oddjobd

More Secure Than sudo:

  • No interactive shell or terminal.
  • No command-line injection risks.
  • Can’t “escape” to a shell like with sudo bash.

Granular Control:

  • Limit tasks to a specific script or even script arguments.

D-Bus and GUI Friendly:

  • Apps can call privileged methods without shell hacks.

Policy-Based Authorization (Polkit):

  • Fine-grained user/group access control.
  • No password prompts if not desired.

Enterprise-Ready:

  • Works well with LDAP, FreeIPA, and centralized login environments.

Oddjobd Limitations / Downsides

Limitation                  Description
Learning curve              More complex to set up than sudo
Configuration overhead      Requires writing config files and Polkit rules
Debugging                   Issues may be harder to trace than sudo logs
Not for ad-hoc commands     Only predefined jobs can be run
Not installed by default    Often needs manual installation (oddjob, oddjob-mkhomedir)

When to Use oddjobd Instead of sudo

Use oddjobd when you:

  • Need to allow users or apps to run very specific privileged operations.
  • Want to avoid giving full shell access via sudo.
  • Are working in a managed enterprise environment.
  • Need GUI or D-Bus-based privilege escalation.
  • Require scripted access to root tasks without exposing credentials.

Conclusion

oddjobd is a powerful tool for securely handling privileged operations in Linux, especially where tight access control and automation are required. While sudo is simple and flexible, oddjobd shines in structured, security-conscious environments — particularly those using FreeIPA, LDAP, or automated tools.

If you need a more scriptable, policy-driven, and safer alternative to sudo for specific tasks, oddjobd is well worth exploring.

Implementing and Using gssproxy: An Example Guide to Passwordless Authentication for SSH, Samba, and NFS via the Kerberos Protocol

Friday, September 26th, 2025


GSS-Proxy is a daemon that safely performs GSSAPI (Kerberos) operations on behalf of other processes. It’s useful when services running as unprivileged users need to accept or initiate Kerberos GSSAPI authentication but shouldn’t hold or access long‑lived keys (keytabs) or raw credentials themselves. Typical users: OpenSSH, SSSD, Samba, NFS idmap, and custom daemons.

This article walks through what gssproxy does, how it works, how to install and configure it, example integrations (sshd and an unprivileged service), testing, debugging and common pitfalls.
 

1. What gssproxy does (quick conceptual summary)
 

  • Runs as a privileged system daemon (typically root) and holds access to keytabs or system credentials.
  • Exposes a local IPC (Unix socket) and controlled API so allowed clients can ask it to perform GSSAPI accept/init operations on their behalf.
  • Enforces access controls by client PID/user and by named service configuration (you map a client identity to the allowed service name and keytab).
  • Minimizes the need to distribute keytabs or give services direct access to Kerberos credentials.
     

2. Installation

On many modern Linux distributions (Fedora, RHEL/CentOS, Debian/Ubuntu) gssproxy is packaged.

Example (RHEL/Fedora/CentOS):

# RHEL/CentOS 7/8/9 (dnf or yum)

 

sudo dnf install gssproxy

 

# or

 

sudo yum install gssproxy

Example (Debian/Ubuntu):

sudo apt update

sudo apt install gssproxy

If you must build from source:

# get source, then typical autotools or meson/ninja workflow per upstream README

./configure

make

sudo make install

 

After install, systemd unit gssproxy.service should be available.
 

3. Main configuration concepts

The main config file is usually /etc/gssproxy/gssproxy.conf. It consists of mechs (mechanisms), services, clients, and possibly mappings. Key elements:

  • mech: declares a GSSAPI mechanism (e.g., krb5) and default keytab(s) for acceptor credentials.
  • service: logical service names (e.g., ssh, nfs, httpd) with attributes: user (the Unix user running the service), keytab, cred_store, mechs, and whether the service is allowed to be client (initiate) and/or server (accept).
  • client: rules mapping local client sockets / users / pids to allowed services.

A minimal working example that allows sshd to use gssproxy:

mechs = {

    krb5_mech = {

        mech = krb5;

        default_keytab = /etc/krb5.keytab;

    };

};

 

services = {

    ssh = {

        mech = krb5_mech;

        user = "sshd";

        keytab = /etc/krb5.keytab;

        # allow both acceptor (server) and initiator (client) ops if needed

        client = yes;

        server = yes;

    };

};

Client rules are often implicit: gssproxy can enforce that calls on a given service socket originate from the configured Unix user. For more complex setups you add policy and client blocks. Example to allow a specific PID or user to use the ssh service:

clients = {

    ssh_clients = {

        clients = [

            { match = "uid:0" },      # root can ask for ssh service

            { match = "user:sshd" },  # or the sshd user

        ];

        service = "ssh";

    };

};

Paths and sockets: gssproxy listens on a socket (e.g. /var/run/gssproxy/socket) and possibly per-user sockets (e.g. /run/gssproxy/uid_1000). The systemd unit usually creates the runtime directory with correct permissions.
 

4. Example: Integrate with OpenSSH server (sshd)

Goal: allow sshd and session processes to accept delegated GSS credentials and let unprivileged child processes use those credentials via gssproxy.

Server side config

  1. Ensure sshd is built/installed with GSSAPI support. On SSH server:

    • In /etc/ssh/sshd_config:
    • GSSAPIAuthentication yes
    • GSSAPICleanupCredentials yes
    • GSSAPIKeyExchange yes        # optional: if you want GSS key exchange
  2. Configure gssproxy with an ssh service entry pointing to the host keytab (so gssproxy can accept SPNEGO/kerberos accept_sec_context calls):

mechs = {

    krb5 = {

        mech = krb5;

        default_keytab = /etc/krb5.keytab;

    };

};

 

services = {

    ssh = {

        mech = krb5;

        user = "sshd";

        keytab = /etc/krb5.keytab;

        server = yes;

        client = yes;

    };

};

  3. Ensure /etc/krb5.keytab contains the host principal host/fqdn@REALM (or host/short@REALM depending on SPN strategy). Use ktutil or kadmin to create/populate.
  4. Restart gssproxy and sshd:

sudo systemctl restart gssproxy

sudo systemctl restart sshd

Client side

  • ssh client configuration (usually ~/.ssh/config or /etc/ssh/ssh_config):

Host myhost.example.com

    GSSAPIAuthentication yes

    GSSAPIDelegateCredentials yes

Client must have a TGT in the credential cache (kinit user), or use a client that acquires one.

Result

When the client initiates GSSAPI authentication and delegates credentials (GSSAPIDelegateCredentials yes or -K for older OpenSSH), gssproxy on the server handles acceptor functions. If a session process needs to use the delegated credentials (e.g., to access network resources as that user), gssproxy arranges a per-session credential store that unprivileged processes can use via the kernel keyring or other mechanisms gssproxy supports.
 

5. Example: Allow an unprivileged service to acquire initiator creds via gssproxy

Suppose a service mydaemon runs as myuser and needs to initiate Kerberos-authenticated connections using a specific service principal stored in /etc/mydaemon.keytab but you don’t want to expose that keytab to myuser.

Add a mech and service:

mechs = {

    krb5 = {

        mech = krb5;

        default_keytab = /etc/krb5.keytab;

    };

    mydaemon_mech = {

        mech = krb5;

        default_keytab = /etc/mydaemon.keytab;

    };

};

 

services = {

    mydaemon = {

        mech = mydaemon_mech;

        user = "myuser";

        keytab = /etc/mydaemon.keytab;

        client = yes;    # allow initiator operations

        server = no;

    };

};

Configure a client mapping so the mydaemon process (uid myuser) is allowed to use the mydaemon service. Once gssproxy runs, mydaemon uses the gssapi libraries (GSSAPI libs detect gssproxy via environment or library probe) and calls the GSSAPI functions; gssproxy will perform gss_acquire_cred using /etc/mydaemon.keytab and return a handle to the calling process. The service itself never directly reads the keytab.
 

6. Testing and tools

  • kinit / klist: manage and list Kerberos TGTs on clients.
  • journalctl -u gssproxy -f (or systemctl status gssproxy) to watch logs.
  • ss -l or ls -l /run/gssproxy to inspect sockets.
  • If you have gssproxy command-line utilities installed (may vary by distro), some installations include gssproxy CLI helpers. Otherwise use the service that relies on gssproxy and watch logs.

Example basic tests:

  1. Ensure gssproxy is running:

sudo systemctl status gssproxy

  2. On server, check socket and permissions:

sudo ls -l /run/gssproxy

# or

sudo ss -x -a | grep gssproxy

  3. Attempt SSH from a client with a TGT:

kinit alice

ssh -o GSSAPIDelegateCredentials=yes alice@server.example.com

# then on server, check journalctl logs for gssproxy/sshd messages
 

7. Debugging tips

  • Journal logs: journalctl -u gssproxy -xe will be your first stop.
  • Permissions: Ensure that gssproxy can read the keytab(s) (typically root-owned with restrictive perms). In config you may point to a keytab readable only by gssproxy.
  • Clients blocked: If a client is denied, check the clients block and match rules (uid/pid/user).
  • Keytab issues: Use klist -k /etc/krb5.keytab to list principals in a keytab. Ensure correct SPN and realm.
  • Clock skew: Kerberos is time-sensitive. Ensure NTP/chrony is working.
  • DNS / SPNs: Ensure hostnames and reverse DNS match the principal names expected for the service.
  • SSHD integration: If sshd still complains it can’t accept GSSAPI creds, enable debug logging (LogLevel DEBUG), and check gssproxy logs.
  • SELinux: On SELinux-enabled systems, you may need to ensure file contexts and SELinux policies allow gssproxy to access keytabs and sockets. Check audit.log for AVC denials and use semanage fcontext/restorecon or local policy modules when needed.
     

8. Common pitfalls & best practices

  • Don’t expose keytabs to unprivileged users. Let gssproxy hold them.
  • Principals & SPNs must match service hostnames used by clients. Consistent DNS is essential.
  • Minimal privileges: configure services and clients narrowly: allow only the minimum users/PIDs and only the required mech ops.
  • Rotation: when rotating keytabs, reload/restart gssproxy or send a signal if supported. Plan for keytab updates.
  • Logging: enable adequate logging during deployment and revert to normal verbosity in production.
  • Testing in staging: GSSAPI behavior across SSH clients and other daemons can be subtle — test across your client set (Linux, macOS, Windows via native Kerberos clients, etc.).
     

9. Security considerations

  • gssproxy centralizes credential access: secure the host and the gssproxy process.
  • Protect keytab files using strict filesystem permissions and (if needed) SELinux policy.
  • Restrict which local processes may request operations for a service — map by UID/PID carefully.
  • Monitor logs for unexpected use of gssproxy.
     

10. Example full config (simple)

Save as /etc/gssproxy/gssproxy.conf:

mechs = {

    krb5 = {

        mech = krb5;

        default_keytab = /etc/krb5.keytab;

    };

};

 

services = {

    ssh = {

        mech = krb5;

        user = "sshd";

        keytab = /etc/krb5.keytab;

        server = yes;

        client = yes;

    };

 

    mydaemon = {

        mech = krb5;

        user = "myuser";

        keytab = /etc/mydaemon.keytab;

        client = yes;

        server = no;

    };

};

 

clients = {

    allow_root_for_ssh = {

        clients = [

            { match = "uid:0" },

        ];

        service = "ssh";

    };

 

    mydaemon_client = {

        clients = [

            { match = "user:myuser" },

        ];

        service = "mydaemon";

    };

};

Restart: sudo systemctl restart gssproxy and then restart dependent services (sshd, mydaemon, etc.) if needed.

 

Useful resources for gssproxy and further integrations

  • Read your distribution’s /usr/share/doc/gssproxy/ or man pages (man gssproxy, man gssproxy.conf) — they contain distribution-specific details.
  • Check integrations: Samba/Winbind, SSSD, NFS idmap — many modern stacks support gssproxy as an option to avoid exposing keytabs to many daemons.
  • For production: automate keytab distribution, rotation and monitor gssproxy usage.

 

How to Install and Use auditd for System Security Auditing on Linux

Thursday, September 25th, 2025

System auditing is essential for monitoring user activity, detecting unauthorized access, and ensuring compliance with security standards. On Linux, the Audit Daemon (auditd) provides powerful auditing capabilities for logging system events and actions.

This short article will walk you through installing, configuring, and using auditd to monitor your Linux system.

What is auditd?

auditd is the user-space component of the Linux Auditing System. It logs system calls, file access, user activity, and more — offering administrators a clear trail of what’s happening on the system.


1. Installing auditd

The auditd package is available by default in most major Linux distributions.

 On Debian/Ubuntu

# apt update
# apt install auditd audispd-plugins

 On CentOS/RHEL/Fedora

# yum install audit

(On Fedora and RHEL 8+, use dnf install audit instead.)

After installation, start and enable the audit daemon

# systemctl start auditd

# systemctl enable auditd

Check its status

# systemctl status auditd

2. Setting Audit Rules

Once auditd is running, you need to define rules that tell it what to monitor.

Example: Monitor changes to /etc/passwd

# auditctl -w /etc/passwd -p rwxa -k passwd_monitor

Explanation:

  • -w /etc/passwd: Watch this file; any matching access generates an audit event.
  • -p rwxa: Monitor read (r), write (w), execute (x), and attribute (a) changes.
  • -k passwd_monitor: Tag matching events with an arbitrary key, so they can be found later with ausearch -k passwd_monitor.

List active rules:

# auditctl -l

3. Common auditd Rules for Security Monitoring

Here are some common and useful auditd rules you can use to monitor system activity and enhance Linux system security. These rules are typically added to the /etc/audit/rules.d/audit.rules or /etc/audit/audit.rules file, depending on your system.

a. Monitor Access to /etc/passwd and /etc/shadow
 

-w /etc/passwd -p wa -k passwd_changes
-w /etc/shadow -p wa -k shadow_changes

  • Monitors write and attribute changes to the password files.

b. Monitor sudoers file and directory
 

-w /etc/sudoers -p wa -k sudoers
-w /etc/sudoers.d/ -p wa -k sudoers

  • Tracks any change to sudo configuration files.

c. Monitor Use of chmod, chown, and passwd
 

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -k perm_mod
-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -k perm_mod
-w /usr/bin/passwd -p x -k passwd_exec

  • Watches permission and ownership changes, plus executions of the passwd binary (passwd itself is a program, not a syscall, so it is watched with -w rather than -S).

d. Monitor User and Group Modifications

-w /etc/group -p wa -k group_mod
-w /etc/gshadow -p wa -k gshadow_mod
-w /etc/security/opasswd -p wa -k opasswd_mod

  • Catches user/group-related config changes.

e. Track Logins, Logouts, and Session Initiation

-w /var/log/lastlog -p wa -k logins
-w /var/run/faillock/ -p wa -k failed_login
-w /var/log/faillog -p wa -k faillog

  • Tracks login attempts and failures.

f. Monitor auditd Configuration Changes

-w /etc/audit/ -p wa -k auditconfig
-w /etc/audit/audit.rules -p wa -k auditrules

  • Watches changes to auditd configuration and rules.

g. Detect Changes to System Binaries

-w /bin/ -p wa -k bin_changes
-w /sbin/ -p wa -k sbin_changes
-w /usr/bin/ -p wa -k usr_bin_changes
-w /usr/sbin/ -p wa -k usr_sbin_changes

  • Ensures core binaries aren't tampered with.

h. Track Kernel Module Loading and Unloading

-a always,exit -F arch=b64 -S init_module -S delete_module -k kernel_mod

  • Detects dynamic kernel-level changes.

l. Monitor File Deletions

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -k delete

  • Tracks when files are removed or renamed.

m. Track Privilege Escalation via setuid/setgid

-a always,exit -F arch=b64 -S setuid -S setgid -k priv_esc

  • Helps detect changes in user or group privileges.

n. Track Usage of Dangerous Binaries (e.g., su, sudo, netcat)

-w /usr/bin/su -p x -k su_usage
-w /usr/bin/sudo -p x -k sudo_usage
-w /bin/nc -p x -k netcat_usage

  • Useful for catching potentially malicious command usage.

o. Monitor Cron Jobs

-w /etc/cron.allow -p wa -k cron_allow
-w /etc/cron.deny -p wa -k cron_deny
-w /etc/cron.d/ -p wa -k cron_d
-w /etc/crontab -p wa -k crontab
-w /var/spool/cron/ -p wa -k user_crontabs

  • Alerts on cron job creation/modification.

p. Track Changes to /etc/hosts and DNS Settings

-w /etc/hosts -p wa -k etc_hosts
-w /etc/resolv.conf -p wa -k resolv_conf

  • Monitors potential redirection or DNS manipulation.

q. Monitor Mounting and Unmounting of Filesystems

-a always,exit -F arch=b64 -S mount -S umount2 -k mounts

  • Useful for detecting USB or external drive activity.

r. Track Execution of New Programs

-a always,exit -F arch=b64 -S execve -k exec

  • Captures command execution (can generate a lot of logs).
     

A more complete set of rules can be taken from a ready-made hardening.rules auditd file: place it under /etc/audit/rules.d/hardening.rules
and reload auditd to load the configuration.
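As a runnable sketch of the deploy workflow (the rule selection here is illustrative, and in production the file would go to /etc/audit/rules.d/ followed by `augenrules --load` as root), the file can be assembled and sanity-checked first:

```shell
# Sketch: assemble a rules file locally and sanity-check it before deploying.
# A temp file stands in for /etc/audit/rules.d/hardening.rules.
rules_file="$(mktemp)"
cat > "$rules_file" <<'EOF'
-w /etc/passwd -p wa -k passwd_changes
-w /etc/shadow -p wa -k shadow_changes
-w /etc/sudoers -p wa -k sudoers
-a always,exit -F arch=b64 -S execve -k exec
EOF
# Every line should start with "-w " (file watch) or "-a " (syscall rule)
good=$(grep -c '^-[wa] ' "$rules_file")
total=$(wc -l < "$rules_file")
malformed=$((total - good))
echo "rules: $total, malformed: $malformed"
```

A check like this catches obvious copy-paste damage before auditd rejects the file at load time.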

Tips

  • Use ausearch -k <key> to search audit logs for events matching a rule's key.
  • Use auditctl -l to list active rules.
  • Use augenrules --load after editing rules in /etc/audit/rules.d/.


4. Reading Audit Logs

Audit events are logged to:

/var/log/audit/audit.log

This default location can be changed through /etc/audit/auditd.conf.

View recent entries:
 

# tail -f /var/log/audit/audit.log

Search by key:
 

# ausearch -k passwd_monitor

Generate an overall summary report:

# aureport

Report on file-related events only:

# aureport -f


Example: Show authentication attempts, including users and source IPs:

# aureport -au
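When ausearch is not at hand, raw audit.log records can be picked apart with standard tools; a small sketch on a canned sample record (the field values are illustrative):

```shell
# Sketch: pulling fields out of a raw audit.log record with grep/cut.
# The record below is a canned sample in the usual key="value" format.
line='type=SYSCALL msg=audit(1758200565.123:1078): syscall=257 success=yes exe="/usr/bin/cat" key="shadow_watch"'
rec_key=$(printf '%s' "$line" | grep -o 'key="[^"]*"' | cut -d'"' -f2)
rec_exe=$(printf '%s' "$line" | grep -o 'exe="[^"]*"' | cut -d'"' -f2)
echo "key=$rec_key exe=$rec_exe"
```

This is essentially what `ausearch -k` does for you, with the added benefit of decoding numeric fields.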

 

5. Making Audit Rules Persistent

Rules added with auditctl are not persistent and will be lost on reboot. To make them permanent:

Edit the audit rules configuration:

# vim /etc/audit/rules.d/audit.rules

Add your rules, for example:

-w /etc/passwd -p rwxa -k passwd_monitor

Apply the rules:

# augenrules --load

6. Real-world use cases of auditd by sysadmins and security experts
 

Below are real-world, practical examples where auditd is actively used by sysadmins, security teams, or compliance officers to detect suspicious activity, meet compliance requirements, or conduct forensic investigations.

a. Detect Unauthorized Access to /etc/shadow

Use Case: Someone tries to read or modify password hashes.

Audit Rule:

-w /etc/shadow -p wa -k shadow_watch

Real-World Trigger:

sudo cat /etc/shadow

Check Logs:
 

# ausearch -k shadow_watch -i

Real Output:
 

type=SYSCALL msg=audit(09/18/2025 14:02:45.123:1078):

  syscall=openat

  exe="/usr/bin/cat"

  success=yes

  path="/etc/shadow"

  key="shadow_watch"

b. Detect Use of chmod to Make Files Executable

Use Case: Attacker tries to make a script executable (e.g., malware).

Audit Rule:

-a always,exit -F arch=b64 -S chmod -k chmod_detect

Real-World Trigger:
 

 # chmod +x /tmp/evil_script.sh

Check Logs:

# ausearch -k chmod_detect -i

c. Monitor Execution of nc (Netcat)

Use Case: Netcat is often used for reverse shells or unauthorized network comms.

Audit Rule:
 

-w /bin/nc -p x -k netcat_usage
 

Real-World Trigger:

nc -lvp 4444

Log Entry:

type=EXECVE msg=audit(09/18/2025 14:35:45.456:1123):

  argc=3 a0="nc" a1="-lvp" a2="4444"

  key="netcat_usage"

 

d. Alert on Kernel Module Insertion
 

Use Case: Attacker loads rootkit or malicious kernel module.

Audit Rule:

-a always,exit -F arch=b64 -S init_module -S delete_module -k kernel_mod

Real-World Trigger:

# insmod myrootkit.ko

Audit Log:
 

type=SYSCALL msg=audit(09/18/2025 15:00:13.100:1155):

  syscall=init_module

  exe="/sbin/insmod"

  key="kernel_mod"

e. Watch for Unexpected sudo Usage

Use Case: Unusual use of sudo might indicate privilege escalation.

Audit Rule:

-w /usr/bin/sudo -p x -k sudo_watch

Real-World Trigger:

sudo whoami

View Log:
 

# ausearch -k sudo_watch -i


f. Monitor Cron Job Modification

Use Case: Attacker schedules persistence via cron.

Audit Rule:

-w /etc/crontab -p wa -k cron_mod

Real-World Trigger:
 

echo "@reboot /tmp/backdoor" >> /etc/crontab

Logs:
 

type=SYSCALL msg=audit(09/18/2025 15:05:45.789:1188):

  syscall=open

  path="/etc/crontab"

  key="cron_mod"

g. Detect File Deletion or Renaming
 

Use Case: Attacker removes logs or evidence.

Audit Rule:

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -k file_delete

Real-World Trigger:

# rm -f /var/log/syslog

Logs:
 

type=SYSCALL msg=audit(09/18/2025 15:10:33.987:1210):

  syscall=unlink

  path="/var/log/syslog"

  key="file_delete"


h. Detect Script or Malware Execution
 

Use Case: Capture any executed command.

Audit Rule:
 

-a always,exit -F arch=b64 -S execve -k exec

Real-World Trigger:

/tmp/myscript.sh

Log View:

# ausearch -k exec -i | grep /tmp/myscript.sh

l. Detect Manual Changes to /etc/hosts

Use Case: DNS hijacking or phishing setup.

Audit Rule:

-w /etc/hosts -p wa -k etc_hosts

Real-World Trigger:
 

# echo "1.2.3.4 google.com" >> /etc/hosts

Logs:

type=SYSCALL msg=audit(09/18/2025 15:20:11.444:1234):

  path="/etc/hosts"

  syscall=open

  key="etc_hosts"


7. Enable Immutable Mode (if necessary)

For enhanced security, you can make audit rules immutable, preventing any changes until reboot:

# auditctl -e 2


To make this setting persistent, add the following to the end of /etc/audit/rules.d/audit.rules:

-e 2
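Whether the lock is active can be read back with auditctl -s; the sketch below parses a canned sample of that status output, since the real command needs root:

```shell
# Sketch: checking whether the audit config is locked, by parsing the kind
# of status output `auditctl -s` prints (canned sample, real command needs root).
status_out='enabled 2
failure 1
pid 812
rate_limit 0'
enabled_flag=$(printf '%s\n' "$status_out" | awk '$1 == "enabled" {print $2}')
if [ "$enabled_flag" = "2" ]; then
  echo "audit rules are locked until reboot"
fi
```

A value of 1 means auditing is enabled but rules can still change; 2 means immutable until reboot.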


Common Use Cases

Here are a few more examples of what you can monitor:

Monitor sudo activity (by watching writes to the auth log):

# auditctl -w /var/log/auth.log -p wa -k sudo_monitor


Monitor a directory for file access:

# auditctl -w /home/username/important_dir -p rwxa -k dir_watch

Audit execution of a specific command (e.g., rm):

# auditctl -a always,exit -F arch=b64 -S unlink,unlinkat -k delete_cmd

(Adjust arch=b64 to arch=b32 if on 32-bit system.)

8. Managing the Audit Log Size

Audit logs can grow large over time. To manage log rotation and size, edit:
 

# vim /etc/audit/auditd.conf

Set log-rotation options, for example (max_log_file is the per-file size limit in MB, num_logs the number of rotated files to keep):

max_log_file = 8

num_logs = 5

Then restart auditd:
 

# systemctl restart auditd
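To estimate the resulting disk budget, the two settings simply multiply out; a small sketch on a canned config fragment (max_log_file is in MB):

```shell
# Sketch: estimating total audit-log disk usage from the rotation settings,
# parsed out of a canned auditd.conf fragment.
conf_fragment='max_log_file = 8
num_logs = 5'
max_mb=$(printf '%s\n' "$conf_fragment" | awk -F' = ' '$1 == "max_log_file" {print $2}')
copies=$(printf '%s\n' "$conf_fragment" | awk -F' = ' '$1 == "num_logs" {print $2}')
echo "at most $((max_mb * copies)) MB of audit logs kept on disk"
```

With the values above, auditd will retain at most 40 MB of logs before the oldest file is overwritten.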

Conclusion

The Linux Audit Daemon (auditd) is a powerful tool to track system activity, enhance security, and meet compliance requirements. With just a few configuration steps, you can monitor critical files, user actions, and system behavior in real time.

 

References

  • man auditd
  • man auditctl
  • Linux Audit Wiki

 

How To Install ChatGPT on Debian Linux with snap

Tuesday, August 19th, 2025

chatgpt-desktop-linux-screenshot

To install the ChatGPT official desktop app on Debian Linux using Snap, do the following.

Prerequisites

  1. Debian-based system (e.g., Debian, Ubuntu, Mint, or any other .deb-based Linux).

  2. Snap package manager installed.


1. Install Snap (if not installed)

Run these commands in your terminal:

# apt update
# apt install snapd

Enable and start the Snap daemon:

# systemctl enable snapd
# systemctl start snapd

Create a symbolic link to ensure snap is accessible:

# ln -s /var/lib/snapd/snap /snap


2. Find ChatGPT Snap Package

The official ChatGPT desktop app (by OpenAI) is not available as a Snap package directly from OpenAI, but a third-party Snap package or wrapper may exist.

You can search with:

# snap find chatgpt


As of now, you might see unofficial community packages (e.g., chatgpt-desktop or chatgpt-wrapper, etc.).

 

3. Install ChatGPT Snap Package

If you find a package (e.g., chatgpt-desktop), install it like this:

# snap install chatgpt-desktop

Note: Be cautious about third-party Snap packages—review the publisher and permissions.

4. Launch ChatGPT

Once installed, launch it from your app menu, or run the binary named after the Snap package you installed, e.g.:

/snap/bin/chatgpt-desktop

 

Alternative (if no Snap available)

If no Snap package is available or you're uncomfortable with third-party sources:

Option: Use the Official .deb Installer

OpenAI released an official desktop app for Linux in .deb format. To use the native .deb:

  1. Download from: https://openai.com/chat

  2. Install with:

# apt install ./chatgpt_*.deb

 

Deploying a Puppet Server and Patching multiple Debian Linux Servers

Tuesday, June 17th, 2025

how-to-install-puppet-server-clients-for-multiple-servers-and-services-infrastructure-logo

 

Puppet Overview

Puppet is a powerful automation tool used to manage configurations and automate administrative tasks across multiple systems. This guide walks you through installing a Puppet server (master) and configuring 10 Debian-based client servers (agents) to automatically install system updates (patches) using Puppet.

Table of Contents

  1. Prerequisites

  2. Step 1: Install Puppet Server on Debian

  3. Step 2: Configure the Puppet Server

  4. Step 3: Install Puppet Agent on 10 Debian Clients

  5. Step 4: Sign Agent Certificates

  6. Step 5: Create a Puppet Module for Patching

  7. Step 6: Assign the Module and Trigger Updates

  8. Conclusion


1. Prerequisites

  • Debian server to act as the Puppet master (e.g., Debian 11)
     
  • Debian servers as Puppet agents (clients)
     
  • Root or sudo access on all systems
     
  • Static IPs or properly configured hostnames
     
  • Network connectivity between master and agents
     

2. Install Puppet Server on Debian

a. Add the Puppet APT repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb
# apt update

b. Install Puppet Server

# apt install puppetserver -y

c. Configure JVM memory (optional but recommended)

Edit /etc/default/puppetserver:  

JAVA_ARGS="-Xms512m -Xmx1g"


d. Enable and start the Puppet Server

# systemctl enable puppetserver 
# systemctl start puppetserver


3. Configure the Puppet Server

a. Set the hostname
 

# hostnamectl set-hostname puppet.example.com


Update /etc/hosts with your server’s IP and FQDN if DNS is not configured:  

192.168.1.10 puppet.pc-freak.net puppet

 

b. Configure Puppet

Edit /etc/puppetlabs/puppet/puppet.conf:

[main]
certname = puppet.pc-freak.net
server = puppet.pc-freak.net
environment = production
runinterval = 1h

 

Restart Puppet server:

# systemctl restart puppetserver

4. Install Puppet Agent on 10 Debian Clients

Repeat this section on each client server (Debian 10/11).

a. Add the Puppet repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb 
# apt update


b. Install the Puppet agent

# apt install puppet-agent -y


c. Configure the agent to point to the master

# /opt/puppetlabs/bin/puppet config set server puppet.example.com --section main


d. Start the agent to request a certificate

# /opt/puppetlabs/bin/puppet agent --test

 

5. Sign Agent Certificates on the Puppet Server

Run the two commands below on the Puppet master:

# /usr/bin/puppetserver ca list --all


Sign all pending requests:

# /usr/bin/puppetserver ca sign --all

Verify connection to puppet server is fine:

# /opt/puppetlabs/bin/puppet node find haproxy2.pc-freak.net


6.  Create a Puppet Module for Patching

a. Create the patching module

# mkdir -p /etc/puppetlabs/code/environments/production/modules/patching/manifests


b. Add a manifest file
/etc/puppetlabs/code/environments/production/modules/patching/manifests/init.pp
:

class patching {

  exec { 'apt_update':
    command => '/usr/bin/apt update',
    path    => ['/usr/bin', '/usr/sbin'],
    unless  => '/usr/bin/test $(find /var/lib/apt/lists/ -type f -mmin -60 | wc -l) -gt 0',
  }

  exec { 'apt_upgrade':
    command => '/usr/bin/apt upgrade -y',
    path    => ['/usr/bin', '/usr/sbin'],
    require => Exec['apt_update'],
    unless  => '/usr/bin/test $(/usr/bin/apt list --upgradable 2>/dev/null | wc -l) -le 1',
  }

}

This class updates the package list and applies all available security and feature updates.
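The unless guard on apt_update can be tried standalone in plain shell; the sketch below reproduces its logic against a temporary directory standing in for /var/lib/apt/lists (the list filename is made up):

```shell
# Sketch of the `unless` guard in the apt_update resource: it checks whether
# any apt list file was refreshed in the last 60 minutes. A temp dir stands
# in for /var/lib/apt/lists so this runs without root.
lists_dir="$(mktemp -d)"
touch "$lists_dir/deb.debian.org_dists_bookworm_InRelease"   # freshly touched
recent=$(find "$lists_dir" -type f -mmin -60 | wc -l)
if [ "$recent" -gt 0 ]; then
  echo "lists fresh, apt update would be skipped"
else
  echo "lists stale, apt update would run"
fi
```

Puppet runs the command only when the unless check exits non-zero, so a fresh list cache makes the exec a no-op on every hourly run.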

7.  Assign the Module and Trigger Updates

a. Edit site.pp on the Puppet master:

 

# vim /etc/puppetlabs/code/environments/production/manifests/site.pp

 

node default {
  include patching
}

node 'agent1.example.com' {
  include patching
}

b. Run Puppet manually on each agent to test:

# /opt/puppetlabs/bin/puppet agent --test

Once confirmed working, Puppet agents will run this patching class automatically every hour (default runinterval).

8. Check that puppetserver and the puppet agent are running fine on the hosts
 

root@puppetserver:/etc/puppet# systemctl status puppetserver
● puppetserver.service – Puppet Server
     Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 23:44:42 EEST; 37min ago
       Docs: https://puppet.com/docs/puppet/latest/server/about_server.html
    Process: 2166 ExecStartPre=sh -c echo -n 0 > ${RUNTIME_DIRECTORY}/restart (code=exited, status=0/SUCCESS)
    Process: 2168 ExecStartPost=sh -c while ! head -c1 ${RUNTIME_DIRECTORY}/restart | grep -q '^1'; do kill -0 $MAINPID && sleep 1 || exit 1; done (code=exited, status=0/SUCCESS)
   Main PID: 2167 (java)
      Tasks: 64 (limit: 6999)
     Memory: 847.0M
        CPU: 1min 28.704s
     CGroup: /system.slice/puppetserver.service
             └─2167 /usr/bin/java -Xms512m -Xmx1g -Djruby.lib=/usr/share/jruby/lib -XX:+CrashOnOutOfMemoryError -XX:ErrorFile=/var/log/puppetserver/puppetserver_err_pid%p.log -jar /usr/share/pup>

юни 16 23:44:06 haproxy2 systemd[1]: Starting puppetserver.service – Puppet Server…
юни 16 23:44:30 haproxy2 java[2167]: 2025-06-16T23:44:30.516+03:00 [clojure-agent-send-pool-0] WARN FilenoUtil : Native subprocess control requires open access to the JDK IO subsystem
юни 16 23:44:30 haproxy2 java[2167]: Pass '–add-opens java.base/sun.nio.ch=ALL-UNNAMED –add-opens java.base/java.io=ALL-UNNAMED' to enable.
юни 16 23:44:42 haproxy2 systemd[1]: Started puppetserver.service – Puppet Server.

root@grafana:/etc/puppet# systemctl status puppet
* puppet.service – Puppet agent
     Loaded: loaded (/lib/systemd/system/puppet.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 21:22:17 UTC; 18s ago
       Docs: man:puppet-agent(8)
   Main PID: 1660157 (puppet)
      Tasks: 6 (limit: 2307)
     Memory: 135.6M
        CPU: 5.303s
     CGroup: /system.slice/puppet.service
             |-1660157 /opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/puppet/bin/puppet agent –no-daemonize
             `-1660164 "puppet agent: applying configuration"

Jun 16 21:22:17 grafana systemd[1]: Started puppet.service – Puppet agent.
Jun 16 21:22:28 grafana puppet-agent[1660157]: Starting Puppet client version 7.34.0
Jun 16 21:22:33 grafana puppet-agent[1660164]: Requesting catalog from puppet.pc-freak.net:8140 (192.168.1.58)

9. Use Puppet's facter to extract interesting information from the running OS
 

facter is a powerful command-line tool Puppet agents use to gather system information (called facts). You can also run it manually on any machine to quickly inspect system details.

Here are some interesting examples to get useful info from a machine using facter:


a) Get all facts about Linux OS

facter

 

b) get OS name / version

facter os.name os.release.full
os.name => Debian
os.release.full => 12.10


c) check the machine  hostname and IP address

$ facter hostname ipaddress

hostname => puppet-client1
ipaddress => 192.168.0.220

d) Get amount of RAM on the machine

$ facter memorysize
16384 MB


e) Get CPU (Processor information)

$ facter processors

{
  count => 4,
  models => ["Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz"],
  physicalcount => 1,
  speed => "1.60 GHz"
}

10. Conclusion

You've successfully set up a Puppet server and configured a sample Debian client system to automatically install security patches using a custom module.
To apply this to the rest of the systems where the puppet agent is installed, repeat the process on each of the remaining nodes.
This approach provides centralized control, consistent configuration, and peace of mind if you are a system administrator tasked with managing multiple Debian servers.
Of course, the given configuration is very basic, and to guarantee proper functioning of your infrastructure you'll have to read and experiment with Puppet further; still, I hope this article is a good starting point to get a Puppet server / agent running relatively easily.

How to Install Jitsi Meet on Debian Linux to have your own free software video conferencing secure server

Thursday, April 24th, 2025

 

jitsi-meet-create-new-room-for-video-meetings-linux

 

Jitsi Meet is a free, open-source video conferencing platform that allows you to host secure and scalable video calls from a mobile phone, tablet, PC, or any other device to which a Jitsi client has been ported. Jitsi Meet is the best free alternative one can get to Rakuten Viber, Facebook (Meta) Messenger, Zoom, Apple's FaceTime, etc.
What makes Jitsi really worthy is the flexibility it gives you to keep your video communication somewhat private and harder to capture than on the big general-purpose video-streaming platforms.
Jitsi is also very simple to use: it works either with a desktop client on Windows / Linux / Mac OS or on a smartphone running Android (Samsung / Huawei etc.) or iOS (iPhones), which you can configure to use your Jitsi server, or directly via an SSL-secured web URL. The only thing I really don't like about Jitsi is that it uses Java, and its way of working is cryptic: it is pretty hard to debug or understand exactly how the software works, as when errors come up, the usual crazy Java exceptions fill the Jitsi logs.

The short guide below provides simple step-by-step instructions for installing Jitsi Meet on Debian-based systems, hoping that anyone can benefit from Jitsi by building their own server.

 

jitsi-meet-conference-free-open-source-video-streaming-viber-and-facebook-alternative


What you should have before you start building your new Jitsi Meet server

Before you begin, ensure that your system meets the following requirements:

  • A fresh installation of Debian 10 (Buster) or newer.

  • A non-root user with sudo privileges.

  • A fully updated system.

  • A domain name pointing to your server's IP address.

  • OpenJDK 11 installed.​

To get a better understanding on how Jitsi meet works it is worthy to take a quick look on Jitsi Architectural diagram:

Jitsi-meet-video-conferencing-software-linux-windows-mac-Architectural-diagram
 

1. Update Your System

Start by updating your system's package list and upgrading existing packages:​

# apt update
# apt upgrade -y

2. Install Required Dependencies

Install the necessary packages for adding repositories and managing keys:​

# apt install apt-transport-https curl gnupg2 -y

3. Add Jitsi Repository

Add the Jitsi repository key to your system:

# curl https://download.jitsi.org/jitsi-key.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/jitsi-keyring.gpg

Then, add the Jitsi repository:​

# echo "deb [signed-by=/usr/share/keyrings/jitsi-keyring.gpg] https://download.jitsi.org stable/" | sudo tee /etc/apt/sources.list.d/jitsi-stable.list > /dev/null

Update your package list to include the Jitsi repository into apt database:​

 # apt update

4. Install Jitsi Meet

Install the Jitsi Meet package:​  

# apt install jitsi-meet -y

During installation, you'll be prompted to:​

  • Enter the hostname: Provide your domain name (e.g., meet.example.com).

  • Choose SSL certificate: Select "Generate a new self-signed certificate", or "Obtain a Let's Encrypt certificate" if you have a valid domain.

If you opt for Let's Encrypt, ensure that ports 80 and 443 are open on your firewall.​

5. Configure Firewall openings

If you already have a firewall configured to filter traffic, open the necessary ports to allow traffic to your Jitsi Meet server, both on your router / edge firewall device and on the Linux host itself:

Allow access to SSH server

# ufw allow 22/tcp


Allow HTTP (unencrypted) and HTTPS access to the Jitsi Meet server

# ufw allow 80/tcp
# ufw allow 443/tcp


Allow the port range necessary for proper operation of Jitsi Videobridge (UDP ports 10000 to 20000)
 

# ufw allow 10000:20000/udp
# ufw enable

 

Verify the firewall status is okay:

# ufw status

6. Access Jitsi Meet in a browser

Open a web browser and navigate to your server's domain or IP address:​

https://meet.your-custom-domain-or-IP.com

If all went okay, you should see the Jitsi Meet interface, where you can start or join a meeting.

7. Secure Conference Creation (Optional extra)

By default, anyone can create a conference. To restrict this:​

  1. Install and configure Prosody for authentication.
    For those who don't know, Prosody is a modern XMPP communication server.

  2. Set up secure domains and configure authentication settings.​

For detailed instructions, refer to the Jitsi DevOps Guide. ​
 

Conclusion

You should now have successfully installed Jitsi Meet on your Debian server.
Installing on Ubuntu or on Red Hat-based distros such as Fedora should not be much different from this guide, except that you have to use
the correct RPM repositories.

Now you can host secure video conferences on your own infrastructure, with increased privacy, and perhaps be calmer that the CIA, Mossad, MI6 or FSB might not be spying on your video conference talks (unless they already do it at the OS level, which is most likely the case, but that doesn't matter :).

For advanced configurations and features, consult the Jitsi Handbook and the Jitsi DevOps Guide.

That's all folks, enjoy!

How to Install and Set Up an NFS Server with Network Shares on Linux to Ease Data Transfer Across Multiple Hosts

Monday, April 7th, 2025

How to Configure NFS Server in Redhat,CentOS,RHEL,Debian,Ubuntu and Oracle Linux

Network File System (NFS) is a protocol that allows one system to share directories and files with others over a network. It's commonly used in Linux environments for file sharing between systems. In this guide, we'll walk you through the steps to install and set up an NFS server on a Linux system.

Prerequisites

Before you start, make sure you have:

  • A Linux distribution (e.g., Ubuntu, CentOS, Debian, etc.)
  • Root or sudo privileges on the system.
  • A network connection between the server (NFS server) and clients (machines that will access the shared directories).
     

1. Install NFS Server Package

 

On Ubuntu / Debian based Linux systems:

a. First, update the package list 

# apt update

b. Install the NFS server package
 

# apt install nfs-kernel-server

On CentOS/RHEL-based systems:

 2. Install the NFS server package

       # yum install nfs-utils

Once the package is installed, ensure that the necessary services are enabled.

 3. Create Shared Directory for file sharing

Decide which directory you want to share over NFS. If the directory doesn't exist, you can create one. For example:

# mkdir -p /nfs_srv_dir/nfs_share

Make sure the directory has the appropriate permissions so that NFS clients can access it:

# chown nobody:nogroup /nfs_srv_dir/nfs_share 
# chmod 755 /nfs_srv_dir/nfs_share

4. Configure NFS Exports ( /etc/exports file)

The NFS exports file (/etc/exports) is perhaps the most important file you will have to create and deal with regularly to define the exported shares; it contains the configuration settings for the directories you want to share with other systems.

       a. Open the /etc/exports file for editing:

# vi /etc/exports

Add an entry for the directory you want to share. For example, if you're sharing /nfs_srv_dir/nfs_share and allowing access to all systems on the network (192.168.1.0/24), add the following line:
 

/nfs_srv_dir/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)


Here’s what each option means:

  • rw: Read and write access.
  • sync: Ensures that changes are written to disk before responding to the client.
  • no_subtree_check: Disables subtree checking, which improves reliability when a subdirectory of a filesystem is exported.

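For scripting around /etc/exports, an entry splits mechanically into path, client, and option list; a small sketch using the example entry above:

```shell
# Sketch: splitting an /etc/exports entry into its path, client spec, and
# option list (the entry matches the illustrative example above).
entry='/nfs_srv_dir/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)'
exp_path=$(printf '%s\n' "$entry" | awk '{print $1}')
exp_client=$(printf '%s\n' "$entry" | awk '{print $2}' | cut -d'(' -f1)
exp_opts=$(printf '%s\n' "$entry" | sed -n 's/.*(\(.*\)).*/\1/p')
echo "path=$exp_path client=$exp_client opts=$exp_opts"
```

The same splitting logic is handy when auditing a long exports file for overly broad client specs.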
 

Here are a few example lines from the working /etc/exports on my home NFS server:

/var/www 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/jordan 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/mobfiles 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.200/32(rw,no_root_squash,async,subtree_check)
/home/hipo/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/alex/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/necroleak/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/bashscripts 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/backups/Family-Videos 192.168.0.200/32(ro,no_root_squash,async,subtree_check)

 

5. Export the NFS Shares with exportfs command

Once the export file is configured, you need to inform the NFS server to start sharing the directory:
 

# exportfs -a


The -a flag exports all shares defined in /etc/exports.

6. Start and Enable NFS Services

You need to start and enable the NFS server so it will run on system boot.

On Ubuntu / Debian Linux run the following commands:
 

# systemctl start nfs-kernel-server 
# systemctl enable nfs-kernel-server


On CentOS / RHEL Linux:
 

# systemctl start nfs-server
# systemctl enable nfs-server


7. Allow NFS Traffic Through the Firewall

If your server has a firewall configured / enabled, you will need to allow NFS-related ports through it.
These include TCP port 2049 (NFS) and port 111 (rpcbind, both TCP and UDP), plus some additional ports.

On Ubuntu/Debian (assuming you are using ufw, the Uncomplicated Firewall):

# ufw allow from 192.168.1.0/24 to any port nfs
# ufw reload

On CentOS / RHEL Linux:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --reload

8. Verify NFS Server is Running

To ensure the NFS server is running properly, use the following command:
 

# systemctl status nfs-kernel-server

or

# systemctl status nfs-server

You should see output indicating that the service is active and running.

 

9. Test the NFS Share (Client-Side)

To test the NFS share, you will need to mount it on a client machine. Here's how to mount it:

On the client machine, install the NFS client utilities:

Ubuntu / Debian Linux

# apt install nfs-common

For CentOS / RHEL Linux

# yum install nfs-utils


Create a mount point (no matter the distro):
 

# mkdir -p /mnt/nfs_share


Mount the NFS share:

# mount -t nfs <nfs_server_ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share

Replace <nfs_server_ip> with the IP address of the NFS server or DNS host alias if you have one defined in /etc/hosts file.

Verify that the share is mounted:

# df -h

You should see the NFS share listed under the mounted file systems.

10. Configure Auto-Mount at Boot (Optional)

To have the NFS share automatically mounted at boot, you can add an entry to the /etc/fstab file on the client machine.

Open /etc/fstab for editing:

# vi /etc/fstab

Add the following line: 

<server-ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults 0 0

Save and close the file.

The NFS share will now be automatically mounted whenever the system reboots.
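The fstab entry follows the standard six-field layout; a quick sketch pulling out the fields that matter here (the server IP is a placeholder):

```shell
# Sketch: the six whitespace-separated fields of an fstab line, using an
# illustrative NFS entry (server IP is a placeholder).
fstab_line='192.168.1.10:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults 0 0'
fs_spec=$(printf '%s\n' "$fstab_line" | awk '{print $1}')     # what to mount
fs_file=$(printf '%s\n' "$fstab_line" | awk '{print $2}')     # where to mount it
fs_vfstype=$(printf '%s\n' "$fstab_line" | awk '{print $3}')  # filesystem type
echo "device=$fs_spec mountpoint=$fs_file type=$fs_vfstype"
```

The last two fields (dump and fsck pass) are both 0 for NFS mounts, since neither applies to a network filesystem.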

Debug NFS configuration issues (basics)

 

You can continue to modify the /etc/exports file to share more directories or set specific access restrictions depending on your needs.

If you encounter any issues, check the server logs, or use the exportfs -v command, which prints every export together with its full effective options and can help severely in troubleshooting the NFS configuration:


# exportfs -v
/var/www          192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data      192.168.0.205/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda1/
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda2/info
        192.168.0.200/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/mobfiles    192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data/public_html
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/public
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/neon/data
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/scripts      192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/backups/data-limited
        192.168.0.200/32(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/disk/filetransfer
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/public_shared/data
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)

Of course there is much more to be said on debugging: you can, for example, check /var/log/messages, /var/log/syslog and other logs that can give you hints about issues, as well as manually try to mount / unmount a stuck NFS share to learn more about what is going on, but for a starter that should be enough.
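When scripting checks around exportfs -v, each line splits into export path and client spec; a sketch on a canned sample line matching the format above:

```shell
# Sketch: extracting the client spec from one line of `exportfs -v` output
# (canned sample, matching the format shown above).
ev_line='/var/www          192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)'
ev_path=$(printf '%s\n' "$ev_line" | awk '{print $1}')
ev_client=$(printf '%s\n' "$ev_line" | awk '{print $2}' | cut -d'(' -f1)
echo "export $ev_path is allowed to client $ev_client"
```

A loop over the full output makes a quick audit of which clients can reach which exports.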

Summary: what have we learned?

We learned how to set up a basic NFS server and mount its shared directory on a client machine.
This is a great solution for centralized file sharing and collaboration on Linux systems. Even though many companies try to avoid it because of its historical lack of connection encryption, NFS has been widely used over the years and has helped dramatically in making the Internet the World Wide Web of today. For a well-secured network, and perhaps a non-critical file infrastructure, NFS is still a key player in file sharing among heterogeneous networks, for the multitudes of gigabytes or even petabytes of data you might want to share among your personal computers, servers, phones, tablets and all kinds of digital devices in general.