Archive for October, 2025

How to Install and Use Netdata on Debian / Ubuntu Linux for Real-Time Server Monitoring

Wednesday, October 29th, 2025


Monitoring system performance is one of the most overlooked aspects of server administration – until something breaks. That is especially true for beginners and home-brew labs used for learning how to manage servers and do basic system administration. For a novice, monitoring servers and infrastructure is not a big deal, as test machines are usually fine to break. But if you host your own websites, a blog, a video-streaming or torrent server, a Tor relay, some kind of public proxy, or whatever community-minded free service at home, monitoring becomes an important topic the moment your small thing grows big.

While many sysadmins reach for Prometheus or Zabbix, those come with considerable complexity: you have to install and set up the server and agent clients, then configure the individual monitoring checks.

Sometimes you just need a lightweight, zero-configuration monitoring tool that gives you instant visibility – without having to learn much to get basic monitoring (and a bit of heuristics) in the house.

That’s where Netdata shines.

What is Netdata?

Netdata is an open-source, real-time performance monitoring tool that visualizes CPU, memory, disk, network, and process metrics directly in your browser. It’s written in C and optimized for minimal overhead, making it perfect for both old hardware and modern VPS setups.

Netdata is built primarily to monitor Linux distributions such as CentOS / RHEL / AlmaLinux / Rocky Linux / openSUSE / Arch Linux / Amazon Linux and Oracle Linux. It also supports Windows, and partially supports macOS and FreeBSD / OpenBSD / DragonFly BSD and some other BSDs, though some of its Linux features are not available there.

1. Install Netdata real time performance monitoring tool


root@pcfreak:~# apt-cache show netdata|grep -i description -A3

Description-en: real-time performance monitoring (metapackage)

 Netdata is distributed, real-time, performance and health monitoring for

 systems and applications. It provides insights of everything happening on the

 systems it runs using interactive web dashboards.

Description-md5: 6843dd310958e94a27dd618821504b8e

Homepage: https://github.com/netdata/netdata

Section: net

Priority: optional

Netdata runs on most Linux distributions. On Debian or Ubuntu, installation is dead simple:

# apt update

# apt install curl -y


Since the default repositories usually ship an older release, you can install the latest version directly with the kickstart installer from Netdata:

# bash <(curl -Ss https://my-netdata.io/kickstart.sh)

This script automatically installs dependencies, sets up the Netdata daemon, and enables it as a systemd service.

Once installed, Netdata starts immediately. You can verify with:

# systemctl status netdata
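Besides systemctl, you can query Netdata's REST API to confirm the daemon is actually serving data. A minimal probe sketch (host and port assume the default local install; adjust as needed):

```shell
# Probe the Netdata API on the default port; prints "up" or "down".
HOST=${HOST:-localhost}
PORT=${PORT:-19999}
if curl -fsS --max-time 3 "http://$HOST:$PORT/api/v1/info" >/dev/null 2>&1; then
  STATUS=up
else
  STATUS=down
fi
echo "netdata on $HOST:$PORT is $STATUS"
```

Wrapping this check in a cron job is a cheap way to notice when the agent itself dies.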

2. Access the Web Dashboard


Open your browser and navigate to:

http://your-server-ip:19999/

You’ll be greeted by an intuitive dashboard showing real-time graphs of every aspect of your system — CPU load, memory usage, disk I/O, network bandwidth, and more.

If you’re running this on a public server, make sure to secure access with a firewall or reverse proxy, since by default Netdata is open to all IPs.

Example (UFW):

# ufw allow from YOUR.IP.ADDRESS.HERE to any port 19999

# ufw enable

3. Enable Persistent Storage

By default, Netdata stores only live data in memory. To retain historical data:
 

# vim /etc/netdata/netdata.conf

Find the [global] section and modify:

[global]

  history = 86400

  update every = 1

This keeps 24 hours of data at one-second resolution. You can also connect Netdata to a database backend for long-term archiving.
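A quick back-of-the-envelope for how much RAM that history costs. The ~4 bytes per collected point is an assumption based on Netdata's classic round-robin engine, and the metric count below is a placeholder – substitute your host's real numbers:

```shell
# Rough RAM estimate for the in-memory database:
# metrics_collected x history_points x ~4 bytes per point.
METRICS=2000      # assumed number of metrics your host collects
HISTORY=86400     # points kept, per the [global] section above
echo "$(( METRICS * HISTORY * 4 / 1024 / 1024 )) MiB"
```

With these example values that works out to roughly 659 MiB, which is why long histories at one-second resolution are better pushed to a database backend.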

4. Secure with Basic Auth (Optional)

If you want simple password protection:

# apt install apache2-utils

# htpasswd -c /etc/netdata/.htpasswd admin

Then edit /etc/netdata/netdata.conf to include:

[web]

  mode = static-threaded

  default port = 19999

  allow connections from = localhost 192.168.1.*

  basic auth users file = /etc/netdata/.htpasswd

Restart Netdata:

# systemctl restart netdata
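If Netdata's built-in options don't fit your setup, the common alternative is to hide port 19999 behind a reverse proxy that does the authentication for you. A minimal Nginx sketch – the server name is a placeholder, and it reuses the htpasswd file created earlier; the config is written to a temp file so you can review it before moving it into /etc/nginx/sites-available/:

```shell
# Generate a minimal Nginx vhost that proxies to Netdata with basic auth.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
server {
    listen 80;
    server_name netdata.example.com;

    location / {
        auth_basic           "Netdata";
        auth_basic_user_file /etc/netdata/.htpasswd;
        proxy_pass           http://127.0.0.1:19999;
        proxy_set_header     Host $host;
        proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
echo "wrote $CONF"
```

After installing the file, run nginx -t && systemctl reload nginx, and make sure your firewall blocks direct access to 19999 from outside.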

Why Netdata is Awesome 

Unlike Prometheus or Grafana, Netdata gives you instant insights without heavy setup. It’s ideal for:

  • Debugging high load or memory leaks in real time
  • Monitoring multiple VPS or embedded devices
  • Visualizing system resource usage with minimal CPU cost

And because it’s written in C, it’s insanely fast — often using less than 1% CPU on idle systems.

Final Thoughts

If you’re running any Linux server – whether for personal projects, web hosting, or experiments – Netdata is one of the easiest ways to visualize what’s happening under the hood.

You can even integrate it into your homelab or connect multiple nodes to Netdata Cloud for centralized monitoring. Of course, the full feature set is only available in the cloud version, but that is to be expected: the Netdata developers seem to have adopted the usual business model of giving the core away for free while selling extras on top to make cash.

Even without the cloud, Netdata is a really nice solution for people who need to quickly monitor 20 or 50 servers without much effort. Simply install it across the machines and you get plenty of monitoring: open each machine in a separate browser tab and take a look once or twice a day. It is a very old-fashioned way to do monitoring, but it still makes sense if you don't want to invest effort in building out monitoring yet still want some observability over a mid-sized infrastructure.

So, next time your VPS feels sluggish or your load average spikes, fire up Netdata — your CPU will tell you exactly what’s wrong.

How to Install and Troubleshoot Grafana Server on Linux: Complete Step-by-Step Tutorial

Saturday, October 25th, 2025


If you’ve ever set up monitoring for your servers or applications, you’ve probably heard about Grafana.
It’s the de facto open-source visualization and dashboard tool for time-series data — capable of turning raw metrics into beautiful and insightful charts.

In this article, I’ll walk you through how to install Grafana on Linux, configure it properly, and deal with common problems that many sysadmins hit along the way.
This guide is written with real-world experience, blending tips from the official documentation and years of self-hosting tinkering.

1. Prerequisites

You’ll need:

  • A Linux machine — Ubuntu 22.04 LTS, Debian 11+, or CentOS 8 / Rocky Linux 9.
  • A superuser or user with sudo privileges.
  • At least 512 MB of RAM and 1 CPU core (Grafana is lightweight).
  • Internet access to fetch the repository and packages.

Optionally:

  • A domain name (for example: grafana.yourdomain.com)
  • Nginx/Apache if you want to enable HTTPS later.

2. Update Your System

Before any installation, ensure your system packages are up to date:

# Debian/Ubuntu
# apt update && apt upgrade -y

# or

# dnf update -y # CentOS/RHEL/Rocky

3. Install Grafana (from the Official Repository)

Grafana provides an official APT and YUM repository. Always use it to get security and feature updates.

For Debian / Ubuntu:
 

# apt install -y software-properties-common apt-transport-https wget
# wget -q -O - https://packages.grafana.com/gpg.key | gpg --dearmor > /usr/share/keyrings/grafana.gpg
# echo "deb [signed-by=/usr/share/keyrings/grafana.gpg] https://packages.grafana.com/oss/deb stable main" | \
# tee /etc/apt/sources.list.d/grafana.list
# apt update
# apt install grafana -y 


For CentOS / Rocky / RHEL:

# tee /etc/yum.repos.d/grafana.repo <<EOF
[grafana]
name=Grafana OSS
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
EOF
# dnf install grafana -y

 

4. Start and Enable the Service

Once installation completes, start and enable Grafana to run on boot:
 

# systemctl daemon-reload
# systemctl enable grafana-server
# systemctl start grafana-server

Check status:

# systemctl status grafana-server


You should see Active: active (running).

5. Access the Grafana Web Interface

Grafana listens on port 3000 by default.

Open your browser and visit:

http://<your_server_ip>:3000

Login with the default credentials:

Username: admin
Password: admin

You’ll be asked to change the password — do it immediately for security.

6. Configure Grafana Basics

Configuration file location:

/etc/grafana/grafana.ini

Common settings to edit:

  • Port:

http_port = 8080

  • Root URL (for reverse proxy):

root_url = https://grafana.yourdomain.com/

  • Database: (change from SQLite to MySQL/PostgreSQL if desired)
[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = supersecret_password582?255!#
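If you switch to MySQL, the database and user have to exist before Grafana can connect. A sketch of the bootstrap SQL – the names mirror the config above and the password is obviously a placeholder; review the file, then feed it to mysql -u root -p < grafana_bootstrap.sql:

```shell
# Generate the bootstrap SQL for Grafana's MySQL backend.
cat > grafana_bootstrap.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS grafana CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'grafana'@'127.0.0.1' IDENTIFIED BY 'change_me';
GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'127.0.0.1';
FLUSH PRIVILEGES;
EOF
echo "wrote grafana_bootstrap.sql"
```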

Restart Grafana after making changes:

# systemctl restart grafana-server


7. Optional – Secure Grafana with HTTPS and Reverse Proxy

If Grafana is public-facing, always use HTTPS.

Install Nginx and Certbot:

# apt install nginx certbot python3-certbot-nginx -y
# certbot --nginx -d grafana.yourdomain.com

This automatically sets up SSL certificates and redirects traffic from HTTP → HTTPS.

8. Add a Data Source

Once logged in:

  1. Go to Connections → Data Sources → Add data source

  2. Choose Prometheus, InfluxDB, Loki, or your preferred backend

  3. Enter the connection URL (e.g., http://localhost:9090)

  4. Click “Save & Test”

If successful, you can start creating dashboards!
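Clicking through the UI works, but data sources can also be provisioned from a YAML file that Grafana reads at startup – handy when you rebuild servers often. A sketch (the Prometheus URL is an assumption; Grafana picks up files placed in /etc/grafana/provisioning/datasources/ on restart, so the file is written to the current directory here for review first):

```shell
# Declaratively provision a Prometheus data source for Grafana.
cat > prometheus-ds.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
EOF
echo "wrote prometheus-ds.yaml"
```

Copy it into the provisioning directory and restart grafana-server; the data source then appears without any clicking.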

9. Troubleshooting Common Grafana Installation Issues

Even the smoothest installs sometimes go wrong.
Here are real-world fixes for typical problems sysadmins face:

  • Service won't start
    Likely cause: misconfigured grafana.ini or the port is already in use.
    Fix: run # journalctl -u grafana-server -xe to see the logs, correct the syntax or change the port, then restart.

  • Port 3000 already in use
    Likely cause: another app (a NodeJS app or Jenkins, perhaps) is listening on it.
    Fix: # lsof -i :3000 → stop or reassign the conflicting service.

  • Blank login page / dashboard not loading
    Likely cause: browser cache or a reverse-proxy misconfiguration.
    Fix: clear the cache and check the Nginx proxy settings (proxy_pass should point to http://localhost:3000).

  • APT key errors (Ubuntu 22.04+)
    Likely cause: apt-key is deprecated.
    Fix: use the signed-by=/usr/share/keyrings/grafana.gpg method, as the Grafana docs suggest.

  • "Bad Gateway" via proxy
    Likely cause: missing header forwarding.
    Fix: in the Nginx config, ensure:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

  • Can't access Grafana from LAN
    Likely cause: a firewall blocking the port (the most common reason).
    Fix: # ufw allow 3000/tcp (or adjust firewalld accordingly).

 

10. Maintenance and Backup Tips
 

  • Back up your dashboard database frequently:
    – SQLite: /var/lib/grafana/grafana.db
    – MySQL/PostgreSQL: use your normal DB backup methods

  • Watch the logs regularly, e.g.:

# journalctl -u grafana-server -f

  • Upgrade Grafana safely:

# apt update && apt install grafana -y

(Always back up before upgrading!)
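For the SQLite case, a small backup sketch. Paths assume the default install; sqlite3's .backup command produces a consistent copy even while grafana-server is running:

```shell
# Back up Grafana's SQLite database with a date-stamped copy.
DB=/var/lib/grafana/grafana.db
DEST=${DEST:-/var/backups/grafana}
if [ -f "$DB" ]; then
  mkdir -p "$DEST"
  sqlite3 "$DB" ".backup '$DEST/grafana-$(date +%F).db'"
  MSG="backed up to $DEST"
else
  MSG="no grafana.db found at $DB"
fi
echo "$MSG"
```

Drop it into a daily cron job and rotate old copies with find -mtime.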

Conclusion

Grafana is a powerful, elegant way to visualize your system metrics, application logs, and alerts.
Installing it on Linux is straightforward — but knowing where it can fail and how to fix it is what separates a beginner from a seasoned sysadmin.

By following this guide, you not only get Grafana up and running, but also understand the moving parts behind the scenes — a perfect fit for any DIY server admin or DevOps engineer who loves keeping full control.

Digital Vigilance: Practical Cyber Defense for the New Era of All connected dependency on technology

Friday, October 24th, 2025

 

Introduction

There was a time when cybersecurity was mostly about erecting a firewall, installing antivirus software and hoping no one clicked a suspicious link. That era is steadily fading. Today, as more work moves to the cloud, as AI tools proliferate, and as threat actors adopt business-like models, the battlefield has shifted dramatically. According to analysts at Gartner, 2025 brings some of the most significant inflections in cybersecurity in recent memory. 

In this article we’ll cover the major trends, why they matter, and — importantly — what you as an individual or sysadmin can start doing today to stay ahead.

1. Generative AI: Weapon and Shield

AI / ML (machine learning) is now deeply ingrained on both the offence and defence sides of cybersecurity.

  • On the defence side: AI models help detect anomalies, process huge volumes of logs, and automate responses. 
  • On the offence side: Attackers use AI to craft more convincing phishing campaigns, automate vulnerability discovery, generate fake identities or even design malware. 
  • Data types are changing: It’s no longer just databases and spreadsheets. Unstructured data (images, video, text) used by AI models is now a primary risk.

What to do:

  • Make sure any sensitive AI-training data or inference logs are stored securely.
  • Build anomaly-detection systems that don’t assume “normal” traffic anymore.
  • Flag when your organisation uses AI tools: do you know what data the tool uses, where it stores it, who can access it?

2. Zero Trust Isn’t Optional Anymore

 


The old model — trust everything inside the perimeter, block everything outside — is obsolete. Distributed workforces, cloud services, edge devices: they all blur the perimeter. Hence the rise of Zero Trust Architecture (ZTA) — "never trust, always verify."

Key features:

  • Every device, every user, every session must be authenticated and authorised.
  • Least-privilege access: users should have the minimum permissions needed.
  • Micro-segmentation: limit lateral movement in networks.
  • Real-time monitoring and visibility of sessions and devices.

What to do:

  • Audit your devices and users: who has broad permissions? Who accesses critical systems?
  • Implement multifactor authentication (MFA) everywhere you can.
  • Review network segmentation: can a compromised device access everything? If yes, that’s a red flag.
     

3. Ransomware & RaaS – The Business Model of Cybercrime

Cybercriminals are organizing like businesses: they have supply chains, service models, profit centres. The trend of Ransomware-as-a-Service (RaaS) continues to expand.

What’s changed:

  • Ransomware doesn’t just encrypt data. Attackers often steal data first, then threaten to release it. 
  • Attackers are picking higher-value targets and critical infrastructure.
  • The attack surface has exploded: IoT devices, cloud mis-configurations, unmanaged identity & access.

What to do:

  • Back up your critical systems regularly — test restores, not just backups.
  • Keep systems patched (though even fully patched systems can be attacked, so patching is necessary but not sufficient).
  • Monitor for abnormal behaviour: large data exfiltration, new admin accounts, sudden access from odd places.
  • Implement strong incident response procedures: when it happens, how do you contain it?

4. Supply Chains, IoT & Machine Identities

Modern IT is no longer just endpoints and servers. We have IoT devices, embedded systems, cloud services, machine-to-machine identities. According to Gartner, machine identities are expanding attack surfaces if unmanaged.

Key issues:

  • Devices (especially IoT) often ship with weak/default credentials.
  • Machine identities: software services, APIs, automation tools need their own identity/access management.
  • Supply chains: your vendor might be the weakest link — compromise of software or hardware upstream affects you.

What to do:

  • Create an inventory of all devices and services — yes all.
  • Enforce device onboarding processes: credentials changed, firmware up-to-date, network segmented.
  • Review your vendors: what security standards do they follow? Do they give you visibility into their supply chain risk?
     

5. Cloud & Data Privacy — New Rules, New Risks

As data moves into the cloud and into AI systems, the regulatory and technical risks converge. For example, new laws like the EU AI Act will start affecting how organisations handle AI usage and data.
Cloud environments also bring misconfigurations, improper access controls, shadow IT and uncontrolled data sprawl.

What to do:
 

  • If using cloud services, check settings for major risk zones (e.g., S3 buckets, unsecured APIs).
  • Implement strong Identity & Access Management (IAM) controls for cloud resources.
  • Make data-privacy part of your security plan: what data you collect, where it is stored, for how long.
  • Perform periodic audits and compliance checks especially if you handle users from different jurisdictions.
     

6. Skills, Culture & Burn-out — The Human Factor

Often overlooked: no matter how good your tech is, people and culture matter. Security-behaviour programs help reduce human-error incidents – and, as Gartner notes, they are becoming more essential.
Also, the cybersecurity talent shortage and burnout among security teams is real.

What to do:

 

  • Invest in security awareness training: phishing simulation, strong password practices, device hygiene.
  • Foster a culture where security is everyone’s responsibility, not just the “IT team’s problem.”
  • For small teams: consider managed security services or cloud-based monitoring to lean on external support.

7. What This Means for Smaller Organisations & Individual Users

Often the big reports focus on enterprises. But smaller organisations (and individual users) are just as vulnerable — sometimes more so, because they have fewer resources and less mature security.
Here are some concrete actions:

  • Use strong, unique passwords and a password manager.
  • Enable MFA everywhere (email, online services, VPNs).
  • Keep your systems updated — OS, applications, firmware.
  • Be suspicious of unexpected communications (phishing).
  • Have an incident response plan: who do you call if things go wrong?
  • Backup your data offline and test restores.
  • If you run services (web-server, mail server): monitor logs, check for new accounts, stray network connections.
     

Conclusion

Cybersecurity in 2025 is not a “set once and forget” system. It’s dynamic, multi-layered and deeply integrated into business functions and personal habits. The trends above — generative AI, zero trust, supply chain risks, cloud data sprawl — are changing the rules of the game.
Thus for all of us – and especially sysadmins, system engineers, Site Reliability Engineers (SRE), developers, testers or whatever you call it – this means we need to keep learning, be careful with the tech we use, and build security as a continuous practice rather than a one-off box to tick.

 

How to Restrict a Program to Use Only a Specific Port Range for Outgoing Connections (Linux)

Wednesday, October 22nd, 2025

In some advanced system administration and security scenarios, you might want to restrict a specific program (or a user) to only use a specific range of source ports when making outbound connections. This can be important for:

  • Securing network access per service
  • Enforcing port-based identity
  • Making firewall rules easier to manage
  • Avoiding source port collisions in NAT environments

This article explains how to achieve this using iptables, cgroups, and kernel-level ephemeral port controls, with practical, copy-paste-ready examples.

What is Meant by "Port Range Restriction"?

When a program connects to an external server (e.g., with curl or a custom app), the kernel assigns a source port — typically from a dynamic range like 32768–60999. If you want to limit a specific app or user to only use, say, source ports 50000–51000 for all its outbound connections, you have to implement that at the firewall or kernel level.

Option 1: Using iptables with the owner Module (Per-User Limiting)

This is the most straightforward way if the program runs as a known user or UID.

Example: Allow only source ports 50000–51000 for myuser

# Block all outgoing TCP traffic from 'myuser' outside the port range

# iptables -A OUTPUT -p tcp -m owner --uid-owner myuser ! --sport 50000:51000 -j DROP

 

# Optional: for UDP

# iptables -A OUTPUT -p udp -m owner --uid-owner myuser ! --sport 50000:51000 -j DROP

Note: This will not stop the application from running — it just ensures the kernel drops packets that don't meet your policy.
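Since copy-pasted dashes getting mangled into en-dashes is a classic source of silently broken iptables rules, a tiny dry-run generator that prints the exact rules for a given user and range can help. This only echoes the commands for review – run them yourself (as root) once the output looks right; the user name and range are placeholders:

```shell
# Print per-user source-port restriction rules for review (dry run only).
USER_NAME=${USER_NAME:-myuser}
RANGE=${RANGE:-50000:51000}
for PROTO in tcp udp; do
  echo "iptables -A OUTPUT -p $PROTO -m owner --uid-owner $USER_NAME ! --sport $RANGE -j DROP"
done
```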

Option 2: Using cgroups with net_cls + iptables (Per-Process Control)

For more precise control, especially in multi-user systems or containers, you can use Linux cgroups to classify traffic per process.

Step-by-step:

  1. Create a net_cls cgroup:

# mkdir -p /sys/fs/cgroup/net_cls/myapp

# echo 0x100001 | sudo tee /sys/fs/cgroup/net_cls/myapp/net_cls.classid

  2. Assign your process to the cgroup:

# Replace <PID> with the actual process ID

# echo <PID> | sudo tee /sys/fs/cgroup/net_cls/myapp/tasks

  3. Add an iptables rule:

# iptables -A OUTPUT -m cgroup --cgroup 0x100001 -p tcp ! --sport 50000:51000 -j DROP


This gives you fine-grained per-process port restrictions.


Optional: Set Ephemeral Port Range (Globally or Per Namespace)

Linux picks ephemeral ports from a default system-wide range (32768–60999). You can limit this globally (not recommended in shared systems) or within a network namespace.

To view the current range:

# cat /proc/sys/net/ipv4/ip_local_port_range

To change the range:

# echo "50000 51000" | sudo tee /proc/sys/net/ipv4/ip_local_port_range

Warning: This affects all processes. Better used inside isolated environments (e.g., containers or netns).

Run the App with Explicit bind() to Source Port

If you control the source code of the app, the cleanest method is to explicitly bind to the desired source port(s) before making the connection.

Python Example:

import socket

 

s = socket.socket()

s.bind(('', 50001))  # bind to specific source port

s.connect(('example.com', 443))

Some tools do expose the source port (curl has --local-port, netcat has -p), but many apps and libraries (wget, Python requests, etc.) don't expose source port binding – in those cases, control must be enforced externally.
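For tools that do expose it, forcing the source port is a one-flag affair. A quick sketch with curl and netcat – the target host is a placeholder, and the || true keeps the snippet harmless when the host is unreachable:

```shell
# Make curl pick its source port from 50000-51000 (it walks the range if
# a port is busy), and make netcat bind a fixed source port.
curl --local-port 50000-51000 -sS -o /dev/null http://example.com/ || true
nc -p 50000 -w 3 example.com 80 < /dev/null || true
```

Combined with the iptables DROP rule above, these are also handy for verifying the policy: a connection from outside the allowed range should hang and time out.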

Logging Violations

To log connection attempts from disallowed source ports:
 

# iptables -A OUTPUT -p tcp -m owner --uid-owner myuser ! --sport 50000:51000 \
  -j LOG --log-prefix "PORT VIOLATION: " --log-level 4
 

Containerized or Isolated Environments: Use Namespaces or Docker

For Docker or custom network namespaces, you can:

  • Use a private ephemeral port range
  • Use iptables inside the namespace
  • Isolate the entire network stack for maximum safety

Docker Example:

 # docker run -d --name limited_app --net=custom_net -p 50000-51000:50000-51000 myimage

Then configure the app inside the container to bind or use that port range.

Important Limitations

  • iptables owner match works only for UIDs, not process names.
  • If the app doesn’t bind explicitly, the kernel still picks a port — make sure the global range or netns is set correctly.
  • Some apps (e.g. Java, browsers) may use multiple ephemeral ports simultaneously — test carefully.
  • Not all programs allow setting source port via command line.

Recommended Best Practices

Let’s say you want to allow only ports 50000–51000 for a service called myapp:

  1. Create a dedicated system user:

# sudo useradd -r -s /sbin/nologin myapp

  2. Run the program as this user:

# sudo -u myapp /path/to/myapp

  3. Set the desired port range system-wide (optional):

# echo "50000 51000" | sudo tee /proc/sys/net/ipv4/ip_local_port_range

  4. Add an iptables rule:

# iptables -A OUTPUT -p tcp -m owner --uid-owner myapp ! --sport 50000:51000 -j DROP

  5. (Optional) Add logging:

# iptables -A OUTPUT -p tcp -m owner --uid-owner myapp ! --sport 50000:51000 -j LOG --log-prefix "MYAPP PORT BLOCKED: "

  6. Make sure the app binds or connects correctly from that range.

Final Thoughts

Restricting outbound source port ranges per application or user is an underused but powerful technique to lock down systems, especially in:

  • High-security environments
  • Multi-tenant servers
  • NAT/firewall-managed networks
  • IDS/IPS-aware setups


You can start with simple iptables rules for users, and later adopt cgroups or container namespaces for deeper isolation.

Force specific source ports for an app like curl

For tools that support it, pass the source port directly (e.g. curl --local-port 50000-51000, or nc -p 50000). For general-purpose tools that don't expose it, try iptables SNAT (with care), or consider writing wrappers/scripts that handle binding via LD_PRELOAD or custom socket logic.

How to boost Linux Server Speed with tmpfs and few smart Optimization tweaks

Monday, October 20th, 2025


If you’ve ever managed a busy Linux server, you’ve probably noticed how I/O bottlenecks can make your system feel sluggish – even when CPU and memory usage look fine. I recently faced this problem on a Debian box running Nginx, PHP-FPM, and MySQL / MariaDB. The fix turned out to be surprisingly simple: using tmpfs for temporary directories and tweaking a few kernel parameters.

1. Identify the Resource Bottleneck

Running iostat -x 1 showed my /var/lib/mysql drive constantly pegged at 100% utilization, while CPU load stayed low. Classic disk-bound performance issue.

I checked swap usage:

# swapon --show

# free -h

The system was swapping occasionally — not good for performance-critical workloads.

2. Use tmpfs for Cache and Temporary Files

Linux allows you to use part of your RAM as a fast, volatile filesystem. Perfect for things like cache and sessions.

I edited /etc/fstab and added:

tmpfs   /tmp            tmpfs   defaults,noatime,mode=1777,size=1G  0  0

tmpfs   /var/cache/nginx tmpfs  defaults,noatime,mode=0755,size=512M 0 0

Then:

# mount -a

Immediately, I noticed fewer disk I/O spikes. Nginx’s cache hits were lightning-fast, and PHP temporary files were written to memory instead of the SSD.

3. Tune Kernel Parameters

Adding these lines to /etc/sysctl.conf helped reduce swapping and improve responsiveness:

vm.swappiness=10

vm.vfs_cache_pressure=50

Then apply:

# sysctl -p

4. Use tmpfs for MySQL / MariaDB tmpdir (Optional)

If you have enough RAM, move MySQL’s temporary directory to tmpfs:

# mkdir -p /mnt/mysqltmp

# mount -t tmpfs -o size=512M tmpfs /mnt/mysqltmp

# chown mysql:mysql /mnt/mysqltmp

Then set in /etc/mysql/my.cnf (or /etc/mysql/mariadb.cnf):

tmpdir = /mnt/mysqltmp

This dramatically speeds up large sorts and temporary table creation.

5. Monitor the Effects and Tune if Necessary

After applying these changes, iostat showed disk utilization dropping from 95% to under 20% under the same workload.
Average response times from Nginx dropped by around 30–40%, and the server felt much more responsive.
However, other setups may behave differently, so it is a good idea to experiment with the tmpfs sizes according to your system's CPU / memory parameters and find the values that fit your Linux setup best.
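To put numbers on "felt more responsive", here is a quick snapshot you can run before and after the changes (iostat comes from the sysstat package; df and free are available everywhere):

```shell
# Quick before/after snapshot: active tmpfs mounts, memory headroom,
# and (if sysstat is installed) one extended iostat sample.
df -h -t tmpfs || true
free -h
if command -v iostat >/dev/null; then
  iostat -x 1 1
else
  echo "install sysstat to get iostat"
fi
```

Save the output to a file each time so you can diff the utilization figures under the same workload.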

Wrapping Up

tmpfs is one of those underused Linux features that can make a real-world difference for sysadmins and self-hosters. Just remember that data in tmpfs disappears after reboot, so only use it for volatile data (cache, temp files, etc.).

If you’re running a VPS or small dedicated box and looking for a quick, low-cost performance boost — give tmpfs a try. You might be surprised how much smoother your system feels.

 

How to Build a Linux Port Scan Honeypot with iptables and test with nmap

Wednesday, October 8th, 2025


Let’s build a simple, practical Linux port‑scan honeypot that uses iptables to capture/redirect incoming connection attempts and simple listeners that log interaction, and use nmap to test it. I’ll give runnable commands, a small Python honeypot listener, iptables rules (with rate‑limited logging), how to test with nmap, and notes on analysis, hardening, and safety.

Important safety notes (read first)

  • Only deploy this on machines/networks you own or are explicitly authorized to test. Scanning other people’s systems or letting an exposed honeypot be used to attack others is illegal and unethical.
  • Run the honeypot inside an isolated VM or protected network segment (use separate host or VLAN), and don't forward traffic from other networks unless you understand the risks.
  • Keep logs rotated and watch disk usage — honeypots can attract high traffic.

Overview

  1. Prepare an isolated Linux VM.
  2. Add iptables rules to log and optionally redirect connection attempts to local listeners.
  3. Run simple service emulators (Python) to accept connections and log data.
  4. Use nmap to test and validate behavior.
  5. Collect, parse, and analyze logs; optionally integrate with syslog/ELK.

Setup assumptions

  • Ubuntu/Debian or similar Linux.
  • Root (sudo) access.
  • iptables, python3, nmap, and socat/netcat available (I’ll show alternatives).

1) Create a safe environment

  • Use a VM (VirtualBox/QEMU/KVM) or container with no sensitive data.
  • Give it a fixed private IP on an isolated network.
  • Optionally place the VM behind another firewall (so you can control exposure).

2) Simple iptables logging rules (filter table)

We want to log connection attempts but avoid log flooding by using limit. Put these commands in a root shell:

# create a honeypot chain

# iptables -N HONEY

# rate-limited LOG then return so traffic continues to normal behavior

# iptables -A HONEY -m limit --limit 5/min --limit-burst 10 -j LOG \
--log-prefix "HPOT_CONN: " --log-level 4

# record raw packets to kernel log (optional NFLOG/ulogd is better for high volume)

# allow the packet to continue (or drop if you want it to appear closed)

# iptables -A HONEY -j ACCEPT

 

# Apply HONEY chain to incoming TCP attempts to all ports (or a subset)

# iptables -A INPUT -p tcp -j HONEY

Notes:

  • This logs packets hitting INPUT. If your VM is behind NAT or you want to catch forwarded packets, apply to FORWARD too.
  • Using -j LOG writes to kernel log, often /var/log/kern.log or via rsyslog to /var/log/messages.
  • Use NFLOG (-j NFLOG) with ulogd if you want structured logging and higher throughput.

If you prefer to log only SYNs (new connection attempts) to reduce noise:

# iptables -A HONEY -p tcp --syn -m limit --limit 5/min --limit-burst 10 \
-j LOG --log-prefix "HPOT_SYN: " --log-level 4

If you want to redirect certain destination ports into local port listeners (so local fake services can accept connections), use the nat table PREROUTING:

# Example redirect incoming port 80 to local port 8080

# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

# For ports other than 80/443, add other rules

# iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222

Notes:

  • -t nat -A PREROUTING catches traffic before routing. Use OUTPUT if traffic originates on host itself.
  • REDIRECT only works for local listeners.

3) Simple Python multi‑port honeypot listener

This script will bind multiple ports and log incoming connections and first bytes (banner or payload). Save as simple_honeypot.py:

#!/usr/bin/env python3
# simple_honeypot.py - bind several TCP ports, log connections and first bytes
import socket, threading, logging, time

LISTEN_PORTS = [22, 80, 443, 8080, 2222]   # customize
BANNER = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"

logging.basicConfig(
    filename="/var/log/honeypot.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s"
)

def handle_client(conn, addr, port):
    try:
        conn.settimeout(5.0)
        data = conn.recv(4096)
        logged = data[:100].decode('utf-8', errors='replace')
        logging.info("CONNECT %s:%s -> port=%d first=%s", addr[0], addr[1], port, logged)
        # Send a small believable banner depending on port
        if port in (80, 8080):
            conn.sendall(BANNER)
        elif port in (22, 2222):
            conn.sendall(b"SSH-2.0-OpenSSH_7.4p1\r\n")
        else:
            conn.sendall(b"220 smtp.example.com ESMTP\r\n")
    except Exception as e:
        logging.info("ERROR %s", e)
    finally:
        try:
            conn.close()
        except OSError:
            pass

def start_listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('0.0.0.0', port))
    s.listen(50)
    logging.info("Listening on %d", port)
    while True:
        conn, addr = s.accept()
        t = threading.Thread(target=handle_client, args=(conn, addr, port), daemon=True)
        t.start()

if __name__ == "__main__":
    for p in LISTEN_PORTS:
        t = threading.Thread(target=start_listener, args=(p,), daemon=True)
        t.start()
    # keep the main thread alive
    while True:
        time.sleep(3600)

Run it as root (or with capability binding) so it can listen on low ports:
 

# python3 simple_honeypot.py


# check logs

# tail -F /var/log/honeypot.log

Alternatives:

  • Use socat to spawn a logger for individual ports.
  • Use cowrie, glastopf, honeyd, sshpot etc. if you want full featured honeypots.

4) Using nmap to test the honeypot

Only scan your honeypot IP. Example commands:

Basic TCP port scan:

# nmap -sS -p- -T4 -oN scan_ports.txt <HONEYPOT_IP>

Service/version detection:

# nmap -sV -sC -p 22,80,443,2222 -oN scan_svcs.txt <HONEYPOT_IP>

Scan while avoiding ping (if your VM blocks ICMP):

# nmap -Pn -sS -p1-2000 <HONEYPOT_IP>

Aggressive scan + scripts (use on your own host only):

# nmap -A -T4 <HONEYPOT_IP>

Watch your honeypot logs while scanning to confirm entries.
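
To compare scan results against what the honeypot logged, nmap's grepable output format (-oG) is easy to post-process with standard tools. A minimal sketch, using a hypothetical sample line in place of a real output file:

```shell
# A hypothetical single-host line of nmap grepable output (-oG); a real file
# would come from e.g.: nmap -sS -p- -oG scan_ports.gnmap <HONEYPOT_IP>
line='Host: 192.168.56.101 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///'

# Split the comma-separated port list, keep only open entries, print the port
# numbers (prints 22 and 80 for the sample line above)
echo "$line" | tr ',' '\n' | grep '/open/' | grep -oE '[0-9]+/open' | cut -d/ -f1
```

The same pipeline works on the Ports: lines of a real .gnmap file, which makes it easy to diff the open ports against the ports your honeypot pretends to serve.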

5) Log collection & analysis

  • Kernel log: sudo tail -F /var/log/kern.log or sudo journalctl -f
  • Python listener log: /var/log/honeypot.log
  • For structured logging and higher throughput: configure ulogd (NFLOG) or forward logs to syslog/rsyslog and ELK/Graylog.
  • Example simple grep: grep 'HPOT_' /var/log/kern.log | tail -n 200
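
To get a quick feel for who is probing hardest, the kernel-log entries can be reduced to a per-source-IP count. A minimal sketch, assuming the "HPOT_SYN: " log prefix from the rule above; the sample lines (documentation-range IPs) stand in for /var/log/kern.log:

```shell
# Write a few sample kernel-log entries to a temp file (stand-in for the real log)
cat > /tmp/hpot_sample.log <<'EOF'
Oct 29 10:01:02 vm kernel: HPOT_SYN: IN=eth0 OUT= SRC=203.0.113.7 DST=192.168.56.101 PROTO=TCP DPT=22
Oct 29 10:01:03 vm kernel: HPOT_SYN: IN=eth0 OUT= SRC=203.0.113.7 DST=192.168.56.101 PROTO=TCP DPT=80
Oct 29 10:01:05 vm kernel: HPOT_SYN: IN=eth0 OUT= SRC=198.51.100.9 DST=192.168.56.101 PROTO=TCP DPT=443
EOF

# Count probes per source IP, busiest first
grep 'HPOT_SYN:' /tmp/hpot_sample.log \
  | grep -oE 'SRC=[0-9.]+' | cut -d= -f2 \
  | sort | uniq -c | sort -rn
```

Point the grep at the real kernel log once entries start arriving.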

6) Hardening & operational tips

  • Isolate honeypot inside a VM and snapshot it so you can revert.
  • Use rate limits in iptables to avoid log flooding: -m limit --limit 5/min.
  • Rotate logs: configure logrotate for /var/log/honeypot.log.
  • Do not allow outbound traffic from the honeypot unless needed – attackers may use it to pivot. Use egress firewall rules to restrict outbound connections.
  • Consider running the honeypot under an unprivileged user wherever possible.
  • If expecting a lot of traffic, use NFLOG + ulogd or suricata/zeek for more scalable capture.

7) Optional: Richer visibility with NFLOG / ulogd

If you anticipate higher volume or want structured logs, use:

# example: log all incoming TCP packets (rate-limited) to NFLOG group 1

# iptables -I INPUT -p tcp -m limit --limit 10/min -j NFLOG --nflog-group 1

# run ulogd to write NFLOG to a file or DB (configure /etc/ulogd.conf)

8) Example scenario — bring it all together

  1. Start VM and ensure IP 192.168.56.101.
  2. Add iptables logging and redirect HTTP->8080.
  3. Run simple_honeypot.py with ports [22,80,2222,8080].
  4. From your scanner machine: nmap -sS -sV -p22,80,2222,8080 192.168.56.101
  5. Watch /var/log/honeypot.log and kernel logs for HPOT_* prefixes to see connections and payloads.

Quick reference for commands used:

  • Create honeypot chain and log SYNs:

# iptables -N HONEY

# iptables -A HONEY -p tcp --syn -m limit --limit 5/min --limit-burst 10 \
-j LOG --log-prefix "HPOT_SYN: "

# iptables -A INPUT -p tcp -j HONEY

  • Redirect port 80 -> 8080

# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

  • Run listener:

# python3 simple_honeypot.py

  • Test with nmap:

# nmap -sS -sV -p22,80,2222,8080 -Pn <IP>

Conclusion

Building a simple Linux port‑scan honeypot with iptables and lightweight listeners gives you practical, immediate visibility into who’s probing your network and how they probe. With a few well‑scoped iptables rules you can capture and rate‑limit connection attempts, redirect selected ports to local emulators, and keep tidy, analysable logs. The small Python listener shown is enough to collect banners and initial payloads; for higher volume or more fidelity you can step up to NFLOG / ulogd, Zeek / Suricata, or production honeypots like Cowrie.

Remember the two rules that keep a honeypot useful and safe: isolate it (VMs, VLANs, strict egress rules) and log thoughtfully (rate limits, rotation, structured formats). That protects your environment and makes the data you collect far more valuable. Over time, turn raw logs into indicators (IP reputations, patterns of ports/probes, common payloads) and feed them into alerts or dashboards to turn passive observation into active defense.

How to Create Secure Stateful Firewall Rules with nftables on Linux

Monday, October 6th, 2025

nftables-logo-linux-mastering-stateful-firewall-rules-with-nftables-firewall

Firewalls are the frontline defense of any server or network. While many sysadmins are familiar with iptables, nftables is the modern Linux firewall framework offering more power, flexibility, and performance.

One of the key features of nftables is stateful packet inspection, which lets you track the state of network connections and write precise rules that dynamically accept or reject packets based on connection status.

In this guide, we’ll deep dive into stateful firewall rules using nftables — what they are, why they matter, and how to master them for a robust, secure network.

What is Stateful Firewalling?

A stateful firewall keeps track of all active connections passing through it. It monitors the state of a connection (new, established, related, invalid) and makes decisions based on that state rather than just static IP or port rules.

This allows:

  • Legitimate traffic for existing connections to pass freely
  • Blocking unexpected or invalid packets
  • Better security and less manual rule writing

Understanding Connection States in nftables

nftables uses the conntrack subsystem to track connections. Common states are:

State         Description
new           Packet is trying to establish a new connection
established   Packet belongs to an existing connection
related       Packet related to an existing connection (e.g. FTP data)
invalid       Packet that does not belong to any connection or is malformed

Basic Stateful Rule Syntax in nftables

The key keyword is ct state. For example:

nft add rule inet filter input ct state established,related accept

This means: allow any incoming packets that are part of an established or related connection.

Step-by-Step: Writing a Stateful Firewall with nftables

  1. Create the base table and chains


nft add table inet filter

nft add chain inet filter input { type filter hook input priority 0 \; }

nft add chain inet filter forward { type filter hook forward priority 0 \; }

nft add chain inet filter output { type filter hook output priority 0 \; }

  2. Allow loopback traffic

nft add rule inet filter input iif lo accept

  3. Allow established and related connections

nft add rule inet filter input ct state established,related accept

  4. Drop invalid packets

nft add rule inet filter input ct state invalid drop

  5. Allow new SSH connections

nft add rule inet filter input tcp dport ssh ct state new accept

  6. Drop everything else

nft add rule inet filter input drop
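
The six steps above can also be written declaratively in a single file and loaded atomically with nft -f. A minimal sketch of such a file; note the chain policy drop takes the place of the explicit final drop rule:

```
#!/usr/sbin/nft -f
# Minimal stateful ruleset, equivalent to the step-by-step commands above
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept                      # loopback
        ct state established,related accept  # return traffic
        ct state invalid drop                # malformed / untracked packets
        tcp dport 22 ct state new accept     # new SSH sessions
    }
}
```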

Why Use Stateful Filtering?

  • Avoid writing long lists of rules for each connection direction
  • Automatically handle protocols with dynamic ports (e.g. FTP, SIP)
  • Efficient resource usage and faster lookups
  • Better security by rejecting invalid or unexpected packets
nftables-tcpip-model-diagram-logo

Advanced Tips for Stateful nftables Rules

  • Use ct helper for protocols requiring connection tracking helpers (e.g., FTP)
  • Combine ct state with interface or user match for granular control
  • Use counters with rules to monitor connection states
  • Rate-limit new connections using limit rate with ct state new

Real-World Example: Preventing SSH Brute Force with Stateful Rules

nft add rule inet filter input tcp dport ssh ct state new limit rate 5/minute accept

nft add rule inet filter input tcp dport ssh drop

This accepts at most 5 new SSH connections per minute; attempts above the limit fall through to the next rule and are dropped.

Troubleshooting Stateful Rules

  • Use conntrack -L to list tracked connections
  • Logs can help; enable logging on dropped packets temporarily
  • Check if your firewall blocks ICMP (important for some connections)
  • Remember some protocols may require connection helpers

Making Your nftables Rules Permanent

By default, any rules you add using nft commands are temporary — they live in memory and are lost after a reboot.

To make your nftables rules persistent, you need to save them to a configuration file and ensure they're loaded at boot.

Option 1. Using the nftables Service (Preferred on Most Distros)

Most modern Linux distributions (Debian ≥10, Ubuntu ≥20.04, CentOS/RHEL ≥8) come with a systemd service called nftables.service that automatically loads rules from /etc/nftables.conf at boot.

 Do the following to make nftables load on boot:

Dump your current rules into a file:

# nft list ruleset > /etc/nftables.conf

Enable the nftables service to load them at boot:

# systemctl enable nftables

(Optional) Start the service immediately if it’s not running:

# systemctl start nftables

Check status:

# systemctl status nftables

Now your rules will survive reboots.

Option 2. Load nftables Rules on Network Up via Hooks in
/etc/network/if-pre-up.d/ or Custom Scripts (Advanced)

If your distro doesn't use nftables.service or you're on a minimal setup (e.g., Alpine, Slackware, older Debian), you can load the rules manually at boot:

Save your rules:

# nft list ruleset > /etc/nftables.rules

Create a script to load them (e.g., /etc/network/if-pre-up.d/nftables):

#!/bin/sh

nft -f /etc/nftables.rules

Make it executable:

chmod +x /etc/network/if-pre-up.d/nftables

This method works on systems without systemd.

Sample /etc/nftables.conf config

We first define variables which we can use later on in our ruleset:

 

define NIC_NAME = "eth0"
define NIC_MAC_GW = "DE:AD:BE:EF:01:01"
define NIC_IP = "192.168.1.12"
define LOCAL_INETW = { 192.168.0.0/16 }
define LOCAL_INETWv6 = { fe80::/10 }
define DNS_SERVERS = { 1.1.1.1, 8.8.8.8 }
define NTP_SERVERS = { time1.google.com, time2.google.com, time3.google.com, time4.google.com }
define DHCP_SERVER = "192.168.1.1"

Next code block shows ip filter and ip6 filter sample:

We first create an explicit deny rule (policy drop;) for the chain input and chain output.
This means all network traffic is dropped unless its explicitly allowed later on.

Next we have to define these exceptions based on network traffic we want to allow.
Loopback network traffic is only allowed from the loopback interface and within RFC loopback network space.

nftables automatically maps network protocol names to port numbers (e.g. HTTPS 443).
In this example, we only allow incoming packets that belong to sessions this host itself initiated (ct state established accept), arriving on ephemeral ports (dport 32768-65535). Be aware that an app or web server must also accept newly initiated sessions (ct state new).

Certain network sessions initiated by this host (ct state new,established accept) in the chain output are explicitly allowed in the output chain. We also allow outgoing ping requests (icmp type echo-request), but do not want others to ping this host, hence ct state established in the icmp type input chain. 

table ip filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname "lo" accept
        iifname "lo" ip saddr != 127.0.0.0/8 drop
        iifname $NIC_NAME ip saddr 0.0.0.0/0 ip daddr $NIC_IP tcp sport { ssh, http, https, http-alt } tcp dport 32768-65535 ct state established accept
        iifname $NIC_NAME ip saddr $NTP_SERVERS ip daddr $NIC_IP udp sport ntp udp dport 32768-65535 ct state established accept
        iifname $NIC_NAME ip saddr $DHCP_SERVER ip daddr $NIC_IP udp sport bootpc udp dport 32768-65535 ct state established log accept
        iifname $NIC_NAME ip saddr $DNS_SERVERS ip daddr $NIC_IP udp sport domain udp dport 32768-65535 ct state established accept
        iifname $NIC_NAME ip saddr $LOCAL_INETW ip daddr $NIC_IP icmp type echo-reply ct state established accept
    }

    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept
        oifname "lo" ip daddr != 127.0.0.0/8 drop
        oifname $NIC_NAME ip daddr 0.0.0.0/0 ip saddr $NIC_IP tcp dport { ssh, http, https, http-alt } tcp sport 32768-65535 ct state new,established accept
        oifname $NIC_NAME ip daddr $NTP_SERVERS ip saddr $NIC_IP udp dport ntp udp sport 32768-65535 ct state new,established accept
        oifname $NIC_NAME ip daddr $DHCP_SERVER ip saddr $NIC_IP udp dport bootpc udp sport 32768-65535 ct state new,established log accept
        oifname $NIC_NAME ip daddr $DNS_SERVERS ip saddr $NIC_IP udp dport domain udp sport 32768-65535 ct state new,established accept
        oifname $NIC_NAME ip daddr $LOCAL_INETW ip saddr $NIC_IP icmp type echo-request ct state new,established accept
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}

 

The next code block is used to block incoming and outgoing IPv6 traffic, except ping requests (icmpv6 type echo-request) and IPv6 network discovery (nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert).

vNICs are often automatically provisioned with IPv6 addresses and left untouched. These interfaces can be abused by malicious entities to tunnel out confidential data or even a shell.

table ip6 filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iifname "lo" accept
        iifname "lo" ip6 saddr != ::1/128 drop
        iifname $NIC_NAME ip6 saddr $LOCAL_INETWv6 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-reply, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } ct state established accept
    }

    chain output {
        type filter hook output priority 0; policy drop;
        oifname "lo" accept
        oifname "lo" ip6 daddr != ::1/128 drop
        oifname $NIC_NAME ip6 daddr $LOCAL_INETWv6 icmpv6 type echo-request ct state new,established accept
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}

 Last code block is used for ARP traffic which limits ARP broadcast network frames:

table arp filter {
    chain input {
        type filter hook input priority 0; policy accept;
        iif $NIC_NAME limit rate 1/second burst 2 packets accept
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}

To load the nftables rules and verify them:

# systemctl restart nftables && systemctl status nftables && nft list ruleset

Test Before Save and Apply

NB !!! Always test your rules before saving them permanently. A typo can lock you out of your server !!!

Try:

# nft flush ruleset

# nft -f /etc/nftables.conf


!!! Make sure to test your ports are truly open or closed. You can use nc, telnet or tcpdump for this. !!!

Or use a screen or tmux session and set a watchdog timer (e.g., echo reboot | at now + 2 minutes) so you can recover if something goes wrong.

Conclusion

In the ever-evolving landscape of network security, relying on static firewall rules is no longer enough. Stateful filtering with nftables gives sysadmins the intelligence and flexibility needed to deal with real-world traffic — allowing good connections, rejecting bad ones, and keeping things efficient.

With just a few lines, you can build a firewall that’s not only more secure but also easier to manage and audit over time. Whether you're protecting a personal server, a VPS, or a corporate gateway, understanding ct state is a critical step in moving from "good enough" security to proactive, intelligent defense.
If you're still relying on outdated iptables chains with hundreds of line-by-line port filters, maybe it's time to embrace the modern way.
nftables isn’t just the future — it’s the present. Further on, log, monitor, and learn from your own traffic.

Start with the basics, then layer on your custom rules and monitoring, and enjoy your system services and network being a bit more secure than before.


Cheers ! 🙂

Optimizing Linux Server Performance Through Digital Minimalism and Running Services and System Cleanup

Friday, October 3rd, 2025

linux-logo-optimizing-linux-server-performance-digital-minimalism-software-cleanup

In today’s landscape of bloated software stacks, automated dependency chains, and background services that consume memory and CPU without notice, Linux system administrators and enthusiasts alike benefit greatly from embracing digital minimalism: reducing what is set up on the server to the absolute minimum.

Digital minimalism in the context of Linux servers means removing what you don't need, disabling what you don't use, and optimizing what remains — all with the goal of increasing performance, improving security, and simplifying further maintenance.
In this article, we’ll walk through practical steps to declutter your Linux server, optimize resources, and regain control over what’s running and why.

1. Identify and Remove Unnecessary Packages

Over time, many systems accumulate unused packages — either from experiments, dependency installations, or unnecessary defaults.

On Debian/Ubuntu

Find orphaned packages:
 

# apt autoremove --dry-run


Remove unnecessary packages:
 

# apt autoremove
# apt purge <package-name>


List large installed packages:

# dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -n 20


On RHEL/CentOS/AlmaLinux:

Remove orphaned packages:

# dnf autoremove

List packages sorted by size:

# rpm -qa --qf '%{SIZE}\t%{NAME}\n' | sort -n | tail -n 20


2. Audit and Disable Unused Services
 

Every running service consumes memory, CPU cycles, and opens potential attack surfaces.

List enabled services:
 

# systemctl list-unit-files --type=service --state=enabled

See currently running services:

# systemctl --type=service --state=running

Put in some good effort to review these and disable everything unnecessary.

 

Disable unneeded services :

# systemctl disable --now bluetooth.service
# systemctl disable --now cups.service
# systemctl disable --now ModemManager.service

And so on

Useful services to disable (if unused):
 

Service             Purpose             When to Disable
cups.service        Printer daemon      On headless servers
bluetooth.service   Bluetooth stack     On servers without Bluetooth
avahi-daemon        mDNS/Zeroconf       Not needed on most servers
ModemManager        Modem management    If not using 3G/4G cards
NetworkManager      Dynamic net config  Prefer systemd-networkd for static setups


Simple Shell Script to List & Review Services
 

#!/bin/bash
echo "Enabled services:"
systemctl list-unit-files --state=enabled | grep service
echo ""
echo "Running services:"
systemctl --type=service --state=running

3. Optimize Startup and Boot Time

Analyze system boot performance:

# systemd-analyze

View which services take the longest:

# systemd-analyze blame
min 25.852s certbot.service
5min 20.466s logrotate.service
1min 29.748s plocate-updatedb.service
54.595s php5.6-fpm.service
43.445s systemd-logind.service
42.837s e2scrub_reap.service
37.915s apt-daily.service
35.604s mariadb.service
31.509s man-db.service
27.405s systemd-journal-flush.service
18.357s ifupdown-pre.service
14.672s dev-xvda2.device
13.523s rc-local.service
11.024s dpkg-db-backup.service
9.871s systemd-sysusers.service
...

 

Disable or mask long-running services that are not essential.


Why is masking services important?


Simply because after certain updates an unwanted service daemon may get re-enabled and start up again at system boot; masking prevents it from being started at all.

Example:
 

# systemctl mask lvm2-monitor.service


4. Reduce Memory Usage (Especially on Low-RAM VPS)
 

Monitor memory usage:

# free -h
# top
# htop

Use lightweight alternatives:

Service       Heavy        Lightweight Alternative
Web server    Apache       Nginx / Caddy / Lighttpd
Database      MySQL        MariaDB / SQLite (if local)
Syslog        rsyslog      busybox syslog / systemd journal
Shell         bash         dash / ash
File manager  GNOME Files  mc / ranger (CLI)


5. Configure Swap (Only If Needed)
 

Having too much or too little swap can affect performance.


Check if swap is active:

# swapon --show


Create swap file (if needed):

# fallocate -l 1G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile

Add to /etc/fstab for persistence:

/swapfile none swap sw 0 0

6. Clean Up Cron Jobs and Timers

Old scheduled tasks can silently run in the background and consume resources.

List user cron jobs:

crontab -l

Check system-wide cron jobs:

# ls /etc/cron.*
# ls -al /var/spool/cron/*


List systemd timers:

# systemctl list-timers


Disable any unneeded timers or outdated cron entries.

7. Optimize Logging and Log Rotation

Logs are essential but can grow large and fill up disk space quickly.

Check log size:

# du -sh /var/log/*

Force logrotate:
 

# logrotate -f /etc/logrotate.conf

Edit /etc/logrotate.conf or specific files in /etc/logrotate.d/* to reduce retention if needed.
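
For example, a hypothetical drop-in file /etc/logrotate.d/myapp (the log name and retention values are placeholders, adjust to your needs) could look like this:

```
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
```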

8. Check for Zombie Processes and Old Users

Old user accounts and zombie processes can indicate neglected cleanup, or that the server has been compromised (hacked).

List users:
 

cat /etc/passwd | cut -d: -f1
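
Listing every account this way includes dozens of system users; to focus on regular human accounts, filter on the UID field instead, assuming the common convention that regular accounts start at UID 1000:

```shell
# Print account names with UID >= 1000, skipping the catch-all "nobody" user
awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd
```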

Remove unused accounts:
 

# userdel -r username


Check for zombie processes:
 

# ps aux | awk '{ if ($8 == "Z") print $0; }'

9. Disable IPv6 (if not used)

IPv6 can add unnecessary complexity and attack surface if you’re not using it.

To disable IPv6 temporarily:

# sysctl -w net.ipv6.conf.all.disable_ipv6=1
# sysctl -w net.ipv6.conf.default.disable_ipv6=1

To disable permanently, add to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1


10. Final Thoughts: Less Is More

Digital minimalism is not just a personal tech trend — it's a philosophy of clarity, performance, and security. Every running process is a potential vulnerability. Every megabyte of RAM consumed by a useless service is wasted capacity. Every package installed increases the system’s complexity.

By regularly auditing, pruning, and simplifying your Linux server, you not only improve its performance and reliability, but you also reduce future maintenance headaches.

Minimalism = Maintainability.

How to Run Your Own Windows Domain Authentication on Linux

Thursday, October 2nd, 2025

samba-active-directory-win-tux-logo

 

Run Your Own Domain Authentication on Linux

Running your own domain authentication system on Linux can significantly enhance security and manageability in your IT environment. Whether you're setting up centralized login for a small network or a more complex domain environment, Linux provides powerful tools to become your own domain controller using open-source software.

In this guide, we’ll walk you through setting up Samba as an Active Directory (AD) Domain Controller on a Linux server.
This tutorial should work fine on Debian 12 (Bookworm), and with minor modifications on most recent Debian and Debian-based distros.

What is Domain Authentication?

Domain authentication allows users to log in to any authorized machine within a network using the same set of credentials. It provides centralized management of:

  • Users and groups
  • Computer accounts
  • Group policies
  • File and printer sharing
  • Access control

Microsoft's Active Directory is the most well-known implementation, but you can achieve similar functionality using Samba on Linux.

Pre-requirements

  • A fresh Linux installation (Ubuntu Server 22.04 LTS or Debian 12 recommended)
  • Static IP address
  • Root or sudo access
  • Domain name (e.g., mydomain.local)
 

1. Update System and Set proper Hostname

# apt update && apt upgrade -y

# hostnamectl set-hostname dc1.mydomain.local


Add the hostname to /etc/hosts:

# vim /etc/hosts

Add the local network IP the SMB Domain controller will have locally on the machine:

192.168.1.100  dc1.mydomain.local dc1

 

2. Install Samba and Required Packages

# apt install samba krb5-config krb5-user winbind smbclient dnsutils -y

During the installation, you may be prompted for Kerberos configuration:

  • Default realm: MYDOMAIN.LOCAL
  • KDC: dc1.mydomain.local
  • Admin server: dc1.mydomain.local


samba-active-directory-raw-illustration

 

3. Provision Samba as a Domain Controller

First, stop any running Samba services:
 

# systemctl stop smbd nmbd winbind

# systemctl disable smbd nmbd winbind

Move default config:

# mv /etc/samba/smb.conf /etc/samba/smb.conf.bak

Now provision the domain:

# samba-tool domain provision --use-rfc2307 --interactive

Answer prompts:

  • Realm: MYDOMAIN.LOCAL
  • Domain: MYDOMAIN
  • Server role: dc
  • DNS backend: SAMBA_INTERNAL
  • Admin password: (choose a strong one)

Once done, configure Kerberos using the samba krb5.conf template file:

# mv /etc/krb5.conf /etc/krb5.conf.bak

# cp /var/lib/samba/private/krb5.conf /etc/

 

4. Start and Enable Samba AD Services

# systemctl unmask samba-ad-dc

# systemctl enable samba-ad-dc --now

Verify it’s working by running:

# samba-tool domain level show

Check Kerberos authentication is OK:

# kinit administrator

# klist

You should see a valid Kerberos ticket.

5. Configure DNS (Optional but Recommended)

If using SAMBA_INTERNAL DNS backend:

Check DNS resolution is OK:

# host -t A dc1.mydomain.local

# host -t SRV _kerberos._udp.mydomain.local

If you want clients to resolve domain names, configure them to use the Samba DC's IP as their DNS server.

6. Add Users and Join Client Machines

Add a new user:

# samba-tool user add your.samba.user

Join a Windows client:

  1. Go to System Properties → Computer Name → Change settings
  2. Click Domain, enter MYDOMAIN
  3. Authenticate with Administrator and the password you set
  4. Reboot

7. Managing the Domain

You can manage users, groups, and policies via the command line, a GUI, or LDAP tools:

  • samba-tool (CLI)
  • RSAT tools on Windows (for GUI management)
  • via LDAP tools (if you have to stick to RFC2307)

Example commands:

# samba-tool user list

# samba-tool group list

# samba-tool user setpassword your.samba.user

8. Managing Samba AD Samba Linux Domain easily with UI
 

You can manage a Samba domain (especially when it's running as an Active Directory Domain Controller) via a web interface — but not directly through Samba itself, since it doesn't come with a built-in web UI.

Instead, you can integrate Samba with third-party web-based tools that provide management interfaces for:

  • Users and groups
  • Computer accounts
  • LDAP directory entries
  • Domain policies (to a limited extent)

Popular Web Interfaces to Manage a Samba Domain

Here are the most reliable options:

8.1. Cockpit + 389 Directory Server or FreeIPA (for LDAP-based domains)

  • Cockpit is a modern web admin interface for Linux servers.
  • When paired with FreeIPA, you can manage users, groups, policies, and more.
  • However, this is more suited for FreeIPA-based domains, not Samba AD.

✅ Great for: Linux-native domains
❌ Not compatible with Windows-style Samba AD

 

8.2. LDAP Account Manager (LAM) – RECOMMENDED FOR SAMBA + AD

Website: https://www.ldap-account-manager.org/

LDAP Account Manager (LAM) is one of the best tools to manage a Samba domain via LDAP, especially when:

  • You use Samba in AD DC mode with RFC2307 extensions (for Unix attributes)
  • Or, you're using Samba as a member server with an external LDAP backend

Features:

  • Web-based GUI to manage:
    • Users and groups
    • Samba-specific attributes (like SID, RID, home directories)
    • POSIX and Windows-compatible accounts
  • Can bind directly to the Samba LDAP directory

Authentication: Admin binds via LDAP (either over plain or TLS)

✅ Works with Samba AD (with some config)
✅ Handles Samba3/4 user schemas
✅ Active development and documentation

 

8.3. Samba Web Administration Tool (SWAT) ❌ Deprecated

SWAT was the original web interface for Samba but:

  • It was deprecated and removed from Samba after version 4.1
  • It's no longer secure or maintained
  • Not suitable for Samba AD DC environments

Recommendation: Do not use SWAT

8.4. Webmin (Partial Support)

  • Webmin is a general Linux web admin tool
  • It has a Samba module, but:
    • Designed for traditional Samba file sharing (not AD/DC mode)
    • Cannot manage Samba AD users/groups
    • Doesn’t interact with samba-tool or the AD schema

✅ Works for standalone Samba file servers
❌ Not suitable for Samba AD DCs

Can You Really Use RSAT Instead?

If you want full Active Directory-style control (like Group Policy, OU structure, DNS, etc.), the best GUI tool is actually RSAT (Remote Server Administration Tools) on Windows, but for that you will of course need a dedicated Windows machine.

  • Connects to your Samba AD DC
  • Fully supports:
    • Users and groups
    • Group Policy Objects (GPO)
    • DNS management (if using internal Samba DNS)

Install RSAT on a Windows machine and run dsa.msc (Active Directory Users and Computers).

✅ Officially supported
✅ Full compatibility with Samba AD
❌ Requires a Windows machine

Summary: Web UI for Samba Domain Management

 

Tool                         Works with Samba AD DC?   Features                        Notes
LDAP Account Manager (LAM)   ✅ Yes                    User/group management           Best web option
Cockpit + FreeIPA            ❌ No (not Samba AD)      Excellent for FreeIPA domains   Not compatible with Samba AD
Webmin                       ❌ Not fully              File shares only                No AD/DC management
RSAT (Windows)               ✅ Yes                    Full AD management              Not web-based

Recommendation

If you're running a Samba AD DC and want a web-based interface:

  • Use LAM (LDAP Account Manager) for basic account management
  • Use RSAT tools on Windows for full domain administration
  • Avoid SWAT and Webmin for this purpose

Security Considerations

  • Ensure the firewall allows the relevant ports (e.g., 53, 88, 389, 445, etc.), whether via iptables, firewalld, or whatever firewall solution is present on the server and in the network hosting it
  • Keep the system updated
  • Use secure passwords and rotate them regularly
  • Consider setting up replication if high availability is needed

Conclusion

Running your own domain authentication system on Linux using Samba is a powerful way to control user access in a centralized manner. It’s ideal for small to mid-sized networks, homelabs, or even enterprise environments looking for a cost-effective alternative to Windows Server.

With Samba acting as your domain controller, you can enjoy the benefits of centralized authentication, integrated DNS, and a high degree of compatibility with Windows clients — all while staying in the open-source ecosystem.

 

References

  • Samba Wiki: Setting up Samba as an AD Domain Controller
  • man samba-tool
  • man smb.conf


Notes and things to consider:

/var/lib/samba/private/krb5.conf file is generated only after you provision Samba as an Active Directory (AD) Domain Controller using:

# samba-tool domain provision

After provisioning, Samba creates a custom Kerberos config at:

/var/lib/samba/private/krb5.conf

 

This is true for both Debian and Ubuntu because it's handled by the Samba package itself, not the distro.

Why use that krb5.conf instead of Debian's default?

Well because:

The default /etc/krb5.conf on Debian isn't tailored for Samba AD.
The one Samba generates includes correct realm, KDC, and admin server settings.
It avoids subtle issues like failed kinit or broken Kerberos trust.

So you copy it over Debian’s default:

# cp /var/lib/samba/private/krb5.conf /etc/krb5.conf

Gotchas on Debian to be aware of

Do not install samba via tasksel (like tasksel's “Samba file server” role), as it sets up a traditional SMB server, not AD.

Only use samba-tool domain provision if you're setting up AD DC.

Debian sometimes separates systemd services (e.g., samba-ad-dc might not be enabled by default). So make sure to enable samba-ad-dc instead of smbd/nmbd.