Archive for the ‘Automation’ Category

Building a 10-Server FreeBSD Jail Cluster Running a LAMP (Linux / Apache / MySQL / Perl / PHP / Python) Stack

Wednesday, March 25th, 2026


Virtualization and workload isolation are foundational to modern infrastructure.
While most teams today default to container platforms like Docker and orchestration systems such as Kubernetes, an older and highly capable alternative exists in the form of jails from FreeBSD.

FreeBSD jails provide lightweight OS-level isolation, allowing multiple independent userland environments to run on a single host. Introduced long before containers became mainstream, jails were designed with a strong focus on security, simplicity, and performance.
Despite their maturity and robustness, they are less commonly used today, largely due to the rapid rise of container ecosystems and cloud-native tooling.

Choosing between jails and containers is not simply a matter of “old vs new,” but rather a trade-off between control and simplicity versus portability and ecosystem support.

Short Comparison of FreeBSD jails and Containers ( Pros and Cons )

Advantages of FreeBSD Jails

a. Strong, simple isolation

Jails provide a clear and tightly integrated security boundary within the FreeBSD kernel. Their design is straightforward, reducing the risk of misconfiguration compared to layered container security models.


b. High performance

Because jails operate very close to the base system, they deliver near-native performance with minimal overhead—especially beneficial for networking and I/O-heavy workloads.

c. Operational simplicity

There are fewer moving parts (easier to maintain and debug):

  • No separate container runtime
  • No image layers
  • No complex orchestration requirements

This makes jails appealing for stable, long-running systems.

d. Predictability and stability

FreeBSD’s conservative design philosophy results in systems that stay highly stable over long periods, which is ideal for infrastructure roles such as storage or networking.

Disadvantages of FreeBSD Jails

a. Limited portability

Not necessarily a huge disadvantage, but still: jails are tied to FreeBSD. Unlike containers, they cannot be easily moved across different operating systems or cloud platforms.


b. Smaller ecosystem

FreeBSD jails have no full equivalent of:

  • Container registries (like Docker Hub)
  • Massive orchestration ecosystems (similar things have to be done with scripts and customization)
  • Broad third-party integrations

This can slow down development and deployment workflows a bit, though for mature applications that have been well tuned for jails this is rarely a real problem.

Note that although this is a con, it can also be a pro: once you have tuned an application for jails, it becomes easier to maintain.

c. Less automation tooling

While tools exist, they are not as standardized or widely adopted as container-based CI/CD pipelines.

d. Harder to find people for it
 

Most developers and DevOps engineers are trained in container technologies, making hiring and collaboration easier in container-based environments. However, for senior hard-core sysadmins and system engineers this can also be an advantage, since not many people have in-depth insight into both FreeBSD and FreeBSD jails.

This guide walks through a practical, production-style setup: 10 FreeBSD servers, each running isolated jails that host a classic LAMP stack (Linux, here replaced by FreeBSD, Apache, MySQL/MariaDB, PHP).
Companies and individuals who choose FreeBSD jails usually aim for repeatability, clean architecture, and operational sanity, not just getting things to run once.

Architecture Overview of the Sample FreeBSD Cluster

Our Goal:

  • 10 physical or virtual servers
  • Each server runs multiple jails
  • Each jail runs a LAMP app instance
  • Load balancing across nodes (to have a High Availability Cluster like setup)

Host Setup:

  • 2 × load balancer nodes (nginx or HAProxy)
  • 6 × application nodes (Apache + PHP in jails)
  • 2 × database nodes (MariaDB primary/replica)

All systems run FreeBSD, using native jails for isolation.

1. Base FreeBSD Installation (All 10 Servers)

Install FreeBSD on each machine (minimal install is fine).

Update system:

# freebsd-update fetch install
# pkg update && pkg upgrade -y

Install base tools:

# pkg install -y sudo vim bash git

2. Install Jail Management tool (iocage)

We’ll use iocage, a modern jail manager.

# pkg install -y iocage
# sysrc iocage_enable="YES"
# service iocage start

Activate ZFS (recommended); create a pool only if one does not already exist (a ZFS-based FreeBSD install already has zroot):

# zpool create zroot /dev/da0

Initialize iocage:

# iocage activate zroot
# iocage fetch

3. Create a Reusable Jail Template

Instead of building each jail manually, create a golden template.

# iocage create -n lamp-template -r 13.2-RELEASE vnet=on ip4_addr="vnet0|10.0.0.10/24" boot=off
# iocage start lamp-template
# iocage console lamp-template

4. Install LAMP Stack Inside the Jail

Inside the jail:

4.1. Install Apache

# pkg install -y apache24
# sysrc apache24_enable="YES"

4.2. Install MariaDB

# pkg install -y mariadb106-server
# sysrc mysql_enable="YES"

Initialize DB:

service mysql-server start
mysql_secure_installation

4.3. Install PHP from pre-compiled packages

# pkg install -y php82 mod_php82 php82-mysqli php82-mbstring php82-opcache


Configure Apache to use PHP:

# echo 'LoadModule php_module libexec/apache24/libphp.so' >> /usr/local/etc/apache24/httpd.conf
# echo 'AddType application/x-httpd-php .php' >> /usr/local/etc/apache24/httpd.conf
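
The AddType line reflects the older configuration style; on current Apache 2.4 a SetHandler block plus index.php in DirectoryIndex is the more common approach, which you can append the same way. A minimal sketch:

# cat >> /usr/local/etc/apache24/httpd.conf <<'EOF'
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>
EOF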

5. Test That the LAMP Stack Works

Create a test file:

# echo "<?php phpinfo(); ?>" > /usr/local/www/apache24/data/index.php

Start services:

service apache24 start

Visit the jail’s IP in a Firefox / Chrome browser and confirm the phpinfo() page renders.

6. Convert Template into Clones

Stop Jail and snapshot:

iocage stop lamp-template
iocage snapshot lamp-template@base

Clone for production:

iocage clone lamp-template -n app01 ip4_addr="vnet0|10.0.0.21/24"
iocage clone lamp-template -n app02 ip4_addr="vnet0|10.0.0.22/24"

Repeat across servers and, once everything works, create a small shell script run from cron to automate backups; a sketch follows below.
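
A minimal sketch of such a cron-driven script, reusing the jail@snapshot form from above (the script path and naming scheme are just example assumptions):

#!/bin/sh
# /usr/local/sbin/jail-snapshots.sh - snapshot every iocage jail nightly
DATE=$(date +%Y-%m-%d)
# In the default 'iocage list' layout, column 2 is the jail name
for jail in $(iocage list -H | awk '{print $2}'); do
    iocage snapshot "${jail}@nightly-${DATE}"
done

Hook it into cron, e.g.:

# echo '0 3 * * * root /usr/local/sbin/jail-snapshots.sh' >> /etc/crontab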

Each server might run 5 to 20 jails, depending on resources.

7. Networking Between Jails

Use VNET for proper isolation:

Enable bridge on host:

# ifconfig bridge0 create
# ifconfig bridge0 addm em0 up

Assign jail interfaces automatically via iocage.
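
With iocage, the bridge attachment is a per-jail property; a minimal sketch for wiring a jail’s vnet0 into bridge0 (the IP and gateway values are examples):

# iocage set vnet=on interfaces="vnet0:bridge0" app01
# iocage set defaultrouter="10.0.0.1" app01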

8. Load Balancing Layer

On 2 dedicated nodes, install nginx:

# pkg install -y nginx
# sysrc nginx_enable="YES"

Example config:

http {
    upstream backend {
        server 10.0.0.21;
        server 10.0.0.22;
        server 10.0.1.21;
        server 10.0.1.22;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}

9. Database Strategy

You have a few options to choose from:

a. Use Centralized DB

  • Dedicated DB jails on 2 nodes
  • Primary + replica (a setup sketch follows after these options)

b. Use Per-node DB (simpler)

  • Each jail has its own MariaDB
  • Use app-level replication if needed
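
For option (a), the core of a MariaDB primary/replica pair looks roughly like this (IPs, user and password are placeholders; config file paths can differ between MariaDB versions):

# On the primary DB jail: enable binary logging and set a server ID
cat >> /usr/local/etc/mysql/conf.d/server.cnf <<'EOF'
[mysqld]
server-id = 1
log-bin   = mysql-bin
EOF
service mysql-server restart

# Still on the primary: create a replication user
mysql -e "CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY 'CHANGEME'; \
          GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';"

# On the replica (server-id = 2 in its own config): point it at the primary
mysql -e "CHANGE MASTER TO MASTER_HOST='10.0.0.31', MASTER_USER='repl', \
          MASTER_PASSWORD='CHANGEME', MASTER_USE_GTID=slave_pos; START SLAVE;"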

10. Automation Across 10 Servers

Use tools like:

  • Ansible
  • SSH scripts
  • ZFS replication

Example (a simple loop run in parallel), or use a set of scripts together with some Ansible playbooks or Puppet to handle updating:

# for host in server{1..10}; do
    ssh "$host" "pkg update && pkg upgrade -y" &
done; wait

A Few More Operational Tips to Consider

a. Tune the setup / Do resource management

  • Limit jail CPU/memory using rctl
  • Avoid overcommitting RAM

b. Use Centralized Logging

c. Do regular jail Backups

  • Use ZFS snapshots to back up each of the jails:

# zfs snapshot zroot/iocage/jails/app01@backup
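
Snapshots on the same pool don’t protect against losing the host; to ship them elsewhere, ZFS send/receive works well. A sketch, assuming a backup host named backup01 with a pool called backup:

# zfs send zroot/iocage/jails/app01@backup | ssh backup01 zfs receive -F backup/jails/app01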

d. Tighten Security

  • Disable root SSH
  • Use PF firewall on host
  • Keep jails minimal

e. Do a Further Scaling Strategy

  • Add more servers -> replicate template
  • Add more jails -> clone snapshots
  • Scale horizontally via load balancer

Summary and Last Thoughts

When to Choose FreeBSD Jails and When Containers

  • Use jails when you control the infrastructure, need maximum efficiency, and value simplicity (e.g., appliances, CDNs, storage systems).
  • Use containers when portability, scalability, and integration with modern DevOps workflows are critical.

This setup plays to the strengths of FreeBSD jails:

1. Performance: near-native speed
2. Isolation: strong and predictable
3. Simplicity: fewer layers than container stacks

FreeBSD jails remain a powerful and efficient isolation mechanism, particularly well-suited for controlled, performance-sensitive environments. Containers, however, dominate in modern application deployment due to their flexibility and ecosystem. The choice ultimately depends on whether you prioritize system-level control or platform-level convenience.

You won’t get the ecosystem of tools like Docker or Kubernetes, but you gain control, stability, and efficiency, which is exactly why companies like Netflix still rely on this model in critical infrastructure.

 

Build a Central Linux Logging Server to Collect, Store, and Visualize All Infrastructure Node Logs

Friday, March 20th, 2026

If you manage multiple servers or a collection of services on many nodes within a company infrastructure, you know the pain of dealing with logs scattered across systems; hunting them down eats up a lot of time and drains energy.
One server shows nothing, another rotated its logs yesterday, and your app logs are buried somewhere in /var/log/app.

A central logging server solves this problem: all logs are collected, stored, and accessible in one single place.

This article briefly presents how to build one using the ELK Stack + Beats (lightweight agents) on a Linux server.

1. Architecture Overview

Here is what the typical flow looks like:

[ Servers / Apps ] -> [ Filebeat / Metricbeat ] -> [ Logstash ] -> [ Elasticsearch ] -> [ Kibana / Grafana (Visualization) ]

  • Beats → Lightweight log shippers installed on all machines.
  • Logstash → Optional pipeline for parsing, filtering, and enriching logs.
  • Elasticsearch → Storage and search engine.
  • Kibana / Grafana → Visualization dashboards.

2. Prepare Your Central Logging Server

Requirements:

  • Debian 12 (recommended), Ubuntu, Fedora, or RHEL
  • At least 4 GB RAM (8+ GB for production ELK)
  • Plan enough SSD storage (logs grow fast)
  • Open ports: 5044 for Beats, 9200 for Elasticsearch, 5601 for Kibana

Install Prerequisites

# apt update && apt install openjdk-17-jdk wget curl apt-transport-https -y

ELK requires Java; OpenJDK 17 should work fine.

3. Install Elasticsearch

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.1-amd64.deb
# dpkg -i elasticsearch-8.11.1-amd64.deb
# systemctl enable elasticsearch
# systemctl start elasticsearch


Check that the Elasticsearch server is running:

# curl -X GET "localhost:9200/"

You should see the cluster info in JSON format. Note that Elasticsearch 8.x ships with security (TLS + authentication) enabled by default, so you may need something like curl -k -u elastic https://localhost:9200/ instead.

4. Install Kibana

# wget https://artifacts.elastic.co/downloads/kibana/kibana-8.11.1-amd64.deb
# dpkg -i kibana-8.11.1-amd64.deb
# systemctl enable kibana
# systemctl start kibana


Access Kibana URL in browser:

http://<server-ip>:5601

5. Install Logstash to Process Logs Before Sending Them to Elasticsearch

# wget https://artifacts.elastic.co/downloads/logstash/logstash-8.11.1.deb
# dpkg -i logstash-8.11.1.deb
# systemctl enable logstash
# systemctl start logstash

Logstash allows filtering and structuring logs before sending them to Elasticsearch. Example simple pipeline:

# vim /etc/logstash/conf.d/syslog.conf

input {
  beats {
    port => 5044
  }
}
filter {
  grok { match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}: %{GREEDYDATA:message}" } }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "central-logs-%{+YYYY.MM.dd}"
  }
}
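
Before restarting, you can have Logstash validate the pipeline syntax (--config.test_and_exit is a stock Logstash flag):

# /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/syslog.conf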

Start Logstash

# systemctl restart logstash

6. Install Beats on Client Machines

On each server you want to monitor:

# apt install filebeat metricbeat -y


Configure Filebeat

Edit config

# vim  /etc/filebeat/filebeat.yml

Set the output to your central server:

output.logstash:
  hosts: ["<server-ip>:5044"]

Start the agent:

systemctl enable filebeat
systemctl start filebeat

Do the same for Metricbeat if you want metrics like CPU, memory, disk.
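
Filebeat also ships with ready-made modules for common log formats; for example, to pick up syslog and auth logs via the system module:

# filebeat modules enable system
# systemctl restart filebeat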

7. Create Dashboards in Kibana or Grafana

  • In Kibana, use Discover to view logs.
  • Create visualizations for errors, warnings, top endpoints, etc.
  • Use Grafana if you want multi-source dashboards, combining logs and metrics.

8. Optional: Secure Your Logging Server

  • Enable TLS/SSL in Beats and Elasticsearch.
  • Use firewall rules to restrict access.
  • Create dedicated users in Elasticsearch for log access.

9. Maintenance Tips

  • Index Lifecycle Management → Rotate daily and delete old logs automatically (example after this list).
  • Monitor disk usage → Logs grow fast. SSDs are better.
  • Filter noise → Don’t ship debug logs unless needed.
  • Backup Elasticsearch → Especially if logs are critical.
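
As an illustration of the ILM point above, a minimal policy that rolls indices daily and deletes them after 30 days (the policy name central-logs-policy is just an example; rollover additionally needs an index template/alias, so treat this as a starting point):

# curl -X PUT "localhost:9200/_ilm/policy/central-logs-policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_age": "1d" } } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}'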

Summing Up: How It Works

  • All logs are centralized → easier troubleshooting.
  • Scalable → add new servers, Beats handle shipping automatically.
  • Searchable → find errors instantly using Elasticsearch.
  • Visual → dashboards in Kibana/Grafana give real-time insight.

How to Easily Integrate AI Into Bash on Linux with ollama

Monday, January 12th, 2026


AI is entering the computing scene more and more, including the realm of computer and network management, and many proficient admins are already taking advantage of it.
AI doesn’t need a GUI, a special cloud dashboard, or a fancy IDE, so for console geeks (sysadmins, system engineers, and DevOps) it can be integrated straight into bash / zsh and used to simplify your daily sysadmin tasks.

If you live in the terminal, the most powerful place to add AI is Bash itself. With a few tools and a couple of lines of shell code, you can turn your terminal into an AI-powered assistant that writes commands, explains errors, and helps automate everyday Linux tasks.

No magic. No bloat. Just Unix philosophy with a brain.

1. AI as a Command-Line Tool

Instead of treating AI like a chatbot, treat it like any other CLI utility:

  • stdin → prompt
  • stdout → response
  • pipes → integration

2. Hardware Requirements of an Ollama Server

a. Minimum VM (It runs, but you’ll hate it)

Only use this to test that things work.

  • vCPU: 2
  • RAM: 4 GB
  • Disk: 20 GB (SSD mandatory)
  • Storage type: local SSD (HDD / network storage = bad)
  • Models: phi3, tinyllama

Expect:

  • Very slow pulls
  • Long startup times
  • Laggy responses

b. Recommended VM (Actually usable)

This is the sweet spot for Bash integration and daily CLI use.

  • vCPU: 4–6 (modern host CPU)
  • RAM: 8–12 GB
  • Disk: 30–50 GB local SSD
  • CPU type: host-passthrough (important!)
  • NUMA: off (for small VMs)

Models that feel okay:

  • phi3
  • mistral
  • llama3:8b (slow but tolerable)
 

c. “Feels Good” VM (CPU-only but not painful)

If you want it to feel responsive.

  • vCPU: 8
  • RAM: 16 GB
  • Disk: NVMe-backed storage
  • CPU flags: AVX2 enabled
  • Hugepages: optional but nice

Models:

  • llama3:8b
  • codellama:7b
  • mixtral (slow, but usable)
 

Hypervisor-Specific Advice (Important)

a. KVM / Proxmox (Best choice)

  • CPU type: host
  • Enable AES-NI, AVX, AVX2
  • Use virtio-scsi
  • Cache: writeback
  • IO thread: enabled

b. If running the VM on a VMware platform

  • Enable Expose hardware-assisted virtualization
  • Use paravirtual SCSI
  • Reserve memory if possible

c. VirtualBox (Not recommended)

  • Poor CPU feature exposure
  • Weak IO performance

Avoid it if you can.

Local AI With Ollama (Recommended)

If you want privacy, low latency, and no API keys, Ollama is currently the easiest way to run LLMs locally on Linux.

3. Install Ollama

# curl -fsSL https://ollama.com/install.sh | sh


Start the service:

# ollama serve

Pull a model (lightweight and fast):

# ollama pull llama3

Test it:

ollama run llama3 "Explain what the ls command does"

If that works, you’re ready to integrate.

To check whether all is properly setup after installed llama3:

root@haproxy2:~# ollama list
NAME             ID              SIZE      MODIFIED
llama3:latest    365c0bd3c000    4.7 GB    About an hour ago

root@haproxy2:~# systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-01-12 16:43:30 EET; 15min ago
   Main PID: 37436 (ollama)
      Tasks: 16 (limit: 6999)
     Memory: 5.0G
        CPU: 13min 5.264s
     CGroup: /system.slice/ollama.service
             ├─37436 /usr/local/bin/ollama serve
             └─37472 /usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53>

яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: Flash Attention was auto, set to enabled
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context:        CPU compute buffer size =   258.50 MiB
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph nodes  = 999
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph splits = 1
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.959+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.989+02:00 level=INFO source=sched.go:517 msg="loaded runners>
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.000+02:00 level=INFO source=server.go:1338 msg="waiting for >
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.001+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |    3.669915ms |       127.0.0.1 | HEAD     "/"
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |   42.244006ms |       127.0.0.1 | GET      "/api/>
root@haproxy2:~#

4. Turning AI Into a Bash Command

Let’s create a simple AI helper command called ai.

Add this to your ~/.bashrc or ~/.bash_aliases:

ai() {
  ollama run llama3 "$*"
}

Reload your shell:

$ source ~/.bashrc


Now you can do things like:

$ ai "Write a bash command to find large files"

 

$ ai "Explain this error: permission denied"

$ ai "Convert this sed command to awk"


At this point, AI is already a first-class CLI tool.

5. Using AI to Generate Bash Commands

One of the most useful patterns is asking AI to output only shell commands.

Example:

$ ai "Give me a bash command to recursively find .log files larger than 100MB. Output only the command."

Copy, paste, done.

You can even enforce this behavior with a wrapper:

aicmd() {
  ollama run llama3 "Output ONLY a valid bash command. No explanation. Task: $*"
}

Now:

$ aicmd "list running processes using more than 1GB RAM"

Danger note: always read commands before running them. AI is smart, not trustworthy.
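If you want a safety net on top of aicmd, here is a sketch of a wrapper that shows the generated command and asks for confirmation before executing it (the name airun is just an example):

airun() {
  local cmd
  cmd=$(ollama run llama3 "Output ONLY a valid bash command. No explanation. Task: $*")
  printf 'Generated command:\n  %s\n' "$cmd"
  read -r -p "Run it? [y/N] " answer
  [ "$answer" = "y" ] && eval "$cmd"
}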

6. AI for Explaining Commands and Logs

This is where AI shines.

Pipe output directly into it:

$ dmesg | tail -n 50 | ai "Explain what is happening here"

Or errors:

$ make 2>&1 | ai "Explain this error and suggest a fix"

You’ve just built a terminal-native debugger.

7. Smarter Bash History Search

You can even use AI to interpret your intent instead of remembering exact commands:

aih() {
  history | ai "From this bash history, find the best command for: $*"
}

Example:

$ aih "compress a directory into tar.gz"

It’s like Ctrl+R, but semantic.

8. Using AI With Shell Scripts

AI can help generate scripts inline:

$ ai "Write a bash script that monitors disk usage and sends a notification when it exceeds 90%"

You’re not replacing scripting skills – you’re accelerating them.

Think of AI as:

  • a junior sysadmin
  • a documentation search engine
  • a rubber duck that talks back

9. Where Ollama Stores Its Data

Depending on how it runs:

System service (most common)

Models live here:

/usr/share/ollama/.ollama/

Inside:

models/
blobs/

User-only install

~/.ollama/

Since we installed as root with systemd (via the install script), use the first path.

See What’s Taking Space

# du -sh /usr/share/ollama/.ollama/*

Typical output:

  • models/ → metadata
  • blobs/ → the big files (GBs)

10. Remove Unused Models (Safe Way)

List models Ollama knows about:

# ollama list

Remove a model properly:

# ollama rm llama3

This removes metadata and unreferenced blobs.

Always try this first.

11. Full Manual Cleanup (Hard Reset)

If things are broken, stuck, or you want a clean slate:

Stop Ollama

# systemctl stop ollama

Delete all local models and cache

# rm -rf /usr/share/ollama/.ollama/models

# rm -rf /usr/share/ollama/.ollama/blobs

(Optional but safe)

# rm -rf /usr/share/ollama/.ollama/tmp

Start Ollama again

# systemctl start ollama

Ollama will recreate everything automatically.

Verify Cleanup Worked

# du -sh /usr/share/ollama/.ollama

# ollama list

You should see:

  • Very small disk usage
  • Empty model list

Prevent Disk Bloat (Highly Recommended)

Only pull small models on VMs

Stick to:

  • phi3
  • mistral
  • tinyllama

Remove models you don’t use

# ollama rm modelname

Set a custom data directory (optional)

If /usr is small, move Ollama data:

# systemctl stop ollama

# mkdir -p /opt/ollama-data

# chown -R ollama:ollama /opt/ollama-data

Edit service:

# systemctl edit ollama

Add the following (OLLAMA_MODELS is the environment variable Ollama reads for its model storage location):

[Service]

Environment=OLLAMA_MODELS=/opt/ollama-data/models

Then:

# systemctl daemon-reload

# systemctl start ollama

Quick “Nuke It” One-Liner (Use With Care)

Deletes everything Ollama-related:

# systemctl stop ollama && rm -rf /usr/share/ollama/.ollama && systemctl start ollama

API-Based Option (Cloud Models)

If you prefer cloud models (OpenAI, Anthropic, etc.), the pattern is identical:

  • Use curl
  • Pass prompt
  • Parse output

Once AI returns text to stdout, Bash doesn’t care where it came from.

12. Best Practices for Using Shell AI Without Overloading Your Machine

Before you go wild:

  • Don’t auto-execute AI output
  • Don’t run AI as root
  • Treat responses as suggestions
  • Version-control important scripts
  • Keep prompts specific
     

AI is powerful — but Linux still assumes you know what you’re doing.

Sum it up

Adding AI to Bash isn’t about replacing skills.
It’s about removing friction.

When AI is easy to use from the command line, it is a great convenience for those who don't want to keep switching to a browser and copy-pasting like crazy.
AI as a command-line tool fits perfectly into the Linux console:

  • composable
  • scriptable
  • optional
  • powerful

Once you’ve used AI from inside your shell for a few hours, going back to browser-based AI chat stuff like querying ChatGPT feels… slow and inefficient.
However, keep in mind that ollama is far from perfect and has plenty of downsides; ChatGPT / Grok / DeepSeek may often give you better results. But because ollama is isolated and does not depend on external services, your private queries will not end up in a public AI history database and you won't be tracked.
So everything has its pros and cons. I'm pretty sure this tool, and free AI tools like it, have a good future ahead and will be heavily used by system admins and programmers.
The terminal just got smarter. And it didn’t need a GUI to do it.

Digital Vigilance: Practical Cyber Defense for the New Era of All-Connected Dependency on Technology

Friday, October 24th, 2025

 

Introduction

There was a time when cybersecurity was mostly about erecting a firewall, installing antivirus software and hoping no one clicked a suspicious link. That era is steadily fading. Today, as more work moves to the cloud, as AI tools proliferate, and as threat actors adopt business-like models, the battlefield has shifted dramatically. According to analysts at Gartner, 2025 brings some of the most significant inflections in cybersecurity in recent memory. 

In this article we’ll cover the major trends, why they matter, and — importantly — what you as an individual or sysadmin can start doing today to stay ahead.

1. Generative AI: Weapon and Shield

AI / ML (Machine Learning) is now deeply ingrained on both the offence and defence sides of cybersecurity.

  • On the defence side: AI models help detect anomalies, process huge volumes of logs, and automate responses. 
  • On the offence side: Attackers use AI to craft more convincing phishing campaigns, automate vulnerability discovery, generate fake identities or even design malware. 
  • Data types are changing: It’s no longer just databases and spreadsheets. Unstructured data (images, video, text) used by AI models is now a primary risk.

What to do:

  • Make sure any sensitive AI-training data or inference logs are stored securely.
  • Build anomaly-detection systems that don’t assume “normal” traffic anymore.
  • Flag when your organisation uses AI tools: do you know what data the tool uses, where it stores it, who can access it?

2. Zero Trust Isn’t Optional Anymore

 


The old model — trust everything inside the perimeter, block everything outside — is obsolete. Distributed workforces, cloud services, edge devices: they all blur the perimeter. Hence the rise of Zero Trust Architecture (ZTA) — “never trust, always verify” (INE).

Key features:

  • Every device, every user, every session must be authenticated and authorised.
  • Least-privilege access: users should have the minimum permissions needed.
  • Micro-segmentation: limit lateral movement in networks.
  • Real-time monitoring and visibility of sessions and devices.

What to do:

  • Audit your devices and users: who has broad permissions? Who accesses critical systems? (a quick audit sketch follows after this list)
  • Implement multifactor authentication (MFA) everywhere you can.
  • Review network segmentation: can a compromised device access everything? If yes, that’s a red flag.
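
As a quick starting point on a Linux host, a minimal audit sketch (admin group names vary by distro: sudo on Debian, wheel on RHEL):

# List accounts with UID 0 (there should be exactly one: root)
awk -F: '$3 == 0 {print $1}' /etc/passwd

# List members of the usual admin groups
getent group sudo wheel

# See which accounts have a real login shell
awk -F: '$7 !~ /(nologin|false)$/ {print $1, $7}' /etc/passwd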
     

3. Ransomware & RaaS – The Business Model of Cybercrime

Cybercriminals are organizing like businesses: they have supply chains, service models, profit centres. The trend of Ransomware-as-a-Service (RaaS) continues to expand (Dataconomy).

What’s changed:

  • Ransomware doesn’t just encrypt data. Attackers often steal data first, then threaten to release it. 
  • Attackers are picking higher-value targets and critical infrastructure.
  • The attack surface has exploded: IoT devices, cloud mis-configurations, unmanaged identity & access.

What to do:

  • Back up your critical systems regularly — test restores, not just backups.
  • Keep systems patched (though even fully patched systems can be attacked, so patching is necessary but not sufficient).
  • Monitor for abnormal behaviour: large data exfiltration, new admin accounts, sudden access from odd places.
  • Implement strong incident response procedures: when it happens, how do you contain it?

4. Supply Chains, IoT & Machine Identities

Modern IT is no longer just endpoints and servers. We have IoT devices, embedded systems, cloud services, machine-to-machine identities. According to Gartner, machine identities are expanding attack surfaces if unmanaged.

Key issues:

  • Devices (especially IoT) often ship with weak/default credentials.
  • Machine identities: software services, APIs, automation tools need their own identity/access management.
  • Supply chains: your vendor might be the weakest link — compromise of software or hardware upstream affects you.

What to do:

  • Create an inventory of all devices and services — yes all.
  • Enforce device onboarding processes: credentials changed, firmware up-to-date, network segmented.
  • Review your vendors: what security standards do they follow? Do they give you visibility into their supply chain risk?
     

5. Cloud & Data Privacy — New Rules, New Risks

As data moves into the cloud and into AI systems, the regulatory and technical risks converge. For example, new laws like the EU AI Act will start affecting how organisations handle AI usage and data (source: Gcore).
Cloud environments also bring misconfigurations, improper access controls, shadow IT and uncontrolled data sprawl (techresearchs.com).

What to do:
 

  • If using cloud services, check settings for major risk zones (e.g., S3 buckets, unsecured APIs).
  • Implement strong Identity & Access Management (IAM) controls for cloud resources.
  • Make data-privacy part of your security plan: what data you collect, where it is stored, for how long.
  • Perform periodic audits and compliance checks especially if you handle users from different jurisdictions.
     

6. Skills, Culture & Burn-out — The Human Factor

Often overlooked: no matter how good your tech is, people and culture matter. According to Gartner, security behaviour programs help reduce human-error incidents — and they’re becoming more essential.
Also, the cybersecurity talent shortage and burnout among security teams is real.

What to do:

 

  • Invest in security awareness training: phishing simulation, strong password practices, device hygiene.
  • Foster a culture where security is everyone’s responsibility, not just the “IT team’s problem.”
  • For small teams: consider managed security services or cloud-based monitoring to lean on external support.

7. What This Means for Smaller Organisations & Individual Users

Often the big reports focus on enterprises. But smaller organisations (and individual users) are just as vulnerable — sometimes more so, because they have fewer resources and less mature security.
Here are some concrete actions:

  • Use strong, unique passwords and a password manager.
  • Enable MFA everywhere (email, online services, VPNs).
  • Keep your systems updated — OS, applications, firmware.
  • Be suspicious of unexpected communications (phishing).
  • Have an incident response plan: who do you call if things go wrong?
  • Backup your data offline and test restores.
  • If you run services (web-server, mail server): monitor logs, check for new accounts, stray network connections.
     

Conclusion

Cybersecurity in 2025 is not a “set once and forget” system. It’s dynamic, multi-layered and deeply integrated into business functions and personal habits. The trends above — generative AI, zero trust, supply chain risks, cloud data sprawl — are changing the rules of the game.
Thus, for all of us, and especially sysadmins / system engineers, Site Reliability Engineers (SRE), developers, and testers, this means we need to keep learning, be careful with the technology we use, and build security as a continuous practice rather than a one-off box to tick.

 

Deploying a Puppet Server and Patching multiple Debian Linux Servers

Tuesday, June 17th, 2025


 

Puppet Overview

Puppet is a powerful automation tool used to manage configurations and automate administrative tasks across multiple systems. This guide walks you through installing a Puppet server (master) and configuring 10 Debian-based client servers (agents) to automatically install system updates (patches) using Puppet.

Table of Contents

  1. Prerequisites

  2. Install Puppet Server on Debian

  3. Configure the Puppet Server

  4. Install Puppet Agent on 10 Debian Clients

  5. Sign Agent Certificates on the Puppet Server

  6. Create a Puppet Module for Patching

  7. Assign the Module and Trigger Updates

  8. Check That puppetserver and the Puppet Agent Are Running Fine

  9. Use facter to Extract Interesting Information from the Managed Systems

  10. Conclusion


1. Prerequisites

  • Debian server to act as the Puppet master (e.g., Debian 11)
     
  • Debian servers as Puppet agents (clients)
     
  • Root or sudo access on all systems
     
  • Static IPs or properly configured hostnames
     
  • Network connectivity between master and agents
     

2. Install Puppet Server on Debian

a. Add the Puppet APT repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb
# apt update

b. Install Puppet Server

# apt install puppetserver -y

c. Configure JVM memory (optional but recommended)

Edit /etc/default/puppetserver:  

JAVA_ARGS="-Xms512m -Xmx1g"


d. Enable and start the Puppet Server

# systemctl enable puppetserver 
# systemctl start puppetserver


3. Configure the Puppet Server

a. Set the hostname
 

# hostnamectl set-hostname puppet.pc-freak.net


Update /etc/hosts with your server’s IP and FQDN if DNS is not configured:  

192.168.1.10 puppet.pc-freak.net puppet

 

b. Configure Puppet

Edit /etc/puppetlabs/puppet/puppet.conf:

[main]
certname = puppet.pc-freak.net
server = puppet.pc-freak.net
environment = production
runinterval = 1h

 

Restart Puppet server:

# systemctl restart puppetserver

4. Install Puppet Agent on 10 Debian Clients

Repeat this section on each client server (Debian 10/11).

a. Add the Puppet repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb 
# apt update


b. Install the Puppet agent

# apt install puppet-agent -y


c. Configure the agent to point to the master

# /opt/puppetlabs/bin/puppet config set server puppet.pc-freak.net --section main


d. Start the agent to request a certificate

# /opt/puppetlabs/bin/puppet agent --test
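
To avoid doing steps a-d by hand on all 10 clients, you can push them out over SSH; a sketch assuming the clients resolve as client1..client10 and you have root SSH access:

# for host in client{1..10}; do
    ssh "$host" 'wget -q https://apt.puppet.com/puppet7-release-bullseye.deb && \
      dpkg -i puppet7-release-bullseye.deb && apt update && \
      apt install -y puppet-agent && \
      /opt/puppetlabs/bin/puppet config set server puppet.pc-freak.net --section main && \
      /opt/puppetlabs/bin/puppet agent --test'
done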

 

5. Sign Agent Certificates on the Puppet Server

Run the two commands below on the Puppet master:

# /usr/bin/puppetserver ca list --all


Sign all pending requests:

# /usr/bin/puppetserver ca sign --all

Verify connection to puppet server is fine:

# /opt/puppetlabs/bin/puppet node find haproxy2.pc-freak.net


6. Create a Puppet Module for Patching

a. Create the patching module

# mkdir -p /etc/puppetlabs/code/environments/production/modules/patching/manifests


b. Add a manifest file at
/etc/puppetlabs/code/environments/production/modules/patching/manifests/init.pp:

class patching {

  exec { 'apt_update':
    command => '/usr/bin/apt update',
    path    => ['/usr/bin', '/usr/sbin'],
    unless  => '/usr/bin/test $(find /var/lib/apt/lists/ -type f -mmin -60 | wc -l) -gt 0',
  }

  exec { 'apt_upgrade':
    command => '/usr/bin/apt upgrade -y',
    path    => ['/usr/bin', '/usr/sbin'],
    require => Exec['apt_update'],
    unless  => '/usr/bin/test $(/usr/bin/apt list --upgradable 2>/dev/null | wc -l) -le 1',
  }

}

This class updates the package list and applies all available security and feature updates.

7. Assign the Module and Trigger Updates

a. Edit site.pp on the Puppet master:

 

# vim /etc/puppetlabs/code/environments/production/manifests/site.pp

 

node default {
  include patching
}

node 'agent1.example.com' {
  include patching
}

b. Run Puppet manually on each agent to test:

# /opt/puppetlabs/bin/puppet agent --test

Once confirmed working, Puppet agents will run this patching class automatically every hour (default runinterval).

8. Check That puppetserver and the Puppet Agent Are Running Fine
 

root@puppetserver:/etc/puppet# systemctl status puppetserver
● puppetserver.service - Puppet Server
     Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 23:44:42 EEST; 37min ago
       Docs: https://puppet.com/docs/puppet/latest/server/about_server.html
    Process: 2166 ExecStartPre=sh -c echo -n 0 > ${RUNTIME_DIRECTORY}/restart (code=exited, status=0/SUCCESS)
    Process: 2168 ExecStartPost=sh -c while ! head -c1 ${RUNTIME_DIRECTORY}/restart | grep -q '^1'; do kill -0 $MAINPID && sleep 1 || exit 1; done (code=exited, status=0/SUCCESS)
   Main PID: 2167 (java)
      Tasks: 64 (limit: 6999)
     Memory: 847.0M
        CPU: 1min 28.704s
     CGroup: /system.slice/puppetserver.service
             └─2167 /usr/bin/java -Xms512m -Xmx1g -Djruby.lib=/usr/share/jruby/lib -XX:+CrashOnOutOfMemoryError -XX:ErrorFile=/var/log/puppetserver/puppetserver_err_pid%p.log -jar /usr/share/pup>

юни 16 23:44:06 haproxy2 systemd[1]: Starting puppetserver.service - Puppet Server…
юни 16 23:44:30 haproxy2 java[2167]: 2025-06-16T23:44:30.516+03:00 [clojure-agent-send-pool-0] WARN FilenoUtil : Native subprocess control requires open access to the JDK IO subsystem
юни 16 23:44:30 haproxy2 java[2167]: Pass '--add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED' to enable.
юни 16 23:44:42 haproxy2 systemd[1]: Started puppetserver.service - Puppet Server.

root@grafana:/etc/puppet# systemctl status puppet
* puppet.service - Puppet agent
     Loaded: loaded (/lib/systemd/system/puppet.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 21:22:17 UTC; 18s ago
       Docs: man:puppet-agent(8)
   Main PID: 1660157 (puppet)
      Tasks: 6 (limit: 2307)
     Memory: 135.6M
        CPU: 5.303s
     CGroup: /system.slice/puppet.service
             |-1660157 /opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/puppet/bin/puppet agent --no-daemonize
             `-1660164 "puppet agent: applying configuration"

Jun 16 21:22:17 grafana systemd[1]: Started puppet.service - Puppet agent.
Jun 16 21:22:28 grafana puppet-agent[1660157]: Starting Puppet client version 7.34.0
Jun 16 21:22:33 grafana puppet-agent[1660164]: Requesting catalog from puppet.pc-freak.net:8140 (192.168.1.58)

9. Use facter to Extract Interesting Information from the Managed Systems
 

facter is a powerful command-line tool Puppet agents use to gather system information (called facts). You can also run it manually on any machine to quickly inspect system details.

Here are some interesting examples to get useful info from a machine using facter:


a) Get all facts about Linux OS

facter

 

b) get OS name / version

facter os.name os.release.full
os.name => Debian
os.release.full => 12.10


c) Check the machine hostname and IP address

$ facter hostname ipaddress

hostname => puppet-client1
ipaddress => 192.168.0.220

d) Get amount of RAM on the machine

$ facter memorysize
16384 MB


e) Get CPU (Processor information)

$ facter processors

{
  count => 4,
  models => ["Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz"],
  physicalcount => 1,
  speed => "1.60 GHz"
}

10. Conclusion

You've successfully set up a Puppet server and configured a sample Debian client system to automatically install security patches using a custom module.
To apply this to the rest of the systems where the Puppet agent is installed, repeat the process on each of the remaining 9 nodes.
This approach provides centralized control, consistent configuration, and peace of mind for you as a system administrator tasked with managing multiple Debian servers with ease.
Of course, the given configuration is very simple, and to guarantee proper functioning of your infrastructure you'll have to read up on and experiment with Puppet; still, I hope this article is a good starting point for getting a Puppet server / agent running relatively easily.