Archive for March, 2026

How to Install and Use Grafana Loki on Linux for Multiple-Server Log Metrics Monitoring

Tuesday, March 31st, 2026

how-to-install-and-use-grafana-loki-on-linux-for-log-metrics-monitoring-for-multiple-server-observability-logo
Grafana Loki
has become a popular choice for log management on Linux systems: it is free software (released under the AGPLv3 licence), lightweight, cost-efficient, and integrates seamlessly with modern observability stacks. Unlike traditional log systems, Loki focuses on indexing metadata (labels) instead of full log content, which makes it especially attractive for Linux environments where logs can grow quickly.

Grafana Loki can be used to create a fully featured logging stack. It keeps a small index and stores highly compressed chunks, which simplifies operation and significantly lowers storage costs.
Unlike other logging systems, Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels).
Log data itself is then compressed and stored in chunks in object stores such as Amazon Simple Storage Service (S3) or Google Cloud Storage (GCS), or even locally on the filesystem.

This article will give you some real-world, practical usage of Loki on Linux, from setting it up from zero to day-to-day workflows.

Reasons to use Loki on Linux

Linux systems generate logs mainly in /var/log, but additionally installed applications often tend to log to their own locations, so overall log placement may lack a good structure (logs end up everywhere).

Some common example locations where logs are stored:

  • /var/log/syslog
  • /var/log/auth.log
  • Application logs (/opt/app/logs/*.log)
  • Container logs, kept within the respective container runtime (Docker / Podman / Kubernetes)

Sooner or later, if you have to manage a large infrastructure of servers, it is pretty easy to end up in a log mess.

This is exactly the problem Loki helps you solve:

  • Centralize logs from multiple machines (within Grafana)
  • Search logs efficiently using labels assigned to the log streams
  • Correlate logs with metrics in Grafana

Loki Architecture Overview


loki-use-stack-chain-diagram-from-cloud-to-grafana

A typical Loki setup on Linux has 3 components:

  1. Loki server -> stores and queries logs
  2. Promtail -> collects logs from around the system
  3. Grafana -> used to visualize and query logs

Promtail acts like a lightweight agent that tails log files and sends them to Loki.

I. Installing Loki on Linux

1. Download Loki

$ cd /usr/local/src
$ wget https://github.com/grafana/loki/releases/latest/download/loki-linux-amd64
$ chmod +x loki-linux-amd64
# mv loki-linux-amd64 /usr/local/bin/loki

2. Create a simple config (loki.yaml) like this:

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
  chunk_idle_period: 5m

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  filesystem:
    directory: /var/lib/loki/chunks

3. Run Loki

# loki -config.file=loki.yaml


If all is okay with the loki.yaml config, the service will hopefully start and listen on port 3100 (as set in the server section).

a. Installing Promtail (Log Collection)
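Promtail is distributed as a standalone binary just like Loki. Assuming the same GitHub release layout as the Loki download above (the exact asset name can differ between releases, so check the releases page first), installation looks roughly like this:

$ cd /usr/local/src
$ wget https://github.com/grafana/loki/releases/latest/download/promtail-linux-amd64
$ chmod +x promtail-linux-amd64
# mv promtail-linux-amd64 /usr/local/bin/promtail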

Example config, promtail.yaml (modify to your preferences):

# where Promtail remembers how far it has read each file
positions:
  filename: /tmp/positions.yaml

# where to push the logs (the Loki instance from section I, port 3100)
clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: linux-logs
    static_configs:
      - targets:
          - localhost
        labels:
          job: syslog
          host: my-linux-server
          __path__: /var/log/*.log

This collects all *.log files in /var/log/, labels them, and pushes them to the local Loki instance.

b. Run Promtail

# promtail -config.file=promtail.yaml

! Note that in the examples above loki and promtail are run as root (so they have permission to read the files being processed). This is not best practice, so for security reasons,
if you have the necessary storage, move the files out to a central log aggregator directory with a script, create an unprivileged non-root user, and run the services as that user.

c. Run loki / promtail as non-root user:

Once you have tested that everything runs, it is a good idea to run both tools as a non-root user, i.e.:
Run promtail as a dedicated user (e.g., promtail).

Add that user to groups like:

  • adm (for /var/log)
  • systemd-journal (for journal logs)

Adjust file permissions if needed.

# useradd --system --no-create-home promtail
# usermod -aG adm promtail

$ loki -config.file=loki.yaml
$ promtail -config.file=promtail.yaml
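
For day-to-day operation it is more convenient to wrap the binaries in systemd units so they start at boot and run as the dedicated unprivileged user. A minimal sketch for Promtail (the paths /usr/local/bin/promtail and /etc/promtail/promtail.yaml are assumptions, adjust them to your layout):

# /etc/systemd/system/promtail.service
[Unit]
Description=Promtail log shipper for Grafana Loki
After=network-online.target

[Service]
User=promtail
Group=promtail
ExecStart=/usr/local/bin/promtail -config.file=/etc/promtail/promtail.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now promtail; an equivalent unit with User=loki can be written for the Loki server itself.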

II. Practical Use Cases of Loki on Linux

1. System Troubleshooting

One good use of Loki is to search for errors in syslog:

{job="syslog"} |= "error"

With this you can quickly diagnose:

  • Boot issues
  • Service failures
  • Kernel errors

2. SSH Login Monitoring

Track login attempts from /var/log/auth.log for many VM hosts:

{job="syslog"} |= "sshd"

You can detect:

  • Failed login attempts
  • Brute-force attacks
  • Unauthorized access

3. Application Debugging (look for exceptions)

If your app logs to /var/log/app.log and Promtail ships it under the app job, you can get a view of thrown Java exceptions with:

{job="app"} |= "exception"

This use case can help developers to:

  • Trace bugs
  • Monitor runtime issues
  • Correlate logs with deployments

4. Multi-Server Log Aggregation

Once you run Promtail on multiple Linux servers:

labels:
  host: server1

Then you can query the collected data for each of them:

{job="syslog", host=~"server1|server2"}

This makes multiple machines behave like one unified log source.

5. Log-Based Metrics

You can extract metrics from logs:

count_over_time({job="syslog"} |= "error" [5m])

Use this for:

  • Alerting
  • Error rate tracking
  • Incident detection

III. Using Grafana for Visualization

In Grafana, you can:

  • View logs in real time
  • Build dashboards
  • Create alerts based on log patterns

Example use would be:

Create a Grafana panel showing the error rate per host and alert when errors exceed a threshold (see the query sketch below).
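
For such a panel, a per-host error rate can be expressed with a LogQL aggregation like the sketch below (the job and host labels are the ones assigned by Promtail earlier in this article):

sum by (host) (count_over_time({job="syslog"} |= "error" [5m]))

A Grafana alert rule built on top of this query can then fire when the value exceeds your chosen threshold.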

loki-log-drill-down-sample-in-grafana

Good Practices on Loki use

1. Always Use Meaningful Labels

Example: a good label set contains descriptive parameters, e.g.:

labels:
  app: nginx
  env: prod
  virtualization: vmware
  type: Middleware
  service: proxy
  Customer: customerA

Bad obscure label:

labels:
  request_id: 123456  


2. Avoid Too many Unique labels

Keep in mind that too many unique label values (high cardinality) lead to poor performance!

3. Rotate Logs Properly and Keep Traditional Tools Alongside Loki

Loki won't manage your local logs; it complements (but does not replace) traditional server / VM tools like journalctl / grep / logrotate. It just gives you a better overview of what the services spit out, based on criteria that are easy to express from Grafana.
In the best scenario you will usually still need to set up a Central Logging Server (to store all infrastructure logs).
Consider also that when shipping log data to Loki, much like with a Zabbix client, it is always a good idea to put a reverse proxy such as NGINX or HAProxy in front of it, to reduce network bandwidth and for better centralized management of the infrastructure.

4. Secure Loki Endpoint

  • Use reverse proxy (NGINX)
  • Enable authentication in production

Closure Summary

On Linux, Grafana Loki can help when:

  • You have multiple servers
  • Logs are growing fast
  • You need centralized  and relatively easy observability

Loki has its downsides too: processing the logs to actually extract data can cause high CPU usage. Running it on multiple machines is useful,
especially if your machines have a lot of unutilized CPU idle time and you want log data collection to be per-server, i.e. partially duplicated and independent from centralized logging.
For high-scale infrastructures, however, sysadmins often prefer an ELK / OpenSearch stack or log databases such as VictoriaLogs. With an infrastructure of 100 servers or so, setting Loki up with some Ansible automation makes sense.
Loki is not meant to replace databases or full-text search engines, but it is often great for simple log aggregation and analysis, and one of the simplest tools available today.

Automatically Re-plug all USB devices on system resume on Debian Linux using systemd

Thursday, March 26th, 2026

automatically-replug-all-usb-devices-on-system-resume-on-Debian-Ubuntu-Linux
Let's say you're like me and you have an old but gold USB device like the Maxfire G-08XU USB joystick (I've described how to configure a Joystick / Gamepad on Debian / Ubuntu easily), a USB flash drive or even some obscure USB keyboard model, that is not among the most Linux-compatible devices on earth. The result is that after leaving the device plugged in and putting the system to Sleep or Hibernate for a while (when you go to bed), the USB device ends up undetected. Once you wake the Laptop / PC from Sleep / hibernate, the device stays undetected by the system, even though the Linux kernel still shows it in lsusb. That weirdness continues until you do the manual hard workaround, which is to unplug the device cable and replug it again.
Though Linux has advanced a lot in this area over the last years, such problems can still occur every now and then. Thankfully there is a quick fix: you can create a small script that re-plugs all the USB devices on the PC and have it run automatically after your Debian laptop wakes up from suspend/hibernate. On Linux, the way to do this is using systemd sleep hooks. Here's how to do it properly with a small script + systemd.

1. Create a systemd sleep script

Create a new directory and file:

# mkdir -p /etc/systemd/system-sleep
# vim /etc/systemd/system-sleep/usb-replug.sh

Add this content:

#!/bin/bash
# Only run on resume (wake up)
case "$1" in
  post)
    # Replace '1-3' with your USB bus-port ID
    echo '1-3' | tee /sys/bus/usb/drivers/usb/unbind
    sleep 2
    echo '1-3' | tee /sys/bus/usb/drivers/usb/bind
    ;;
esac


If you need script logging use instead this small script:

 

#!/bin/bash

case "$1/$2" in
  pre/*)
    # before suspend: you can put commands here if needed
    ;;
  post/*)
    # after resume: run your USB replug commands
    echo "$(date) - Running USB replug script" >> /var/log/usb-replug.log
    # Example: trigger a USB rescan by de-authorizing and re-authorizing each device.
    # The sleep hook already runs as root, so no sudo is needed here.
    for bus in /sys/bus/usb/devices/*/authorized; do
      echo 0 > "$bus"
      echo 1 > "$bus"
    done
    ;;
esac

2. Make it executable and reload systemd services

# chmod +x /etc/systemd/system-sleep/usb-replug.sh

Once you’ve created the script in /etc/systemd/system-sleep/ and made it executable, systemd will automatically call it on suspend/resume.

To make sure everything is recognized, you can:

  1. Reload systemd units (optional but recommended)

# systemctl daemon-reload
  2. Test it manually by suspending and resuming your machine

# systemctl suspend

After resuming, your script should run automatically and the devices you previously had to physically unplug and replug should come back to normal.
Hooray ! 🙂
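
If the devices do not reappear, first check whether the hook actually ran. Assuming you used the logging variant of the script above (which appends to /var/log/usb-replug.log), a quick look at the log tells you:

# tail /var/log/usb-replug.log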

3. How it works (systemd sleep hooks)

  • systemd runs scripts in /etc/systemd/system-sleep/ on suspend and resume.

  • $1 is either pre (before sleep) or post (after wake).

  • The script unbinds and rebinds your USB device right after the system resumes.

Tip: You can also use usbreset instead of unbind/bind if you prefer, just replace the echo lines with:

# usbreset /dev/bus/usb/001/005

Alternatively, you can use a simple one-liner script that does the job, like this:
 

# cat replug_usbs_linux.sh
#!/bin/bash

# One-liner script to replug all USB devices as if you had physically replugged them.
# Useful if, for example, some USB device gets stuck after the Linux computer sleeps.

# For example my old Maxfire G-08 USB joystick messes up and I have to physically replug it
# (to work around this I simply run this script).

d=$(lsusb -t | grep -m1 'Driver=' | sed -E 's|.*Port ([0-9]+):.*Bus ([0-9]+).*|\2-\1|') && echo $d | sudo tee /sys/bus/usb/drivers/usb/unbind && sleep 2 && echo $d | sudo tee /sys/bus/usb/drivers/usb/bind

 

Building a 10-Server FreeBSD Jail Cluster Running a LAMP (Linux / Apache / MySQL / Perl / PHP / Python) Stack

Wednesday, March 25th, 2026

building-freebsd-jails-cluster-running-linux-apache-10-cluster-high-availability-with-mariadb-perl-php-howto

Virtualization and workload isolation are foundational to modern infrastructure.
While most teams today default to container platforms like Docker and orchestration systems such as Kubernetes, an older and highly capable alternative exists in the form of jails from FreeBSD.

FreeBSD jails provide lightweight OS-level isolation, allowing multiple independent userland environments to run on a single host. Introduced long before containers became mainstream, jails were designed with a strong focus on security, simplicity, and performance.
Despite their maturity and robustness, they are less commonly used today, largely due to the rapid rise of container ecosystems and cloud-native tooling.

Choosing between jails and containers is not simply a matter of “old vs new,” but rather a trade-off between control and simplicity versus portability and ecosystem support.

Short Comparison of FreeBSD jails and Containers ( Pros and Cons )

Advantages of FreeBSD Jails

a. Strong, simple isolation

Jails provide a clear and tightly integrated security boundary within the FreeBSD kernel. Their design is straightforward, reducing the risk of misconfiguration compared to layered container security models.

freebsd_jails_infographic_diagram

b. High performance

Because jails operate very close to the base system, they deliver near-native performance with minimal overhead—especially beneficial for networking and I/O-heavy workloads.

c. Operational simplicity

There are fewer moving parts (easier to maintain and debug):

  • No separate container runtime
  • No image layers
  • No complex orchestration requirements

This makes jails appealing for stable, long-running systems.

d. Predictability and stability

FreeBSD's conservative design philosophy results in systems that are highly stable over long periods, which is ideal for infrastructure roles like storage or networking.

Disadvantages of FreeBSD Jails

a. Limited portability

Not necessarily a huge disadvantage, but still: jails are tied to FreeBSD. Unlike containers, they cannot be easily moved across different operating systems or cloud platforms.


b. Smaller ecosystem

FreeBSD jails have no full equivalent of:

  • Container registries (like Docker Hub)
  • Massive orchestration ecosystems (similar things have to be done with scripts and customizations)
  • Broad third-party integrations

This can slow down development and deployment workflows a bit. Though for mature applications that have been well tuned for jails once, this may not be a real problem.

Note that though a con, this can also be a pro, as once you tune up an app for jails it becomes easier to maintain.

c. Less automation tooling

While tools exist, they are not as standardized or widely adopted as container-based CI/CD pipelines.

d. Harder to find people for it
 

Most developers and DevOps engineers are trained in container technologies, making hiring and collaboration easier in container-based environments. However, for senior hard-core sysadmins and system engineers that could also be an advantage, as not many people have in-depth insight into both FreeBSD and FreeBSD jails.

This guide walks through a practical, production-style setup: 10 FreeBSD servers, each running isolated jails that host a classic LAMP stack (Linux, here replaced by FreeBSD, Apache, MySQL/MariaDB, PHP).
Companies or individuals who choose FreeBSD jails usually aim for repeatability, clean architecture, and operational sanity, not just getting things to run once.

Architecture Overview of sample FBSD Cluster

Our Goal:

  • 10 physical or virtual servers
  • Each server runs multiple jails
  • Each jail runs a LAMP app instance
  • Load balancing across nodes (to have a High Availability Cluster like setup)

Host Setup:

  • 2 × load balancer nodes (nginx or HAProxy)
  • 6 × application nodes (Apache + PHP in jails)
  • 2 × database nodes (MariaDB primary/replica)

All systems run FreeBSD, using native jails for isolation.

1. Base FreeBSD Installation (All 10 Servers)

Install FreeBSD on each machine (minimal install is fine).

Update system:

# freebsd-update fetch install
# pkg update && pkg upgrade -y

Install base tools:

# pkg install -y sudo vim bash git

2. Install Jail Management tool (iocage)

We’ll use iocage, a modern jail manager.

# pkg install -y iocage
# sysrc iocage_enable="YES"
# service iocage start

Activate ZFS (recommended):

# zpool create zroot /dev/da0

Initialize iocage:

# iocage activate zroot
# iocage fetch

3. Create a Reusable Jail Template

Instead of building each jail manually, create a golden template.

# iocage create -n lamp-template -r 13.2-RELEASE ip4_addr="vnet0|10.0.0.10/24" boot=off
# iocage start lamp-template
# iocage console lamp-template

4. Install LAMP Stack Inside the Jail

Inside the jail:

4.1. Install Apache

# pkg install -y apache24
# sysrc apache24_enable="YES"

4.2. Install MariaDB

# pkg install -y mariadb106-server
# sysrc mysql_enable="YES"

Initialize DB:

service mysql-server start
mysql_secure_installation

4.3. Install PHP from pre-compiled packages

# pkg install -y php82 php82-mysqli php82-mbstring php82-opcache


Configure Apache to use PHP:

# echo 'LoadModule php_module libexec/apache24/libphp.so' >> /usr/local/etc/apache24/httpd.conf
# echo 'AddType application/x-httpd-php .php' >> /usr/local/etc/apache24/httpd.conf

5. Test LAMP Stack works OK

Create a test file:

# echo "<?php phpinfo(); ?>" > /usr/local/www/apache24/data/index.php

Start services:

service apache24 start

Visit the jail IP in a Firefox / Chrome browser and confirm the PHP page output works.

6. Convert Template into Clones

Stop Jail and snapshot:

iocage stop lamp-template
iocage snapshot lamp-template@base

Clone for production:

iocage clone lamp-template -n app01 ip4_addr="vnet0|10.0.0.21/24"
iocage clone lamp-template -n app02 ip4_addr="vnet0|10.0.0.22/24"

Repeat across servers, and once everything works, create a small shell script to run as a cron job and automate backups (see the sketch below).
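
A minimal sketch of such a backup script, reusing the jail@snapshot syntax shown above (the jail names and the cron schedule are just examples for this setup):

#!/bin/sh
# snapshot_jails.sh - nightly iocage/ZFS snapshots of the production jails
DATE=$(date +%Y%m%d)
for JAIL in app01 app02; do
    iocage snapshot "${JAIL}@backup-${DATE}"
done

Scheduled from root's crontab, e.g.: 0 3 * * * /root/snapshot_jails.sh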

Each server might run 5 up to 20 jails depending on resources.

7. Networking Between Jails

Use VNET for proper isolation:

Enable bridge on host:

# ifconfig bridge0 create
# ifconfig bridge0 addm em0 up

Assign jail interfaces automatically via iocage.
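
A sketch of the relevant iocage properties for a VNET jail (the gateway address 10.0.0.1 is an assumption for the example network used in this article):

# iocage set vnet=on interfaces="vnet0:bridge0" app01
# iocage set defaultrouter=10.0.0.1 app01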

8.  Load Balancing Layer

On 2 dedicated nodes, install nginx:

# pkg install -y nginx
# sysrc nginx_enable="YES"

Example config:

http {
    upstream backend {
        server 10.0.0.21;
        server 10.0.0.22;
        server 10.0.1.21;
        server 10.0.1.22;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}

9. Database Strategy

You have a few options to choose from:

a. Use Centralized DB

  • Dedicated DB jails on 2 nodes
  • Primary + replica

b. Use Per-node DB (simpler)

  • Each jail has its own MariaDB
  • Use app-level replication if needed

10. Automation Across 10 Servers

Use tools like:

  • Ansible
  • SSH scripts
  • ZFS replication

Example (a simple execution loop over all hosts), or use a set of scripts / Ansible Playbooks / Puppet to handle updating (an Ansible sketch follows after the loop):

# for host in server{1..10}; do
  ssh $host "pkg update"
done
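
If you prefer Ansible, a minimal playbook sketch for the same task (the inventory group name freebsd_servers is an assumption; it simply wraps the pkg commands used earlier in this article):

---
- hosts: freebsd_servers
  become: true
  tasks:
    - name: Update the pkg catalogue
      ansible.builtin.command: pkg update

    - name: Upgrade installed packages
      ansible.builtin.command: pkg upgrade -y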

A few more operational tips to consider

a. Tune up setup / Do Resource management

  • Limit jail CPU/memory using rctl (see the sketch after this list)
  • Avoid overcommitting RAM
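
A minimal rctl sketch, assuming a jail named app01 and limits picked only for the example (rctl needs resource accounting enabled, kern.racct.enable=1 in /boot/loader.conf, on most FreeBSD releases):

# rctl -a jail:app01:memoryuse:deny=2G
# rctl -a jail:app01:pcpu:deny=50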

b. Use Centralized Logging

c. Do regular jail Backups

  • Use ZFS snapshots to backup each of the Jails:

# zfs snapshot zroot/iocage/jails/app01@backup
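
To get those snapshots off the host (the ZFS replication mentioned in the automation section), a sketch assuming a backup host reachable over SSH with a receiving dataset named backup:

# zfs send zroot/iocage/jails/app01@backup | ssh backuphost zfs receive -F backup/app01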

d. Tighten Security

  • Disable root SSH
  • Use PF firewall on host
  • Keep jails minimal

e. Do a Further Scaling Strategy

  • Add more servers -> replicate template
  • Add more jails -> clone snapshots
  • Scale horizontally via load balancer

Summary and Last Thoughts

When to Choose FreeBSD Jails and When Containers

  • Use jails when you control the infrastructure, need maximum efficiency, and value simplicity (e.g., appliances, CDNs, storage systems).
  • Use containers when portability, scalability, and integration with modern DevOps workflows are critical.

This setup plays to the strengths of FreeBSD jails:

1. Performance: near-native speed
2. Isolation: strong and predictable
3. Simplicity: fewer layers than container stacks

FreeBSD jails remain a powerful and efficient isolation mechanism, particularly well-suited for controlled, performance-sensitive environments. Containers, however, dominate in modern application deployment due to their flexibility and ecosystem. The choice ultimately depends on whether you prioritize system-level control or platform-level convenience.

You won’t get the ecosystem of tools like Docker or Kubernetes, but you gain control, stability, and efficiency, which is exactly why companies like Netflix still rely on this model in critical infrastructure.

 

Build a Central Linux Logging Server to Collect, Store, and Visualize All Infrastructure node Logs

Friday, March 20th, 2026

build-a-central-linux-logging-server-to-collect-store-and-visualize-all-infrastructure-node-logs
If you manage multiple servers or a collection of services on many nodes within a company server infrastructure, you know the pain of dealing with logs scattered across systems in multiple locations. It is really maddening, takes up a lot of time and drains energy.
One server shows nothing, another rotated logs yesterday, and your app logs are buried somewhere in /var/log/app.

A central logging server solves this problem: all logs are collected, stored, and accessible in one single place.

This article presents briefly how to build one using the ELK Stack + Beats (lightweight agents) on a Linux server.

1. Architecture Overview

Here's what the typical flow looks like:

[ Servers / Apps ] -> [ Filebeat / Metricbeat ] -> [ Logstash ] -> [ Elasticsearch ] -> [ Kibana / Grafana (Visualization) ]

  • Beats → Lightweight log shippers installed on all machines.
  • Logstash → Optional pipeline for parsing, filtering, and enriching logs.
  • Elasticsearch → Storage and search engine.
  • Kibana / Grafana → Visualization dashboards.

2. Prepare Your Central Logging Server

Requirements:

  • Debian Linux 12 recommended / Ubuntu or Fedora RHEL
  • At least 4 GB RAM (8+ GB for production ELK)
  • Plan enough SSD storage (logs grow fast)
  • Open ports: 5044 for Beats, 9200 for Elasticsearch, 5601 for Kibana

Install Prerequisites

# apt update && apt install openjdk-17-jdk wget curl apt-transport-https -y

ELK requires Java; OpenJDK 17 should work fine.

3. Install Elasticsearch

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.1-amd64.deb
# dpkg -i elasticsearch-8.11.1-amd64.deb
# systemctl enable elasticsearch
# systemctl start elasticsearch


Check ElasticSearch server is running:

# curl -X GET "localhost:9200/"

You should see the cluster info in JSON format.

4. Install Kibana

# wget https://artifacts.elastic.co/downloads/kibana/kibana-8.11.1-amd64.deb
# dpkg -i kibana-8.11.1-amd64.deb
# systemctl enable kibana
# systemctl start kibana


Access Kibana URL in browser:

http://<server-ip>:5601

5. Install Logstash to Process Logs Before Sending Them to Elasticsearch

# wget https://artifacts.elastic.co/downloads/logstash/logstash-8.11.1.deb
# dpkg -i logstash-8.11.1.deb
# systemctl enable logstash
# systemctl start logstash

Logstash allows filtering and structuring logs before sending them to Elasticsearch. Example simple pipeline:

# vim /etc/logstash/conf.d/syslog.conf

input {
  beats {
    port => 5044
  }
}
filter {
  grok { match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}: %{GREEDYDATA:message}" } }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "central-logs-%{+YYYY.MM.dd}"
  }
}

Start Logstash

# systemctl restart logstash

6. Install Beats on Client Machines

On each server you want to monitor:

# apt install filebeat metricbeat -y


Configure Filebeat

Edit config

# vim  /etc/filebeat/filebeat.yml

Set the output to your central server:

output.logstash:
  hosts: ["<central-logging-server-ip>:5044"]

Start the agent:

systemctl enable filebeat
systemctl start filebeat

Do the same for Metricbeat if you want metrics like CPU, memory, disk.
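
For Metricbeat, a rough sketch of the equivalent setup is to enable the system module:

# metricbeat modules enable system

then set the same output.logstash hosts entry in /etc/metricbeat/metricbeat.yml as for Filebeat, and enable/start the metricbeat service.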

7. Create Dashboards in Kibana or Grafana

  • In Kibana, use Discover to view logs.
  • Create visualizations for errors, warnings, top endpoints, etc.
  • Use Grafana if you want multi-source dashboards, combining logs and metrics.

8. Optional: Secure Your Logging Server

  • Enable TLS/SSL in Beats and Elasticsearch.
  • Use firewall rules to restrict access.
  • Create dedicated users in Elasticsearch for log access.

9. Maintenance Tips

  • Index Lifecycle Management → Rotate daily and delete old logs automatically.
  • Monitor disk usage → Logs grow fast. SSDs are better.
  • Filter noise → Don’t ship debug logs unless needed.
  • Backup Elasticsearch → Especially if logs are critical.

Sum Up, how it Works

  • All logs are centralized → easier troubleshooting.
  • Scalable → add new servers, Beats handle shipping automatically.
  • Searchable → find errors instantly using Elasticsearch.
  • Visual → dashboards in Kibana/Grafana give real-time insight.

Linux Bash Logging: log everything. Prevent users from deleting their history and keep a record of every command a user ever ran

Tuesday, March 17th, 2026

make_bash_history_permanent-how-to-keep-every-user-command-forever-prevent-users-from-deleting-their-bash-history-on-linux

Whether you're managing servers, writing scripts, or troubleshooting complex systems, one of the most valuable tools at your disposal is your command history. But the default Bash history has serious limitations: it’s easy to lose, doesn't timestamp by default, and doesn't log everything in real time.

What if you could keep a permanent, timestamped, real-time log of every command you ever run in Bash?

Good news: you can.

In this guide, we’ll walk through how to set up robust, automatic Bash logging to track every command you type—across sessions, with full timestamps, and even with user and host information. Ideal for system administrators, developers, auditors, or anyone who wants to maintain a clear, searchable audit trail.

Why Is Bash Logging Persistence So Important?

Before we dive into the how, let's understand the why:

  • Accountability – Know exactly what commands were run, by whom, and when.
  • Auditability – Great for security reviews or compliance requirements.
  • Troubleshooting – Trace back actions that caused issues.
  • Documentation – Reuse commands or share with teammates.
  • Forensics – Investigate suspicious activity.

How Bash History Behaves  ( By Default )

As everyone knows, without any extra config Bash uses a file called ~/.bash_history in $HOME to save command history.

What is tricky here:

  • .bash_history is not written to immediately – only when the session exits.
  • It can be overwritten by other sessions.
  • It lacks timestamps unless explicitly configured.
  • It doesn’t log failed attempts or commands from other users.

In this short article I'll show you one of the ways to make .bash_history keep the record for you, even when a user tries to hide things by running commands and then exiting the shell abnormally, killing it with the command well known to hackers and sysadmin gurus:


$ kill -9 $$

The command kills the bash process (-bash) of the user you are logged in as, so the shell never gets a chance to flush its in-memory history to disk.

Here is how.

Enable Advanced Bash Logging

1. Enable Timestamps in History

Add this line to your ~/.bashrc or ~/.bash_profile:

export HISTTIMEFORMAT="%F %T "

This formats the date/time as YYYY-MM-DD HH:MM:SS.

After modifying the file, run:

source ~/.bashrc

Now, run:

history

And you’ll see timestamps next to your commands.

2. Increase History Size

The default history size is often too small. Let’s increase it:

export HISTSIZE=100000

export HISTFILESIZE=200000


Add these to ~/.bashrc as well.

3. Log Commands Immediately (Across Sessions)

By default, Bash only writes history when the shell exits. To log commands in real time, add the following to ~/.bashrc:

# Append to the history file, don't overwrite it

shopt -s histappend

# Immediately append command to history file after execution

PROMPT_COMMAND='history -a; history -n'

Explanation:

  • history -a: Append current session's command to ~/.bash_history
  • history -n: Read any new lines from the file (from other sessions)
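
Putting the pieces from steps 1-3 together, the relevant ~/.bashrc fragment looks like this:

# Timestamped, large, immediately appended Bash history
export HISTTIMEFORMAT="%F %T "
export HISTSIZE=100000
export HISTFILESIZE=200000
shopt -s histappend
PROMPT_COMMAND='history -a; history -n'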

4. Log All Commands to a Separate File (for each User)

To keep a separate, detailed log, you can use the trap command in combination with logger, or write to a custom file.

Add this to your ~/.bashrc:

LOG_FILE="$HOME/.bash_command_log"

trap 'echo "$(date "+%F %T") | $(whoami)@$(hostname) | $(pwd) | $BASH_COMMAND" >> "$LOG_FILE"' DEBUG

This logs every command, for example:

2025-10-10 14:25:02 | master_app@server01 | /var/www | systemctl restart nginx

This file can grow large over time – consider rotating it regularly with logrotate or similar tools.
To prevent the user from tampering with the file you can make the log file append-only. Note that the immutable flag (chattr +i) would also stop the trap itself from appending new entries, so for a live log the append-only flag is what you want:

# chattr +a $HOME/.bash_command_log


5. Guarantee log security: copy logs elsewhere to prevent attackers from modifying them

If logging for audit/security purposes:

  • Store logs in append-only files (chattr +a logfile on ext4 FS)
  • store files with rsyslog service (see below)
  • Use remote logging (e.g., send via logger to syslog  / rsyslog or any other centralized logging service) / logcollector etc.
  • Monitor for tampering or suspicious gaps

6. Store file with rsyslog service

Create the file and set proper permissions on it:

# touch /var/log/bash_audit.log
# chmod 600 /var/log/bash_audit.log
# chown root:root /var/log/bash_audit.log

# vim /etc/rsyslog.d/bash_audit.conf

Add:

if $programname == 'bash_audit' then /var/log/bash_audit.log
& stop

# systemctl restart rsyslog


To later verify it works fine

# tail -f /var/log/bash_audit.log
# journalctl -t bash_audit
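
To generate a test entry by hand (even before the trap from the next step is in place), you can send a message with the same tag:

# logger -t bash_audit "bash_audit test entry"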

 

7. Add Global Bash Logging for All Users

Assuming that the bash_audit program name tag is already set up in rsyslog as in step 6.
To apply logging system-wide, edit /etc/profile, /etc/bash_profile or /etc/bash.bashrc and include the same trap command, and logging is ready. Ensure:

  • The log file is writable by users (or add users to a group that can append to file) or modify the command to use sudo logger for centralized syslog.
  • You test it carefully before deploying to all users.

An improved system-wide version of the trap command would be something like this:

# Bash command logging (readable layer)

trap 'CMD=$(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//");
      MSG="$(date "+%F %T") | $(whoami)@$(hostname) | $(pwd) | $CMD";
      /usr/bin/logger -t bash_audit "$MSG"
' DEBUG

Make these two env variables read only for additional hardening 

readonly PROMPT_COMMAND
readonly HISTFILE

Note that if you route the messages through sudo logger (as mentioned in the global setup above), you will need to configure passwordless sudo for the logger command.

  • Set up auditd for kernel-level command auditing (complementing the Bash trap)

# apt install auditd audispd-plugins --yes

  • Test it with auditctl

# auditctl -a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k cmdlog
# auditctl -a always,exit -F arch=b32 -S execve -F auid>=1000 -F auid!=4294967295 -k cmdlog

  • Make rules permanent via cmdlog.rules

# vim /etc/audit/rules.d/cmdlog.rules

-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k cmdlog
-a always,exit -F arch=b32 -S execve -F auid>=1000 -F auid!=4294967295 -k cmdlog

  • Load and lock audit rules

# augenrules --load
# auditctl -e 2

  • Check audit logs

# ausearch -k cmdlog -i
exe="/usr/bin/ls" argc=1 a0="ls"

8. Rotate Log Files Automatically with logrotate

Create a logrotate config like /etc/logrotate.d/bash_command_log:

/home/*/.bash_command_log {
daily
rotate 7
compress
missingok
notifempty
}

/var/log/bash_audit.log {
daily
rotate 7
compress
missingok
notifempty
}


This keeps logs for 7 days and compresses old ones.

9. Test That Every Command Is Permanently Logged

After setting bash logging up:

  1. Open a new terminal client with SSH session
  2. Run a few commands
  3. Check ~/.bash_command_log (or your alternative configured log location)

You should see a real-time record of every command executed.

Use tools like grep, awk, or fzf Command fuzzy finder to search through your command log efficiently. Example:

grep apt ~/.bash_command_log

You can further automate this and deploy it to multiple servers with Ansible or some shell scripting.
If you need that, ask me how to automate it with Ansible or a shell script.

Wrapping it Up

With just a few lines of Bash config, the basic history feature becomes a persistent, timestamped record that's invaluable for system admins, developers, and security teams.

Summary Checklist

  • Enable HISTTIMEFORMAT
  • Increase history size
  • Append history in real time
  • Log every command with trap DEBUG
  • Optionally send to rsyslog / syslogd / systemd-journald or other central log server (Fluentd / ELK Stack / Graylog)
  • Rotate logs with logrotate

How to Fix Windows Update When It Says “Up to Date” But Updates Are Missing

Monday, March 2nd, 2026

windows-os-update-up-to-date-but-OS_update-release-lacking-behind-fix-Windows-shows-updated-but-it-is-not

Knowing your system isn't fully updated, because the OS Build release does not match the latest one it should be on, while Windows Update still insists everything is green and "Up to date", is a really weird and frustrating thing a Windows user can experience. It is even worse if, like me, your computer is in a large corporate domain that uses Azure (Office 365) services for authentication.

If some updates fail silently or don't install properly, your notebook / PC may be missing important security patches, video / sound / chipset driver fixes, or feature improvements, and with time, due to domain-applied Windows policies, your computer may end up considered unsafe or broken, or even dis-joined from the domain.
 

Why Does Windows Update Say Up to Date While Updates Are Missing?

There might be multiple scenarios, but common causes include:

  • Corrupted update cache
  • Interrupted installations (the PC was hard shut down by a power outage, or the laptop battery discharged during an update)
  • Broken Windows Update services
  • System file corruption (caused by viruses / malware, or by the mess left over from years of Windows updates)
  • Registry conflicts (Windows registry conflicts due to installed PC apps etc.)
  • Failed cumulative updates

Windows may mark updates as “processed” even if installation didn’t complete correctly.
Identifying Missing or Broken Windows updates is really hard sometimes.

Usually, to catch it, you will have to:
Check the Windows OS Build Release

from: Settings -> System -> About

windows-11-settings-system-about-OS-BUILD-release-screenshotpng

This guide will walk through proven methods to fix Windows Update when it's stuck or falsely reporting success.

1. Try PC Restart First

Before diving into advanced fixes:

  • Restart PC.
  • Go Check Settings → Windows Update → Check for updates again.

Sometimes updates are downloaded but waiting for a reboot to complete installation, and that is when this oddity is observed.

2. Run the Built-In Windows Update Troubleshooter

Both Windows 10 and Windows 11 include a built-in repair tool (which is starting to feel legacy nowadays, but can still sometimes help).

Steps:

  1. Open Settings
  2. Go to System → Troubleshoot → Other troubleshooters
  3. Find Windows Update
  4. Click Run

Let it complete the scan and apply any recommended fixes. Most of the time this won't solve it, but as it is easy to try, give it a shot.

3. Manually Reset Windows Update Components

If Windows still erroneously thinks everything is installed but something is broken internally, resetting the update components often solves the problem.

Cleaning up the SoftwareDistribution update cache folder is perhaps the most effective fix

Cleaning the C:\Windows\SoftwareDistribution folder is actually one of the most effective fixes when Windows refuses to install updates but claims everything is up to date.

C:\Windows\SoftwareDistribution

This is where Windows temporarily stores:

  • Downloaded update files
  • Update installation logs
  • Temporary metadata
  • Cached update database

If this cache becomes corrupted, Windows Update may:

  • Fail silently
  • Not detect new updates
  • Show “Up to date” incorrectly
  • Get stuck at 0% or 100%

This method works in both Windows 10 and Windows 11.

What Happens When You Delete SoftwareDistribution?

Deleting (or renaming) the folder:

  • Does NOT delete installed updates
  • Does NOT break Windows
  • Forces Windows to rebuild the update cache
  • Forces a fresh update scan

It's completely safe if you do it correctly.

Recommended Method (Play Safe)

N.B.! Do NOT delete the folder while update services are running.

Step 1: Stop Windows Update Services

Open Command Prompt as Administrator and run:
 

net stop wuauserv
net stop bits
net stop cryptSvc
net stop msiserver

Wait until all services stop successfully.

Step 2: Rename the Update Folder (Safer Than Deleting)

With the services stopped, rename the folder from the same administrative Command Prompt:

move C:\Windows\SoftwareDistribution C:\Windows\SoftwareDistribution.old

If Windows refuses to rename it, make sure the update services are really stopped. If it still refuses, enter Windows Safe Mode (press SHIFT while choosing Restart) and rename

C:\Windows\SoftwareDistribution

to

C:\Windows\SoftwareDistribution.old

either from File Explorer, or via Safe Mode with Command Prompt only:

move c:\Windows\SoftwareDistribution C:\Windows\SoftwareDistribution.old

Step 3: Restart Services

Back in Command Prompt:

net start wuauserv
net start bits
net start cryptSvc
net start msiserver

Restart Computer.
 

4. Use the Microsoft Update Catalog to Manually Download the Missing Update

Sometimes a specific update fails repeatedly but Windows doesn’t clearly report it.

You can manually download it from:

  • Microsoft Update Catalog

How to manually install a KB* Windows update:

  1. Find the KB number (for example: KB5030219)
  2. Search for it in the catalog
  3. Download the version matching your system (x64, ARM64, etc.)
  4. Install manually

This bypasses Windows Update’s automatic system.

5. Use the Windows Installation Assistant

If feature updates (like 22H2 → 23H2) are not appearing, use:

  • Windows 11 Installation Assistant
  • Windows 10 Update Assistant

These tools force a full system upgrade while keeping files and apps intact.

6. Check for Corrupted System Files

Corrupted system files can prevent updates from applying properly.

Open Command Prompt as Administrator and run:

C:\Windows>  sfc /scannow

Then run:

C:\Windows> DISM /Online /Cleanup-Image /RestoreHealth

After both scans complete, restart and try updating again.

7. Make Sure Updates Are Not Paused and You Are Not on a Metered Connection

Windows may appear updated if:

  • Updates are paused
  • Your connection is set as metered
  • You’re on a managed/work PC with update policies

Check:

  • Settings → Windows Update → Advanced options

8. Check Your Windows Version Manually

Press Win + R, type:

winver

Compare your version with the latest available on Microsoft’s official release page
https://learn.microsoft.com/en-en/windows/release-health/windows11-release-information
to confirm whether you’re truly up to date.

9. Update your Video / Audio / Motherboard Chipsets and peripheral drivers to latest

Depending on the laptop brand or PC, check for the latest available drivers on the Internet and apply them.
Dell / HP and ASUS / ACER / MSI usually have their own dedicated software that can do this quickly; for example, I'm currently using a Dell notebook, where you can use Dell Command Update / Dell SupportAssist to do so.
 

10. Move the catroot2 folder (to clean up Windows Update package signatures)

What is catroot2 ?

The catroot2 folder is used by Microsoft Windows to store:

  • Windows Update package signatures
  • Cryptographic catalog files (.cat files)
  • Data used by the Cryptographic Services component
  • Information needed to validate and install updates
It plays a critical role in verifying update integrity.

With the Cryptographic Services stopped (net stop cryptSvc, as in Step 1 above), running:

move C:\Windows\System32\catroot2 C:\Windows\System32\catroot2.old

is used as a repair step for Windows Update issues because it resets the catroot2 folder, which stores important update-related data. Start the service again afterwards with net start cryptSvc.

11. Perform an In-Place Repair Upgrade (Last Resort)

If nothing works:

  1. Download the latest Windows ISO (Windows Installation Assistant)
  2. Mount it
  3. Run setup.exe
  4. Choose Keep personal files and apps

This reinstalls Windows without deleting your data and fixes deeply broken update components.

12. If None of These Help, Check Windows Logs for a Clue

If you want to go even deeper, check Event Viewer logs under:

Windows Logs → Setup

That will show detailed update errors and will hopefully give you a clue on how to fix the problem.

Summary / close up

If Windows says "Up to date" but you suspect missing updates, don't ignore it: soon your OS will either become a mess or you will miss critical performance and stability improvements. Even if the PC keeps working relatively stably, the missing security patches are critical, and exposing the computer to the internet leaves you an easy victim to being hacked or infected by some kind of encryption / ransomware worm. In most cases the updates did not apply due to an easily solvable issue, and simply resetting the update components, cleaning up the update cache, or manually installing the update solves the problem, and Windows gets back to the wanted OS update release. If that does not happen, however, you should check the system for corrupted system files (sfc / DISM, as in step 6).