Posts Tagged ‘systemctl’

Build a Central Linux Logging Server to Collect, Store, and Visualize All Infrastructure node Logs

Friday, March 20th, 2026

If you manage multiple servers, or many services spread across the nodes of a company infrastructure, you know the pain of dealing with logs scattered across systems. It eats up time and drains energy.
One server shows nothing, another rotated its logs yesterday, and your application logs are buried somewhere in /var/log/app.

A central logging server solves this problem: all logs are collected, stored, and accessible in one single place.

This article shows briefly how to build one using the ELK Stack plus Beats (lightweight agents) on a Linux server.

1. Architecture Overview

The typical flow looks like this:

[ Servers / Apps ] -> [ Filebeat / Metricbeat ] -> [ Logstash ] -> [ Elasticsearch ] -> [ Kibana / Grafana (Visualization) ]

  • Beats → Lightweight log shippers installed on all machines.
  • Logstash → Optional pipeline for parsing, filtering, and enriching logs.
  • Elasticsearch → Storage and search engine.
  • Kibana / Grafana → Visualization dashboards.

2. Prepare Your Central Logging Server

Requirements:

  • Debian 12 (recommended), Ubuntu, Fedora, or RHEL
  • At least 4 GB RAM (8+ GB for production ELK)
  • Plan enough SSD storage (logs grow fast)
  • Open ports: 5044 for Beats, 9200 for Elasticsearch, 5601 for Kibana

Install Prerequisites

# apt update && apt install openjdk-17-jdk wget curl apt-transport-https -y

ELK requires Java; OpenJDK 17 works fine.

3. Install Elasticsearch

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.1-amd64.deb
# dpkg -i elasticsearch-8.11.1-amd64.deb
# systemctl enable elasticsearch
# systemctl start elasticsearch


Check that the Elasticsearch server is running:

# curl -X GET "localhost:9200/"

You should see the cluster info in JSON format.

4. Install Kibana

# wget https://artifacts.elastic.co/downloads/kibana/kibana-8.11.1-amd64.deb
# dpkg -i kibana-8.11.1-amd64.deb
# systemctl enable kibana
# systemctl start kibana


Access Kibana URL in browser:

http://<server-ip>:5601

5. Install Logstash to Process Logs Before Sending to Elasticsearch

# wget https://artifacts.elastic.co/downloads/logstash/logstash-8.11.1.deb
# dpkg -i logstash-8.11.1.deb
# systemctl enable logstash
# systemctl start logstash

Logstash allows filtering and structuring logs before sending them to Elasticsearch. Example simple pipeline:

# vim /etc/logstash/conf.d/syslog.conf

input {
  beats {
    port => 5044
  }
}
filter {
  grok { match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}: %{GREEDYDATA:message}" } }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "central-logs-%{+YYYY.MM.dd}"
  }
}
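The index => "central-logs-%{+YYYY.MM.dd}" setting creates one index per day. As a quick sanity check, today's index name can be reproduced in plain shell (Logstash computes the date pattern in UTC):

```shell
# Reproduce today's index name from the %{+YYYY.MM.dd} pattern.
# -u because Logstash uses UTC for the index date math.
index="central-logs-$(date -u +%Y.%m.%d)"
echo "$index"
```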

Start Logstash

# systemctl restart logstash
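To see roughly what the grok pattern above extracts, here is a plain-shell approximation run on a sample syslog line (an illustration only, not the real grok engine):

```shell
# A sample line in classic syslog format.
line='Mar 20 10:15:01 web01 sshd[1234]: Accepted publickey for deploy'

# Fields the grok pattern captures: timestamp, host, program, message.
ts=$(echo "$line"   | awk '{print $1, $2, $3}')
host=$(echo "$line" | awk '{print $4}')
prog=$(echo "$line" | awk '{print $5}' | sed 's/:$//')
msg=${line#*: }
echo "timestamp=$ts host=$host program=$prog message=$msg"
```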

6. Install Beats on Client Machines

On each server you want to monitor:

# apt install filebeat metricbeat -y


Configure Filebeat

Edit config

# vim  /etc/filebeat/filebeat.yml

Set the output to your central server:

output.logstash:
  hosts: ["<central-server-ip>:5044"]

Start the agent:

# systemctl enable filebeat
# systemctl start filebeat

Do the same for Metricbeat if you want metrics like CPU, memory, and disk.
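Putting it together, a minimal filebeat.yml might look like this (the log paths and the server address are placeholders to adapt to your environment):

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/syslog
      - /var/log/auth.log

output.logstash:
  hosts: ["central-log-server.example.com:5044"]
```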

7. Create Dashboards in Kibana or Grafana

  • In Kibana, use Discover to view logs.
  • Create visualizations for errors, warnings, top endpoints, etc.
  • Use Grafana if you want multi-source dashboards, combining logs and metrics.

8. Optional: Secure Your Logging Server

  • Enable TLS/SSL in Beats and Elasticsearch.
  • Use firewall rules to restrict access.
  • Create dedicated users in Elasticsearch for log access.

9. Maintenance Tips

  • Index Lifecycle Management → Rotate daily and delete old logs automatically.
  • Monitor disk usage → Logs grow fast. SSDs are better.
  • Filter noise → Don’t ship debug logs unless needed.
  • Backup Elasticsearch → Especially if logs are critical.
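For the first tip, Elasticsearch's ILM API can handle deletion automatically. A minimal delete-after-30-days policy, which would be PUT to _ilm/policy/central-logs-cleanup (the policy name and retention period here are only illustrative), looks like this:

```json
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```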

To Sum Up: How It Works

  • All logs are centralized → easier troubleshooting.
  • Scalable → add new servers, Beats handle shipping automatically.
  • Searchable → find errors instantly using Elasticsearch.
  • Visual → dashboards in Kibana/Grafana give real-time insight.

How to Install and Use Kibana for Log Visualization

Wednesday, February 18th, 2026

I came across Kibana in my professional career and find it a very interesting tool for sysadmins, so I thought it might help someone out there to write a small article on how to install it and use it to visualize data stored in Elasticsearch.

Kibana is an open-source data visualization and exploration tool used to analyze large volumes of data, especially logs. It is part of the ELK Stack (Elasticsearch, Logstash, Kibana), and is commonly used for centralized log management, security monitoring, and observability.

Kibana is often used in the so-called ELK pipeline for log file collection, analysis and visualization:

  • Elasticsearch is for searching, analyzing, and storing your data
  • Logstash (and Beats) is for collecting and transforming data, from any source, in any format
  • Kibana is a portal for visualizing the data and navigating within the Elastic Stack
     

In this article, you'll learn how to:

  • Install Kibana
  • Connect it to Elasticsearch
  • Visualize log data
  • Use its basic features

Prerequisites

Before installing Kibana, make sure you have the following:

  • A running Linux server (Ubuntu / Debian / CentOS / RHEL)
  • Elasticsearch installed and running
  • Root or sudo access

1. Install Kibana

I. On Debian/Ubuntu
 

  1. Import the Elastic GPG key:

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -

  2. Add the repository:

# echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list

  3. Update and install:


# apt update

# apt install kibana

II. On RHEL/CentOS Linux

  1. Create repo file:

# tee /etc/yum.repos.d/elastic.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

  2. Install Kibana:

# yum install kibana

2. Configure Kibana

The configuration file is located at:

/etc/kibana/kibana.yml

Edit the file:

# vim /etc/kibana/kibana.yml

Update or add the following:
 

# Server settings
server.port: 5601
server.host: "0.0.0.0"

# Elasticsearch connection
elasticsearch.hosts: ["http://localhost:9200"]

# Logging
logging.level: info

# Security (only if Elasticsearch security is enabled)
# elasticsearch.username: "kibana_system"
# elasticsearch.password: "your_password_here"

Optional: Set basic auth or SSL settings if needed.

 

3. Start and Enable Kibana

# systemctl enable kibana

# systemctl start kibana

Check status:

# systemctl status kibana

 

4. Access Kibana Web Interface

Open your browser and go to:

http://<your-server-ip>:5601

You’ll be welcomed with the Kibana dashboard.

5. Import and Visualize Logs

Option A: Use Filebeat to Send Logs

Install Filebeat on the server with logs and configure it to send data to Elasticsearch. Kibana will then be able to visualize it.

# apt install filebeat

# filebeat modules enable system

# filebeat setup

# systemctl start filebeat

Option B: Ingest Logs via Logstash or Elasticsearch API

If you already have data in Elasticsearch, Kibana will automatically detect indices.
 

6. Create Index Pattern

  1. In Kibana, go to Stack Management -> Index Patterns
  2. Click Create Index Pattern
  3. Enter the name (e.g., filebeat-*)
  4. Select the timestamp field (usually @timestamp)
  5. Save

Now Kibana knows how to query and visualize your data.

7. Create Visualizations and Dashboards

  1. Go to Visualize -> Create visualization
  2. Choose a type (bar, pie, line, etc.)
  3. Select an index pattern
  4. Configure metrics and buckets

You can then save visualizations and add them to dashboards.

8. Secure Kibana

  • Configure TLS/SSL for Kibana, Elasticsearch, and Logstash
  • Use additional Elastic Security features like RBAC (Role-Based Access Control) and SSO (Single Sign-On)
  • Put Kibana behind a reverse proxy (e.g., Nginx with Basic Auth, Apache, or HAProxy in front)

Example of a simple Nginx config snippet:

location / {
  proxy_pass http://localhost:5601;
  auth_basic "Restricted";
  auth_basic_user_file /etc/nginx/.htpasswd;
}
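The auth_basic_user_file referenced above has to be created first; one non-interactive way to generate an entry is with openssl (the user admin and password changeme are placeholders):

```shell
# Generate a basic-auth entry with an APR1 hash, which nginx's
# auth_basic understands, and write it to a sample file.
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > htpasswd.sample
cat htpasswd.sample
```

Copy the result to /etc/nginx/.htpasswd and reload nginx.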

 

What is Kibana used for and what can it do for you?

  • Log Monitoring → Visualize system and application logs in real time
  • Security Analytics → Detect anomalies, failed logins, suspicious activity
  • DevOps Dashboards → Track uptime, error rates, and system performance
  • SIEM → Use Elastic Security for threat detection

 

Once Kibana is installed on a server, you typically use it to visualize and explore data stored in Elasticsearch. Here’s a practical guide with sample usage scenarios:

Access Kibana

After installation, Kibana listens on port 5601 by default.

http://<your-server-ip>:5601

  • Open this URL in a browser.
  • You should see the Kibana dashboard.

Connect to Elasticsearch

Kibana automatically connects to your Elasticsearch instance if installed locally.
You can verify the connection:

GET /_cluster/health

  • Go to Dev Tools → Console in Kibana.
  • Run the above query to check cluster status.

Visualize Data

Kibana allows multiple types of visualizations:

  • Bar/line chart: trends over time.
  • Pie chart: distribution of values.
  • Data table: top IP addresses or most visited URLs.
  • Maps: geolocation of IP addresses.

Create Dashboards

  • Combine multiple visualizations in a Dashboard.
  • Useful for monitoring logs, metrics, or application performance.
  • Example: Create a dashboard with:

     

    • Requests per URL (bar chart)
    • Requests over time (line chart)
    • Top client IPs (data table)
    • Errors by type (pie chart)

Search & Query Logs

  • Use Discover to search logs interactively.
  • Example KQL query:

status:500 AND url:"/login"

This finds all requests to /login that returned a 500 error.

Set Alerts (Optional)

  • Kibana’s Alerts and Actions can trigger notifications (email, Slack, etc.) when certain thresholds are crossed.
  • Example: alert if error responses exceed 100 in 5 minutes.



Sample Kibana dashboard
 


Kibana showing web traffic from connected servers by geographic location
 

Summary closing words (what we did)

  1. Install Kibana from the Elastic repo
  2. Configure it to connect to Elasticsearch
  3. Start and enable the service
  4. Access it via http://<ip>:5601
  5. Ingest log data
  6. Define an index pattern
  7. Create dashboards and visualizations

The idea of this article was just to introduce you to Elasticsearch, Kibana, Filebeat, and Logstash, not to give you a fully fine-tuned install guide. The usual way to deploy Kibana on multiple servers is of course a dockerized container version of it. There is plenty more to learn about using Kibana to monitor your machines, but the simplest use is to access the locally visible Kibana on a server and check the status of processes on the host instead of logging in via SSH.



How to Optimize Debian Linux on old Computers to Get improved overall Speed, Performance and Stability

Tuesday, December 30th, 2025


 

How to optimize Debian 12.12 Linux to run responsively on old ThinkPad laptops, such as a ThinkPad R61 from 2008, with Window Maker, zram, an SSD, and more.

Old computers aren’t obsolete, and they are well worth keeping if you don’t want to spend money on extra hardware.

With the right setup, Debian Linux can run smoothly on hardware that’s more than a decade old. This article walks through a real-world, proven configuration using a classic ThinkPad R61 (Core 2 Duo, 4 GB RAM, SSD), but the principles apply to many older PCs as well.

Why use Debian Linux on old hardware?

Debian Stable is ideal for old hardware because it offers:

  • Low baseline resource usage
  • Long-term stability
  • Minimal background activity
  • Excellent support for lightweight desktops
  • A flexible, well-organized system that is relatively easy to tune

Paired with a minimal window manager, Debian easily outperforms many “lightweight” distros that still ship heavy defaults.

Hardware Baseline PC setup

Test system:

  • Laptop: ThinkPad R61
  • CPU: Intel Core 2 Duo
  • RAM: 4 GB
  • Storage: SATA SSD
  • Graphics: Intel X3100 / NVIDIA NVS 140M
  • Desktop: Window Maker

This is a common configuration for late-2000s business laptops.

1. Desktop Environment: Keep It Simple

Heavy desktop environments are the main factor slowing down an old PC.
Where possible, don’t use a desktop environment at all and stick to the console.

Recommended:

  • Window Maker (used by myself)
  • Openbox
  • Fluxbox
  • IceWM

Avoid:

  • GNOME
  • KDE Plasma
  • Cinnamon

Window Maker is especially effective: no compositing, no animations, minimal memory usage.

2. Terminal Choice Matters

For console-based applications (games, tools, system utilities), use a terminal that correctly reports its size. Let’s say you use xterm:

$ xterm

You can force a usable terminal size like this:

$ xterm -geometry 80x32 &

This avoids common issues with console applications failing due to incorrect terminal dimensions.

Install urxvt (best choice for terminal productivity)

Open a terminal and run:

# apt update
# apt install rxvt-unicode

Optional (if you want tabbed terminal use suckless):

# apt install suckless-tools

  • rxvt-unicode-256color → main terminal (not available in Debian; has to be installed from a third-party source)
  • rxvt-unicode-256color-perl → Perl extensions (tabs, URL click, etc.; also third-party on Debian)
  • suckless-tools → includes tabbed, usable as an alternative for tabs

a. Configure .Xresources

Create or edit ~/.Xresources:

$ vim ~/.Xresources

Example of a nice setup with tabs, transparency, and fonts:

! Basic appearance
! URxvt.font: xft:FiraCode Nerd Font Mono:size=12
 URxvt.background: [90]#1c1c1c
 URxvt.foreground: #c0c0c0
! URxvt.cursorColor: #ff5555
! URxvt.saveLines: 10000
! URxvt.scrollBar: false
! URxvt.borderLess: true

! Enable tabs using built-in tabbed extension
URxvt.perl-ext-common: default,tabbed

! Tab colors
URxvt.tabbed.tabbar-fg: 15
URxvt.tabbed.tabbar-bg: 0
URxvt.tabbed.tab-fg: 2
URxvt.tabbed.tab-bg: 8

! Keybindings for tabs
! Ctrl+Shift+N → new tab
URxvt.keysym.Control-Shift-N: perl:tabbed:new_tab
! Ctrl+Shift+W → close tab
URxvt.keysym.Control-Shift-W: perl:tabbed:close_tab
! Ctrl+Tab → next tab
URxvt.keysym.Control-Tab: perl:tabbed:next_tab
! Ctrl+Shift+Tab → previous tab
URxvt.keysym.Control-Shift-Tab: perl:tabbed:prev_tab
! Tabs keybindings
URxvt.keysym.Control-N: perl:tabbed:new_tab
URxvt.keysym.Control-W: perl:tabbed:close_tab

 

b. Apply .Xresources changes

Run:

$ xrdb ~/.Xresources

Then launch urxvt:

$ urxvt

  • Ctrl+Shift+N → new tab
  • Ctrl+Shift+W → close tab

c. Optional: Make it even cooler

  1. Install powerline fonts or Nerd Fonts (for fancy prompt icons):

# apt install fonts-firacode

  2. Enable URL clicking and clipboard (already enabled above)

  3. Combine with tmux for extra tabs/panes, session management, and more shortcuts.

3. Retain Only the Last 500 MB of journald Logs

Limit the journal to the most recent 500 MB:

# journalctl --vacuum-size=500M

This is extremely useful, as failing services can sometimes generate a ton of unnecessary logs and flood the old machine’s hard disk.

4. Reduce journal memory footprint

# vim /etc/systemd/journald.conf

Set:

Storage=volatile
RuntimeMaxUse=50M

# systemctl restart systemd-journald

5. Trim services boot times

# systemd-analyze blame
# systemd-analyze critical-chain

This tells you which services slow down your boot the most.

6. Disable Unnecessary Services

Old systems benefit massively from disabling unused background services.

Check what’s enabled:

# systemctl list-unit-files --state=enabled

Common candidates to disable (if not needed):

# systemctl disable bluetooth
# systemctl disable cups
# systemctl disable avahi-daemon
# systemctl disable ModemManager

Each disabled service saves RAM and CPU cycles.

7. Dirty Page Tuning (Reduces Freezes)

Defaults favor servers, not laptops.

Edit:

# vim /etc/sysctl.conf

Add:

vm.dirty_background_ratio=5
vm.dirty_ratio=10

This forces writeback earlier, preventing sudden stalls.

8. Memory Tuning: zram Done Right

Does zram make sense with 4 GB RAM and an SSD?

Yes, it can, but only in moderation.

zram compresses memory in RAM and acts as fast swap. On a Core 2 Duo, compression overhead is small and the benefit is smoother multitasking.

Recommended zram configuration

Install zram-tools deb package:

# apt install zram-tools

Edit:

# vim /etc/default/zramswap

Set:

PERCENT=15

This creates ~600 MB of compressed swap — enough to absorb memory spikes without wasting RAM.
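To see roughly what PERCENT=15 translates to on a given machine, the figure can be derived from the MemTotal line of /proc/meminfo (a sketch; zram-tools may round slightly differently):

```shell
# Estimate the compressed swap size zramswap creates with PERCENT=15.
# MemTotal in /proc/meminfo is in kB, hence the /1024 to get MB.
awk '/^MemTotal/ { printf "zram swap ~ %d MB\n", $2 * 15 / 100 / 1024 }' /proc/meminfo
```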

9. Keep Disk Swap (But Small)

Even with zram, disk swap is useful as a fallback.

Recommended:

  • 1–2 GB swap on SSD
  • zram should have higher priority than disk swap

Check:

# swapon --show

10. Swappiness and Cache Pressure

Tune the kernel to prefer RAM and zram first:

# vim /etc/sysctl.conf

Add:

vm.swappiness=10
vm.vfs_cache_pressure=50

Apply:

# sysctl -p

This prevents early swapping and keeps the system responsive.
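After applying with sysctl -p, the live values can be read back from /proc to confirm they took effect:

```shell
# Read-only check of the active VM tuning values.
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure
```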

11. CPU Governor: A Hidden Performance Win

Older ThinkPads often run conservative CPU governors.

Install tools:

# apt install cpufrequtils

Set a balanced governor:

# echo 'GOVERNOR="ondemand"' | sudo tee /etc/default/cpufrequtils

# systemctl restart cpufrequtils

This allows the CPU to ramp up quickly when needed.

12. Power and Thermal Management (ThinkPad-Specific)

Install TLP:

# apt install tlp
# systemctl enable tlp
# systemctl start tlp

TLP improves:

  • Battery life
  • Thermal behavior
  • SSD longevity

Defaults are usually perfect – no heavy tuning required.

13. Disable Watchdogs (If You Don’t Debug Kernels)

Watchdogs waste cycles on old CPUs.

Check:

# lsmod | grep watchdog

Disable:

# vim /etc/modprobe.d/blacklist.conf

Add:

blacklist iTCO_wdt
blacklist iTCO_vendor_support

Reboot.

14. Reduce systemd Noise

systemd logs aggressively by default.

Edit:

# vim /etc/systemd/journald.conf

Set:

Storage=volatile
RuntimeMaxUse=50M

Then:

# systemctl restart systemd-journald

Less disk I/O, faster boots.

15. Use tmpfs for caching and Volatile Junk

Put garbage in RAM, not SSD.

Edit:

# vim /etc/fstab

Add:

tmpfs /tmp tmpfs noatime,nosuid,nodev,mode=1777,size=256M 0 0

Optional:

tmpfs /var/tmp tmpfs noatime,nosuid,nodev,size=128M 0 0
# mount -a

16. IRQ Balance: Disable It (it can slow the machine down)

On single-socket old laptops, irqbalance can hurt.

Disable:

# systemctl disable irqbalance

Test performance; re-enable if needed.

17. Reduce systemd Timeout Delays

Old laptops often wait forever on dead hardware.

Edit:

# vim /etc/systemd/system.conf

Set:

DefaultTimeoutStartSec=10s
DefaultTimeoutStopSec=10s

18. Strip Kernel Modules You Don’t Use

If you don’t use:

  • FireWire
  • Bluetooth
  • Webcam

Blacklist them:

# vim /etc/modprobe.d/blacklist-extra.conf

Example:

blacklist firewire_ohci
blacklist firewire_core
blacklist uvcvideo
blacklist bluetooth

 

Faster boot, fewer interrupts.

19. X11 Performance Tweaks (Intel Graphics)

Create:

# vim /etc/X11/xorg.conf.d/20-intel.conf

Add:

Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "TearFree" "false"
Option "AccelMethod" "sna"
EndSection

20. Disable IPv6 (if not used)

Saves a little RAM and startup time.

Edit:

# vim /etc/sysctl.conf

Add:

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1

21. Lower Kernel Log Level Verbosity

Stop kernel spam.

# dmesg -n 3

Make permanent:

# vim /etc/sysctl.conf

Add:

kernel.printk=3 4 1 3

22. Scheduler Latency (Advanced)

For desktop interactivity:

# vim /etc/sysctl.conf

Add:

kernel.sched_autogroup_enabled=1

Helps UI responsiveness under load.

23. Kill Browser Bloat (Biggest Win)

For Firefox ESR:

  • Disable telemetry
  • Enable tab unloading

browser.sessionstore.interval = 300000

No kernel tweak beats this.

 

a. Enable tab unloading (automatic tab discard)

  1. Open Firefox ESR.
  2. Type about:config in the address bar.
  3. Accept the warning: “This might void your warranty.”
  4. Search for the following preference:

browser.tabs.unloadOnLowMemory

  • Default: false
  • Set to: true

This enables Firefox to unload inactive tabs automatically when memory is low.

b. Optional tuning

Some other preferences you can tweak:

  • browser.tabs.maxSuspendedTabs → maximum number of tabs that can be suspended (suggested: 10–20)
  • browser.tabs.autoHide → auto-hide tabs while suspended, on older ESR versions (suggested: true)
  • browser.tabs.loadInBackground → background tabs load in a suspended state (suggested: true)
  • browser.sessionstore.interval → how often the session is saved, in ms (suggested: 15000)

These may vary slightly depending on ESR version.

24. Graphics Considerations

ThinkPad R61 models typically have:

  • Intel X3100 → works well out of the box
  • NVIDIA NVS 140M → use nouveau driver

Recommendations:

  • Avoid proprietary legacy NVIDIA drivers
  • No compositing
  • Simple themes only

25. Strictly for Geeks: Build a Custom Kernel (Optional)

Only if you have plenty of time, a developer’s background, and maniacal tendencies 🙂

Benefits:

  • Smaller kernel
  • Faster boot
  • Fewer interrupts

Cost:

  • Maintenance burden

26. Application Choices Matter More Than Tweaks

Keep in mind that application choices matter more than tweaks.
Even the best-tuned system can be ruined by heavy applications.

Recommended software:

  • Browser: Firefox ESR
  • File manager: PCManFM
  • Terminal: xterm, rxvt
  • Editor: nano, geany

Limit browser tabs and disable unnecessary extensions.

27. Things Not to Do

Avoid:

  • Huge zram sizes (50%+)
  • Disabling swap entirely
  • Aggressive kernel “performance hacks”
  • Heavy desktop effects, if you choose to run MATE or a similar GUI environment

Stability beats micro-optimizations.

Final Recommended Configuration

For a ThinkPad R61 with 4 GB RAM and an SSD, perhaps the best Linux configuration is:

  • Debian Stable
  • Window Maker
  • zram: 15%
  • SSD swap: 1–2 GB
  • swappiness: 10
  • TLP enabled
  • No compositor
     

This setup would deliver:

  • Smooth multitasking
  • No UI lag
  • Minimal CPU overhead
  • Long-term stability

Conclusion

Old PCs don’t necessarily need to be fast, but they can be made to work noticeably faster within their limits. Used properly, with the right software and without today’s eye-candy nonsense, they can still be fully functional.

With Debian, a lightweight window manager, and sensible memory tuning, even 15-year-old hardware remains useful for common daily tasks, and it becomes not only useful but fun, especially if you are a sysadmin or a developer who mostly needs a console and a browser.

It gives you another perspective on how to do your computing in a simpler and more minimalistic way.

Of course, do not expect an old PC to be the perfect station for YouTube maniacs, heavy gamers, or complete newbies who don’t respect its limited resources and don’t want to take a somewhat experimental approach to computing.

Anyway, implementing the above-mentioned tweaks will reward you with reliability and simplicity, something modern, over-complicated OSes and apps often lack.

Enjoy and Happy Christmas 2025 and Happy New Year 2026 soon ! 🙂

Why SSH Login Is Slow on Linux and How to Fix It

Thursday, December 18th, 2025


Slow SSH logins are one of those problems that don’t look serious at first — until you realize every connection takes 20–30 seconds to respond. The shell eventually appears, but the delay is long enough to break automation, frustrate users, and make admins suspicious of deeper system issues.

This article walks through some common causes of slow SSH logins and how to diagnose them efficiently on Linux servers.

1. DNS Lookups: The Most Common Culprit

By default, sshd performs a reverse DNS lookup on the connecting IP address. If DNS is misconfigured or unreachable, SSH will wait.

How to Test

From the server (measure how many seconds an SSH login to the machine takes):
 

$ time ssh localhost

If localhost logins are instant but remote logins are slow, suspect DNS.

Check /etc/ssh/sshd_config:

UseDNS yes

Fix

Disable DNS lookups (at least temporarily, to test):

UseDNS no

Then restart SSH:

# systemctl restart sshd

Note: This does not reduce security in most environments and is safe for the majority of servers.
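To measure the reverse lookup sshd performs on its own, getent exercises the same resolver path:

```shell
# Time the reverse lookup sshd does on a connecting address.
# 127.0.0.1 resolves instantly from /etc/hosts; substitute a real
# client IP to test your actual DNS path. A multi-second pause
# here is the UseDNS delay.
time getent hosts 127.0.0.1
```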

2. Broken or Slow PAM Modules

PAM (Pluggable Authentication Modules) can introduce delays — especially if modules depend on:

  • LDAP
  • Kerberos
  • Network home directories
  • Smart card services

Debug with Verbose SSH

From the client:

$ ssh -vvv user@remote-server

Look for pauses during:

debug1: Authentications that can continue:

Test PAM Delay

Temporarily disable PAM in /etc/ssh/sshd_config:

UsePAM no

Restart SSH and test again.
If login becomes instant, inspect /etc/pam.d/sshd.

3. Entropy Shortage on Virtual Machines

Older kernels or low-activity VMs can run out of entropy, causing SSH key operations to block.

Check Entropy Level

# cat /proc/sys/kernel/random/entropy_avail

Values below 100 may cause delays.

Fix

Install an entropy daemon (if on Deb based distro):

# apt install haveged

or on CentOS / RHEL / Fedora

# yum install rng-tools

Then start the service:

# systemctl enable --now haveged

4. GSSAPI Authentication Delay

SSH attempts Kerberos authentication even when not used.

Symptom

Delay occurs before password prompt appears.

Fix

Edit /etc/ssh/sshd_config:

GSSAPIAuthentication no

GSSAPICleanupCredentials no

Restart SSH afterward.

5. Slow Home Directory or Shell Initialization

Sometimes SSH is fast, but the shell is slow.

Test with a Minimal Shell

$ ssh user@server /bin/sh

If this is instant, check:

  • .bashrc
  • .profile
  • .bash_logout

Common mistakes:

  • Network calls (curl, wget)
  • Mounted NFS home directories
  • Broken PATH exports
  • Commands waiting on unavailable resources
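A quick local comparison isolates the rc files without SSH in the picture:

```shell
# Compare a bare shell with one that reads your init files.
# A large difference means the delay lives in .bashrc/.profile,
# not in SSH itself.
time bash --norc --noprofile -c 'exit'
time bash -ic 'exit'
```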

6. Logging and Timing the Login Process

Enable SSH debug logging in /etc/ssh/sshd_config:

LogLevel DEBUG

Then watch logs:

# journalctl -u sshd -f

or:

# tail -f /var/log/auth.log

This allows you to see exactly where the delay happens.

7. A Systematic Troubleshooting Checklist

  1. Disable DNS lookups (UseDNS no)
  2. Disable GSSAPI
  3. Test PAM
  4. Check entropy
  5. Test minimal shell
  6. Review auth logs

In practice, 90% of slow SSH issues are DNS or PAM related.

Conclusion

But wait, there might be much more behind SSH slowness, such as a misconfigured LDAP server or other infrastructure in the middle.
Slow SSH logins are rarely “just SSH”, and while this guide should help with sporadic single-server issues, when the problem appears across a complex infrastructure with multiple SSH servers it is almost always a symptom of:

  • Network misconfiguration
  • Over-engineered authentication
  • Broken assumptions about system dependencies

Approaching the problem methodically saves hours of guesswork and restores SSH to what it is supposed to be: something that works without glitches.

How to Harden a Linux Server in 2025 – Practical Steps for Sysadmins to protect against hackers and bots

Thursday, December 11th, 2025


Securing a Linux server has never been more important than it is these days.
With automated attacks, AI-driven exploits, and increasingly complex infrastructure, even a small misconfiguration can lead to a serious breach.
The good news is that you don’t have to wait to get bumped by a random script kiddie: a few practical, pretty much standard steps can drastically increase your server’s security.

Below is a straightforward, battle-tested hardening guide suitable for Debian, Ubuntu, CentOS, AlmaLinux, and most modern distributions.

1. Keep the System Updated (But Safely)

Outdated packages remain the #1 cause of server compromises.
On Debian/Ubuntu:

# apt update && apt upgrade -y

# apt install unattended-upgrades

On RHEL-based systems:
 

# dnf update -y

# dnf install dnf-automatic

Enable security-only auto-updates where possible. Full auto-updates may break production apps, so use them carefully.

2. Create a Non-Root User and Disable Direct Root Login
 

Attackers constantly brute-force the “root” account. Don’t give them the chance.
 

# adduser sysadmin

# usermod -aG sudo sysadmin

Then edit SSH:

# vim /etc/ssh/sshd_config

Set:

PermitRootLogin no

PasswordAuthentication no

And restart:

# systemctl restart sshd


Use SSH keys only.
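A quick way to get there; the key path below is a demo name (the conventional default is ~/.ssh/id_ed25519), and in production you should set a passphrase instead of an empty one:

```shell
# Make sure the .ssh directory exists with safe permissions
install -d -m 700 "$HOME/.ssh"
# Generate a modern ed25519 key pair (demo path, empty passphrase for brevity)
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_demo" -N "" -q
# Push the public key to the server BEFORE disabling password logins:
#   ssh-copy-id -i ~/.ssh/id_ed25519_demo.pub sysadmin@your.server
```

Only after confirming a key-based login works should you restart sshd with PasswordAuthentication no.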

3. Install a Firewall and Block Everything by Default

UFW (Debian/Ubuntu):

# ufw default deny incoming

# ufw default allow outgoing

# ufw allow ssh

# ufw enable

Firewalld (RHEL/AlmaLinux):

# systemctl enable --now firewalld

# firewall-cmd --permanent --add-service=ssh

# firewall-cmd --reload

Turn off any unneeded ports immediately.

4. Protect SSH with Fail2Ban

Fail2Ban watches log files for suspicious authentication attempts and blocks offenders.

# apt install fail2ban -y

or

# dnf install fail2ban -y

Enable:

# systemctl enable --now fail2ban

To harden the SSH jail, add to /etc/fail2ban/jail.local:

[sshd]

enabled = true

maxretry = 5

bantime = 1h

findtime = 10m
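The jail settings above can be staged in a local file first and then moved into place; jail.local overrides jail.conf and survives package upgrades:

```shell
# Stage the hardened sshd jail locally, then install it
cat > ./jail.local <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
findtime = 10m
EOF
# install -m 644 ./jail.local /etc/fail2ban/jail.local
# systemctl restart fail2ban
# fail2ban-client status sshd     # verify the jail is active and banning
```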

5. Enable Kernel Hardening

Install sysctl rules that protect against common attacks:

Create /etc/sysctl.d/99-hardening.conf:

kernel.kptr_restrict = 2

kernel.sysrq = 0

net.ipv4.conf.all.rp_filter = 1

net.ipv4.tcp_synack_retries = 2

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.all.log_martians = 1

Apply:

# sysctl --system
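To confirm the values actually took effect, you can read them back straight from /proc/sys (the same keys as in the conf file above):

```shell
# Read back the hardening values directly from /proc/sys
for key in kernel/kptr_restrict kernel/sysrq net/ipv4/conf/all/rp_filter; do
  printf '%s = %s\n' "$key" "$(cat /proc/sys/$key 2>/dev/null)"
done
```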

6. Install and Configure AppArmor or SELinux

Mandatory Access Control significantly limits damage if a service gets compromised.

  • Ubuntu / Debian uses AppArmor by default — ensure it's enabled.
  • RHEL, AlmaLinux, Rocky use SELinux — keep it in enforcing mode unless absolutely necessary.

Check SELinux:

# getenforce

You want:

Enforcing. Be prepared, though, to configure each of your machine's services so they run correctly with SELinux enabled.

7. Scan the System with Lynis

Lynis is the best open-source Linux security auditing tool.

# apt install lynis

# lynis audit system

It provides a security score and actionable suggestions.

8. Use 2FA for SSH (Optional but Highly Recommended)

Use Two Factor Authentication:

a. Freely with the OATH Toolkit (you can read how in my previous article on how to set up free-software 2FA authentication on Linux)

or

b. Install Google Authenticator:

# apt install libpam-google-authenticator

# google-authenticator

Enable in /etc/pam.d/sshd:

auth required pam_google_authenticator.so

And in SSH config:

ChallengeResponseAuthentication yes

(on newer OpenSSH versions this option is named KbdInteractiveAuthentication)

Restart SSH.

9. Separate Services Using Containers or Systemd Isolation

Even simple servers can benefit from isolation.

Systemd sandbox options:

ProtectSystem=full

ProtectHome=true

ProtectKernelTunables=true

PrivateTmp=true

Add these inside a service file under:

/etc/systemd/system/yourservice.service

It prevents processes from touching parts of the system they shouldn’t.
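A cleaner variant is a drop-in directory next to the unit, so the vendor unit file stays untouched (yourservice is the placeholder name from above; the directives must sit in the [Service] section to take effect):

```shell
# Stage a hardening drop-in for the placeholder service "yourservice"
mkdir -p ./yourservice.service.d
cat > ./yourservice.service.d/hardening.conf <<'EOF'
[Service]
ProtectSystem=full
ProtectHome=true
ProtectKernelTunables=true
PrivateTmp=true
EOF
# cp -r ./yourservice.service.d /etc/systemd/system/
# systemctl daemon-reload && systemctl restart yourservice
```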

10. Regular Backups Are Part of Security

A secure server with no backups is a disaster waiting to happen.

Use:

  • rsync
  • borgbackup
  • restic
  • Cloud object storage with versioning

Always encrypt backups and test restore procedures.

Conclusion

Hardening a Linux server in 2025 requires vigilance, good practices, and layered security. No single tool will protect your system — but when you combine SSH security, firewalls, Fail2Ban, kernel hardening, and backups, you eliminate the majority of attack vectors.

 

Speed Up Linux: 10 Underrated Hacks for Faster Workstations and Servers

Tuesday, December 9th, 2025

make-linux-faster-underrated-hacks-cute-tux-penguin-toolbox-logo

 

Most Linux “performance tuning guides” recycle the same tips: disable services, add RAM, install a lighter desktop … and then you end up with a machine that barely feels any quicker.

Below is a collection of practical, field-tested, sysadmin-grade hacks that actually change how responsive your system feels—on desktops, laptops, and even small VPS servers.

Most of these are rarely mentioned by (or simply unknown to) novice sysadmins, sysops, system engineers and DevOps folks, but time has proven them extremely effective.

1. Enable zram (compressed RAM) instead of traditional swap

Most distros ship with a slow swap partition on disk. Enabling zram keeps swap in memory and compresses it on the fly. On low/mid RAM systems, the difference is night and day.

On Debian/Ubuntu:

# apt install zram-tools

# systemctl enable --now zramswap.service

Expect snappier multitasking, fewer freezes, and far less disk thrashing.

2. Speed up SSH connections with ControlMaster multiplexing

If you SSH multiple times into the same server, you can cut connection time to almost zero using SSH socket multiplexing:

Add to ~/.ssh/config:

Host *

    ControlMaster auto

    ControlPath ~/.ssh/cm-%r@%h:%p

    ControlPersist 10m

Your next ssh/scp commands will feel instant.

 

3. Replace grep, find, and ls with faster modern tools

 

The classic GNU tools are fine—until you try the modern replacements:

  • ripgrep (rg) → insanely faster grep
  • fd → user-friendly, colored alternative to find
  • exa or lsd → modern ls with icons, colors, Git info (exa itself is no longer maintained; its fork eza is the continuation)

# apt install ripgrep fd-find exa

You’ll be surprised how often you use them.

4. Improve boot time via systemd-analyze

Run cmd:

# systemd-analyze blame

This shows what’s delaying your boot. Disable useless services like:
 

# systemctl disable bluetooth.service

# systemctl disable ModemManager

# systemctl disable cups

Great for lightweight servers and old laptops.

5. Make your terminal 2–3× faster with GPU rendering (Kitty/Alacritty)

Gnome Terminal and xterm are CPU-bound and choke when printing thousands of lines.

Switch to a GPU-rendered terminal:

  • Kitty
  • Alacritty

They scroll like butter – even on old ThinkPads.

6. Use eatmydata when installing packages

APT and DPKG spend a lot of time syncing packages safely to disk. If you're working in a Docker container, VM, or test machine where safety is less critical, use:

# apt install eatmydata

# eatmydata apt install PACKAGE

It cuts install time by 30–70% !

7. Enable TCP BBR congestion control (massive network speed boost)

Google’s BBR can double upload throughput on many servers.

Check if supported:

# sysctl net.ipv4.tcp_available_congestion_control

Enable:

# echo "net.core.default_qdisc=fq" | tee -a /etc/sysctl.conf

# echo "net.ipv4.tcp_congestion_control=bbr" | tee -a /etc/sysctl.conf

# sysctl -p

On VPS hosts with slow TCP stacks, this is a cheat code.

8. Reduce SSD wear and speed up disk with noatime

Every time a file is read, Linux updates its “access time.” Useless on most systems.

Edit /etc/fstab and change:

defaults

to:

defaults,noatime

Fewer writes + slightly better I/O performance.
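A cautious way to make the fstab edit is to rehearse it on a stand-in file first; a sketch (the device line below is synthetic, not read from your real fstab):

```shell
# Rehearse the fstab edit on a synthetic stand-in line
printf '/dev/sda1 / ext4 defaults 0 1\n' > ./fstab.test
sed -i 's/\bdefaults\b/defaults,noatime/' ./fstab.test
cat ./fstab.test
# When comfortable, make the same change in /etc/fstab (keep a backup!)
# and apply it without rebooting:
#   cp /etc/fstab /etc/fstab.bak
#   mount -o remount,noatime /
```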

9. Use htop + iotop + dstat to diagnose “mystery lag”

When a Linux system hangs, it’s rarely the CPU. It’s almost always:

  • I/O wait
  • swap usage
  • blocked processes
  • misbehaving snaps or flatpaks

Install the golden server stats trio:

# apt install htop iotop dstat

Check:

  • htop → load, processes, swap
  • iotop → who’s killing your disk
  • dstat -cdlmn → real-time performance overview

Using them, you can diagnose 90% of slowdowns in minutes.

10. Speed up your shell prompt dramatically

Fancy PS1 prompts (Git branches, Python envs, emojis) look nice but slow down your shell launch.

Fix it by using async Git status tools:

  • starship prompt
  • powerlevel10k for zsh

Both load instantly even in large repos.

11. Another underrated improvement (if not already done): put /home on its own storage

Put your $HOME on a separate partition or disk.

Why?

  • faster reinstalls
  • easier backups
  • safer experiments
  • fewer “oops, I nuked my system” moments caused by you or by anyone else with an account on the system

Experienced sysadmins (and well-run corporate servers) still do this for a reason.

Final Thoughts

Performance isn’t just about raw hardware—it's about removing the subtle bottlenecks Linux accumulates over time. With the changes above, even a 10-year-old laptop or low-tier VPS feels like a new machine.

How to Install and Troubleshoot Grafana Server on Linux: Complete Step-by-Step Tutorial

Saturday, October 25th, 2025

install-grafana-on-linux-logo

If you’ve ever set up monitoring for your servers or applications, you’ve probably heard about Grafana.
It’s the de facto open-source visualization and dashboard tool for time-series data — capable of turning raw metrics into beautiful and insightful charts.

In this article, I’ll walk you through how to install Grafana on Linux, configure it properly, and deal with common problems that many sysadmins hit along the way.
This guide is written with real-world experience, blending tips from the official documentation and years of self-hosting tinkering.

1. Prerequisites

You’ll need:

  • A Linux machine — Ubuntu 22.04 LTS, Debian 11+, or CentOS 8 / Rocky Linux 9.
  • A superuser or user with sudo privileges.
  • At least 512 MB of RAM and 1 CPU core (Grafana is lightweight).
  • Internet access to fetch the repository and packages.

Optionally:

  • A domain name (for example: grafana.yourdomain.com)
  • Nginx/Apache if you want to enable HTTPS later.

2. Update Your System

Before any installation, ensure your system packages are up to date:

# Debian/Ubuntu
# apt update && apt upgrade -y

# or

# dnf update -y # CentOS/RHEL/Rocky

3. Install Grafana (from the Official Repository)

Grafana provides an official APT and YUM repository. Always use it to get security and feature updates.

For Debian / Ubuntu:
 

# apt install -y software-properties-common apt-transport-https wget
# wget -q -O - https://packages.grafana.com/gpg.key | apt-key add -
# echo "deb https://packages.grafana.com/oss/deb stable main" | \
    tee /etc/apt/sources.list.d/grafana.list
# apt update
# apt install grafana -y


For CentOS / Rocky / RHEL:

# tee /etc/yum.repos.d/grafana.repo <<EOF
[grafana]
name=Grafana OSS
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
EOF
# dnf install grafana -y

 

4. Start and Enable the Service

Once installation completes, start and enable Grafana to run on boot:
 

# systemctl daemon-reload
# systemctl enable grafana-server
# systemctl start grafana-server

Check status:

# systemctl status grafana-server


You should see Active: active (running).

5. Access the Grafana Web Interface

Grafana listens on port 3000 by default.

Open your browser and visit:

http://<your_server_ip>:3000

Login with the default credentials:

Username: admin
Password: admin

You’ll be asked to change the password — do it immediately for security.

6. Configure Grafana Basics

Configuration file location:

/etc/grafana/grafana.ini

Common settings to edit:

  • Port:

http_port = 8080

  • Root URL (for reverse proxy):

root_url = https://grafana.yourdomain.com/

  • Database: (change from SQLite to MySQL/PostgreSQL if desired)
[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = supersecret_password582?255!#

Restart Grafana after making changes:

# systemctl restart grafana-server


7. Optional – Secure Grafana with HTTPS and Reverse Proxy

If Grafana is public-facing, always use HTTPS.

Install Nginx and Certbot:

# apt install nginx certbot python3-certbot-nginx -y
# certbot --nginx -d grafana.yourdomain.com

This automatically sets up SSL certificates and redirects traffic from HTTP → HTTPS.

8. Add a Data Source

Once logged in:

  1. Go to Connections → Data Sources → Add data source

  2. Choose Prometheus, InfluxDB, Loki, or your preferred backend

  3. Enter the connection URL (e.g., http://localhost:9090)

  4. Click “Save & Test”

If successful, you can start creating dashboards!

9. Troubleshooting Common Grafana Installation Issues

Even the smoothest installs sometimes go wrong.
Here are real-world fixes for typical problems sysadmins face:

Problem: Service won’t start
Likely cause: Misconfigured grafana.ini or a port already in use
Fix: Run # journalctl -u grafana-server -xe to see the logs; correct the syntax or change the port, then restart.

Problem: Port 3000 already in use
Likely cause: Another app (maybe a NodeJS app or Jenkins)
Fix: # lsof -i :3000 → stop or reassign the conflicting service.

Problem: Blank login page / dashboard not loading
Likely cause: Browser cache or reverse proxy misconfiguration
Fix: Clear the cache and check the Nginx proxy settings (proxy_pass should point to http://localhost:3000).

Problem: APT key errors (Ubuntu 22.04+)
Likely cause: apt-key is deprecated
Fix: Use the signed-by=/usr/share/keyrings/grafana.gpg method, as the Grafana docs suggest.

Problem: “Bad Gateway” via proxy
Likely cause: Missing header forwarding
Fix: In the Nginx config, ensure:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;

Problem: Can’t access Grafana from LAN
Likely cause: Firewall blocking the port (the most common reason)
Fix: # ufw allow 3000/tcp (or adjust firewalld).
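The header-forwarding fix can be rolled into one minimal vhost. A sketch, assuming the grafana.yourdomain.com name from earlier and a Debian-style sites-available layout (run certbot --nginx afterwards to add HTTPS):

```shell
# Stage a minimal reverse-proxy vhost for Grafana
cat > ./grafana.conf <<'EOF'
server {
    listen 80;
    server_name grafana.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
# install -m 644 ./grafana.conf /etc/nginx/sites-available/grafana.conf
# ln -s ../sites-available/grafana.conf /etc/nginx/sites-enabled/
# nginx -t && systemctl reload nginx
```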

 

10. Maintenance and Backup Tips
 

  • Back up your dashboard database frequently:
      – SQLite: /var/lib/grafana/grafana.db
      – MySQL/PostgreSQL: use your normal DB backup methods

  • Watch the logs regularly, e.g.:

# journalctl -u grafana-server -f

  • Upgrade Grafana safely:

# apt update && apt install grafana -y

(Always back up before upgrading!)

Conclusion

Grafana is a powerful, elegant way to visualize your system metrics, application logs, and alerts.
Installing it on Linux is straightforward — but knowing where it can fail and how to fix it is what separates a beginner from a seasoned sysadmin.

By following this guide, you not only get Grafana up and running, but also understand the moving parts behind the scenes — a perfect fit for any DIY server admin or DevOps engineer who loves keeping full control.

Optimizing Linux Server Performance Through Digital Minimalism and Running Services and System Cleanup

Friday, October 3rd, 2025

linux-logo-optimizing-linux-server-performance-digital-minimalism-software-cleanup

In today’s landscape of bloated software stacks, automated dependency chains, and background services that consume memory and CPU without notice, Linux system administrators and enthusiasts alike benefit greatly from embracing digital minimalism: reducing what is set up on the server to the absolute minimum.

Digital minimalism in the context of Linux servers means removing what you don't need, disabling what you don't use, and optimizing what remains — all with the goal of increasing performance, improving security, and simplifying further maintenance.
In this article, we’ll walk through practical steps to declutter your Linux server, optimize resources, and regain control over what’s running and why.

1. Identify and Remove Unnecessary Packages

Over time, many systems accumulate unused packages — either from experiments, dependency installations, or unnecessary defaults.

On Debian/Ubuntu

Find orphaned packages:
 

# apt autoremove --dry-run


Remove unnecessary packages:
 

# apt autoremove
# apt purge <package-name>


List large installed packages:

# dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -n 20


On RHEL/CentOS/AlmaLinux:

Remove orphaned packages:

# dnf autoremove

List packages sorted by size:

# rpm -qa --qf '%{SIZE}\t%{NAME}\n' | sort -n | tail -n 20


2. Audit and Disable Unused Services
 

Every running service consumes memory, CPU cycles, and opens potential attack surfaces.

List enabled services:
 

# systemctl list-unit-files --type=service --state=enabled

See currently running services:

# systemctl --type=service --state=running

Put some good effort into reviewing them, and disable everything unnecessary.

 

Disable unneeded services :

# systemctl disable --now bluetooth.service
# systemctl disable --now cups.service
# systemctl disable --now ModemManager.service

And so on

Useful services to disable (if unused):
 

  • cups.service (printer daemon) → disable on headless servers
  • bluetooth.service (Bluetooth stack) → disable on servers without Bluetooth
  • avahi-daemon (mDNS/Zeroconf) → not needed on most servers
  • ModemManager (modem management) → disable if not using 3G/4G cards
  • NetworkManager (dynamic network configuration) → prefer systemd-networkd for static setups


Simple Shell Script to List & Review Services
 

#!/bin/bash
echo "Enabled services:"
systemctl list-unit-files --state=enabled | grep service
echo ""
echo "Running services:"
systemctl --type=service --state=running

3. Optimize Startup and Boot Time

Analyze system boot performance:

# systemd-analyze

View which services take the longest:

# systemd-analyze blame
min 25.852s certbot.service
5min 20.466s logrotate.service
1min 29.748s plocate-updatedb.service
54.595s php5.6-fpm.service
43.445s systemd-logind.service
42.837s e2scrub_reap.service
37.915s apt-daily.service
35.604s mariadb.service
31.509s man-db.service
27.405s systemd-journal-flush.service
18.357s ifupdown-pre.service
14.672s dev-xvda2.device
13.523s rc-local.service
11.024s dpkg-db-backup.service
9.871s systemd-sysusers.service
...

 

Disable or mask long-running services that are not essential.


Why is masking services important?

Simply because after some subsequent update, an unwanted service daemon might otherwise be re-enabled and start up again at boot.

Example:
 

# systemctl mask lvm2-monitor.service


4. Reduce Memory Usage (Especially on Low-RAM VPS)
 

Monitor memory usage:

# free -h
# top
# htop

Use lightweight alternatives:

  • Web server: Apache → Nginx / Caddy / Lighttpd
  • Database: MySQL → MariaDB / SQLite (if local)
  • Syslog: rsyslog → busybox syslog / systemd journal
  • Shell: bash → dash / ash
  • File manager: GNOME Files → mc / ranger (CLI)


5. Configure Swap (Only If Needed)
 

Having too much or too little swap can affect performance.


Check if swap is active:

# swapon --show


Create swap file (if needed):

# fallocate -l 1G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile

Add to /etc/fstab for persistence:

/swapfile none swap sw 0 0

6. Clean Up Cron Jobs and Timers

Old scheduled tasks can silently run in the background and consume resources.

List user cron jobs:

crontab -l

Check system-wide cron jobs:

# ls /etc/cron.*
# ls -al /var/spool/cron/*


List systemd timers:

# systemctl list-timers


Disable any unneeded timers or outdated cron entries.

7. Optimize Logging and Log Rotation

Logs are essential but can grow large and fill up disk space quickly.

Check log size:

# du -sh /var/log/*
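The same check, sorted by size and including the systemd journal (the 200M vacuum size is just an example value):

```shell
# Show the ten largest entries under /var/log, then the journal's disk use
du -sh /var/log/* 2>/dev/null | sort -rh | head -n 10
journalctl --disk-usage 2>/dev/null || true
# Cap the persistent journal if it has grown too big, e.g.:
#   journalctl --vacuum-size=200M
```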

Force logrotate:
 

# logrotate -f /etc/logrotate.conf

Edit /etc/logrotate.conf or specific files in /etc/logrotate.d/* to reduce retention if needed.

8. Check for Zombie Processes and Old Users

Old users and zombie processes can indicate neglected cleanup, or that the server has been cracked (hacked).

List users:
 

# cut -d: -f1 /etc/passwd

Remove unused accounts:
 

# userdel -r username


Check for zombie processes:
 

# ps aux | awk '{ if ($8 == "Z") print $0; }'

9. Disable IPv6 (if not used)

IPv6 can add unnecessary complexity and attack surface if you’re not using it.

To disable IPv6 temporarily:

# sysctl -w net.ipv6.conf.all.disable_ipv6=1
# sysctl -w net.ipv6.conf.default.disable_ipv6=1

To disable permanently, add to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1


10. Final Thoughts: Less Is More

Digital minimalism is not just a personal tech trend — it's a philosophy of clarity, performance, and security. Every running process is a potential vulnerability. Every megabyte of RAM consumed by a useless service is wasted capacity. Every package installed increases the system’s complexity.

By regularly auditing, pruning, and simplifying your Linux server, you not only improve its performance and reliability, but you also reduce future maintenance headaches.

Minimalism = Maintainability.