Archive for January, 2026

Installing Freedoom, Freedoom Phase 2, and Other DOOMs on GNU / Linux

Thursday, January 29th, 2026

freedoom-game-on-linux-shot-killed-monster
You think the good old DOOM game is dead? Think again.

From graphing calculators and printers to ATMs, traffic signs, refrigerators, and even experimental brain-computer interfaces, DOOM has proven itself less a game and more a force of nature. For decades, developers and hackers have ported it onto devices never meant for fast-paced demon slaying, simply because they could. This strange tradition says as much about DOOM’s elegant design as it does about the creativity – and stubbornness – of the open-source community. Perhaps as a natural extension of that tradition, people from the free software realm decided to create a playable and free remake of it.

If you want classic DOOM action on Linux without proprietary game data, Freedoom is the answer.
It’s free, open source, and designed to be fully compatible with the original DOOM engine, while still delivering monsters to kill, shotguns, and extremely questionable life choices.

This article walks you through installing Freedoom Phase 1 and Freedoom Phase 2, and running them with popular DOOM source ports on Linux such as gzdoom, chocolate-doom, and freedm.

What Is Freedoom (and Why Should You Care)?

Freedoom is a free replacement for the original DOOM game data (WAD files).

Freedoom comes in two main flavors:

  • Freedoom (Phase 1) – Doom-compatible (monsters, maps, weapons)
  • Freedoom (Phase 2) – Doom II-compatible (bigger maps, more enemies, more chaos)

Think of them as DOOM and DOOM II, but liberated from copyright hell.

1. Install Freedoom on Linux

On most Linux distributions, Freedoom is available directly from the package manager.

Debian / Ubuntu / Linux Mint

# apt install freedoom

freedoom2-screenshot-linux-phase2

This installs:

  • freedoom1.wad (Phase 1)
  • freedoom2.wad (Phase 2)

You now officially own free DOOM. Congratulations.

$ freedoom1

$ freedoom2

 

freedoom2-linux-game-play-screenshot-with-doom-monsters-free-wad


 

2. Choose a DOOM Engine (a.k.a. Source Port)

Freedoom doesn’t run by itself – you need a DOOM engine to play it. Luckily, Linux has lots of excellent options.

2.1. Chocolate Doom (Classic Experience)

Chocolate Doom aims to recreate exactly how DOOM felt in the 1990s. Same physics, same limits, same pixelated glory.

Doom engines closely compatible with vanilla Doom:

Chocolate Doom aims to accurately reproduce the original DOS version of Doom and other games based on the Doom engine in a form that can be run on modern computers. Unlike most modern Doom engines, Chocolate Doom is not derived from the Boom source port and does not inherit its features (or bugs).

The chocolate-doom package contains:

  • Chocolate Doom, a port of Id Software's "Doom" (1993)
  • Chocolate Heretic, a port of Raven Software's "Heretic" (1994)
  • Chocolate Hexen, a port of Raven Software's "Hexen" (1995)
  • Chocolate Strife, a recreation of Rogue Entertainment's "Strife" (1996)

These games are designed to behave as similarly to the original DOS versions as possible.

Chocolate Doom supports all flavors of Doom, including The Ultimate Doom, Doom 2 and Final Doom, as well as Chex(R) Quest, HacX, Freedoom: Phase 1 and Phase 2, and FreeDM.

All Chocolate game engines require game data to play. For Chocolate Doom, free game data is available in the freedoom package. Commercial game data for all four engines can be packaged using "game-data-packager".

To install it simply run:

# apt install chocolate-doom

Run Freedoom:

$ chocolate-doom -iwad /usr/share/games/doom/freedoom1.wad

Or Doom II–style:

$ chocolate-doom -iwad /usr/share/games/doom/freedoom2.wad

2.2. PrBoom+ (Classic but Smoother)

PrBoom+ – continued today by its actively maintained fork, dsda-doom – keeps the old-school feel but adds:

  • Better performance
  • Higher resolutions
  • Fewer headaches

To install it:

# apt install dsda-doom

Run dsda-doom:

$ dsda-doom -iwad freedoom2.wad

 

freedm-game-entry-screen

Best for:

  • Vanilla gameplay with modern comfort
  • Speedrunners
  • People who want fewer crashes than 1993 allowed

2.3. GZDoom (Modern, Fancy, Wild)

GZDoom is DOOM after drinking several energy drinks:

  • Hardware acceleration
  • Mouse look
  • Mods, lighting, shaders, chaos

Add the necessary repository and install GZDoom:

# rm -f /etc/apt/trusted.gpg.d/drdteam.gpg
# rm -f /etc/apt/sources.list.d/drdteam.list
# wget -P /etc/apt/sources.list.d https://debian.drdteam.org/drdteam-$(dpkg --print-architecture).sources
# apt update
# apt install gzdoom

 

Run Original Doom WAD:

$ gzdoom -iwad /var/www/files/doom2.wad

doom2-original-wad-gzdoom-doom-linux-screenshot-entry-screen

doom2-original-wad-gzdoom-doom-linux-screenshot-plasma-gun-shot

 

If you need the original Doom WADs, check out my earlier article DOOM 1, DOOM 2, DOOM 3 game wad files for download / Playing Doom on Debian Linux via FreeDoom open source doom engine.
Run Freedoom WAD:

$ gzdoom -iwad freedoom2.wad

Best for:

  • Mods
  • Modern controls
  • Turning DOOM into something unrecognizable (but fun)

3. Installing Other DOOM Engines

Once you have an engine, you can play nearly any DOOM-compatible game by pointing it at a WAD file.

Popular options include:

  • FreeDM – A free deathmatch-focused DOOM experience
  • Heretic / Hexen – Fantasy DOOM (if you own the WADs)
  • User-made megawads – Thousands exist, ranging from brilliant to unhinged

Most engines let you load extra content like this:

$ gzdoom -iwad freedoom2.wad -file coolmap.wad

freedoom-exit-screen-cool-doom-text-ascii

3.1. Installing the FreeDM Port


Install it with:

# apt install freedm
$ freedm

 

freedm-doom-linux-shot


3.2. Where Are the WAD Files Located?

Typically they are installed to:

/usr/share/games/doom/

Common files:

  • freedoom1.wad
  • freedoom2.wad
  • freedm.wad

You can copy them anywhere if you prefer a custom setup.
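If you do keep WADs in a custom location, note that most source ports (chocolate-doom, gzdoom, dsda-doom among them) also honor the DOOMWADDIR environment variable, so you can skip typing full -iwad paths. A minimal sketch, assuming the default Debian path:

```shell
# Point the source ports at the directory holding your WADs
export DOOMWADDIR=/usr/share/games/doom

# Now a bare filename is enough; the engine resolves it via DOOMWADDIR
gzdoom -iwad freedoom2.wad
```

Put the export line in ~/.bashrc to make it permanent.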

3.3. Keyboard, Mouse, and Sanity Settings

After launching:

  • Open Options → Controls
  • Enable mouse look (if using GZDoom)
  • Rebind keys (your pinky deserves mercy)
  • Increase resolution unless you’re feeling retro

Final Thoughts: DOOM, A Forever Classic

Freedoom proves that DOOM isn’t just a game—it’s a platform, a culture, and, for those born in the early-to-mid '90s, a time machine.

On Linux, you get:

  • Free game data
  • Multiple engines
  • Infinite mods
  • Zero DRM
  • Maximum demon destruction 🙂

Whether you want authentic 1993 vibes or modernized mayhem, Freedoom and Linux are a perfect match.
For old-school fans and maniacs, as well as IT tech guys with nostalgia and hard-core angry system administrators, this game is a real bliss:
it can be a true anti-stressor and a perfect means of anger management 🙂

Enjoy the Blast ! Happy Doom-Ing!

How to add Bulgarian language to GCompris kids' education software on Debian 12 GNU / Linux – installing GCompris via flatpak, a next-generation package distribution tool

Monday, January 26th, 2026

https://www.pc-freak.net/images/install-gcompris-on-linux-via-flatpak-package-distribution-sandboxed-framework.png

I have a small, currently 5.5-year-old kid, Dimitar, at home, and I'm doing my best to help him learn new things and advance in different areas of life and knowledge.

Today I decided to introduce him to GCompris, a KDE educational set of games for small children.
Once installed with a simple:

# apt install gcompris-qt


It works fine, and the default version installable from the Debian repositories is fine, except that it does not support Bulgarian.

That is not a nice surprise, as even less widely spoken languages such as Belarusian are there to select, but Bulgarian is missing from the default installable pack:

# dpkg -l |grep -i gcompris
ii gcompris-qt 3.1-2 amd64 educational games for small children
ii gcompris-qt-data 3.1-2 all data files for gcompris-qt

After some tampering, being unable to find a native .deb package of the latest release, and having no desire to move my Debian 12 (bookworm) desktop Linux laptop to Debian 13 (trixie) at the moment, I finally found a way to install it via flatpak.

For those who have never used the snap package ecosystem or flatpak, here is a short synopsis:

Flatpak is an open-source, next-generation framework for building, distributing, and running sandboxed desktop applications on Linux.
It enables developers to package apps once and run them on any Linux distribution by including all necessary dependencies.
Flatpak improves security by isolating applications from the host system. 

Flatpaks are containerized applications. They require more space because they bring along their own versions of their dependencies instead of relying on system versions.

While a single application will have greater space requirements, the base images [and potentially overlays] will get shared between them and each successive flatpak will potentially require less overhead.

The pros to using them is that flatpaks are often more current than their distribution packaged versions and they are somewhat isolated from the base system. The cons are that they're not managed with the rest of your system packages, can have slower start times, occasionally have permissions issues, and take up more space.

In some cases, flatpak is a better choice. Sometimes, it's not, and there's no way we can really determine that for you.
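Since flatpaks trade disk space for isolation, it pays to do occasional housekeeping. A small sketch of the routine commands (paths are the Flatpak defaults: /var/lib/flatpak for system installs, ~/.local/share/flatpak for --user ones):

```shell
# See what is installed and how big the shared runtimes have grown
flatpak list
du -sh /var/lib/flatpak 2>/dev/null || echo "no system-wide flatpaks yet"

# Update all apps and runtimes, then drop runtimes nothing depends on
flatpak update
flatpak uninstall --unused
```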

I tried my best to install the newest version of GCompris, which as of the time of writing this blog post is GCompris 25.1.

 # apt info flatpak|grep -i 'descr' -A8 -B8

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Recommends: ca-certificates, default-dbus-system-bus | dbus-system-bus, desktop-file-utils, hicolor-icon-theme, gtk-update-icon-cache, libpam-systemd, p11-kit, polkitd | policykit-1, shared-mime-info, xdg-desktop-portal (>= 1.6), xdg-desktop-portal-gtk (>= 1.6) | xdg-desktop-portal-backend, xdg-user-dirs
Suggests: avahi-daemon, malcontent-gui
Conflicts: xdg-app
Replaces: xdg-app
Homepage: https://flatpak.org/
Download-Size: 1,400 kB
APT-Manual-Installed: yes
APT-Sources: http://ftp.debian.org/debian bookworm/main amd64 Packages
Description: Application deployment framework for desktop apps
 Flatpak installs, manages and runs sandboxed desktop application bundles.
 Application bundles run partially isolated from the wider system, using
 containerization techniques such as namespaces to prevent direct access
 to system resources. Resources from outside the sandbox can be accessed
 via "portal" services, which are responsible for access control; for
 example, the Documents portal displays an "Open" dialog outside the
 sandbox, then allows the application to access only the selected file.

 

 

# apt install flatpak

# flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# flatpak install flathub org.kde.gcompris
# flatpak run org.kde.gcompris


If you experience sound glitches in GCompris on older laptops, install everything necessary for PipeWire:

# apt install pipewire pipewire-audio-client-libraries pipewire-pulse
 

Try to run it manually with (note: the variable must be on the same command, not separated by a semicolon, or it will not apply):

# env PULSE_LATENCY_MSEC=60 flatpak run org.kde.gcompris

If sound glitches are still present, a workaround is to tune the PipeWire buffer/quantum size. Many crackling issues come from too small a quantum. Create ~/.config/pipewire/pipewire.conf.d/99-custom.conf:

# vim ~/.config/pipewire/pipewire.conf.d/99-custom.conf

and add:

context.properties = {
    default.clock.rate = 48000
    default.clock.quantum = 1024
    default.clock.min-quantum = 512
    default.clock.max-quantum = 2048
}

# systemctl --user restart pipewire pipewire-pulse
 

Create a new wrapper script to run your GCompris easily:
 

# vim /usr/local/bin/gcompris.sh

#!/bin/bash
# Little hack script so that music streamed via PulseAudio does not have
# severe glitches when running the latest GCompris release on Debian 12
# through flatpak.
# If it is not working, run: systemctl --user restart pipewire
#flatpak run --device=all --socket=pulseaudio org.kde.gcompris
flatpak override --user --env=SDL_AUDIODRIVER=pulseaudio org.kde.gcompris
flatpak override --user --filesystem=~/.config/pipewire:ro org.kde.gcompris
LANG=bg_BG.UTF-8 flatpak run --socket=pulseaudio org.kde.gcompris

# chmod +x /usr/local/bin/gcompris.sh


Then I ran the wrapper script and let the kid enjoy the nice educational stuff while I enjoyed the nice, peaceful kiddish music!

# /usr/local/bin/gcompris.sh


install-gcompris-on-linux-via-flatpak-package-distribution-sandboxed-framework

P.S.! If you get issues with PipeWire (if you're using it instead of PulseAudio, as I do with my MATE desktop environment), you can restart it and relaunch GCompris – a nice addition to tux4kids (see my previous article Tux for Kids (Tux Math, Tux Paint, Tux Typing) 3 games to develop your children Intellect):

# systemctl --user restart pipewire

Enjoy Gcompris !
 

Using GeoIP on Linux: Country-Based Filtering, Logging, and Traffic Control

Friday, January 16th, 2026

geoip-on-linux-country-based-filtering-logging-traffic-control-logo

GeoIP is one of those technologies that quietly sits in the background of many systems, yet it can be extremely powerful when used correctly. Whether you want to block traffic from specific countries, analyze access logs, or add geographic context to security events, GeoIP can be a valuable addition to your Linux toolbox.

In this article, we’ll go deeper and look at real GeoIP usage examples for:

  • Log analysis
  • Firewalls
  • Apache HTTP Server
  • HAProxy

All examples are based on typical GNU/Linux server environments.

What Is GeoIP? 

GeoIP maps an IP address to geographic data such as:

  • Country
  • City
  • ASN / ISP (depending on database)

Most modern systems use MaxMind GeoLite2 databases (.mmdb format).

Keep in Mind ! :
GeoIP data is approximate. VPNs, proxies, mobile networks, and CGNAT reduce accuracy. GeoIP should be treated as a heuristic, not a guarantee.

1. Installing GeoIP Databases on a deb-based Linux Distro

On Debian / Ubuntu:

# apt install geoipupdate

Configure /etc/GeoIP.conf with your MaxMind license key and run:

# geoipupdate

Databases are usually stored in:

/usr/share/GeoIP/
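To sanity-check the downloaded database, you can query it directly with mmdblookup (shipped in the mmdb-bin package); the IP below is just an example:

```shell
# Look up the ISO country code of an address in the GeoLite2 country database
mmdblookup --file /usr/share/GeoIP/GeoLite2-Country.mmdb \
           --ip 8.8.8.8 country iso_code
```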

2. GeoIP for Log Analysis (to get an idea of where your traffic originates)

GeoIP with Apache HTTP Server

Apache can use GeoIP in two main ways:

  1. IP origin logging

  2. Access control based on IP origin

Another common GeoIP use is post-processing logs to find attempts to breach your security.

Let's say you want to find the top attacking countries against your SSHd service:

# grep "Failed password" /var/log/auth.log | \
  awk '{print $(NF-3)}' | \
  while read ip; do geoiplookup "$ip"; done | \
  sort | uniq -c | sort -nr


This command gives you visibility into the geographical country of origin of attack sources.

3. Installing Apache GeoIP Module

For legacy GeoIP (older systems):

# apt install libapache2-mod-geoip

For modern systems, GeoIP2 is preferred (exact package names vary per release – on recent Debian the GeoIP2-capable Apache module ships as libapache2-mod-maxminddb; check apt search geoip):

# apt install libapache2-mod-geoip2

Enable the module:

# a2enmod geoip2
# systemctl reload apache2

4. Configure GeoIP Logging in Apache (basic config)

Add country code to access logs:

LogFormat "%h %l %u %t \"%r\" %>s %b %{GEOIP_COUNTRY_CODE}e" geoip
CustomLog /var/log/apache2/access.log geoip

This allows you to analyze traffic by country later without blocking users.

5. Country-Based Blocking in Apache Based on IP Origin

Example: allow only selected countries:

<IfModule mod_geoip2.c>
    SetEnvIf GEOIP_COUNTRY_CODE ^(BG|DE)$ AllowCountry
    Deny from all
    Allow from env=AllowCountry
</IfModule>

(Note: Deny/Allow is Apache 2.2 syntax, still available on 2.4 via mod_access_compat; the native 2.4 equivalent is Require env AllowCountry.)

Use this carefully. Blocking at the web server layer is easier to debug than firewall-level blocking, but still risky if you have global users.

6. Apply GeoIP to Apache Virtual Host

You can apply GeoIP rules per site:

<VirtualHost *:80>
    ServerName example.com
    <IfModule mod_geoip2.c>
        SetEnvIf GEOIP_COUNTRY_CODE CN BlockCountry
        Deny from env=BlockCountry
    </IfModule>
</VirtualHost>

This is useful when only specific applications need filtering.

Firewall vs Application Layer GeoIP (Pros and Cons)

Layer      | Pros                    | Cons
---------- | ----------------------- | ------------------------
Firewall   | Early blocking          | Hard to debug
Apache     | Flexible per-site rules | App overhead
HAProxy    | Centralized control     | Requires careful config
Logs only  | Safest                  | No blocking

7. Apply GeoIP to HAProxy

HAProxy is an excellent place to apply GeoIP logic because:

  • It sits in front of applications
  • Rules are fast and explicit
  • Logging is centralized

a. Preparing GeoIP Filtering for the HAProxy Service

Community HAProxy has no built-in GeoIP matcher; GeoIP2 support comes either via Lua, or, most commonly, by generating per-country CIDR lists / map files from the .mmdb database.

Example database location:

/usr/share/GeoIP/GeoLite2-Country.mmdb

b. GeoIP-Based Access Control Lists ( ACLs ) in HAProxy

Basic country-based blocking, using CIDR list files generated from the GeoIP database (one network per line):

frontend http_in
    bind *:80

    acl from_china src -f /etc/haproxy/geoip/cn.cidr
    acl from_russia src -f /etc/haproxy/geoip/ru.cidr

    http-request deny if from_china
    http-request deny if from_russia

    default_backend web_servers

This blocks traffic early, before it hits Apache or nginx.

c. GeoIP-Based Routing Across Different HAProxy Backends

Instead of blocking, you can route traffic differently:

acl eu_users src -f /etc/haproxy/geoip/eu.cidr
use_backend eu_backend if eu_users
default_backend global_backend

This is useful for:

  • Geo-based load balancing
  • Regional content
  • Legal compliance separation

d. GeoIP Logging Config for HAProxy

Add the country code to logs by looking it up in a map file (CIDR to country code, generated from GeoLite2) and logging the resulting variable:

http-request set-var(txn.geoip_cc) src,map_ip(/etc/haproxy/geoip.map,XX)
log-format "%ci:%cp [%t] %ft %b %s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %[var(txn.geoip_cc)]"

(XX is the default value for unmatched addresses.)

This makes traffic analysis extremely efficient.

Keep in Mind !

Use HAProxy or web server level for enforcement, and firewall GeoIP only when absolutely necessary.

8. Fail2ban + GeoIP: Smarter Bans, Better Context

Fail2ban is excellent at reacting to abusive behavior, but by default it only sees IP addresses, not where they come from. Adding GeoIP allows you to:

  • Tag bans with country information
  • Apply different ban policies per region
  • Detect unusual behavior patterns

a. GeoIP-Enriched Fail2ban Logs

Fail2ban itself doesn’t natively evaluate GeoIP rules, but you can enrich logs post-ban.

Example action script (/etc/fail2ban/action.d/geoip-notify.conf):

[Definition]
actionban = echo "Banned <ip> from $(geoiplookup <ip> | cut -d: -f2)" >> /var/log/fail2ban-geoip.log

Enable it in a jail:

[sshd]
enabled = true
action = iptables[name=SSH]
         geoip-notify

Resulting log entry:

Banned 185.220.101.1 from Germany

This provides visibility without changing ban logic — a safe first step.


b. Use GeoIP-Aware Ban Policies 

You can also adjust ban times based on country.

Example strategy:

  • Short ban for local country
  • Longer ban for known high-noise regions

This is usually implemented via multiple jails and post-processing scripts rather than direct GeoIP matching inside Fail2ban.
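As a sketch of that strategy (all jail names, paths, and ban times below are hypothetical and only meant as a starting point), the split usually looks like two jails with different bantime values, with an external post-processing script deciding which log the noisy-region failures get copied into:

```
# /etc/fail2ban/jail.d/ssh-geo.local  (hypothetical example)

# Baseline jail: short ban for everyone
[sshd]
enabled  = true
bantime  = 1h

# Stricter jail, fed by a log that an external GeoIP post-processing
# script populates only with failures from high-noise regions
[sshd-noisy]
enabled  = true
filter   = sshd
logpath  = /var/log/auth-noisy.log
bantime  = 1w
```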

Best practice:
Let Fail2ban do behavior detection — let GeoIP provide context, not decisions.

9. GeoIP with nftables (Linux Modern Firewall Layer)

iptables + xt_geoip is considered legacy. On modern systems, nftables is the preferred approach.

a. Using GeoIP Sets in nftables

nftables does not natively include GeoIP, but you can integrate GeoIP via generated IP sets.

Workflow:

  1. Convert GeoIP country IP ranges into nftables sets

  2. Load them dynamically

Example set definition:

table inet filter {
    set geo_block {
        type ipv4_addr
        flags interval
    }
}

Populate the set using a script:

# nft add element inet filter geo_block { 1.2.3.0/24, 5.6.0.0/16 }

Then apply it:

chain input {
    type filter hook input priority 0;
    ip saddr @geo_block drop
}

b. Automating GeoIP → nftables

Typical automation pipeline:

GeoLite2 → country CSV → IP ranges → nftables set

Run this daily via cron.

Warning:

  • Large country sets = memory usage
  • Firewall reloads must be atomic
  • Test on non-production systems first

10. GeoIP Dashboard: Turning Logs into Insight

Blocking is optional — insight is mandatory.

a. Simple GeoIP Log Dashboard (CLI-Based)

Apache example:

# awk '{print $NF}' /var/log/apache2/access.log | \
sort | uniq -c | sort -nr

Where the last field ($NF) contains the country code, as added by the geoip LogFormat earlier.

Sample result:

1243 US
 987 DE
 422 FR
 310 CN

This already tells a story.

b. Visual Dashboard with ELK / Grafana

For larger environments:

HAProxy / Apache → JSON logs → enrich logs with GeoIP → send to:

  • ELK Stack
  • Loki + Grafana
  • Graylog

Metrics you want:

  • Requests per country
  • Errors per country
  • Bans per country
  • Login failures per country

This helps distinguish:

  • Marketing traffic
  • Legit users
  • Background Internet noise

11.  Create a Layered GeoIP Strategy

A sane, production-ready model using GeoIP would include something like:

  1. Logging first
    Apache / HAProxy logs with country codes

  2. Behavior detection
    Fail2ban reacts to abuse

  3. Traffic shaping
    HAProxy routes or rate-limits

  4. Firewall last
    nftables drops only obvious garbage

GeoIP is strongest when it supports decisions, not when it makes them alone.

12. Best Practices to consider

  • Prefer visibility over blocking
  • Avoid blanket country bans
  • Always log before denying

Combine GeoIP with:

  • Fail2ban
  • Rate limits
  • CAPTCHA or MFA

Also:

  • Keep GeoIP databases regularly updated
  • Test rules with real IPs before deploying

13. Common Mistakes to Avoid

  • Blocking entire continents
  • Using GeoIP as authentication
  • Applying firewall GeoIP without logs
  • Forgetting database updates
  • Assuming GeoIP accuracy

Close up

GeoIP is not a silver bullet against vampire attacks – but when used thoughtfully, it becomes a powerful signal enhancer and can give you a much broader understanding of what is going on inside your network traffic.

Whether you’re using it to filter out segments of evil intruders based on logs, route traffic intelligently, or filter obvious abuse, GeoIP fits naturally into a layered security model and is used nowadays across corporations as well as mid-sized and even small businesses.

Used conservatively, GeoIP follows the classic Unix philosophy: small datasets, simple rules, real-world effectiveness. Combined with the rest of your tools, it gives you information and ways to better protect your networks and server infrastructure.

How to Easily Integrate AI Into Bash on Linux with ollama

Monday, January 12th, 2026

Essential Ollama Commands You Should Know | by Jaydeep Karale | Artificial Intelligence in Plain English

AI is entering the computer scene more and more, including the realm of computer and network management, and many proficient admins are already taking advantage of it.
AI doesn’t need a GUI, a special cloud dashboard, or a fancy IDE, so for console-geek sysadmins, system engineers, and DevOps it can be integrated straight into bash / zsh etc. and used to simplify your daily sysadmin tasks a bit.

If you live in the terminal, the most powerful place to add AI is Bash itself. With a few tools and a couple of lines of shell code, you can turn your terminal into an AI-powered assistant that writes commands, explains errors, and helps automate everyday Linux tasks.

No magic. No bloat. Just Unix philosophy with a brain.

1. AI as a Command-Line Tool

Instead of treating AI like a chatbot, treat it like any other CLI utility:

  • stdin → prompt
  • stdout → response
  • pipes → integration

Hardware Requirements of Ollama Server

2. Minimum VM (It runs, but you’ll hate it)

Only use this to test that things work.

  • vCPU: 2
  • RAM: 4 GB
  • Disk: 20 GB (SSD mandatory)
  • Storage type: local SSD (HDD / network storage = bad)
  • Models: phi3, tinyllama

Expect:

  • Very slow pulls
  • Long startup times
  • Laggy responses

a. Recommended VM (Actually usable)

This is the sweet spot for Bash integration and daily CLI use.

  • vCPU: 4–6 (modern host CPU)
  • RAM: 8–12 GB
  • Disk: 30–50 GB local SSD
  • CPU type: host-passthrough (important!)
  • NUMA: off (for small VMs)

Models that feel okay:

  • phi3
  • mistral
  • llama3:8b (slow but tolerable)
 

b. “Feels Good” VM (CPU-only but not painful)

If you want it to feel responsive.

  • vCPU: 8
  • RAM: 16 GB
  • Disk: NVMe-backed storage
  • CPU flags: AVX2 enabled
  • Hugepages: optional but nice

Models:

  • llama3:8b
  • codellama:7b
  • mixtral (slow, but usable)
 

Hypervisor-Specific Advice (Important)

c. KVM / Proxmox (Best choice)

  • CPU type: host
  • Enable AES-NI, AVX, AVX2
  • Use virtio-scsi
  • Cache: writeback
  • IO thread: enabled

d. If running VM on VMware platform

  • Enable Expose hardware-assisted virtualization
  • Use paravirtual SCSI
  • Reserve memory if possible

e. VirtualBox (Not recommended)

  • Poor CPU feature exposure
  • Weak IO performance

Avoid it if you can.

f. Local AI With Ollama (Recommended)

If you want privacy, low latency, and no API keys, Ollama is currently the easiest way to run LLMs locally on Linux.

3. Install Ollama

# curl -fsSL https://ollama.com/install.sh | sh


Start the service:

# ollama serve

Pull a model (lightweight and fast):

# ollama pull llama3

Test it:

$ ollama run llama3 "Explain what the ls command does"

If that works, you’re ready to integrate.

To check whether everything is properly set up after installing llama3:

root@haproxy2:~# ollama list
NAME             ID              SIZE      MODIFIED
llama3:latest    365c0bd3c000    4.7 GB    About an hour ago

root@haproxy2:~# systemctl status ollama
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-01-12 16:43:30 EET; 15min ago
   Main PID: 37436 (ollama)
      Tasks: 16 (limit: 6999)
     Memory: 5.0G
        CPU: 13min 5.264s
     CGroup: /system.slice/ollama.service
             ├─37436 /usr/local/bin/ollama serve
             └─37472 /usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53>

яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: Flash Attention was auto, set to enabled
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context:        CPU compute buffer size =   258.50 MiB
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph nodes  = 999
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph splits = 1
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.959+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.989+02:00 level=INFO source=sched.go:517 msg="loaded runners>
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.000+02:00 level=INFO source=server.go:1338 msg="waiting for >
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.001+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |    3.669915ms |       127.0.0.1 | HEAD     "/"
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |   42.244006ms |       127.0.0.1 | GET      "/api/>
root@haproxy2:~#

4. Turning AI Into a Bash Command

Let’s create a simple AI helper command called ai.

Add this to your ~/.bashrc or ~/.bash_aliases:

ai() {
  ollama run llama3 "$*"
}

Reload your shell:

$ source ~/.bashrc


Now you can do things like:

$ ai "Write a bash command to find large files"

 

$ ai "Explain this error: permission denied"

$ ai "Convert this sed command to awk"


At this point, AI is already a first-class CLI tool.

5. Using AI to Generate Bash Commands

One of the most useful patterns is asking AI to output only shell commands.

Example:

$ ai "Give me a bash command to recursively find .log files larger than 100MB. Output only the command."

Copy, paste, done.

You can even enforce this behavior with a wrapper:

aicmd() {
  ollama run llama3 "Output ONLY a valid bash command. No explanation. Task: $*"
}

Now:

$ aicmd "list running processes using more than 1GB RAM"

Danger note: always read commands before running them. AI is smart, not trustworthy.

6. AI for Explaining Commands and Logs

This is where AI shines.

Pipe output directly into it:

$ dmesg | tail -n 50 | ai "Explain what is happening here"

Or errors:

$ make 2>&1 | ai "Explain this error and suggest a fix"

You’ve just built a terminal-native debugger.

7. Smarter Bash History Search

You can even use AI to interpret your intent instead of remembering exact commands:

aih() {
  history | ai "From this bash history, find the best command for: $*"
}

Example:

$ aih "compress a directory into tar.gz"

It’s like Ctrl+R, but semantic.

8. Using AI With Shell Scripts

AI can help generate scripts inline:

$ ai "Write a bash script that monitors disk usage and sends a notification when it exceeds 90%"

You’re not replacing scripting skills – you’re accelerating them.

Think of AI as:

  • a junior sysadmin
  • a documentation search engine
  • a rubber duck that talks back

9. Where Ollama Stores Its Data

Depending on how it runs:

System service (most common)

Models live here:

/usr/share/ollama/.ollama/

Inside:

models/
blobs/

User-only install

~/.ollama/

Since we installed as root with the systemd service, use the first path.

See What’s Taking Space

# du -sh /usr/share/ollama/.ollama/*

Typical output:

  • models/ → metadata
  • blobs/ → the big files (GBs)

10. Remove Unused Models (Safe Way)

List models Ollama knows about:

# ollama list

Remove a model properly:

# ollama rm llama3

This removes metadata and unreferenced blobs.

Always try this first.

11. Full Manual Cleanup (Hard Reset)

If things are broken, stuck, or you want a clean slate:

Stop Ollama

# systemctl stop ollama

Delete all local models and cache

# rm -rf /usr/share/ollama/.ollama/models

# rm -rf /usr/share/ollama/.ollama/blobs

(Optional but safe)

# rm -rf /usr/share/ollama/.ollama/tmp

Start Ollama again

# systemctl start ollama

Ollama will recreate everything automatically.

Verify Cleanup Worked

# du -sh /usr/share/ollama/.ollama

# ollama list

You should see:

  • Very small disk usage
  • Empty model list

Prevent Disk Bloat (Highly Recommended)

Only pull small models on VMs

Stick to:

  • phi3
  • mistral
  • tinyllama

Remove models you don’t use

# ollama rm modelname

Set a custom data directory (optional)

If /usr is small, move Ollama data:

# systemctl stop ollama

# mkdir -p /opt/ollama-data

# chown -R ollama:ollama /opt/ollama-data

Edit service:

# systemctl edit ollama

Add (OLLAMA_MODELS is the documented environment variable for the model storage path):

[Service]
Environment=OLLAMA_MODELS=/opt/ollama-data/models

Then:

# systemctl daemon-reload

# systemctl start ollama

Quick “Nuke It” One-Liner (Use With Care)

Deletes everything Ollama-related:

# systemctl stop ollama && rm -rf /usr/share/ollama/.ollama && systemctl start ollama

API-Based Option (Cloud Models)

If you prefer cloud models (OpenAI, Anthropic, etc.), the pattern is identical:

  • Use curl
  • Pass prompt
  • Parse output

Once AI returns text to stdout, Bash doesn’t care where it came from.
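The same pattern also works against Ollama's own local HTTP API (it listens on port 11434 by default), which is handy in scripts where you want JSON instead of interactive output. A sketch, assuming jq is installed:

```shell
# Ask the local Ollama API and extract just the text of the answer
ask_api() {
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\": \"llama3\", \"prompt\": \"$1\", \"stream\": false}" \
    | jq -r .response
}

ask_api "Explain what the df command does"
```

Note: this naive quoting breaks if the prompt itself contains double quotes; for anything serious, build the JSON with jq -n instead.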

12. Best Practices: Using Shell AI Without Overloading Your Machine

Before you go wild:

  • Don’t auto-execute AI output
  • Don’t run AI as root
  • Treat responses as suggestions
  • Version-control important scripts
  • Keep prompts specific
     

AI is powerful — but Linux still assumes you know what you’re doing.
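One way to enforce the first rule above is a confirm-before-run wrapper (a sketch building on the earlier aicmd idea, same llama3 model assumed): it prints the generated command and only executes it after an explicit "y".

```shell
# Show the AI-generated command, run it only after explicit confirmation
airun() {
  local cmd ok
  cmd=$(ollama run llama3 "Output ONLY a valid bash command. No explanation. Task: $*")
  printf 'Suggested: %s\n' "$cmd"
  read -r -p "Run it? [y/N] " ok
  [ "$ok" = "y" ] && eval "$cmd"
}
```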

Sum it up

Adding AI to Bash isn’t about replacing skills.
It’s about removing friction.

When AI is easy to use from the command line, it is a great convenience for those who don't want to switch to the browser all the time and copy/paste like crazy.
AI as a command-line tool fits perfectly into the Linux console:

  • composable
  • scriptable
  • optional
  • powerful

Once you’ve used AI from inside your shell for a few hours, going back to browser-based AI chat tools like ChatGPT feels… slow and inefficient.
However, keep in mind that ollama is far from perfect and has a lot of downsides; ChatGPT / Grok / DeepSeek will often give you better results. But since ollama is fully isolated and does not depend on external services, your private queries will not end up in a public AI history database and you won't be tracked.
So everything has its pros and cons. I'm pretty sure this tool and free AI tools like it have a good future and will be heavily used by system admins and programmers in the years to come.
The terminal just got smarter. And it didn’t need a GUI to do it.