Posts Tagged ‘Edit’

How to Set Up SSH Two-Factor Authentication (2FA) on Linux Without Google Authenticator with OATH Toolkit

Wednesday, November 5th, 2025

install-2-factor-free-authentication-google-authentication-alternative-with-oath-toolkit-linux-logo

Most online tutorials on how to secure your SSH server with Two-Factor Authentication (2FA) will tell you to use Google Authenticator to secure SSH logins.

But what if you don’t want to depend on Google software – maybe for privacy, security, or ideological reasons?

Luckily, you have a choice thanks to the free OATH Toolkit.
This free and self-hosted alternative ships its own PAM module, libpam-oath, which makes 2FA work with the OpenSSH server.

OATH Toolkit is a free software toolkit for One-Time Password (OTP) authentication using the HOTP/TOTP algorithms. The software ships a small set of command line utilities covering most OTP-related tasks.

In this guide, I’ll show you how to implement 2-Factor Authentication (TOTP) for SSH on any Linux system using OATH Toolkit, compatible with privacy-friendly authenticator apps like FreeOTP, Aegis, or andOTP.

It is worth checking out the OATH Toolkit author's original post here, which will give you a bit more insight into the tool.

1. Install the Required Packages

For Debian / Ubuntu systems:

# apt update
# apt install libpam-oath oathtool qrencode
...

For RHEL / CentOS / AlmaLinux:
 

# dnf install pam_oath oathtool

The oathtool command lets you test or generate one-time passwords (OTPs) directly from the command line.

2. Create a User Secret File

libpam-oath uses a file to store each user’s secret key (shared between your server and your phone app).

By default, it reads from:

/etc/users.oath

Let’s create it and set restrictive permissions on it:
 

# touch /etc/users.oath

# chmod 600 /etc/users.oath

Now, generate a new secret key for your user (replace hipo with your actual username):
 

# head -10 /dev/urandom | sha1sum | cut -c1-32

This generates a random 32-character key.
Example:

9b0e4e9fdf33cce9c76431dc8e7369fe

Add this to /etc/users.oath in the following format:

HOTP/T30 hipo - 9b0e4e9fdf33cce9c76431dc8e7369fe

HOTP/T30 means Time-based OTP with 30-second validity (standard TOTP).

Replace hipo with the Linux username you want to protect.
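If you prefer to generate the secret and append the users.oath entry in one go, here is a minimal sketch (assuming the example username hipo and the file created above):

# SECRET=$(head -10 /dev/urandom | sha1sum | cut -c1-32)
# echo "HOTP/T30 hipo - $SECRET" >> /etc/users.oath
# echo "New secret to load into your phone app: $SECRET"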

3. Add the Key to Your Authenticator App

Now we need to add that secret to your preferred authenticator app.

You can create a TOTP URI manually (to generate a QR code):

$ echo "otpauth://totp/hipo@jericho?secret=\
$(echo 9b0e4e9fdf33cce9c76431dc8e7369fe \
| xxd -r -p | base32)"

You can paste this URI into a QR code generator (e.g., https://qr-code-generator.com) and scan it using FreeOTP, Aegis, or any open TOTP app.
FreeOTP is my preferred app; you can install it via the Apple App Store or Google Play Store.

Alternatively, enter the Base32-encoded secret manually into your app:

# echo 9b0e4e9fdf33cce9c76431dc8e7369fe | xxd -r -p | base32

You can also use the nifty qrencode tool to generate a scannable QR code in ASCII mode directly in your terminal, then scan it with the FreeOTP / Aegis app on your phone to make it ready for use:

# qrencode --type=ANSIUTF8 otpauth://totp/hipo@jericho?secret=$( oathtool --verbose --totp 9b0e4e9fdf33cce9c76431dc8e7369fe --digits=6 -w 1 | grep Base32 | cut -d ' ' -f 3 )\&digits=6\&issuer=pc-freak.net\&period=30

qrencode-generation-of-scannable-QR-code-for-freeotp-or-other-TOTP-auth

qrencode will generate the code. We set the type to ANSI-UTF8 terminal graphics so you can generate this in an ssh login. It can also generate other formats if you want to incorporate this into a web interface; see the qrencode man page for more options.
The rest of the line is the data being encoded into the QR code: a URL of type otpauth, with time-based one-time passwords (totp). The user is “hipo@jericho“, though PAM will ignore the @jericho part if you are not joined to a domain (I have not tested this with domains yet).

The parameters follow the ‘?‘, and are separated by ‘&‘.

otpauth uses the Base32 encoding of the secret you created earlier. oathtool will generate the appropriate encoding inside the command substitution block:

 $( oathtool --verbose --totp 9b0e4e9fdf33cce9c76431dc8e7369fe | grep Base32 | cut -d ' ' -f 3 )

We pass the secret from earlier and search for “Base32”. This line of the output contains the Base32 encoding that we need:

Hex secret: 9b0e4e9fdf33cce9c76431dc8e7369fe
Base32 secret: E24ABZ2CTW3CH3YIN5HZ2RXP
Digits: 6
Window size: 0
Step size (seconds): 30
Start time: 1970-01-01 00:00:00 UTC (0)
Current time: 2022-03-03 00:09:08 UTC (1646266148)
Counter: 0x3455592 (54875538)

368784 
From there we cut out the third field, “E24ABZ2CTW3CH3YIN5HZ2RXP“, and place it in the line.

Next, we set the number of digits for the codes to be 6 digits (valid values are 6, 7, and 8). 6 is sufficient for most people, and easier to remember.

The issuer is optional, but useful to differentiate where the code came from.

We set the time period (in seconds) for how long a code is valid to 30 seconds.

Note that Google Authenticator ignores this and uses 30 seconds whether you like it or not.
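Putting the pieces together, here is a small sketch that assembles the full otpauth URI from its parts and renders it as a terminal QR code (all values are the examples used above); quoting the whole URI spares you from escaping the & separators as in the earlier one-liner:

$ USER_LABEL="hipo@jericho"
$ B32=$(echo 9b0e4e9fdf33cce9c76431dc8e7369fe | xxd -r -p | base32)
$ qrencode --type=ANSIUTF8 "otpauth://totp/${USER_LABEL}?secret=${B32}&digits=6&issuer=pc-freak.net&period=30"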

4. Configure PAM to Use libpam-oath

Edit the PAM configuration for SSH:

# vim /etc/pam.d/sshd

At the top of the file, add:

auth required pam_oath.so usersfile=/etc/users.oath window=30 digits=6

This tells PAM to check OTP codes against /etc/users.oath.

5. Configure SSH Daemon to Ask for OTP

Edit the SSH daemon configuration file:
 

# vim /etc/ssh/sshd_config

Ensure these lines are set:
 

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
KbdInteractiveAuthentication yes

N.B.! The KbdInteractiveAuthentication yes variable is necessary on OpenSSH servers newer than version 8.2.

In short, this setup means:
1. The user must first authenticate with their SSH key (or local / LDAP password),
2. then enter a valid one-time code generated by the TOTP app on their phone.

You can also use Match directives to enforce 2FA under certain conditions, but not under others.
For example, if you didn't want to be bothered with it while logging in from your LAN,
but do want it from any other network, you could add something like:

Match Address 127.0.0.1,10.10.10.0/8,192.168.5.0/24
    AuthenticationMethods publickey

6. Restart SSH and Test It

Apply your configuration:
 

# systemctl restart ssh

(On RHEL / CentOS the service is named sshd, so use systemctl restart sshd there.)

Now, open a new terminal window and try logging in (don’t close your existing one yet, in case you get locked out):

$ ssh hipo@your-server-ip

You should see something like:

Verification code:

Enter the 6-digit code displayed in your FreeOTP (or similar) app.
If it’s correct, you’re logged in! Hooray ! 🙂

7. Test Locally and Secure the Secrets

If you want to test OTPs manually, you can run oathtool with the hex secret directly (oathtool treats the key as hex by default):

# oathtool --totp \
9b0e4e9fdf33cce9c76431dc8e7369fe

As the above might be a bit confusing for starters, I recommend using the few lines below instead:

$ secret_hex="9b0e4e9fdf33cce9c76431dc8e7369fe"
$ secret_base32=$(echo $secret_hex | xxd -r -p | base32)
$ oathtool --totp -b "$secret_base32"
156874

You’ll get the same 6-digit code your authenticator shows – useful for debugging.

If you rerun oathtool again (in a later 30-second window) you will get a different TOTP code, e.g.:

$ oathtool --totp -b "$secret_base32"
258158


When prompted during SSH login, enter your user password first and then use this code as the 2FA TOTP (password + TOTP pair).

To prevent anyone who has a local account on the system from reading the secrets and breaching the additional 2FA protection,
ensure the secrets file is well protected:

# chown root:root /etc/users.oath
# chmod 600 /etc/users.oath

How to Enable 2FA Only for Certain Users

If you want to force OTP only for admins, create a group ssh2fa:

# groupadd ssh2fa
# usermod -aG ssh2fa hipo

Then modify /etc/pam.d/sshd:

auth [success=1 default=ignore] pam_succeed_if.so \
user notingroup ssh2fa
auth required pam_oath.so usersfile=/etc/users.oath \
window=30 digits=6

Only users in ssh2fa will be asked for a one-time code.
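To quickly check which treatment a given account will get, verify its group membership first (hipo being the example user from above):

# id -nG hipo | grep -qw ssh2fa && echo "hipo will be asked for an OTP" || echo "hipo logs in without OTP"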

Troubleshooting

Problem: SSH rejects OTP
Check /var/log/auth.log or /var/log/secure for more details.
Make sure your phone’s time is in sync (TOTP depends on accurate time).
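If you suspect clock drift, oathtool can print the codes for several adjacent 30-second windows (here using the example Base32 secret from earlier); if the code your phone shows appears one or two steps away from the current one, time synchronization is your problem:

$ oathtool --totp -b -w 3 "E24ABZ2CTW3CH3YIN5HZ2RXP"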

Problem: Locked out after restart
Always keep one root session open until you confirm login works.

Problem: Everything seems configured fine but the TOTP is still not accepted by the remote OpenSSH daemon.
– Check that the time on the phone / device where the TOTP code is generated is properly synced to an Internet time server.
– Check that the computer's system clock is properly synchronized to an Internet time server (via ntpd / chronyd etc.); below is a sample:

  hipo@jeremiah:~$ timedatectl status
                   Local time: Wed 2025-11-05 00:39:17 EET
               Universal time: Tue 2025-11-04 22:39:17 UTC
                     RTC time: Tue 2025-11-04 22:39:17
                    Time zone: Europe/Sofia (EET, +0200)
    System clock synchronized: yes
                  NTP service: n/a
              RTC in local TZ: no

Why Choose libpam-oath?

  • 100% Free Software (GPL)
  • Works completely offline / self-hosted
  • Compatible with any standard TOTP app (FreeOTP, Aegis, andOTP, etc.)
  • Doesn’t depend on Google APIs or cloud services
  • Lightweight (just one PAM module and a text file)

Conclusion

Two-Factor Authentication doesn’t have to rely on Google’s ecosystem.
With OATH Toolkit and libpam-oath, you get a simple, private, and completely open-source way to harden your SSH server against brute-force and stolen-key attacks.

Once configured, even if an attacker somehow steals your SSH key or password, they can’t log in without your phone’s one-time code – making your system dramatically safer.

How to Install and Use auditd for System Security Auditing on Linux

Thursday, September 25th, 2025

System auditing is essential for monitoring user activity, detecting unauthorized access, and ensuring compliance with security standards. On Linux, the Audit Daemon (auditd) provides powerful auditing capabilities for logging system events and actions.

This short article will walk you through installing, configuring, and using auditd to monitor your Linux system.

What is auditd?

auditd is the user-space component of the Linux Auditing System. It logs system calls, file access, user activity, and more — offering administrators a clear trail of what’s happening on the system.


1. Installing auditd

The auditd package is available by default in most major Linux distributions.

 On Debian/Ubuntu

# apt update
# apt install auditd audispd-plugins

 On CentOS/RHEL/Fedora

# yum install audit

After installation, start and enable the audit daemon

# systemctl start auditd

# systemctl enable auditd

Check its status

# systemctl status auditd

2. Setting Audit Rules

Once auditd is running, you need to define rules that tell it what to monitor.

Example: Monitor changes to /etc/passwd

# auditctl -w /etc/passwd -p rwxa -k passwd_monitor

Explanation:

  • -w /etc/passwd: Watch this file. When the file is accessed, the watcher will generate events.
  • -p rwxa: Monitor read, write, execute, and attribute changes
  • -k passwd_monitor: Assign a custom key name to identify logs. Later on, we could search for this (arbitrary) passwd string to identify events tagged with this key.

List active rules:

# auditctl -l
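A watch added at runtime can be removed again the same way, using -W instead of -w:

# auditctl -W /etc/passwd -p rwxa -k passwd_monitor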

3. Common auditd Rules for Security Monitoring

Here are some common and useful auditd rules you can use to monitor system activity and enhance Linux system security. These rules are typically added to the /etc/audit/rules.d/audit.rules or /etc/audit/audit.rules file, depending on your system.

a. Monitor Access to /etc/passwd and /etc/shadow
 

-w /etc/passwd -p wa -k passwd_changes
-w /etc/shadow -p wa -k shadow_changes

  • Monitors write and attribute changes to the password files.

b. Monitor sudoers file and directory
 

-w /etc/sudoers -p wa -k sudoers
-w /etc/sudoers.d/ -p wa -k sudoers

  • Tracks any change to sudo configuration files.

c. Monitor Use of chmod, chown, and passwd
 

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -k perm_mod
-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -k perm_mod
-w /usr/bin/passwd -p x -k passwd_changes

  • Watches permission and ownership changes; passwd usage is tracked by watching execution of the binary, since there is no passwd syscall.

d. Monitor User and Group Modifications

-w /etc/group -p wa -k group_mod
-w /etc/gshadow -p wa -k gshadow_mod
-w /etc/security/opasswd -p wa -k opasswd_mod

  • Catches user/group-related config changes.

e. Track Logins, Logouts, and Session Initiation

-w /var/log/lastlog -p wa -k logins
-w /var/run/faillock/ -p wa -k failed_login
-w /var/log/faillog -p wa -k faillog

  • Tracks login attempts and failures.

f. Monitor auditd Configuration Changes

-w /etc/audit/ -p wa -k auditconfig
-w /etc/audit/audit.rules -p wa -k auditrules

  • Watches changes to auditd configuration and rules.

g. Detect Changes to System Binaries

-w /bin/ -p wa -k bin_changes
-w /sbin/ -p wa -k sbin_changes
-w /usr/bin/ -p wa -k usr_bin_changes
-w /usr/sbin/ -p wa -k usr_sbin_changes

  • Ensures core binaries aren't tampered with.

h. Track Kernel Module Loading and Unloading

-a always,exit -F arch=b64 -S init_module -S delete_module -k kernel_mod

  • Detects dynamic kernel-level changes.

l. Monitor File Deletions

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -k delete

  • Tracks when files are removed or renamed.

m. Track Privilege Escalation via setuid/setgid

-a always,exit -F arch=b64 -S setuid -S setgid -k priv_esc

  • Helps detect changes in user or group privileges.

n. Track Usage of Dangerous Binaries (e.g., su, sudo, netcat)

-w /usr/bin/su -p x -k su_usage
-w /usr/bin/sudo -p x -k sudo_usage
-w /bin/nc -p x -k netcat_usage

  • Useful for catching potentially malicious command usage.

o. Monitor Cron Jobs

-w /etc/cron.allow -p wa -k cron_allow
-w /etc/cron.deny -p wa -k cron_deny
-w /etc/cron.d/ -p wa -k cron_d
-w /etc/crontab -p wa -k crontab
-w /var/spool/cron/ -p wa -k user_crontabs

  • Alerts on cron job creation/modification.

p. Track Changes to /etc/hosts and DNS Settings

-w /etc/hosts -p wa -k etc_hosts
-w /etc/resolv.conf -p wa -k resolv_conf

  • Monitors potential redirection or DNS manipulation.

q. Monitor Mounting and Unmounting of Filesystems

-a always,exit -F arch=b64 -S mount -S umount2 -k mounts

  • Useful for detecting USB or external drive activity.

r. Track Execution of New Programs

-a always,exit -F arch=b64 -S execve -k exec

  • Captures command execution (can generate a lot of logs).
     

A complete list of rules can be obtained from the auditd hardening.rules file; place it under /etc/audit/rules.d/hardening.rules
and reload auditd to load the configuration.

Tips

  • Use ausearch -k <key> to search audit logs for matching rule.
  • Use auditctl -l to list active rules.
  • Use augenrules --load after editing rules in /etc/audit/rules.d/.
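For example, a quick review of yesterday's events for one of the keys defined above could look like:

# ausearch -k passwd_changes --start yesterday -i | less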


4. Reading Audit Logs

Audit log events are stored by default in:

/var/log/audit/audit.log

This default location can be changed through /etc/audit/auditd.conf.

View recent entries:
 

# tail -f /var/log/audit/audit.log

Search by key:
 

# ausearch -k passwd_monitor

Generate summary reports (aureport -f reports on files; plain aureport prints an overall summary):

# aureport -f

# aureport


Example: Show all user logins / IPs:

# aureport -au
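A few more aureport invocations that are handy in practice:

# aureport -x --summary     (summary of executed programs)
# aureport --failed         (failed events only)
# aureport -l -i            (login report, with numeric fields interpreted)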

 

5. Making Audit Rules Persistent

Rules added with auditctl are not persistent and will be lost on reboot. To make them permanent:

Edit the audit rules configuration:

# vim /etc/audit/rules.d/audit.rules

Add your rules, for example:

-w /etc/passwd -p rwxa -k passwd_monitor

Apply the rules:

# augenrules --load
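Then verify the persistent rules really became active; augenrules --check reports whether the compiled rules are out of date, and auditctl -l should now list the passwd_monitor watch:

# augenrules --check

# auditctl -l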

6. Some real-world use cases of auditd for auditing Linux servers by sysadmins / security experts
 

Below are real-world, practical examples where auditd is actively used by sysadmins, security teams, or compliance officers to detect suspicious activity, meet compliance requirements, or conduct forensic investigations.

a. Detect Unauthorized Access to /etc/shadow

Use Case: Someone tries to read or modify password hashes.

Audit Rule:

-w /etc/shadow -p rwa -k shadow_watch

Real-World Trigger:

sudo cat /etc/shadow

Check Logs:
 

# ausearch -k shadow_watch -i

Real Output:
 

type=SYSCALL msg=audit(09/18/2025 14:02:45.123:1078):
  syscall=openat
  exe="/usr/bin/cat"
  success=yes
  path="/etc/shadow"
  key="shadow_watch"

b. Detect Use of chmod to Make Files Executable

Use Case: Attacker tries to make a script executable (e.g., malware).

Audit Rule:

-a always,exit -F arch=b64 -S chmod -k chmod_detect

Real-World Trigger:
 

 # chmod +x /tmp/evil_script.sh

Check Logs:

# ausearch -k chmod_detect -i

c. Monitor Execution of nc (Netcat)

Use Case: Netcat is often used for reverse shells or unauthorized network comms.

Audit Rule:
 

-w /bin/nc -p x -k netcat_usage
 

Real-World Trigger:

nc -lvp 4444

Log Entry:

type=EXECVE msg=audit(09/18/2025 14:35:45.456:1123):
  argc=3 a0="nc" a1="-lvp" a2="4444"
  key="netcat_usage"

 

d. Alert on Kernel Module Insertion
 

Use Case: Attacker loads rootkit or malicious kernel module.

Audit Rule:

-a always,exit -F arch=b64 -S init_module -S delete_module -k kernel_mod

Real-World Trigger:

# insmod myrootkit.ko

Audit Log:
 

type=SYSCALL msg=audit(09/18/2025 15:00:13.100:1155):
  syscall=init_module
  exe="/sbin/insmod"
  key="kernel_mod"

e. Watch for Unexpected sudo Usage

Use Case: Unusual use of sudo might indicate privilege escalation.

Audit Rule:

-w /usr/bin/sudo -p x -k sudo_watch

Real-World Trigger:

sudo whoami

View Log:
 

# ausearch -k sudo_watch -i


f. Monitor Cron Job Modification

Use Case: Attacker schedules persistence via cron.

Audit Rule:

-w /etc/crontab -p wa -k cron_mod

Real-World Trigger:
 

echo "@reboot /tmp/backdoor" >> /etc/crontab

Logs:
 

type=SYSCALL msg=audit(09/18/2025 15:05:45.789:1188):
  syscall=open
  path="/etc/crontab"
  key="cron_mod"

g. Detect File Deletion or Renaming
 

Use Case: Attacker removes logs or evidence.

Audit Rule:

-a always,exit -F arch=b64 -S unlink -S unlinkat -S rename -S renameat -k file_delete

Real-World Trigger:

# rm -f /var/log/syslog

Logs:
 

type=SYSCALL msg=audit(09/18/2025 15:10:33.987:1210):
  syscall=unlink
  path="/var/log/syslog"
  key="file_delete"


h. Detect Script or Malware Execution
 

Use Case: Capture any executed command.

Audit Rule:
 

-a always,exit -F arch=b64 -S execve -k exec

Real-World Trigger:

/tmp/myscript.sh

Log View:

# ausearch -k exec -i | grep /tmp/myscript.sh

l. Detect Manual Changes to /etc/hosts

Use Case: DNS hijacking or phishing setup.

Audit Rule:

-w /etc/hosts -p wa -k etc_hosts

Real-World Trigger:
 

# echo "1.2.3.4 google.com" >> /etc/hosts

Logs:

type=SYSCALL msg=audit(09/18/2025 15:20:11.444:1234):
  path="/etc/hosts"
  syscall=open
  key="etc_hosts"


7. Enable Immutable Mode (if necessary)

For enhanced security, you can make audit rules immutable, preventing any changes until reboot:

# auditctl -e 2


To make this setting persistent, add the following to the end of /etc/audit/rules.d/audit.rules:

-e 2
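Once immutable mode is active, any further attempt to change the rule set should simply be refused until the next reboot, for example:

# auditctl -w /tmp/test -p w -k demo

The command above is expected to fail while -e 2 is in effect.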


Common Use Cases

Here are a few more examples of what you can monitor:

Monitor all sudo usage:

# auditctl -w /var/log/auth.log -p wa -k sudo_monitor


Monitor a directory for file access:

# auditctl -w /home/username/important_dir -p rwxa -k dir_watch

Audit execution of a specific command (e.g., rm):

# auditctl -a always,exit -F arch=b64 -S unlink,unlinkat -k delete_cmd

(Adjust arch=b64 to arch=b32 if on 32-bit system.)

8. Managing the Audit Log Size

Audit logs can grow large over time. To manage log rotation and size, edit:
 

# vim /etc/audit/auditd.conf

Set log rotation options, for example a maximum single log file size of 8 MB, keeping 5 rotated files:

max_log_file = 8

num_logs = 5

Then restart auditd:
 

# systemctl restart auditd

Conclusion

The Linux Audit Daemon (auditd) is a powerful tool to track system activity, enhance security, and meet compliance requirements. With just a few configuration steps, you can monitor critical files, user actions, and system behavior in real time.

 

References

  • man auditd
  • man auditctl
  • Linux Audit Wiki

 

Windows 10 install local Proxy server to Save bandwidth on a slow and limited Mobile Phone HotSpot network Shared connections

Wednesday, August 20th, 2025

https://pc-freak.net/images/how-to-use-local-proxy-to-speed-up-internet-speed-connectivity-on-windows-os-with-squid-and-privoxy

If your ISP provides Internet through a WiFi router device over 3G / 4G / 5G etc., and your receiving point is situated somewhere very remote, in places like high mountains, let's say the Rila Mountains or the Alps, where the Internet coverage of the Internet Service Provider is low or very low, but you still need to work / play / entertain yourself on the Net frequently,
you will certainly be looking for ways to speed up / optimize the Internet connectivity somehow.
You cannot do miracles, but much of the daily, repeating traffic can certainly be served locally by installing and using a simple local proxy server.

Besides speeding up the Internet connection, using a proxy has further advantages; here are the pros you get:
 

  • Caches frequently accessed content (e.g., images, scripts, web pages).
  • Blocks ads and trackers (reduces bandwidth).
  • Compresses data (if needed)
  • Can serve multiple local devices if needed.
     

To save bandwidth on a slow, limited Internet router or mobile phone hotspot connection using Windows 10, you can install a local caching proxy server.

Here's a step-by-step guide to set this up.


1. Install Squid (Caching Proxy Server)

Squid is a powerful and widely used open-source caching proxy.

Download Squid for Windows

Get it from:

https://squid.acmeconsulting.it/download (unofficial, stable build)

or compile it manually (if you have your own Linux or BSD router passing on the traffic).

2. Install the Squid Proxy server on Windows


2.1. Extract or install the downloaded Squid package.


 

2.2. Install it as a Windows Service

Open Command Prompt (Admin) and run:

C:\Users\hipo\Downloads> squid -i

Initialize cache directories:
 

C:\Users\hipo\Downloads> squid -z
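Before going further you can sanity-check the installation: squid -v prints the build version, and the standard sc tool shows the state of the registered Windows service (the service name squid is assumed here; adjust if your installer registered it differently):

C:\Users\hipo\Downloads> squid -v

C:\Users\hipo\Downloads> sc query squid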

 

3. Configure Squid Proxy via squid.conf


3.1. Open squid.conf

usually in

C:\Squid\etc\squid\squid.conf
 

3.2. Edit key lines:  

http_port 3128
cache_dir ufs c:/squid/var/cache 100 16 256
access_log c:/squid/var/logs/access.log
cache_log c:/squid/var/logs/cache.log
maximum_object_size 4096 KB
cache_mem 64 MB

 

 

3.3. Allow local access:

 

acl localnet src 192.168.0.0/16
http_access allow localnet

(Adjust IP ranges according to your network.)

 

Here's a ready-to-use Squid configuration file optimized for running on Windows 10:

  • Caching web content to save bandwidth
  • Blocking ads and trackers
  • Allowing local device connections

 

Location for the squid Config File

The Windows squid installer should have set up Squid by default inside C:\Squid, so place the configuration as squid.conf in:

C:\Squid\etc\squid\squid.conf

 

# BASIC CONFIGURATION
http_port 3128
visible_hostname localhost

# CACHE SETTINGS
cache_mem 128 MB
maximum_object_size 4096 KB
maximum_object_size_in_memory 512 KB
cache_dir ufs c:/squid/var/cache 100 16 256
cache_log c:/squid/var/logs/cache.log
access_log c:/squid/var/logs/access.log

# DNS
dns_nameservers 8.8.8.8 1.1.1.1

# ACLs (Access Control Lists)
acl localhost src 127.0.0.1/32
acl localnet src 192.168.0.0/16
acl SSL_ports port 443      # referenced by the CONNECT rule below
acl Safe_ports port 80      # HTTP
acl Safe_ports port 443     # HTTPS
acl Safe_ports port 21      # FTP
acl CONNECT method CONNECT

# BLOCKED DOMAINS (Ad/Tracking)
acl ads dstdomain .doubleclick.net .googlesyndication.com .googleadservices.com
acl ads dstdomain .ads.yahoo.com .adnxs.com .track.adform.net
http_access deny ads

# SECURITY & ACCESS CONTROL (deny rules first; first match wins)
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all

# REFRESH PATTERNS (Cache aggressively)
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i \.jpg$       10080   90%     43200
refresh_pattern -i \.png$       10080   90%     43200
refresh_pattern -i \.gif$       10080   90%     43200
refresh_pattern -i \.css$       10080   90%     43200
refresh_pattern -i \.js$        10080   90%     43200
refresh_pattern -i \.html$      1440    90%     10080
refresh_pattern .               0       20%     4320

# LOGGING
logfile_rotate 10

 

 

4. Start the Squid Windows service from an Admin command prompt

C:\Users\hipo> net start squid


5. Test the Proxy

 

Set the proxy server in your Windows proxy settings:
 

  • Go to Settings > Network & Internet > Proxy
     
  • Enable Manual proxy setup:

Address: 127.0.0.1

Port: 3128

Browse the web — Squid will now cache content locally.
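To confirm traffic really goes through Squid, you can also test from the command line (curl.exe ships with recent Windows 10 builds); repeat the request and you should see the hits appear in access.log:

C:\Users\hipo> curl -x http://127.0.0.1:3128 -I http://www.pc-freak.net/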

Make sure C:\Squid\var\cache and C:\Squid\var\logs exist.

You can expand the ad block list by importing public blocklists.

To share this proxy with other local devices, ensure they’re on the same network and allowed via ACL.
 

6. Block Ads and Save More Bandwidth with the Proxy

You can modify Squid to:

  • Block ad domains (using acl rules or a blacklist)
  • Limit download sizes
  • Restrict background updates or telemetry

Example rule to block a domain:

acl ads dstdomain .doubleclick.net .ads.google.com
http_access deny ads


7. Use the Alternative Lightweight Filtering Proxy: Privoxy

What is Privoxy?

Privoxy is a lightweight, highly customizable proxy server focused on privacy protection, content filtering, and web page optimization.

Unlike caching proxies (like Squid), Privoxy doesn’t store data locally—but it filters and blocks unnecessary traffic before it even reaches your browser.

7.1. Why Use Privoxy to Speed Up Internet?

Here's how Privoxy helps:

  • Blocks ads & banners: reduces page load size and clutter
  • Stops trackers: prevents background data requests
  • Filters pop-ups: improves usability and safety
  • Speeds up web browsing: by stripping unwanted content
  • Low resource usage: works on older or low-spec systems

 

Privoxy is easier to set up than Squid, is much simpler overall, and fits well if you want something lighter; it is also great for ad/tracker blocking.
Installing and using it comes down to 4 simple steps:

  1. Download from: https://www.privoxy.org/
  2. Install and run it.
  3. Configure your browser/system to use the proxy, let's say on 127.0.0.1:8118.
  4. Customize config.txt to add block rules.

7.2. Configure Your Web Browser or System Proxy

Set your browser/system to use the local Privoxy proxy:

Proxy address: 127.0.0.1
Port: 8118

On Windows:

  • Go to Settings > Network & Internet > Proxy
  • Enable Manual Proxy Setup
  • Enter Address: 127.0.0.1, Port: 8118
  • Save
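A quick way to confirm Privoxy is really intercepting your requests is to fetch its built-in configuration page through the proxy; http://p.p/ is served by Privoxy itself and is unreachable without it:

C:\Users\hipo> curl -x http://127.0.0.1:8118 http://p.p/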

7.3. Enable Privoxy Filtering and Blocking Rules

Privoxy comes with built-in rules for:

  • Ad blocking
  • Tracker blocking
  • Cookie management
  • Script filtering

You can customize filters via the following configuration files:

Main config:

C:\Program Files (x86)\Privoxy\config.txt

Action files:

C:\Program Files (x86)\Privoxy\default.action

Filter files:

C:\Program Files (x86)\Privoxy\default.filter

 

7.4. Example to Block All Ads with Privoxy

Look in default.action and ensure these are uncommented:

 

{ +block }


Or add specific ad server domains:

{ +block{Ad Servers} }
.com.doubleclick.net
.ads.google.com
.adnxs.com

 

You can further use community-maintained blocklists for stronger Ads filtering.

 

Privoxy does not compress traffic, so to speed things up even further you can put ziproxy (an HTTP traffic compressor) into the chain.

Now all your HTTP traffic is routed through Privoxy, and you will notice that search engines and the pictures and resources of repeatedly accessed websites, such as CSS / JavaScript / HTML etc., get a boost!

How to install and use memcached on Debian GNU / Linux to share php sessions between DNS round robined Apache webservers

Monday, November 9th, 2020

apache-load-balancing-keep-persistent-php-sessions-memcached-logo

Recently I had to come up with a solution to make a bunch of websites hosted on a machine highly available. For the task haproxy is one of the logical options to use. However, as I didn't want to set up new IP addresses and play around to build a cluster, I decided on the much more simplistic approach: use 2 separate machines, each running the same up-to-date version of Apache webserver as a front end, with the shared data served from a Master-to-Master MySQL replication database as a backend. For the load balancing itself I used simple Round Robin DNS load balancing, configured via the Bind DNS name server with 2 active 'A' records per domain, pointing each domain to the 2 Internet IP addresses (XXX.XXX.XXX.4 and YYY.YYY.YYY.5), each configured on eth0 of one of the 2 Linux servers.

So far so good, this setup worked, but I immediately ran into another issue: the WordPress and Joomla based websites' PHP sessions were lost, as the remote client browser reaches one time XXX…4 and the next time YYY…5 on the configured listeners on TCP port 80 and TCP port 443. In other words, a request comes to front-end Apache worker webserver 1, data is sent back to the client browser over the opened channel, and the next request, due to the other IP resolved by the DNS server, lands on Apache worker webserver 2. Of course webserver 2 has no idea about the previous session data, gets confused, and returns something like a 404 or 500 or some other error… not exciting really, huh…

I thought about a workaround, and as I didn't want to involve third-party stuff such as Privoxy / Squid / Varnish / Polipo etc. (that would add extra complexity, just as if I had chosen haproxy from the beginning), after a short investigation I came to the conclusion to use memcached as a central PHP sessions storage.

php-memcached-apache-workers-webbrowser-keep-sessions-diagram
 

Why I choose memcached ?


Well, it is relatively easy to configure, it doesn't come with mambo-jambo unreadable over-complicated configuration, the time to configure everything is really short as the configuration is quite straightforward, plus I don't need to occupy more IP addresses or make any changes to the already running 2 webservers on the 2 separate Linux hosts reachable from the Internet.
Of course, using memcached is not rock solid and not the best solution out there: there is a risk that if a memcached instance dies for some reason all sessions stored in it are lost, since they are kept only in volatile memory. There is also the drawback that if communication was going via one of the 2 webservers and that one goes down, the sessions known to that Apache worker disappear.

So let me proceed and explain you the steps to take to configure memcached as a central session storage system.
 

1. Install memcached and php-memcached packages


To enable support for memcached, besides installing the memcached daemon you need php-memcached, which provides the memcached.so module loaded by Apache's PHP script interpreter.

On a Debian / Ubuntu and other deb based GNU / Linux it should be:

webserver1:~# apt-get install memcached php-memcached

To use php-memcached I assume Apache and its PHP support are already installed, with let's say:
 

webserver1:~# apt-get install php libapache2-mod-php php-mcrypt


On CentOS / RHEL / Fedora Linux it is a little bit more complicated, as you'll need to install php-pear and compile the module with pecl:

 

[root@centos ~]# yum install php-pear

[root@centos ~]# yum install php-pecl-memcache


Compile memcache

[root@centos ~]# pecl install memcache

 

2. Test if memcached is properly loaded in PHP


Once installed, let's check that the memcached service is running and that memcached support is loaded as a module into the PHP core.

 

webserver1:~# ps -efa  | egrep memcached
nobody   14443     1  0 Oct23 ?        00:04:34 /usr/bin/memcached -v -m 64 -p 11211 -u nobody -l 127.0.0.1 -l 192.168.0.1

root@webserver1:/# php -m | egrep memcache
memcached


To get a bit more verbose information on memcache version and few of memcached variable settings:

root@webserver1:/# php -i |grep -i memcache
/etc/php/7.4/cli/conf.d/25-memcached.ini
memcached
memcached support => enabled
libmemcached version => 1.0.18
memcached.compression_factor => 1.3 => 1.3
memcached.compression_threshold => 2000 => 2000
memcached.compression_type => fastlz => fastlz
memcached.default_binary_protocol => Off => Off
memcached.default_connect_timeout => 0 => 0
memcached.default_consistent_hash => Off => Off
memcached.serializer => php => php
memcached.sess_binary_protocol => On => On
memcached.sess_connect_timeout => 0 => 0
memcached.sess_consistent_hash => On => On
memcached.sess_consistent_hash_type => ketama => ketama
memcached.sess_lock_expire => 0 => 0
memcached.sess_lock_max_wait => not set => not set
memcached.sess_lock_retries => 5 => 5
memcached.sess_lock_wait => not set => not set
memcached.sess_lock_wait_max => 150 => 150
memcached.sess_lock_wait_min => 150 => 150
memcached.sess_locking => On => On
memcached.sess_number_of_replicas => 0 => 0
memcached.sess_persistent => Off => Off
memcached.sess_prefix => memc.sess.key. => memc.sess.key.
memcached.sess_randomize_replica_read => Off => Off
memcached.sess_remove_failed_servers => Off => Off
memcached.sess_sasl_password => no value => no value
memcached.sess_sasl_username => no value => no value
memcached.sess_server_failure_limit => 0 => 0
memcached.store_retry_count => 2 => 2
Registered save handlers => files user memcached


Make sure memcached is enabled in /etc/default/memcached on Debian (on CentOS / RHEL this would be /etc/sysconfig/memcached):

webserver1:~# cat /etc/default/memcached 
# Set this to no to disable memcached.
ENABLE_MEMCACHED=yes

As assured, on server1 memcached + php are ready to be used. Next, log in to Linux server 2 and repeat the same steps: install memcached and the module, and check that it shows as loaded.
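Before wiring PHP to it, you can also talk to each memcached daemon directly over its plain-text protocol; the stats command should answer from both LAN IPs (some nc variants may need -q 1 to exit cleanly):

webserver1:~# echo -e 'stats\nquit' | nc 192.168.0.1 11211 | head -5
webserver1:~# echo -e 'stats\nquit' | nc 192.168.0.200 11211 | head -5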

Next, place the below PHP code as check_memcached.php under one of the websites hosted on your webservers:
 

<?php
$isMemcacheAvailable = false;
if (class_exists('Memcache')) {
    $server = 'localhost';
    if (!empty($_REQUEST['server'])) {
        $server = $_REQUEST['server'];
    }
    $memcache = new Memcache;
    $isMemcacheAvailable = @$memcache->connect($server);

    if ($isMemcacheAvailable) {
        $aData = $memcache->get('data');
        echo '<pre>';
        if ($aData) {
            echo '<h2>Data from Cache:</h2>';
            print_r($aData);
        } else {
            $aData = array(
                'me' => 'you',
                'us' => 'them',
            );
            echo '<h2>Fresh Data:</h2>';
            print_r($aData);
            $memcache->set('data', $aData, 0, 300);
        }
        $aData = $memcache->get('data');
        if ($aData) {
            echo '<h3>Memcache seem to be working fine!</h3>';
        } else {
            echo '<h3>Memcache DOES NOT seem to be working!</h3>';
        }
        echo '</pre>';
    }
}

if (!$isMemcacheAvailable) {
    echo 'Memcache not available';
}

?>


Launch https://your-dns-round-robined-domain.com/check_memcached.php in a browser; the browser output should be as on the below screenshot:

check_memcached-php-script-website-screenshot

3. Configure memcached daemons on both nodes

All we need to set up are the listen IPv4 addresses.

On Host Webserver1
You should have in /etc/memcached.conf

-l 127.0.0.1
-l 192.168.0.1

webserver1:~# grep -Ei '\-l' /etc/memcached.conf 
-l 127.0.0.1
-l 192.168.0.1


On Host Webserver2

-l 127.0.0.1
-l 192.168.0.200

 

webserver2:~# grep -Ei '\-l' /etc/memcached.conf
-l 127.0.0.1
-l 192.168.0.200

 

4. Configure memcached in php.ini

Edit the config /etc/php.ini (on CentOS / RHEL), or on Debian / Ubuntu etc. modify /etc/php/*/apache2/php.ini (depending on the PHP version you're using, the location could differ, let's say /etc/php/5.6/apache2/php.ini):

If you wonder where is the php.ini config in your case you can usually get it from the php cli:

webserver1:~# php -i | grep "php.ini"
Configuration File (php.ini) Path => /etc/php/7.4/cli
Loaded Configuration File => /etc/php/7.4/cli/php.ini

 

! Note: On PHP-FPM installations (where the FastCGI Process Manager handles PHP requests), the path would rather be something like:
 

/etc/php5/fpm/php.ini

In php.ini you need to change at minimum the below 2 variables:
 

session.save_handler =
session.save_path =


By default session.save_path would be set to something like:

session.save_path = "/var/lib/php7/sessions"


To make php use the 2 centrally configured memcached servers on webserver1 and webserver2 (or even more configured memcached machines), set it to look like so (the memcache handler expects tcp:// prefixed addresses):

session.save_path = "tcp://192.168.0.200:11211, tcp://192.168.0.1:11211"

Also set:

session.save_handler = memcache


Overall, the changed php.ini configuration on Linux machine 1 (webserver1) and Linux machine 2 (webserver2) should be:

session.save_handler = memcache
session.save_path = "tcp://192.168.0.200:11211, tcp://192.168.0.1:11211"

 

Below is approximately how it should look on both :

webserver1: ~# grep -Ei 'session.save_handler|session.save_path' /etc/php.ini
;; session.save_handler = files
session.save_handler = memcache
;     session.save_path = "N;/path"
;     session.save_path = "N;MODE;/path"
;session.save_path = "/var/lib/php7/sessions"
session.save_path="192.168.0.200:11211, 192.168.0.1:11211"
;       (see session.save_path above), then garbage collection does *not*
 

 

webserver2: ~# grep -Ei 'session.save_handler|session.save_path' /etc/php.ini
;; session.save_handler = files
session.save_handler = memcache
;     session.save_path = "N;/path"
;     session.save_path = "N;MODE;/path"
;session.save_path = "/var/lib/php7/sessions"
session.save_path="192.168.0.200:11211, 192.168.0.1:11211"
;       (see session.save_path above), then garbage collection does *not*


As you can see, I have configured memcached on webserver1 to listen on the internal LAN IP 192.168.0.1 and on webserver2 to listen on 192.168.0.200, both on TCP port 11211 (the default memcached listen port); for security or obscurity reasons you might choose another free port. Make sure to also set up proper firewalling for that port; the best is to allow connections on port 11211 only between 192.168.0.1 and 192.168.0.200 on each of machine 1 and machine 2, as sketched below.
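A minimal iptables sketch of that restriction, as run on webserver1 (mirror the peer's source IP on webserver2):

webserver1:~# iptables -A INPUT -i lo -j ACCEPT
webserver1:~# iptables -A INPUT -p tcp --dport 11211 -s 192.168.0.200 -j ACCEPT
webserver1:~# iptables -A INPUT -p tcp --dport 11211 -j DROP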

loadbalancing2-php-sessions-scheme-explained
 

5. Enable Memcached for session redundancy


The next step is to configure memcached to allow failover (e.g. use both memcached daemons on the 2 Linux hosts) and to configure session redundancy.
Edit /etc/php/7.3/mods-available/memcache.ini or /etc/php5/mods-available/memcache.ini, or respectively the right location depending on the installed PHP version.
 

webserver1 :~#  vim /etc/php/7.3/mods-available/memcache.ini

; configuration for php memcached module
; priority=20
; settings to write sessions to both servers and have fail over
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=3
extension=memcached.so

 

webserver2 :~# vim /etc/php/7.3/mods-available/memcache.ini

; configuration for php memcached module
; priority=20
; settings to write sessions to both servers and have fail over
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=3
extension=memcached.so

 

The memcache.session_redundancy directive must be equal to the number of memcached servers + 1 for the session information to be replicated to all the servers. This is due to a bug in PHP.
I have only 2 memcached servers configured, that's why I set it to 3.
 

6. Restart Apache Webservers

Restart on both machines webserver1 and webserver2 Apache to make php load memcached.so
 

webserver1:~# systemctl restart apache2

webserver2:~# systemctl restart apache2

 

7. Restart memcached on machine 1 and 2

 

webserver1 :~# systemctl restart memcached

webserver2 :~# systemctl restart memcached

 

8. Test php sessions are working as expected with a php script

Copy a file test_sessions.php to an accessible URL on both website locations:
 

<?php
session_start();

if(isset($_SESSION['georgi']))
{
echo "Session is ".$_SESSION['georgi']."!\n";
}
else
{
echo "Session ID: ".session_id()."\n";
echo "Session Name: ".session_name()."\n";
echo "Setting 'georgi' to 'cool'\n";
$_SESSION['georgi']='cool';
}
?>

 

Now run the test to see PHP sessions are kept persistently:
 

hipo@jeremiah:~/Desktop $ curl -vL -s https://www.pc-freak.net/session.php 2>&1 | grep 'Set-Cookie:'
< Set-Cookie: PHPSESSID=micir464cplbdfpo36n3qi9hd3; expires=Tue, 10-Nov-2020 12:14:32 GMT; Max-Age=86400; path=/

hipo@jeremiah:~/Desktop $ curl -L --cookie "PHPSESSID=micir464cplbdfpo36n3qi9hd3" http://83.228.93.76/session.php http://213.91.190.233/session.php
Session is cool!
Session is cool!

 

Copy to the locations resolving to both DNS A records a sample php script such as sessions_test.php with the below content:

<?php
    header('Content-Type: text/plain');
    session_start();
    if(!isset($_SESSION['visit']))
    {
        echo "This is the first time you're visiting this server\n";
        $_SESSION['visit'] = 0;
    }
    else
            echo "Your number of visits: ".$_SESSION['visit'] . "\n";

    $_SESSION['visit']++;

    echo "Server IP: ".$_SERVER['SERVER_ADDR'] . "\n";
    echo "Client IP: ".$_SERVER['REMOTE_ADDR'] . "\n";
    print_r($_COOKIE);
?>

Test it in an Opera / Firefox / Chrome web browser.

You should get an output in the browser similar to:
 

Your number of visits: 15
Server IP: 83.228.93.76
Client IP: 91.92.15.51
Array
(
    [_ga] => GA1.2.651288003.1538922937
    [__utma] => 238407297.651288003.1538922937.1601730730.1601759984.45
    [__utmz] => 238407297.1571087583.28.4.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not provided)
    [shellInABox] => 467306938:1110101010
    [fpestid] => EzkIzv_9OWmR9PxhUM8HEKoV3fbOri1iAiHesU7T4Pso4Mbi7Gtt9L1vlChtkli5GVDKtg
    [__gads] => ID=8a1e445d88889784-22302f2c01b9005b:T=1603219663:RT=1603219663:S=ALNI_MZ6L4IIaIBcwaeCk_KNwmL3df3Z2g
    [PHPSESSID] => mgpk1ivhvfc2d0daq08e0p0ec5
)

If you want to test that php sessions are working from a text browser or from another external script for automation, use something like the below PHP code:
 

<?php
// save as "session_test.php" inside your webspace  
ini_set('display_errors', 'On');
error_reporting(6143);

session_start();

$sessionSavePath = ini_get('session.save_path');

echo '<br><div style="background:#def;padding:6px">'
   , 'If a session could be started successfully <b>you should'
   , ' not see any Warning(s)</b>, otherwise check the path/folder'
   , ' mentioned in the warning(s) for proper access rights.<hr>';
echo "WebServer IP:" . $_SERVER[‘SERVER_ADDR’] . "\n<br />";
if (empty($sessionSavePath)) {
    echo 'A "<b>session.save_path</b>" is currently',
         ' <b>not</b> set.<br>Normally "<b>';
    if (isset($_ENV['TMP'])) {
        echo $_ENV['TMP'], '</b>" ($_ENV["TMP"]) ';
    } else {
        echo '/tmp</b>" or "<b>C:\tmp</b>" (or whatever',
             ' the OS default "TMP" folder is set to)';
    }    
    echo ' is used in this case.';
} else {
    echo 'The current "session.save_path" is "<b>',
         $sessionSavePath, '</b>".';
}

echo '<br>Session file name: "<b>sess_', session_id()
   , '</b>".</div><br>';
?>

You can download the test_php_sessions.php script here.

To test with lynx:

hipo@jeremiah:~/Desktop $ lynx -source 'https://www.pc-freak.net/test_php_sessions.php'
<br><div style="background:#def;padding:6px">If a session could be started successfully <b>you should not see any Warning(s)</b>, otherwise check the path/folder mentioned in the warning(s) for proper access rights.<hr>WebServer IP:83.228.93.76
<br />The current "session.save_path" is "<b>tcp://192.168.0.200:11211, tcp://192.168.0.1:11211</b>".<br>Session file name: "<b>sess_5h18f809b88isf8vileudgrl40</b>".</div><br>

Improve Apache Load Balancing with mod_cluster – Apaches to Tomcats Application servers Get Better Load Balancing

Thursday, March 31st, 2016

improve-apache-load-balancing-with-mod_cluster-apaches-to-tomcats-application-servers-get-better-load-balancing-mod_cluster-logo


Earlier I've blogged on how to set up Apache to serve as a Load Balancer for 2, 3, 4 etc. Tomcat / other backend application servers with mod_proxy and mod_proxy_balancer. Though the default Apache-provided mod_proxy_balancer works fine most of the time, if you want more precise and sophisticated balancing with better load distribution you will probably want to install and use mod_cluster instead.

 

So what is Mod_Cluster and why use it instead of Apache proxy_balancer?
 

Mod_cluster is an innovative Apache module for HTTP load balancing and proxying. It implements a communication channel between the load balancer and back-end nodes to make better load-balancing decisions and redistribute loads more evenly.

Why use mod_cluster instead of a traditional load balancer such as Apache's mod_balancer and mod_proxy or even a high-performance hardware balancer?

Thanks to its unique back-end communication channel, mod_cluster takes into account back-end servers' loads, and thus provides better and more precise load balancing tailored for JBoss and Tomcat servers. Mod_cluster also knows when an application is undeployed, and does not forward requests for its context (URL path) until its redeployment. And mod_cluster is easy to implement, use, and configure, requiring minimal configuration on the front-end Apache server and on the back-end servers.
 


So what is the advantage of mod_cluster vs mod_proxy_balancer?

Well, here are a few things that turn the scales in favour of mod_cluster:

 

  •     advertises its presence via multicast so that workers can join without any configuration
     
  •     workers will report their available contexts
     
  •     mod_cluster will create proxies for these contexts automatically
     
  •     if you want to, you can still fine-tune this behaviour, e.g. so that .gif images are served from httpd and not from workers…
     
  •     most importantly: unlike pure mod_proxy or mod_jk, mod_cluster knows exactly how much load there is on each node because nodes are reporting their load back to the balancer via special messages
     
  •     default communication goes over AJP, you can use HTTP and HTTPS

 

1. How to install mod_cluster on Linux ?


You can use mod_cluster either with JBoss or Tomcat back-end servers. We'll install and configure mod_cluster with Tomcat under CentOS; using it with JBoss or on other Linux distributions is a similar process. I'll assume you already have at least one front-end Apache server and a few back-end Tomcat servers installed.

To install mod_cluster, first download the latest mod_cluster httpd binaries. Make sure to select the correct package for your hardware architecture – 32- or 64-bit.
Unpack the archive to create four new Apache module files: mod_advertise.so, mod_manager.so, mod_proxy_cluster.so, and mod_slotmem.so. We won't need mod_advertise.so; it advertises the location of the load balancer through multicast packets, but we will use a static address on each back-end server.

Copy the other three .so files to the default Apache modules directory (/etc/httpd/modules/ for CentOS).
Before loading the new modules in Apache you have to remove the default proxy balancer module (mod_proxy_balancer.so) because it is not compatible with mod_cluster.

Edit the Apache configuration file (/etc/httpd/conf/httpd.conf) and remove the line

 

LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

 


Create a new configuration file and give it a name such as /etc/httpd/conf.d/mod_cluster.conf. Use it to load mod_cluster's modules:

 

 

 

LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so

In the same file add the rest of the settings you'll need for mod_cluster, i.e. the permissions and the VirtualHost section, something like:

Listen 192.168.180.150:9999

<VirtualHost 192.168.180.150:9999>

    <Directory />
        Order deny,allow
        Deny from all
        Allow from 192.168
    </Directory>

    ManagerBalancerName mymodcluster
    EnableMCPMReceive
</VirtualHost>

ProxyPass / balancer://mymodcluster/


The above directives create a new virtual host listening on port 9999 on the Apache server you want to use for load balancing, on which the load balancer will receive information from the back-end application servers. In this example, the virtual host is listening on IP address 192.168.180.150, and for security reasons it allows connections only from the 192.168.0.0/16 network.
The directive ManagerBalancerName defines the name of the cluster – mymodcluster in this example. The directive EnableMCPMReceive allows the back-end servers to send updates to the load balancer. The standard ProxyPass directive instructs Apache to proxy all requests to the mymodcluster balancer.
That's all you need for a minimal configuration of mod_cluster on the Apache load balancer. At the next server restart Apache will automatically load the file mod_cluster.conf from the /etc/httpd/conf.d directory. To learn about more options that might be useful in specific scenarios, check mod_cluster's documentation.

While you're changing Apache configuration, you should probably set the log level in Apache to debug when you're getting started with mod_cluster, so that you can trace the communication between the front- and the back-end servers and troubleshoot problems more easily. To do so, edit Apache's configuration file and add the line LogLevel debug, then restart Apache.
 

2. How to set up Tomcat appserver for mod_cluster ?
 

Mod_cluster works with Tomcat versions 6, 7 and 8; to set up the Tomcat back ends you have to deploy a few JAR files and make a change in Tomcat's server.xml configuration file.
The necessary JAR files extend Tomcat's default functionality so that it can communicate with the proxy load balancer. You can download the JAR file archive by clicking on "Java bundles" on the mod_cluster download page. It will be saved under the name mod_cluster-parent-1.2.6.Final-bin.tar.gz.

Create a new directory such as /root/java_bundles and extract the files from mod_cluster-parent-1.2.6.Final-bin.tar.gz there. Inside the directory /root/java_bundles/JBossWeb-Tomcat/lib/ you will find all the necessary JAR files for Tomcat, including two Tomcat version-specific JAR files – mod_cluster-container-tomcat6-1.2.6.Final.jar for Tomcat 6 and mod_cluster-container-tomcat7-1.2.6.Final.jar for Tomcat 7. Delete the one that does not correspond to your Tomcat version.

Copy all the files from /root/java_bundles/JBossWeb-Tomcat/lib/ to your Tomcat lib directory – thus if you have installed Tomcat in

/srv/tomcat

run the command:

 

cp -rpf /root/java_bundles/JBossWeb-Tomcat/lib/* /srv/tomcat/lib/.

 

Then edit your Tomcat's server.xml file

/srv/tomcat/conf/server.xml.


After the default listeners add the following line:

 

<Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" proxyList="192.168.180.150:9999"/>



This instructs Tomcat to send its mod_cluster-related information to IP 192.168.180.150 on TCP port 9999, which is what we set up as Apache's dedicated vhost for mod_cluster.
While that's enough for a basic mod_cluster setup, you should also configure a unique, intuitive JVM route value on each Tomcat instance so that you can easily differentiate the nodes later. To do so, edit the server.xml file and extend the Engine property to contain a jvmRoute, like this:
 


 

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">


Assign a different value, such as node2, to each Tomcat instance. Then restart Tomcat so that these settings take effect.

To confirm that everything is working as expected and that the Tomcat instance connects to the load balancer, grep Tomcat's log for the string "modcluster" (case-insensitive).
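Assuming the Tomcat installation path used earlier, with logging going to catalina.out, that would be:

# grep -i modcluster /srv/tomcat/logs/catalina.out

You should see output similar to: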

Mar 29, 2016 10:05:00 AM org.jboss.modcluster.ModClusterService init
INFO: MODCLUSTER000001: Initializing mod_cluster ${project.version}
Mar 29, 2016 10:05:17 AM org.jboss.modcluster.ModClusterService connectionEstablished
INFO: MODCLUSTER000012: Catalina connector will use /192.168.180.150


This shows that mod_cluster has been successfully initialized and that it will use the connector for 192.168.180.150, the configured IP address for the main listener.
Also check Apache's error log. You should see confirmation about the properly working back-end server:

[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2026): proxy: ajp: has acquired connection for (192.168.180.150)
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2082): proxy: connecting ajp://192.168.180.150:8009/ to  192.168.180.150:8009
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2209): proxy: connected / to  192.168.180.150:8009
[Tue Mar 29 10:05:00 2013] [debug] mod_proxy_cluster.c(1366): proxy_cluster_try_pingpong: connected to backend
[Tue Mar 29 10:05:00 2013] [debug] mod_proxy_cluster.c(1089): ajp_cping_cpong: Done
[Tue Mar 29 10:05:00 2013] [debug] proxy_util.c(2044): proxy: ajp: has released connection for (192.168.180.150)


This Apache error log shows that an AJP connection with 192.168.180.150 was successfully established and confirms the working state of the node, then shows that the load balancer closed the connection after the successful attempt.

You can start testing by opening in a browser the example servlet SessionExample, which is available in a default installation of Tomcat.
Access this servlet through a browser at the URL http://balancer_address/examples/servlets/servlet/SessionExample. In your browser you should see first a session ID that contains the name of the back-end node that is serving your request – for instance, Session ID: 5D90CB3C0AA05CB5FE13121E4B23E670.node2.

Next, through the servlet's web form, create different session attributes. If you have a properly working load balancer with sticky sessions you should always (that is, until your current browser session expires) access the same node, with the previously created session attributes still available.

To test further to confirm load balancing is in place, at the same time open the same servlet from another browser. You should be redirected to another back-end server where you can conduct a similar session test.
As you can see, mod_cluster is easy to use and configure. Give it a try to address sporadic single-back-end overloads that cause overall application slowdowns.