Posts Tagged ‘requests’

Windows 10: install a local Proxy server to save bandwidth on a slow and limited Mobile Phone HotSpot shared network connection

Wednesday, August 20th, 2025

https://pc-freak.net/images/how-to-use-local-proxy-to-speed-up-internet-speed-connectivity-on-windows-os-with-squid-and-privoxy

If you're on an Internet ISP that provides connectivity via an Internet / WiFi Router device over 3G / 4G / 5G etc., but your receiving point is situated somewhere very far away, in places like high mountains, let's say Rila Mountain or the Alps, where the coverage of the Internet Service Provider is low or very low, yet you still need to Work / Play / Entertain on the Net frequently, you will certainly be looking for ways to Speed Up / Optimize the Internet connectivity somehow.
You cannot do miracles, but the daily operations and the bulk of repeating traffic can certainly be trimmed down by installing and using a simple local proxy server.

The advantages of using a proxy go beyond speeding up the Internet connection; here are the Pros you get by using one:
 

  • Caches frequently accessed content (e.g., images, scripts, web pages).
  • Blocks ads and trackers (reduces bandwidth).
  • Compresses data (if needed)
  • Can serve multiple local devices if needed.
     

To save bandwidth on a slow and limited connectivity Internet router or mobile phone hotspot using Windows 10, you can install a local proxy server that does all of the above.

Here's a step-by-step guide to set up a local caching proxy server on Windows 10 and reduce bandwidth usage over a mobile hotspot.


1. Install Squid (Caching Proxy Server)

Squid is a powerful and widely used open-source caching proxy.

Download Squid for Windows

Download Squid for Windows from:

https://squid.acmeconsulting.it/download (Unofficial, stable build)

or compile it manually (if you're running your own Linux or BSD router that is passing on the traffic)

2. Install the Squid Proxy server on Windows


2.1. Extract or install the downloaded Squid package.


 

2.2. Install it as a Windows Service

Open Command Prompt (Admin) and run:

C:\Users\hipo\Downloads> squid -i

Initialize cache directories:
 

C:\Users\hipo\Downloads> squid -z

 

3. Configure Squid Proxy via squid.conf


3.1. Open squid.conf

usually in

C:\Squid\etc\squid\squid.conf
 

3.2. Edit key lines:  

http_port 3128
cache_dir ufs c:/squid/var/cache 100 16 256
access_log c:/squid/var/logs/access.log
cache_log c:/squid/var/logs/cache.log
maximum_object_size 4096 KB
cache_mem 64 MB

 

 

3.3. Allow local access:

 

acl localnet src 192.168.0.0/16
http_access allow localnet

(Adjust IP ranges according to your network.)
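For example, if the client devices sit on a typical phone hotspot subnet (Android hotspots commonly hand out 192.168.43.x addresses; check yours with ipconfig), a narrower sketch would be:

acl localnet src 192.168.43.0/24
http_access allow localnet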

 

Here's a ready-to-use Squid configuration file optimized for running on Windows 10, covering:

  • Caching web content to save bandwidth
  • Blocking ads and trackers
  • Allowing local device connections

 

Location of the squid Config File

The Windows squid installer should have set up the Squid proxy by default inside C:\Squid, so place the configuration below as squid.conf in the full path:

C:\Squid\etc\squid\squid.conf

 

# BASIC CONFIGURATION
http_port 3128
visible_hostname localhost

# CACHE SETTINGS
cache_mem 128 MB
maximum_object_size 4096 KB
maximum_object_size_in_memory 512 KB
cache_dir ufs c:/squid/var/cache 100 16 256
cache_log c:/squid/var/logs/cache.log
access_log c:/squid/var/logs/access.log

# DNS
dns_nameservers 8.8.8.8 1.1.1.1

# ACLs (Access Control Lists)
acl localhost src 127.0.0.1/32
acl localnet src 192.168.0.0/16
acl SSL_ports port 443      # HTTPS (referenced by the CONNECT rule below)
acl Safe_ports port 80      # HTTP
acl Safe_ports port 443     # HTTPS
acl Safe_ports port 21      # FTP
acl CONNECT method CONNECT

# BLOCKED DOMAINS (Ad/Tracking)
acl ads dstdomain .doubleclick.net .googlesyndication.com .googleadservices.com
acl ads dstdomain .ads.yahoo.com .adnxs.com .track.adform.net
http_access deny ads

# SECURITY & ACCESS CONTROL
http_access allow localhost
http_access allow localnet
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all

# REFRESH PATTERNS (Cache aggressively)
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i \.jpg$       10080   90%     43200
refresh_pattern -i \.png$       10080   90%     43200
refresh_pattern -i \.gif$       10080   90%     43200
refresh_pattern -i \.css$       10080   90%     43200
refresh_pattern -i \.js$        10080   90%     43200
refresh_pattern -i \.html$      1440    90%     10080
refresh_pattern .               0       20%     4320

# LOGGING
logfile_rotate 10

 

 

4. Start the Squid Windows Service from an Admin command prompt

C:\Users\hipo> net start squid


5. Test the Proxy

 

Set the proxy server in your Windows proxy settings:
 

  • Go to Settings > Network & Internet > Proxy
     
  • Enable Manual proxy setup:

Address: 127.0.0.1

Port: 3128

Browse the web — Squid will now cache content locally.
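To verify traffic actually flows through Squid, you can check for the X-Cache header Squid adds to responses; a quick sketch, assuming curl.exe is available (bundled with recent Windows 10 builds). Run it twice; the second response should show a HIT if the object is cacheable:

C:\Users\hipo> curl.exe -s -o NUL -D - -x http://127.0.0.1:3128 http://example.com | findstr /i "X-Cache"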

Make sure C:\Squid\var\cache and C:\Squid\var\logs exist.

You can expand the ad block list by importing public blocklists.

To share this proxy with other local devices, ensure they’re on the same network and allowed via ACL.
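If Windows Firewall blocks incoming connections to Squid, here is a sketch of opening TCP port 3128 from an Admin command prompt (the rule name is arbitrary):

C:\Users\hipo> netsh advfirewall firewall add rule name="Squid Proxy" dir=in action=allow protocol=TCP localport=3128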
 

6. Block Ads and Save More Bandwidth with the Proxy

You can modify Squid to:

  • Block ad domains (using acl rules or a blacklist)
  • Limit download sizes (see the sketch after the example below)
  • Restrict background updates or telemetry

Example rule to block a domain:

acl ads dstdomain .doubleclick.net .ads.google.com
http_access deny ads
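To limit download sizes, there is the reply_body_max_size directive; a minimal sketch in Squid 3.x syntax (older 2.x builds use a slightly different form), capping replies at 50 MB:

reply_body_max_size 50 MB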


7. Use the Alternative Lightweight Filtering Proxy Privoxy

What is Privoxy?

Privoxy is a lightweight, highly customizable proxy server focused on privacy protection, content filtering, and web page optimization.

Unlike caching proxies (like Squid), Privoxy doesn’t store data locally—but it filters and blocks unnecessary traffic before it even reaches your browser.

7.1. Why Use Privoxy to Speed Up Internet?

Here's how Privoxy helps:

Feature                   Benefit
Blocks Ads & Banners      Reduces page load size and clutter
Stops Trackers            Prevents background data requests
Filters Pop-ups           Improves usability and safety
Speeds Up Web Browsing    By stripping unwanted content
Low Resource Usage        Works on older or low-spec systems

 

Privoxy is easier to set up than Squid, much simpler overall, and fits well if you want something lighter; it is also great for ad/tracker blocking.
Installing and using it comes down to 4 simple steps:

  1. Download from: https://www.privoxy.org/

  2. Install and run it.

  3. Configure your browser/system to use the proxy, let's say on:

    127.0.0.1:8118

  4. Customize

    config.txt

    to add block rules.
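For reference, the port Privoxy listens on is controlled by the listen-address directive in that main config (config.txt on Windows); the shipped default already matches the address used below:

listen-address  127.0.0.1:8118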

7.2. Configure Your Web Browser or System Proxy

Set your browser/system to use the local Privoxy proxy:

Proxy address: 127.0.0.1
Port: 8118

On Windows:

  • Go to Settings > Network & Internet > Proxy
  • Enable Manual Proxy Setup
  • Enter Address: 127.0.0.1 and Port: 8118
  • Save

7.3. Enable Privoxy Filtering and Blocking Rules

Privoxy comes with built-in rules for:

  • Ad blocking
  • Tracker blocking
  • Cookie management
  • Script filtering

You can customize the filters in the configuration files via the following configs:

Main config:

C:\Program Files (x86)\Privoxy\config

 

Action files:

C:\Program Files (x86)\Privoxy\default.action

 

Filter files:

C:\Program Files (x86)\Privoxy\default.filter

 

7.4. Example to Block All Ads with Privoxy

Look in

default.action

and ensure these are uncommented:

 

{ +block }


Or add specific ad server domains:

{ +block{Ad Servers} }
.doubleclick.net
.ads.google.com
.adnxs.com

 

You can further use community-maintained blocklists for stronger Ads filtering.
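To plug such a blocklist in, Privoxy loads the action files listed in its main config; as a sketch, keep your own rules and imported lists in user.action and make sure a line like this is present (most installs ship it by default):

actionsfile user.action   # per-user customizations and imported blocklists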

 

Privoxy does not compress traffic, so to speed things up even further you might want to compress the HTTP traffic as well; to do so, use ziproxy (an HTTP traffic compressor).

Now all your HTTP traffic is routed through Privoxy, and you will notice that search engines and repeatedly accessed websites' pictures and Internet resources such as CSS / JavaScript / HTML etc. will get a boost!

Zabbix PowerShell PS1 script to write zero or one if a string is matched inside a log file

Monday, December 2nd, 2024

How to Install and Configure Zabbix Server and Client on Rocky Linux 9 - Cộng Đồng Linux

At work we had set up zabbix log file processing for a few servers, for a service that is doing Monitoring Health Checks for a special application via a strongly encrypted tunnel. Based on the check, the app reports whether the remote side has processed data or not.
As me and my team are not maintainers of the zabbix-server where the zabbix-agents are sending the data, much of the content being sent consists of simply "" empty strings via a zabbix Item setup. Those empty strings however get stored in the zabbix-server database, and since this check is made frequently, about 500 records of empty string lines were being written to the zabbix server; we got a complaint by the zabbix administrators that we have to correct our Monitoring setup to not flood the zabbix-server.

Since zabbix cannot catch the "" empty string and we cannot suppress the string from being written in the Item, we needed a way to change the monitoring so that the configured Application check returns 1 (on error) and 0 (on success).

Zabbix, even though advanced, has a strange behavior with its log[] function, e.g.

log[/path/to/log,,,,skip]

The log function, used to analyze a log file and cut out the last or first lines of a file similar to UNIX's head and tail over log files, is described in the zabbix log file monitoring documentation here. If a string is matched it can return the string 1, but if nothing gets matched the result is an empty string "" and this empty string cannot be used to further analyze the data within the Item.

There are plenty of discussions online about this weird behavior and many people offer different approaches to solve the strange situation, but as we tried them together with our colleague sysadmins, none of those really worked out.

Thus we decided to work around it the classical way, e.g. to simply use a powershell script that checks a number of lines inside a provided log file, analyzes whether a string gets found, and prints out a value of "1" if the string is matched or "0" if not, with this PS1 script set to run via a standard zabbix userparameter script.

This worked well; as all of us are mainly managing Linux systems and we don't have enough knowledge of powershell, we used our internal Artificial Intelligence (AI) tool based on LibreChat – a free and open source ChatGPT clone.

LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generations. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation…)

$logfile = "C:\path\to\your\logfile.log"
$searchString = "-1"

# Get the last 140 lines of the log file
$lines = Get-Content $logfile -Tail 140

# Filter lines containing the search string (escaped so it is matched literally)
$found = $lines | Where-Object { $_ -match [regex]::Escape($searchString) }

# Output 1 if the string was matched, 0 if not
if ($found) {
    Write-Host 1
} else {
    Write-Host 0
}

You can download the return_zero_or_one-if-string-matches-in-log-powershell.ps1 script here
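To wire the script into the agent, here is a minimal sketch of a zabbix_agentd.conf UserParameter line (the item key app.log.check and the script path are hypothetical, adjust them to your environment):

UserParameter=app.log.check,powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\zabbix\scripts\return_zero_or_one-if-string-matches-in-log-powershell.ps1"

The Item on the zabbix-server side then consumes a clean numeric 0 or 1 instead of empty strings.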

Fix weird double logging in haproxy.log file due to haproxy.cfg misconfiguration

Tuesday, March 8th, 2022

haproxy-logging-front-network-back-network-diagram

While we were building a new machine that will serve as a Haproxy frontend proxy to tunnel some traffic to a number of backends, we came across a weird oddity. The call requests sent to the haproxy and redirected to the backend servers were being written in the log twice.

Since we have two backend Application servers that are serving the requests, my first guess was that this is caused by the fact haproxy tries to connect to both nodes on each sent request, or that the double logging is caused by rsyslogd doing something strange on each received query. The rsyslog configuration, receiving the messages via the local6 facility, is like that:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
#2022/02/02: HAProxy logs to local6, save the messages
local6.*                                                /var/log/haproxy.log


The haproxy basic global and defaults and frontend section config is like that:
 

global
    log          127.0.0.1 local6 debug
    chroot       /var/lib/haproxy
    pidfile      /run/haproxy.pid
    stats socket /var/lib/haproxy/haproxy.sock mode 0600 level admin
    maxconn      4000
    user         haproxy
    group        haproxy
    daemon

 

defaults
    mode        tcp
    log         global
#    option      dontlognull
#    option      httpclose
#    option      httplog
#    option      forwardfor
    option      redispatch
    option      log-health-checks
    timeout connect 10000 # default 10 second time out if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3
 

listen FRONTEND1
        bind 10.71.80.5:63750
        mode tcp
        option tcplog
        log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server backend-host1 10.80.50.1:13750 weight 1 check port 15000
        server backend-host2 10.80.50.2:13750 weight 2 check port 15000

After quick research online on why this odd double logging of requests ends up in /var/log/haproxy.log, it turns out this is caused by

log global being defined twice: once under the defaults section and once in the frontend itself. Hence, to resolve it, I simply had to comment out the log global in the Frontend, so it looks like so:

listen FRONTEND1
        bind 10.71.80.5:63750
        mode tcp
        option tcplog
  #      log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server backend-host1 10.80.50.1:13750 weight 1 check port 15000
        server backend-host2 10.80.50.2:13750 weight 2 check port 15000

 


Next I just reloaded haproxy and each request started leaving only one trail inside haproxy.log, as expected 🙂
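For reference, the reload on a systemd managed machine (assuming the service is simply named haproxy) that applies the config without dropping active connections:

# systemctl reload haproxy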
 

Apache disable requests to not log to access.log Logfile through SetEnvIf and dontlog httpd variables

Monday, October 11th, 2021

apache-disable-certain-strings-from-logging-to-access-log-logo

Logging to Apache access.log is mostly useful, as this is a great way to keep a log of who visited your website and generate periodic statistics with tools such as Webalizer or AWStats, to keep track of your visitors and produce various statistics, such as the number of new visitors and the most visited web pages (the pages that attract most of your web visitors). Once the log analysis tool generates its statistics, it can help you understand better which Web spiders visit your website the most (as spiders have predefined IP addresses), which can give you insight on various web spider site indexation statistics on Google, Yahoo, Bing etc. Sometimes, however, either due to bugs in web spider algorithms or inconsistencies in your website structure, some web pages get double visit records inside the logs; this could happen, for example, if your website includes iframes.

Having web pages accessed once but logged as accessed twice is hence erroneous and unwanted, and though that usually has to be fixed by the website programmers, if such a fix is not easily doable at the moment and the website is running on a critical production system, the double logging of requests can be omitted thanks to a small Apache log hack with the SetEnvIf Apache config directive. Even if there is no double logging happening inside the Apache log, it could be that some cron job, automated monitoring script or tool such as monit is making periodic requests to Apache and this is garbling your Log Statistics results.

In this short article hence I'll explain how to prevent certain request strings from being logged inside /var/log/httpd/access.log.

1. Check SetEnvIf is Loaded on the Webserver
 

On CentOS / RHEL Linux:

# /sbin/apachectl -M |grep -i setenvif
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
 setenvif_module (shared)


On Debian / Ubuntu Linux:

# /usr/sbin/apache2ctl -M |grep -i setenvif
AH00548: NameVirtualHost has no effect and will be removed in the next release /etc/apache2/sites-enabled/000-default.conf:1
 setenvif_module (shared)


2. Using SetEnvIf to omit certain strings from getting logged inside apache access.log


SetEnvIf could be used either in a certain domain's VirtualHost configuration (if the website is configured so), or it can be set as a global Apache rule from /etc/httpd/conf/httpd.conf.

To use SetEnvIf you have to place it inside a <Directory …></Directory> configuration block if it has to be enabled only for a certain Apache configured directory; otherwise you have to place it in the global apache config section.

To be able to use SetEnvIf only in certain directories and subdirectories via .htaccess, you will have to define inside <Directory>:

AllowOverride FileInfo
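For example, wrapped in a directory block (the /var/www/html path below is only an assumption, adjust it to your DocumentRoot):

<Directory /var/www/html>
    AllowOverride FileInfo
</Directory>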


The general syntax to omit a certain repeating Apache request string from being logged with SetEnvIf is as follows:
 

SetEnvIf Request_URI "^/WebSiteStructureDirectory/ACCESS_LOG_STRING_TO_REMOVE$" dontlog


General syntax for SetEnvIf is as follows:

SetEnvIf attribute regex env-variable

SetEnvIf attribute regex [!]env-variable[=value] [[!]env-variable[=value]] …

Below are the possible attributes to pass, as described in the official mod_setenvif documentation.
 

  • Host
  • User-Agent
  • Referer
  • Accept-Language
  • Remote_Host: the hostname (if available) of the client making the request.
  • Remote_Addr: the IP address of the client making the request.
  • Server_Addr: the IP address of the server on which the request was received (only with versions later than 2.0.43).
  • Request_Method: the name of the method being used (GET, POST, etc.).
  • Request_Protocol: the name and version of the protocol with which the request was made (e.g., "HTTP/0.9", "HTTP/1.1", etc.).
  • Request_URI: the resource requested on the HTTP request line – generally the portion of the URL following the scheme and host portion without the query string.

Next locate inside the configuration the line:

CustomLog /var/log/apache2/access.log combined


To exclude the requests marked with dontlog from the log, you'll have to append env=!dontlog to the end of the line.

 

CustomLog /var/log/apache2/access.log combined env=!dontlog

 

You might be using something like cronolog for log rotation to prevent your WebServer logs from becoming too big in size and hard to manage; you can append env=!dontlog to it in the same way.

If you haven't used cronolog, it is perhaps best to show you the package description.

server:~# apt-cache show cronolog|grep -i description -A10 -B5
Version: 1.6.2+rpk-2
Installed-Size: 63
Maintainer: Debian QA Group <packages@qa.debian.org>
Architecture: amd64
Depends: perl:any, libc6 (>= 2.4)
Description-en: Logfile rotator for web servers
 A simple program that reads log messages from its input and writes
 them to a set of output files, the names of which are constructed
 using template and the current date and time.  The template uses the
 same format specifiers as the Unix date command (which are the same
 as the standard C strftime library function).
 .
 It intended to be used in conjunction with a Web server, such as
 Apache, to split the access log into daily or monthly logs:
 .
   TransferLog "|/usr/bin/cronolog /var/log/apache/%Y/access.%Y.%m.%d.log"
 .
 A cronosplit script is also included, to convert existing
 traditionally-rotated logs into this rotation format.

Description-md5: 4d5734e5e38bc768dcbffccd2547922f
Homepage: http://www.cronolog.org/
Tag: admin::logging, devel::lang:perl, devel::library, implemented-in::c,
 implemented-in::perl, interface::commandline, role::devel-lib,
 role::program, scope::utility, suite::apache, use::organizing,
 works-with::logfile
Section: web
Priority: optional
Filename: pool/main/c/cronolog/cronolog_1.6.2+rpk-2_amd64.deb
Size: 27912
MD5sum: 215a86766cc8d4434cd52432fd4f8fe7

If you're using cronolog to daily rotate the access.log and you need to filter the strings out of the logs, you might use something like this in httpd.conf:

 

CustomLog "|/usr/bin/cronolog --symlink=/var/log/httpd/access.log /var/log/httpd/access.log_%Y_%m_%d" combined env=!dontlog


 

3. Disable Apache logging access.log from certain USERAGENT browser
 

You can do much more with SetEnvIf; for example you might want requests from a certain UserAgent (browser) to end up in /dev/null (nowhere), e.g. prevent any Website requests originating from Internet Explorer (MSIE) from being logged.

SetEnvIf User-Agent "(MSIE)" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


4. Disable Apache logging from requests coming from certain FQDN (Fully Qualified Domain Name) localhost 127.0.0.1 or concrete IP / IPv6 address

SetEnvIf Remote_Host "dns.server.com$" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


Of course, for this to work, your website should have functioning DNS servers and Apache should be configured to resolve remote IPs back to their respective DNS defined Hostnames (i.e. HostnameLookups On).

SetEnvIf also recognizes Perl-compatible (PCRE) Regular Expressions, in case you want to filter out of the Apache access log requests incoming from multiple subdomains starting with a certain domain hostname.

 

SetEnvIf Remote_Host "^example" dontlog

– To not log anything coming from localhost.localdomain address ( 127.0.0.1 ) as well as from some concrete IP address :

SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog

SetEnvIf Remote_Addr "192\.168\.1\.180" dontlog

– To disable IPv6 localhost requests that may be coming to the log even though you don't happen to use IPv6 at all (note the attribute is Remote_Addr here as well):

SetEnvIf Remote_Addr "::1" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


– Note here it is obligatory to escape the dots '.'


5. Disable robots.txt Web Crawlers requests from being logged in access.log

SetEnvIf Request_URI "^/robots\.txt$" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog

Using SetEnvIfNoCase to match incoming useragent / Host / file requests case insensitively

The SetEnvIfNoCase is to be used if you want to treat incoming originator strings as case insensitive; this is useful to avoid extraordinary regular expression SetEnvIf rules for lower / upper case symbols.

SetEnvIFNoCase User-Agent "Slurp/cat" dontlog
SetEnvIFNoCase User-Agent "Ask Jeeves/Teoma" dontlog
SetEnvIFNoCase User-Agent "Googlebot" dontlog
SetEnvIFNoCase User-Agent "bingbot" dontlog
SetEnvIFNoCase Remote_Host "fastsearch.net$" dontlog

Omit from access.log logging some standard web files .css, .js, .ico, .gif, .png and Referrals from your own domain

Sometimes your own site scripts refer to content on your own domain that just generates junk in the access.log; here is how to keep it out.

SetEnvIfNoCase Request_URI "\.(gif|jpg|png|css|js|ico|eot)$" dontlog

 

SetEnvIfNoCase Referer "www\.myowndomain\.com" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog

 

6. Disable Apache requests in access.log and error.log completely


Sometimes, in rare cases, the produced Apache access and error logs are really big and you already have the requests logged in another F5 Load Balancer or Haproxy in front of the Apache WebServer, or alternatively the logging is not interesting at all, as the Web Application served (written in Perl / Python / Ruby) handles the logging itself.
I've earlier described how this is done in a good amount of details in previous article Disable Apache access.log and error.log logging on Debian Linux and FreeBSD

To disable it, you will have to comment out CustomLog, or point it together with ErrorLog to /dev/null in apache2.conf / httpd.conf (depending on the distro):
 

CustomLog /dev/null combined
ErrorLog /dev/null


7. Restart Apache WebServer to load settings
 

An important thing to mention is that in case you have a Webserver with multiple complex configurations and there are specific log patterns to omit from the logs, it might be a very good idea to:

a. Create /etc/httpd/conf/dontlog.conf / /etc/apache2/dontlog.conf and add inside it all your custom dontlog configurations
b. Include dontlog.conf from /etc/httpd/conf/httpd.conf / /etc/apache2/apache2.conf
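For example, on a RHEL-style layout the include line from step b could look like this (relative to the ServerRoot, matching the path from step a):

Include conf/dontlog.conf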

Finally, to make the changes take effect, you will of course need to restart the Apache webserver; depending on the distro this is done with systemd or System V:

For systemd RPM based distro:

systemctl restart httpd

or for Deb based Debian etc.

systemctl restart apache2

On old System V scripts systems:

On RedHat / CentOS etc. restart Apache with:
 

/etc/init.d/httpd restart


On Deb based SystemV:
 

/etc/init.d/apache2 restart


What we learned?

We have learned about SetEnvIf and how it can be used to prevent certain request strings from getting logged into access.log through dontlog, how to completely stop a certain browser based on a useragent from being logged to the access.log, as well as how to omit from logging certain requests incoming from certain IP addresses / IPv6 or FQDNs, and how to stop robots.txt requests from being logged to the httpd log.


Finally, we have learned how to completely disable Apache logging if logging is handled by another external application.
 

Stop haproxy log requests to /var/log/messages / Disable haproxy double logging

Friday, June 25th, 2021

haproxy-logo

On a CentOS Linux release 7.9.2009 (Core) I've been running haproxies on two KVM virtual machines configured in a High Availability cluster with Corosync and Pacemaker. The machines are inherited from another admin (I did not install the server hardware or the OS), but we have received the systems for support.
The old sysadmins seem to not have cared much about the system, so they've left haproxy with double logging: once under the separately configured log in /var/log/haproxy/haproxyprod.log, while each Haproxy TCP mode request flowing through was logged to /var/log/messages as well. As you can guess, this shouldn't be so, because we're wasting hard drive space, so to fix that I had to stop the haproxy double logging to /var/log/messages.

The logging is done under a separate local facility pointer local6; the /etc/haproxy/haproxyprod.cfg goes as follows:
 

[root@haproxy01 ~]# cat /etc/haproxy/haproxyprod.cfg

global
    # log <address> [len ] [max level [min level]]
    log 127.0.0.1 local6 debug

 

The logging is handled by rsyslog via local6, so obviously the place to keep the logging out of /var/log/messages is the rsyslog configuration.
The configuration logging to the separate log file in rsyslog is as follows:

local6.*                                                /var/log/haproxy/haproxyprod.log

It turned out to be really easy to prevent haproxy from getting its requests logged to /var/log/messages; all I had to change is in /etc/rsyslog.conf:

a local6.none has to be placed on the /var/log/messages line. The full line configuration in /etc/rsyslog.conf that stopped the double logging is:

# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local5.none;local6.none                /var/log/messages
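Finally, restart rsyslog for the change to take effect:

[root@haproxy01 ~]# systemctl restart rsyslog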

 

Disable DNS recursion and AXFR requests in BIND on Debian Linux and FreeBSD / How to test a nameserver if AXFR requests are allowed with dig command

Monday, March 15th, 2010

I am playing with bind on a newly configured server and therefore doing my best to configure the nameserver in a good manner. In that manner of thoughts I remembered the good old “recursion”, which could pose a security hole in your DNS systems. I won't dwell on how bad it is for a BIND domain resolver to have Domain recursion switched on; there is plenty of information you can read further online. Anyways, here is a brief overview of recursion:
Recursive DNS is essentially the opposite of Custom DNS. Custom DNS is an authoritative DNS service that allows others to find your domain, and Recursive DNS allows you to resolve other people’s domains.

So considering the above definition, if you decide to leave the default behaviour of the Bind nameserver (which by the way is also the default behaviour of many other DNS servers, including Microsoft DNS), this would mean that your DNS will be left open for the whole world to serve resolve requests for any domain name requested by end users. In other words, somebody out there might decide to use your nameserver to resolve all internet domains, like: google.com, yahoo.co.uk etc.

It is wise to enable recursion only for localhost on your bind name server, So to achieve that on Debian:
Open /etc/bind/named.conf.options and insert into it
Right before the options {

acl recurseallow { 1.2.3.4; 127.0.0.1; };

Also in the options {} include the following lines:

allow-recursion { recurseallow; };
recursion yes;

On FreeBSD you need to include the same in /var/named/etc/namedb/named.conf by default or any other location if you have some specific named.conf file location.

Other truly vital things to include in /etc/bind/named.conf.options on Debian Lenny among the options {} are:

auth-nxdomain no;
allow-transfer { none; };

The allow-transfer { none; }; directive is what completely disables AXFR transfer requests on your nameserver (auth-nxdomain no; by itself only makes NXDOMAIN answers conform to RFC 1035). On FreeBSD the procedure is absolutely analogous, just open /var/named/etc/namedb/named.conf and include the same lines in the options configuration block.
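Putting it all together, a sketch of the resulting configuration (the 1.2.3.4 placeholder stands for your own trusted IP, as above):

acl recurseallow { 1.2.3.4; 127.0.0.1; };

options {
    recursion yes;
    allow-recursion { recurseallow; };
    auth-nxdomain no;
    allow-transfer { none; };
};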

To stress the importance of disabling AXFR, it's important to know that if you don't disable AXFR, which is enabled by default in many nameservers out there, you're risking that a malicious person could list the whole zone files for each and every one of the configured domains in the DNS server, and consequently the attacker can learn a lot about the DNS topology of your network etc.
So to complete the article, I'm gonna give an example of how the dig command can be used in order to check whether a certain DNS server has AXFR requests enabled (e.g. if it's vulnerable to this type of DNS information leak).

dig @somenameserver.net somedomainname.net axfr

In the above example somenameserver.net is a random name server hosting a specific DNS domain and
somedomainname.net is the DNS domain name (a.k.a. zone file) hosted on somenameserver.net.

If everything is configured properly on the nameserver you're running the axfr test against, you should see something like:

; <<>> DiG 9.6.1-P1 <<>> @somenameserver.net somedomainname.net axfr
; (1 server found)
;; global options: +cmd
; Transfer failed.