Posts Tagged ‘option’

Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

How to deny DHCP leases by MAC address – make dhcpd ignore a set of MACs so they never get leased on GNU / Linux

I have not blogged for a long time due to being on a few weeks vacation and being at home with a small cute baby. However, as a hardcore and a bit of dumb System administrator, I have spent some of my vacation working on bringing up www.pc-freak.net and the other websites hosted here as high availability ones, living on 2 webservers running on a Master to Master MySQL Replication backend database. All this is hosted on servers set to run as round robin DNS hosts – one old Lenovo ThinkCentre Edge71 as well as a brand new real Lenovo server, a Lenovo ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure the Internet connectivity has a good degree of redundancy, and to ensure the websites hosted on both machines are not going to die if one of the 2 configured Fiber Optics Internet Providers (Bergon.NET) has some issues, I've rented another Internet provider line from the VIVACOM Mobile Fiber Internet provider – a 1 Gigabit Fiber Optics line.
Next to that, to guarantee the Database, Webserver, MailServer, Memcached and other running services do not hit downtimes due to electricity power outages, two powerful Uninterruptable Power Supplies (UPS) – FPS Fortron devices – are connected to the servers, each of which could keep the machines and the connected switches running for up to 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the main node is set to hand out the IPs; however, as there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines in the network, the IPs are being distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly I had again a bit of a weird non-standard requirement: to give some of the computers within the network static IP addresses while the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and to add a filter for the MAC addresses of the machines configured with static IPs, to prevent the DHCP server (daemon) from automatically reassigning IPs to these machines.

After a bit of googling and pondering I've done it on some of the machines; therefore, to save others the effort of looking around for how to configure certain computers / servers network card (interface) MAC addresses on the LAN to use static IPs, and how to instruct the DHCP server to ignore any broadcast IP address lease requests destined to a set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To save you copy-paste efforts, a file with the full Deny DHCP Address by MAC Linux configuration is here.
Of course I have mangled the MAC addresses to avoid leaking data, but I guess the idea behind the MAC ADDR ignore is quite clear.

The main configuration doing the trick to ignore certain MAC addresses that are reachable on the connected hardware switch is like so:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC approach is described on the isc.org distribution lists here, but the documentation on the topic of how to Deny / IGNORE DHCP addresses by MAC address on Linux seems quite obscure and limited online.
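In case you wonder how to collect the MAC addresses for the ignore list, you can simply read them from each machine's network interface, e.g. with the iproute2 ip tool (the eth0 interface name below is just an example):

root@pcfreak: ~# /sbin/ip link show eth0 | grep -i ether
    link/ether 18:45:91:c3:d9:00 brd ff:ff:ff:ff:ff:ff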

As you can see in the above config, the time after which an IP lease is freed up and a new lease is handed out by the server is severely maximized, as DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time (default-lease-time 691200 seconds = 8 days, max-lease-time 891200 seconds, a bit over 10 days) is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;


As you see, to guarantee resolving always works as expected, I have configured the Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make is that, after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process:

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
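If the configuration check passes with no complaints, reload the running daemon – on Debian 10 the DHCP service is named isc-dhcp-server (assuming the standard Debian package is in use):

root@pcfreak: ~# systemctl restart isc-dhcp-server.service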
 

That's all folks – with this sample config, the IPs under subclass "black-hole" (which are local LAN static IP addresses) will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need a colocation of a server or a website hosting for a really cheap price on these newly set up High Availability machines described above, open an inquiry on https://web.www.pc-freak.net.

 

How to enable HaProxy logging to a separate log /var/log/haproxy.log / prevent HAProxy duplicate messages to appear in /var/log/messages

Wednesday, February 19th, 2020

haproxy-logging-basics-how-to-log-to-separate-file-prevent-duplicate-messages-haproxy-haproxy-weblogo-squares
haproxy logging can be managed in different forms; the most straightforward way is to directly use /dev/log, or you can configure it to use some log management service such as syslog or rsyslogd for that.

If you don't use rsyslog yet, here is how to install it:

# apt install -y rsyslog

Then to activate logging via rsyslogd, we should add either to /etc/rsyslog.conf or create a separate file and include it via /etc/rsyslog.conf with the following content:
 

Enable haproxy logging from rsyslogd


To log haproxy messages to a separate log file you can use some of the usual syslog local0 to local7 locally used descriptors inside the conf (be aware that if you try to use some wrong value like local8 or local9 as a logging facility, you will end up with an empty haproxy.log, even though the permissions of /var/log/haproxy.log are readable and it is owned by the haproxy user).

When logging to a local Syslog service, writing to a UNIX socket can be faster than targeting the TCP loopback address. Generally, on Linux systems, a UNIX socket listening for Syslog messages is available at /dev/log because this is where the syslog() function of the GNU C library is sending messages by default. To address UNIX socket in haproxy.cfg use:

log /dev/log local2 


If you want to log each of multiple running haproxy instances (with different haproxy*.cfg files) into a separate log, add to /etc/rsyslog.conf lines like:

local2.* -/var/log/haproxylog2.log
local3.* -/var/log/haproxylog3.log


One important note to make here is that since rsyslogd is used for haproxy logging, you need to have the imudp module enabled in rsyslogd and a UDP port listener on the machine.

E.g. somewhere in rsyslog.conf, or via an rsyslog include file from /etc/rsyslog.d/*.conf, the following lines need to be defined:

$ModLoad imudp
$UDPServerRun 514
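Once rsyslog is restarted with these lines in place, you can quickly verify the UDP listener is really up – a quick sanity check, assuming the iproute2 ss tool is installed (output trimmed):

# ss -ulnp | grep -w 514
UNCONN 0 0 0.0.0.0:514 0.0.0.0:* users:(("rsyslogd",…))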


I prefer to use an external /etc/rsyslog.d/20-haproxy.conf include file that is loaded and enabled by rsyslogd via /etc/rsyslog.conf:

# vim /etc/rsyslog.d/20-haproxy.conf

$ModLoad imudp
$UDPServerRun 514
local2.* -/var/log/haproxy2.log


It is also possible to produce different haproxy log output based on the severity, to differentiate between important and less important messages; to do so, you'll need to add to rsyslog.conf something like:
 

# Creating separate log files based on the severity
local0.* /var/log/haproxy-traffic.log
local0.notice /var/log/haproxy-admin.log

 

Prevent Haproxy duplicate messages to appear in /var/log/messages

If you use local2 and some default rsyslog configuration, then you will end up with the messages coming from haproxy towards the local2 facility producing doubled simultaneous records to both your pre-defined /var/log/haproxy.log and /var/log/messages – on proxy servers that receive a few thousand simultaneous connections per second.
This is a problem since doubling the log will produce too much data, and on systems with a smaller /var/ partition you will quickly run out of space; plus this haproxy request logging to /var/log/messages makes the file quite unreadable for the normal system events which are so important for tracking clearly what is happening on the server daily.

To prevent the haproxy duplicate messages you need to add local2.none in the rsyslogd config (usually /etc/rsyslog.conf), on the line listing the facilities configured to log to the file:

*.info;mail.none;authpriv.none;cron.none;local2.none     /var/log/messages

This configuration should work; note that the direct /dev/log approach is more rarely used, as most people prefer the haproxy log not to be written straight to /dev/log, which is shared with other services such as syslogd / rsyslogd.

To use /dev/log to output logs from haproxy, use a config like this in the configuration's global section:
 

global
        log /dev/log local2 debug
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

The log global directive basically says: use the log line that was set in the global section for the whole config until the end of file. Putting a log global directive into the defaults section is equivalent to putting it into all of the subsequent proxy sections.

Using global logging rules is the most common HAProxy setup, but you can put them directly into a frontend section instead. It can be useful to have a different logging configuration as a one-off. For example, you might want to point to a different target Syslog server, use a different logging facility, or capture different severity levels depending on the use case of the backend application. 
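A minimal sketch of such a one-off frontend-level configuration – the ft_example name, target address and facility below are made-up placeholders:

frontend ft_example
    bind :8080
    # per-frontend logging, overriding the global log target
    log 192.168.0.99:514 local4 warning
    option tcplog
    default_backend bk_example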

Instead of using the /dev/log interface, which on many distributions is heavily used by systemd to store / manage and distribute logs, many haproxy server sysadmins nowadays prefer to use rsyslogd as the default logging facility that will manage the haproxy logs.
Admins prefer to use some kind of mediator service to manage log writing, such as rsyslogd or syslog; the reasons behind that might vary, but perhaps the most important one is that by using rsyslogd it is possible to write logs simultaneously locally on disk and also forward logs to a remote logging server running the rsyslogd service.

Logging is defined in /etc/haproxy/haproxy.cfg (or the respective configuration) through the global section, but it could also be configured to do separate logging based on each of the defined frontends / backends or the defaults section.
A sample excerpt from this section looks something like:

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    #option                  dontlognull
    #option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 7
    #timeout http-request    10s
    timeout queue           10m
    timeout connect         30s
    timeout client          20m
    timeout server          10m
    #timeout http-keep-alive 10s
    timeout check           30s
    maxconn                 3000

 

 

# HAProxy Monitoring Config
#---------------------------------------------------------------------
listen stats 192.168.0.5:8080                #Haproxy Monitoring run on port 8080
    mode http
    option httplog
    option http-server-close
    stats enable
    stats show-legends
    stats refresh 5s
    stats uri /stats                            #URL for HAProxy monitoring
    stats realm Haproxy\ Statistics
    stats auth hproxyauser:Password___          #User and Password for login to the monitoring dashboard

 

#---------------------------------------------------------------------
# frontend which proxies to the backends
#---------------------------------------------------------------------
frontend ft_DKV_PROD_WLPFO
    mode tcp
    bind 192.168.233.5:30000-31050
    option tcplog
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
    default_backend bk_DKV_PROD_WLPFO


#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend bk_DKV_PROD_WLPFO
    mode tcp
    # (0) Load Balancing Method.
    balance source
    # (4) Peer Sync: a sticky session is a session maintained by persistence
    stick-table type ip size 1m peers hapeers expire 60m
    stick on src
    # (5) Server List
    # (5.1) Backend
    server Backend_Server1 10.10.10.1 check port 18088
    server Backend_Server2 10.10.10.2 check port 18088 backup


The log directive in the above config instructs HAProxy to send logs to the Syslog server listening at 127.0.0.1:514. Messages are sent with facility local2, which is one of the standard, user-defined Syslog facilities. It's also the facility that our rsyslog configuration is expecting. You can add more than one log statement to send output to multiple Syslog servers.
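A minimal sketch of that – the second target address and the local3 facility here are made-up placeholders for a remote syslog box:

global
    log 127.0.0.1:514 local2 info
    # duplicate the log stream to a remote syslog server as well
    log 10.10.10.10:514 local3 notice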

Once rsyslog and haproxy logging are configured, as a minimum you need to restart rsyslog (assuming that the haproxy config is already properly loaded):

# systemctl restart rsyslog.service

To make sure rsyslog reloaded successfully:

systemctl status rsyslog.service


Restarting HAproxy

If the rsyslogd logging to 127.0.0.1 port 514 was recently added, a HAProxy restart should also be run; you can do it with:
 

# /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)


Or restart via the systemctl unit (if haproxy is not used in a cluster with corosync / heartbeat):

# systemctl restart haproxy.service

You can control how much information is logged by adding a Syslog severity level to the log line:

    log         127.0.0.1 local2 info


The accepted values are the standard syslog severity levels:

Value  Severity       Keyword  Deprecated keywords  Description
0      Emergency      emerg    panic                System is unusable (a panic condition)
1      Alert          alert                         Action must be taken immediately (a condition that should be corrected immediately, such as a corrupted system database)
2      Critical       crit                          Critical conditions (hard device errors)
3      Error          err      error                Error conditions
4      Warning        warning  warn                 Warning conditions
5      Notice         notice                        Normal but significant conditions (not errors, but may require special handling)
6      Informational  info                          Informational messages
7      Debug          debug                         Debug-level messages (information normally of use only when debugging a program)

 

Logging only errors / timeouts / retries is done with the option:

option dontlog-normal

placed in the defaults or frontend section. You most likely want to enable this only during certain times, such as when performing benchmarking tests.

Note that if rsyslog is configured to listen on a different port for some weird reason, you should not forget to set the proper port in the log line, e.g.:

  log         127.0.0.1:514 local2 info

To customize the structure of the logged line itself, you can use the log-format (or log-format-sd for structured-data syslog) directive in your defaults or frontend section.
 

Haproxy Logging shortly explained


The type of logging you’ll see is determined by the proxy mode that you set within HAProxy. HAProxy can operate either as a Layer 4 (TCP) proxy or as Layer 7 (HTTP) proxy. TCP mode is the default. In this mode, a full-duplex connection is established between clients and servers, and no layer 7 examination will be performed. When in TCP mode, which is set by adding mode tcp, you should also add option tcplog. With this option, the log format defaults to a structure that provides useful information like Layer 4 connection details, timers, byte count and so on.

Below is example of configured logging with some explanations:

Log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"

haproxy-logged-fields-explained
An example of the HTTP log-format configuration, as output from a haproxy config:

Log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

haproxy_http_log_format-explained1

To understand the meaning of these abbreviations you'll have to closely read haproxy-log-format.txt. More in-depth info is to be found in the HTTP Log format documentation.


haproxy_logging-explained

Logging HTTP request headers

HTTP request headers can be logged via the http-request capture directive:

frontend website
    bind :80
    http-request capture req.hdr(Host) len 10
    http-request capture req.hdr(User-Agent) len 100
    default_backend webservers


The log will show the headers between curly braces, separated by pipe symbols. Here you can see the Host and User-Agent headers for a request:

192.168.150.1:57190 [20/Dec/2018:22:20:00.899] website~ webservers/server1 0/0/1/0/1 200 462 - - ---- 1/1/0/0/0 0/0 {mywebsite.com|Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/71.0.3578.80 } "GET / HTTP/1.1"

 

Haproxy Stats Monitoring Web interface


Haproxy has a simplistic stats interface which, if enabled, produces some generally useful information like in the above screenshot. Through it you can get very basic in-browser statistics and track potential issues with the proxied traffic for all configured backends / frontends: incoming / outgoing network packets, configured nodes, experienced downtimes etc.

haproxy-statistics-report-picture

The basic configuration to make the stats interface accessible, as pointed out in the above config – for example to enable a network listener on address
 

http://192.168.0.5:8080/stats


with the hproxyauser / password auth configured – would be:

# HAProxy Monitoring Config
#---------------------------------------------------------------------
listen stats 192.168.0.5:8080                #Haproxy Monitoring run on port 8080
    mode http
    option httplog
    option http-server-close
    stats enable
    stats show-legends
    stats refresh 5s
    stats uri /stats                            #URL for HAProxy monitoring
    stats realm Haproxy\ Statistics
    stats auth hproxyauser:Password___          #User and Password for login to the monitoring dashboard

 

 

Sessions states and disconnect errors on new application setup

Both TCP and HTTP logs include a termination state code that tells you the way in which the TCP or HTTP session ended. It’s a two-character code. The first character reports the first event that caused the session to terminate, while the second reports the TCP or HTTP session state when it was closed.

Here are some essential termination codes to track for in the log – these examples are most commonly seen on TCP connection establishment errors:

Two-character code    Meaning
--    Normal termination on both sides.
cD    The client did not send nor acknowledge any data before the client timeout expired.
SC    The server explicitly refused the TCP connection.
PC    The proxy refused to establish a connection to the server because the process' socket limit was reached while attempting to connect.


To get all improperly exited connections, the easiest way is to just grep for anything that is different from the normal termination code --, like that:

tail -f /var/log/haproxy.log | grep -v ' -- '


This should output in real time every TCP connection that is exiting improperly.
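To get a quick overview of which abnormal termination states dominate, you can count the state field – a rough sketch; note that the awk column number below is an assumption that depends on your exact log-format, so adjust it to wherever the termination state sits in your log lines:

awk '{print $14}' /var/log/haproxy.log | sort | uniq -c | sort -rn | head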

There’s a wide variety of reasons a connection may have been closed. Detailed information about all possible termination codes can be found in the HAProxy documentation.
To get a better understanding, a very useful read for debugging haproxy errors is haproxy-logging.txt – in that small file are collected all the cryptic error message codes you might find in your logs when you're configuring the Haproxy frontend / backend and the backend application behind it for the first time.

Another useful analysis tool, which can be used to analyze Layer 7 HTTP traffic, is halog; for more on it just google around.

Cracking zip protected password files on GNU/Linux and FreeBSD

Wednesday, October 5th, 2011

crack-zip-freebsd

It's not very common, but sometimes it happens that you have to crack some downloaded file from thepiratebay.com or some other big torrent tracker. An example scenario would be downloading a huge words dictionary (rainbow tables, a dictionary etc.), which was protected by the author with a password and zipped.

Fortunately Marc Lehmann developed a piece of software called fcrackzip, which is capable of brute forcing zip protected file passwords straight on UNIX-like operating systems (GNU/Linux, FreeBSD).

fcrackzip is available from the package repositories on Debian and Ubuntu Linuxes; to install via apt:

linux:~# apt-get install fcrackzip
...

fcrackzip is also available on FreeBSD via the ports tree and can be installed with:

freebsd# cd /usr/ports/security/fcrackzip
freebsd# make install clean

On Debian it's worth having a quick look at the README file:

linux:~# cat /usr/share/doc/fcrackzip/README
See fcrackzip.txt (which is derived from the manpage), or fcrackzip.html

There is a web page with more information at
http://lehmann.home.ml.org/fcrackzip.html or
http://www.goof.com/pcg/marc/fcrackzip.html

A sample password-protected .zip file is included as "noradi.zip". It's
password has 6 lower case characters, and fcrackzip will find it (and a
number of false positives) with

fcrackzip -b -c a -p aaaaaa ./noradi.zip

which will take between one and thirty minutes on typical machines.

To find out which of these passwords is the right one either try them out
or use the --use-unzip option.

Marc

Cracking the noradi.zip password protected sample file on my dual core 1.8 GHz box with 2 GB of RAM took 30 seconds:

linux:~# time fcrackzip -u -b -c a -p aaaaaa noradi.zip

PASSWORD FOUND!!!!: pw == noradi

real 0m29.627s
user 0m29.530s
sys 0m0.064s

Of course the sample set password for noradi.zip is pretty trivial and with more complex passwords, sometimes cracking the password can take up to 30 minutes or an hour and it all depends on the specific case, but at least now we the free software users have a new tool in the growing arsenal of free software programs 😉

Here are the options passed to the above fcrackzip command:

-u    Try to decompress with the detected possible archive passwords using unzip. (This is necessary to precisely find the archive password; otherwise it will just print out a number of possible matching archive passwords and you would have to try each of them one by one. Note that this option depends on a working unzip version being installed.)

-c a    Include the 'a' charset (lower case characters) in the generated passwords

-b    Select brute force mode – tries all possible combinations of the letters specified

-p aaaaaa    Init-password string (start searching from a six character long password, i.e. look for a password 6 characters long)

FCrackZip is partly written in assembler and thus generally works fast. To reduce the CPU load fcrackzip will put on the processor, it is also capable of using an external words dictionary file by passing it the option:

-D    Dictionary mode – the file should be in a format of one word per line and preliminarily sorted alphabetically with, let's say, sort
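A short usage sketch of the dictionary mode – the words.txt wordlist file name below is just a placeholder:

linux:~# fcrackzip -u -D -p words.txt protected.zip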

Also fcrackzip supports brute forcing several files at once; for example if you have 10 zip files protected with passwords, it can try to brute force the passwords of all of them in a single invocation.
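For example (a sketch – the file names here are placeholders):

linux:~# fcrackzip -u -b -c a -p aaaaaa file1.zip file2.zip file3.zip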

As of the time of writing, fcrackzip has reached version 1.0 and seems to be pretty stable. Happy cracking!
Just to make sure fcrackzip's source is not lost somewhere down the line in the long future to come, I've created a fcrackzip download mirror here.

Heroes of Might and Magic 2: Best old-school turn based strategic game to play on your Android mobile phone

Monday, June 16th, 2014

heroes-of-might-and-magic-2-for-your-mobile-smartphone-android-screenshot

Probably many people who are my age (I'm aged 30 now) spent many days and sleepless nights being totally addicted to playing probably one of the most addictive (and in my view greatest) strategy games of all time – Heroes of Might and Magic II (HOMM2).
With those thoughts in mind, it will be great news for you, if you own a smartphone, that you can bring back some nice memories and play (for free) a port of Heroes 2 for Android.

The Free Heroes 2 Android port is made to support multiple-screen devices, so the game can be played on an Android tablet, a tiny-screen smartphone or a middle-sized mobile. Also, the Free Heroes 2 mobile port allows you to choose 'The Magnifying glass' option on first game boot, so if you're on a tiny-screened mobile you can still zoom by pointing at a game object. The Free Heroes 2 Android port is there thanks to Gerhard Stein, who is also the author of the OpenTyrian mobile phone port and of the port of the amazing old computer jump-and-run arcade Commander Keen. The game pointer controls of FHeroes2 are pretty convenient, and playing the game is almost as comfortable as playing with a PC mouse.
Free Heroes 2 is a port of the Free Heroes 2 engine – a free implementation of the Heroes of Might and Magic II engine in SDL – and because SDL is platform independent, Free Heroes is also available for both Windows and Linux. Maybe here is the time to mention that the original Heroes2 DOS game works perfectly on any modern Linux distribution when started through the DOSBox DOS emulator.

By default Free Heroes2 has no game campaign support yet. In order to enable campaign support in Free Heroes 2, download the FULL Heroes 2 game and put its data files on the mobile SD card in the directory app-data/net.sourceforge.fheroes2, and the campaign option will be there too.

heroes_of_might_and_magic_ii_play-on-android-best-strategy-old-game-for-android

Heroes of Might and Magic 2 – The Succession Wars (or HEROES2, as it is widely known in gamers' communities) is a turn based strategy game from 1996, developed by Jon Van Caneghem's New World Computing company and marketed under the brand of The 3DO Company. Heroes II was voted the sixth-best PC game of all time by PC Gamer in May 1997. Heroes2 also has a game expansion pack called The Price of Loyalty, released in 1997, as well as Heroes of Might and Magic II – Gold – from 1998. The game's graphic design looks very beautiful and, combined with the soundtrack, makes playing it an awesome and calming experience. The game is very notable especially for its soundtrack, which is all beautiful classical music.

Heroes-of-might-and-magic-2-best-games-of-all-time-screenshot-HOMM2
(Picture taken and copyrighted by Wikipedia)

Gameplay

The titular heroes are player characters who can recruit armies, move around the map, capture resources, and engage in combat. The heroes also incorporate some role-playing game elements; they possess a set of statistics that confer bonuses to an army, artifacts that enhance their powers, and knowledge of magical spells that can be used to attack enemies or produce strategic benefits. Also, heroes gain experience levels from battle, such that veteran heroes are significantly more powerful than inexperienced ones.

On a typical map, players begin a game with one town of a chosen alignment. Each town alignment hosts a unique selection of creatures from which the player can build an army. Town alignment also determines other unique traits such as native hero classes, special bonuses or abilities, and leanings toward certain skills or kinds of magic.

heroes-of-might-and-magic-town_castle-sorceress-screenshot

Towns play a central role in the games since they are the primary source of income and new recruits. A typical objective in each game is to capture all enemy towns. Maps may also start with neutral towns, which do not send out heroes but may still be captured by any player. It is therefore possible, and common, to have more towns than players on a map. When captured, a town retains its alignment type, allowing the new owner to create a mixed army. A player or team is eliminated when no towns or heroes are left under their control. Usually the last player or team remaining is the victor.

As heroes visit special locations called obelisks, pieces are removed from a jigsaw puzzle-like map, gradually revealing the Ultimate Artifact location to the player. Once found, it confers immense bonuses capable of breaking a stalemate: the grail can be taken back to a town and used to build a special structure, while the ultimate artifact provides the bonuses directly through possession.

heroes-of-might-and-magic-2-battle-for-castle-screenshot

Whenever a player engages in battle

The game changes from the adventure map display to a combat screen, which is based on either a hexagonal or square grid. In this mode, the game mimics the turn-based tactics genre, as the engaged armies must carry through the battle without the opportunity to reinforce or gracefully retreat. With few exceptions, combat must end with the losing army deserting, being destroyed, or paying a heavy price in gold to surrender. Surrendering allows the player to keep the remaining units intact. Battles can be fought army to army, or castles / villages can be besieged and (captured) occupied. Owning a town gives your hero a daily income of money, later used to buy and upgrade castle buildings.

heroes-of-might-and-magic-ii-the-succession-wars-wizard-castle-building-options-screenshot

Also, your heroes can take over mines producing different goods like minerals, sulfur, gold, emeralds etc. Constructing buildings and recruiting war units for the army usually costs gold and some kind of resource.

Game Story

Heroes II's story continues after Heroes I. The ending of Heroes I results in Lord Morglin Ironfist's victory. In the following years, he has successfully unified the continent of Enroth and secured his rule as king. Upon the king's death, his two sons, Archibald and Roland, vie for the crown. Archibald orchestrates a series of events that lead to Roland's exile. Archibald is then declared the new king, while Roland organizes a resistance. Each brother is represented by one of the game's two campaigns. Archibald's campaign features the three "evil" town alignments, while Roland's campaign features the three "good" town alignments.

If Archibald is victorious, Roland's rebellion is crushed, and Roland himself is imprisoned in Castle Ironfist, leaving Archibald the uncontested ruler of Enroth. The canonical ending, however, results in Roland's victory, with Archibald being turned to stone by Roland's court wizard, Tanir.

If you're more interested in playing modern games and finding some more entertaining modern titles, take a look at Kevin Martin's JoyofAndroid Best Android Games post here.

Enjoy

Optimizing Linux TCP/IP Networking to increase Linux Servers Performance

Tuesday, April 8th, 2008

optimize-linux-servers-for-network-performance-to-increase-speed-and-decrease-hardware-costs-_tyan-exhibits-hpc-optimized-server-platforms-featuring-intel-xeon-processor-e7-4800-v3-e5-2600-supercomputing-15_full

Some time ago I thought of ways to optimize my Linux Servers network performance.

Even though there are plenty of nice articles on the topic of how to better optimize Linux server performance by tuning up the kernel sysctl (variables), many of the articles I found were not structured in an understandable enough way, so I decided to google around and found a few interesting websites which give a good overview of how one can speed things up a bit and decrease overall server load by simply tuning a few basic kernel sysctl variables.

The below article is a product of my research on the topic of how to increase the performance of my GNU / Linux servers, which are mostly running LAMP (Linux / Apache / MySQL / PHP) together with Qmail mail servers.

The article is focusing on networking, as networking is the usual bottleneck for performance.
Below are the variables I found useful for optimizing the Linux kernel network stack.

Implementing the variables might reduce your server load, or if they don't decrease server load times and CPU utilization, they would at least increase throughput, so more users will be able to access your servers with (hopefully) fewer interruptions.
That of course would save you some hardware costs and raise your servers' efficiency.

Here are the variables themselves with some good example values:
 

net.ipv4.ip_forward = 0                        # Turn off IP Forwarding

net.ipv4.conf.default.rp_filter = 1            # Control source route verification

net.ipv4.conf.default.accept_redirects = 0     # Disable ICMP redirects
net.ipv4.conf.all.accept_redirects = 0         # same as above

net.ipv4.conf.default.accept_source_route = 0  # Disable IP source routing
net.ipv4.conf.all.accept_source_route = 0      # same as above

net.ipv4.tcp_fin_timeout = 40                  # Decrease FIN timeout - useful on busy / high load servers
net.ipv4.tcp_keepalive_time = 4000             # keepalive tcp timeout

net.core.rmem_default = 786426                 # Receive memory stack size (a good idea to increase it if your server receives big files)
net.ipv4.tcp_rmem = 4096 87380 4194304
net.core.wmem_default = 8388608                # Reserved memory per connection
net.core.wmem_max = 8388608
net.core.optmem_max = 40960                    # maximum amount of option memory buffers

# like a homework investigate by yourself what the variables below stand for :)
net.ipv4.tcp_max_tw_buckets = 360000
net.ipv4.tcp_reordering = 5
net.core.hot_list_length = 256
net.core.netdev_max_backlog = 1024

 

# Below are newly added experimental
#net.core.rmem_max = 16777216
#net.core.wmem_max = 16777216
##kernel.msgmni = 1024
##kernel.sem = 250 256000 32 1024
##vm.swappiness=0
kernel.sched_migration_cost=5000000
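To load the new values without a reboot once they are placed in /etc/sysctl.conf, run:

linux:~# /sbin/sysctl -p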

 

Also, a good sysctl.conf file, which one might want to substitute or use as a skeleton for some production server, is ready for download here.


Even if you can't reap great CPU reduction benefits from integrating the above values or similar ones, your overall LAMP performance to end customers should increase – on some occasions dramatically, on others a little bit, but still noticeably.

If you're unsure about the exact kernel variable values to use, check for yourself what the best values that fit your server hardware are – usually this is done by experimenting and reading the kernel documentation provided for each of the listed variables.

The above sysctl.conf is natively created to run on Debian; on other distributions like CentOS, Fedora and Slackware, some values might require slight modifications.

Hope this helps and gives you some idea of how network optimization in Linux is usually done. Happy (hacking) tweaking!

Drawing GANTT Charts and Project Management on Linux, (Microsoft Project substitute for Unix)

Tuesday, October 12th, 2010

I'm studying Project Management right now. In that spirit of thoughts, I and a couple of other guys are building a Project Plan.
As it is a Project Plan, it's necessary to put a GANTT chart in it to visually show the project timeline (the phases), the duration and the inter-relation between the different tasks which lead the project to an actual completion.

After a bit of thorough research online on available software to deal with project management, and particularly ones that are capable of building GANTT charts on Linux / BSD, I've come up with the following list of software capable of being a substitute for the Microsoft Project software.
Redmine GANTT Chart

GANTT chart Redmine

1. Gantt Project
GANTTProject Chart

2. Gnome Planner
Gnome Planner GANTT Chart

3. Task Juggler Project Manager with GANTT Capability for (KDE)
Task Juggler

4. JxProject – this software is not free, though it can be considered almost free.
Take a look also at:
5. Trac – though it doesn't really support GANTT charts, it's a lovely piece of software to be used for PM.
Trac Project Management

Another option you have is to try out:
6. PHProjekt

Update 20.09.2016 – the old PHProjekt download link is no longer active

It is this link http://www.phprojekt.com/, but the page doesn’t seem to be active any more. I thought you might want to update.

If you are looking for an alternative please check out http://wiht.link/PHProjekt-PM, it may make a suitable replacement.

Kind Regards,
Tom Wilcox


That piece of software really looks promising, especially if we consider that it's web based and how essential it is today to have online tools for doing the ordinary desktop jobs.

You can even check an online demo of the PHProjekt software here.

If you're the type of KDE user, you definitely have to try out KPlato.

As I've tested the software, it is easy to use; however, it is still missing some essential parts that Microsoft Project includes, so it's not a 100% substitute.
Also, it's not able to open Microsoft Project (MPP) files, nor able to save the charts in the .mpp format.

Moving ahead, I came across DotProject.
DotProject Gantt Chart

I haven't taken the time to test it myself yet; however, as I went through the software's website, the project looked quite good.
Lastly, you can take a look at: 7. PSTricks as a means of project management, however I think it doesn't support GANTT chart building.

How to shutdown Windows after 1, 2, 3, 4 etc. X hours with a batch script – Shutdown / Reboot / Logoff Windows with a quick command

Wednesday, August 17th, 2016

https://www.pc-freak.net/images/windows-pc-server-shutdown-after-3-5-hours-howto-shutdown-windows-with-command-batch

I recently wondered how it is possible to shut down Windows at some prior set time, let's say in 30 minutes, 1 hour, 3 hours or 8 hours.

That's handy especially on servers that are still in preparation / install time, where you have left some large file copy job running (if you're migrating files) from the old server environment to a new one, or if you just need to let your home Windows PC shut down to save electricity after some time. A very useful example is if you're downloading some 200GB of data estimated to complete in 3 hours, but you need to go out and will be back home in 2 or 4 days, and you don't want to bother connecting remotely to your PC with VNC or TeamViewer – then just scheduling the PC / server to shut down in 3 hours with a simple command is the perfect solution to the task. Here is how:

1. Open Command Prompt (E.g. Start menu -> Run and type CMD.EXE)

2. Type in command prompt

 

shutdown -s -t 10800

 

If you typed it by mistake, scheduling the shutdown too early, and suddenly you find out your PC needs to keep running for some more time, then to cancel the scheduled shutdown type:

 

shutdown -a


The Windows shutdown command's -s flag also has alternatives: an option to not shut down but just log off, or, if you need the system rebooted, a reboot option:
 


options    effect
-l         to log off
-r         to reboot
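For example, to have the machine reboot after one hour, combine the -r flag with a timeout:

shutdown -r -t 3600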

If you need to shut down the PC after half an hour, use instead the command:

 

shutdown -s -t 1800


shutdown-windows-pc-with-command-in-half-an-hour-screenshot.gif


Half an hour is 1800 seconds; for a one hour delayed shutdown use 3600 seconds; for 3 hours that would be 3*3600 = 10800; for 5 hours 5*3600 = 18000 seconds, and so on.

 


An alternative way to do it is with a short VBScript; here is an example:

Set objShell = CreateObject("WScript.Shell")

Dim Input
' the shutdown time in HH:MM format
Input = "10:00"

'Input = InputBox("Enter the shutdown time here.","", "10:00")

For i = 1 to 2

CurrentTime = Time & VbCrLf

' compare the current system time against the configured shutdown time
If Left(CurrentTime,5) = Input Then

objShell.Run "shutdown -s -t 00", 0
WScript.Quit 1

Else

' not yet - sleep a second and check again
WScript.Sleep 1000

End If

' decrementing the counter keeps the For loop running indefinitely
i=i-1

Next

Enjoy

Change Skype for Business UI to Lync – Skype for Business Lync Theme / Remove Skype for Business coloroful UI and Switch to Lync simple interface

Wednesday, February 17th, 2016

Revert-skype-for-business-Lync-User-Interface-change-skype-for-business-theme-skin

If you are working in a large corporation such as HP or HPE – Hewlett Packard Enterprise (HPE being the new brand name for the Software and Servers division of the split ex-HP) – IBM, Dell, or any other company the size of the top Fortune companies, and your computer domain administrator has forced your work PC to already use Skype for Business instead of the good, tested Lync client, then along with the goodies and PROS of having the newer Skype for Business (S4B), as usual for old fashioned users like me and the average employee, the new S4B interface will turn into a nightmare with all these circled names, more buttons and the annoying Skype blue theme.

For anyone who has even a basic idea of design and aesthetics, I believe the default theme of Skype For Business will be evaluated as a serious "interface downgrade" compared to the simple looking interface and white skin of the Lync client.

With this said, it would be logical for an end user like me to desire to customize the default S4B skin a bit, to make it more elegant looking like the Lync 2013 client. But guess what – there is a surprise if you google around: Skype For Business, just like the regular Skype client, doesn't have integrated support for skins / themes.
To make the horror complete, many big corporations are choosing to migrate their email infrastructure from the classical and well tested Windows Domain with Exchange Server to Microsoft Office 365 (cloud services), which makes the dependency on M$ products even bigger and, in the long run, makes the control and spying on people's email and information (people's data security) even worse – as you know how hackable Windows has proved to be over the years.


Well, for those who remember the good old times of IRC (Internet Relay Chat) and ICQ (I seek you) 🙂 and even Jabber, when chatting emerged and boomed into popularity – for all the chat clients, no matter whether they were free software under a GPL / BSD license or proprietary licensed software, there was always an alternative for the look of the chat client, and for practically all popular chat / audio / video communication standards / protocols there used to be some option for the users to either use a different client or to customize the look of the program.

Well, now the big surprise with the Skype protocol, which was purchased by Microsoft some years back, is that this terrible, already-M$ program doesn't have any option for changing the theme, nor even basic customization besides the ones provided by default by Microsoft. To my surprise, such a trivial program used by everybody like Skype, with perhaps already 1.5 or 2 billion or even more users, doesn't have even basic support for customization!!!
To make the Skype program horror story even worse, Microsoft upgrades the Skype client aggressively, and for the last 3 or 4 years that Skype has been owned by Microsoft, the interface has changed slightly or even completely with every next release.

Now, with the latest Skype versions since a year and a half or so, the aggressiveness of the program has increased even further, as it wants you to automatically upgrade every time you run Skype.
Add to this the fact that I have to spend about 8 to 10 hours a day on the PC with Skype for Business switched on on my notebook, with no option to use Lync for communication, because the domain Exchange is forcing the changes on all of the users within our EMEA.

So after some serious digging on the Internet, the only workaround to change the Skype For Business theme, available from a couple of sources, is to revert the Skype user interface back to the Lync 2013 client by changing a value in the Windows registry, and thus get back the good old elegant Lync interface instead of S4B.

The Windows registry value that needs to be changed is:

[HKEY_CURRENT_USER\Software\Microsoft\Office\Lync]


The default value there is:
 

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Office\Lync]
"CanSharePptInCollab"=dword:00000001
"CanAppShareInCollab"=dword:00000001
"CanShareOneNoteInCollab"=dword:00000001
"EnableSkypeUI"=hex:01,00,00,00

 

The value has to be changed to:

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Office\Lync]
"CanSharePptInCollab"=dword:00000001
"CanAppShareInCollab"=dword:00000001
"CanShareOneNoteInCollab"=dword:00000001
"EnableSkypeUI"=hex:00,00,00,00


restart_lync_screenshot-ms-windows

The value "EnableSkypeUI"=hex:01,00,00,00 – instructs so Skype for Business UI is used:
"EnableSkypeUI"=hex:00,00,00,00 – instructs S4B to revert back to Old Lync interface

For a little bit more on the value check out also articles – Alternate Between The Microsoft Lync and Skype for Business
and Managing the Skype Client UI in Skype for Business.

To modify the above registry setting you will either have to manually run regedit from Start -> Run, or use Windows button + R and type inside the run box:
 

regedit


Or even better, just run (double-click) this skype.reg (download) script, which will modify the registry for you.
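Alternatively, the same change can be applied from a command prompt with the reg.exe tool – a sketch equivalent to the skype.reg above:

reg add "HKCU\Software\Microsoft\Office\Lync" /v EnableSkypeUI /t REG_BINARY /d 00000000 /f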

Because the domain administrator has forced a policy to automatically offer the change of the Lync interface to Skype for Business on every notebook boot, to disable the EnableSkypeUI registry value and make Skype appear in the good old Lync UI I've also created a tiny batch script lync_ui.bat with the following content:
 

cd \
cd \Users\georgi\scripts
 regedit /s skype.reg
exit


You can download lync_ui.bat from here

Note that both skype.reg and lync_ui.bat should exist – in my case in C:\Users\georgi\scripts; change this path to whatever your username is and create a scripts folder in your user home dir.
If unsure about the home directory name, you can check it from a command prompt with:
 

C:\Users\georgi> echo %HOMEPATH%
\Users\georgi


To make lync_ui.bat (the script invoking skype.reg) be executed on every PC boot, you need to add it to:

Start -> All Programs -> StartUp

https://www.pc-freak.net/images/how-to-add-script-to-windows-startup-screenshto-microsoft-windows-7

Well, this is it – now you will have back the Lync UI. Enjoy! 🙂

 

How to renew self signed QMAIL toaster and QMAIL rocks expired SSL pem certificate

Friday, September 2nd, 2011

qmail_toaster_logo-fix-qmail-rocks-expired-ssl-pem-certificate

This concerns one of the QMAIL server installs I made a very long time ago. I've been notified by clients that the certificate of the mail server had expired, and therefore I had to quickly renew the certificate.

On this qmail installation, the SSL certificates were located in /var/qmail/control under the names servercert.key and servercert.pem.

Renewing the certificates with new self signed ones is pretty straightforward; to renew them I had to issue the following commands:

1. Generate an encrypted servercert key, 1024 bits long

debian:~# cd /var/qmail/control
debian:/var/qmail/control# openssl genrsa -des3 -out servercert.key.enc 1024
Generating RSA private key, 1024 bit long modulus
...........++++++
.........++++++
e is 65537 (0x10001)
Enter pass phrase for servercert.key.enc:
Verifying - Enter pass phrase for servercert.key.enc:

At the Enter pass phrase for servercert.key.enc prompt I typed my chosen encryption password twice; any password will do, though using a stronger one here is better.

2. Generate the servercert.key file

debian:/var/qmail/control# openssl rsa -in servercert.key.enc -out servercert.key
Enter pass phrase for servercert.key.enc:
writing RSA key

3. Generate the certificate request

debian:/var/qmail/control# openssl req -new -key servercert.key -out servercert.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:UK
State or Province Name (full name) [Some-State]:London
Locality Name (eg, city) []:London
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:My Org
Common Name (eg, YOUR name) []:
Email Address []:admin@adminmail.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

In the above prompts it's necessary to fill in the company name and location, as each of the prompts clearly states. For the Common Name it is best to enter the mail server's hostname, so that mail clients don't complain about a hostname mismatch.

4. Sign the just generated certificate request

debian:/var/qmail/control# openssl x509 -req -days 9999 -in servercert.csr -signkey servercert.key -out servercert.crt

Notice the option -days 9999: this option instructs the newly generated self signed certificate to be valid for 9999 days, which is quite a long time. The reason the previously generated self signed certificate expired was that it was built for only 365 days.
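In case your setup expects servercert.pem to contain both the certificate and the key (as qmailrocks / qmailtoaster installs commonly do), rebuild the pem from the freshly generated files – a sketch, assuming the file names used above:

debian:/var/qmail/control# cat servercert.crt servercert.key > servercert.pem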

5. Fix the newly generated servercert.pem permissions

debian:~# cd /var/qmail/control
debian:/var/qmail/control# chmod 640 servercert.pem
debian:/var/qmail/control# chown vpopmail:vchkpw servercert.pem
debian:/var/qmail/control# cp -f servercert.pem clientcert.pem
debian:/var/qmail/control# chown root:qmail clientcert.pem
debian:/var/qmail/control# chmod 640 clientcert.pem

Finally to load the new certificate, restart of qmail is required:

6. Restart qmail server

debian:/var/qmail/control# qmailctl restart
Restarting qmail:
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.

Test the newly installed certificate

To test the newly installed SSL certificate use the following commands:

debian:~# openssl s_client -crlf -connect localhost:465 -quiet
depth=0 /C=UK/ST=London/L=London/O=My Org/OU=My Company/emailAddress=admin@adminmail.com
verify error:num=18:self signed certificate
verify return:1
...
debian:~# openssl s_client -starttls smtp -crlf -connect localhost:25 -quiet
depth=0 /C=UK/ST=London/L=London/O=My Org/OU=My Company/emailAddress=admin@adminmail.com
verify error:num=18:self signed certificate
verify return:1
250 AUTH LOGIN PLAIN CRAM-MD5
...

If an error is returned like 32943:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:607:, this means that the SSL variable in the qmail-smtpdssl/run script is set to 0.

To solve this error, change SSL=0 to SSL=1 in /var/qmail/supervise/qmail-smtpdssl/run and do qmailctl restart

The verify error:num=18:self signed certificate line displayed is perfectly fine and is more of a warning than an error, as it just reports that the certificate is self signed.

Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

Wednesday, March 28th, 2012

nf_conntrack_table_full_dropping_packet
On many busy servers you might encounter in /var/log/syslog or the dmesg kernel log the message

nf_conntrack: table full, dropping packet

appearing repeatedly:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen / OpenVZ VPS (Virtual Private Servers)
2. ISPs – Internet providers with heavy traffic NAT network routers
 

I. What is the meaning of nf_conntrack: table full dropping packet error message

In short, this message is logged because the maximum value assigned to the nf_conntrack kernel table gets reached.
The common reason for that is heavy traffic passing through the server or, very often, a DoS or DDoS (Distributed Denial of Service) attack. Sometimes encountering the error is a result of bad server planning (incorrect data about the expected traffic load by a company / companies) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– Alternative way to check the current kernel values for nf_conntrack is through:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current sysctl nf_conntrack active connections

To check the presently open connection tracking entries on a system:

linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742
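To keep an eye on how close the table is to overflowing, you can watch the current count next to the configured maximum – a simple sketch using the /proc counterparts of the two sysctls above:

linux:~# watch -n 5 'cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max'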

The shown connections are assigned dynamically on each new successful TCP / IP NAT-ted connection. Btw, on systems that work normally without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers which are encountering the nf_conntrack: table full, dropping packet error you can see, when issuing lsmod, extra modules related to nf_conntrack shown as loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables

 

II. Remove completely nf_conntrack support if it is not really necessary

It is a good practice to limit or try to omit completely the use of any iptables NAT rules, to prevent yourself from ending up with your kernel log flooded with the messages and, respectively, to stop your system from dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptables_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation use:

/sbin/rmmod iptable_nat
/sbin/rmmod ipt_MASQUERADE
/sbin/rmmod nf_nat
/sbin/rmmod nf_conntrack_ipv4
/sbin/rmmod nf_conntrack
/sbin/rmmod nf_defrag_ipv4

Once the modules are removed, be sure not to use any iptables -t nat .. rules. Even an attempt to list the NAT related rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.

Btw nf_conntrack: table full, dropping packet. message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or Linux kernel (distro) customization.
 

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many not-so-heavily traffic loaded servers, just raising net.netfilter.nf_conntrack_max=131072 to a high enough value will be enough to resolve the hassle.

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores the lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
The rule to calculate the right value to set is:
hashsize = nf_conntrack_max / 4
e.g. for nf_conntrack_max = 131072 this gives 131072 / 4 = 32768, exactly the value echoed above.

– To permanently store the changes made:

a) put into /etc/sysctl.conf:

linux:~# echo 'net.netfilter.nf_conntrack_max = 131072' >> /etc/sysctl.conf
linux:~# /sbin/sysctl -p

b) put in /etc/rc.local (before the exit 0 line):

echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

Note: Be careful with this variable; according to my experience, raising it to too high a value (especially on XEN patched kernels) could freeze the system.
Also, raising the value to too high a number can freeze a regular Linux server running on old hardware.

– For diagnosing nf_conntrack there is the /proc/sys/net/netfilter kernel directory. There you can find some dynamically stored values which give info concerning nf_conntrack operations in "real time":

linux:~# cd /proc/sys/net/netfilter
linux:/proc/sys/net/netfilter# ls -al nf_log/

total 0
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ./
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ../
-rw-r--r-- 1 root root 0 Mar 23 23:02 0
-rw-r--r-- 1 root root 0 Mar 23 23:02 1
-rw-r--r-- 1 root root 0 Mar 23 23:02 10
-rw-r--r-- 1 root root 0 Mar 23 23:02 11
-rw-r--r-- 1 root root 0 Mar 23 23:02 12
-rw-r--r-- 1 root root 0 Mar 23 23:02 2
-rw-r--r-- 1 root root 0 Mar 23 23:02 3
-rw-r--r-- 1 root root 0 Mar 23 23:02 4
-rw-r--r-- 1 root root 0 Mar 23 23:02 5
-rw-r--r-- 1 root root 0 Mar 23 23:02 6
-rw-r--r-- 1 root root 0 Mar 23 23:02 7
-rw-r--r-- 1 root root 0 Mar 23 23:02 8
-rw-r--r-- 1 root root 0 Mar 23 23:02 9

 

IV. Decreasing other nf_conntrack NAT time-out values to protect the server against DoS attacks

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, for large flows of traffic, even if you increase nf_conntrack_max, you can still shortly get an nf_conntrack table overflow resulting in dropped server connections. To prevent this from happening, check and decrease the other nf_conntrack timeout connection tracking values:

linux:~# sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.ipv4.netfilter.ip_conntrack_generic_timeout = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent2 = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_udp_timeout = 30
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 180
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 30

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you see, is quite high – 600 secs (10 minutes).
Such a value means any NAT-ted connection not responding can stay hanging for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!).
If these values are not lowered, the server will be an easy target for anyone who would like to flood it with excessive connections; once this happens the server will quickly reach even the raised up value of net.nf_conntrack_max, and the initial connection dropping will re-occur again …

With all that said, to protect the server from malicious users situated behind the NAT plaguing you with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000:

linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_generic_timeout=120
linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf:

linux:~# echo 'net.ipv4.netfilter.ip_conntrack_generic_timeout = 120' >> /etc/sysctl.conf
linux:~# echo 'net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000' >> /etc/sysctl.conf