Posts Tagged ‘databases’

Using GeoIP on Linux: Country-Based Filtering, Logging, and Traffic Control

Friday, January 16th, 2026

geoip-on-linux-country-based-filtering-logging-traffic-control-logo

GeoIP is one of those technologies that quietly sits in the background of many systems, yet it can be extremely powerful when used correctly. Whether you want to block traffic from specific countries, analyze access logs, or add geographic context to security events, GeoIP can be a valuable addition to your Linux toolbox.

In this article, we’ll go deeper and look at real GeoIP usage examples for:

  • Log analysis
  • Firewalls
  • Apache HTTP Server
  • HAProxy

All examples are based on typical GNU/Linux server environments.

What Is GeoIP? 

GeoIP maps an IP address to geographic data such as:

  • Country
  • City
  • ASN / ISP (depending on database)

Most modern systems use MaxMind GeoLite2 databases (.mmdb format).

Keep in Mind!
GeoIP data is approximate. VPNs, proxies, mobile networks, and CGNAT reduce accuracy. GeoIP should be treated as a heuristic, not a guarantee.

1. Installing GeoIP Databases on Linux (Debian-Based Distros)

On Debian / Ubuntu:

# apt install geoipupdate

Configure /etc/GeoIP.conf with your MaxMind license key and run:

# geoipupdate

Databases are usually stored in:

/usr/share/GeoIP/

2. GeoIP for Log Analysis (to get an idea of where your traffic originates from)

GeoIP with Apache HTTP Server

Apache can use GeoIP in two main ways:

  1. Logging the IP's country of origin

  2. Access control based on IP origin

Another common GeoIP use is post-processing logs to find attempts to breach your security.

Let's say you want to find the top attacking countries targeting your SSHD service.

# grep "Failed password" /var/log/auth.log | \
awk '{print $(NF-3)}' | \
while read ip; do geoiplookup "$ip"; done | \
sort | uniq -c | sort -nr


This command gives you visibility into the geographical country of origin of your attack sources.
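
Because geoiplookup runs once per matching log line, the pipeline above can be slow on busy servers; it pays to count per IP first and enrich only the unique addresses. Below is a self-contained sketch of the extraction step, using a couple of hypothetical auth.log lines (documentation-range IPs), with the geoiplookup enrichment left as a commented final step since it requires the GeoIP database to be installed:

```shell
#!/bin/sh
# Hypothetical auth.log sample (same field layout as real sshd entries)
sample='Jan 16 10:00:01 host sshd[123]: Failed password for root from 203.0.113.5 port 4022 ssh2
Jan 16 10:00:02 host sshd[124]: Failed password for invalid user admin from 198.51.100.7 port 4023 ssh2
Jan 16 10:00:03 host sshd[125]: Failed password for root from 203.0.113.5 port 4025 ssh2'

# $(NF-3) is the source IP in this layout; count attempts per unique IP
top=$(printf '%s\n' "$sample" \
  | awk '/Failed password/ {print $(NF-3)}' \
  | sort | uniq -c | sort -nr)
printf '%s\n' "$top"

# On a real server, enrich each unique IP exactly once, e.g.:
#   printf '%s\n' "$top" | while read count ip; do
#       echo "$count $(geoiplookup "$ip" | cut -d: -f2)"
#   done
```

This way geoiplookup is invoked once per unique attacker instead of once per failed login.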

3. Installing Apache GeoIP Module

For legacy GeoIP (older systems):

# apt install libapache2-mod-geoip

For modern systems, GeoIP2 is preferred:

# apt install libapache2-mod-geoip2

Enable the module:

# a2enmod geoip2
# systemctl reload apache2

4. Configure GeoIP Logging in Apache (basic config)

Add country code to access logs:

LogFormat "%h %l %u %t \"%r\" %>s %b %{GEOIP_COUNTRY_CODE}e" geoip
CustomLog /var/log/apache2/access.log geoip

This allows you to analyze traffic by country later without blocking users.

5. Country-Based Blocking in Apache by IP Origin

Example: allow only selected countries:

<IfModule mod_geoip2.c>
    SetEnvIf GEOIP_COUNTRY_CODE ^(BG|DE)$ AllowCountry
    Deny from all
    Allow from env=AllowCountry
</IfModule>

Use this carefully. Blocking at the web server layer is better than firewall-level blocking, but still risky if you have global users.

6. Apply GeoIP to Apache Virtual Host

You can apply GeoIP rules per site:

<VirtualHost *:80>
    ServerName example.com
    <IfModule mod_geoip2.c>
        SetEnvIf GEOIP_COUNTRY_CODE CN BlockCountry
        Deny from env=BlockCountry
    </IfModule>
</VirtualHost>

This is useful when only specific applications need filtering.

Firewall vs Application Layer GeoIP (Pros and Cons)

Layer     | Pros                    | Cons
----------|-------------------------|-------------------------
Firewall  | Early blocking          | Hard to debug
Apache    | Flexible per-site rules | App overhead
HAProxy   | Centralized control     | Requires careful config
Logs only | Safest                  | No blocking

7. Apply GeoIP to HAProxy

HAProxy is an excellent place to apply GeoIP logic because:

  • It sits in front of applications
  • Rules are fast and explicit
  • Logging is centralized

a. Preparing GeoIP Filtering for the HAProxy Service

HAProxy supports GeoIP2 via Lua or native ACLs using .mmdb databases.

Example database path:

/usr/share/GeoIP/GeoLite2-Country.mmdb

b. GeoIP-Based Access Control Lists ( ACLs ) in HAProxy

Basic country-based blocking:

frontend http_in
    bind *:80

    acl from_china src -m geoip CN
    acl from_russia src -m geoip RU

    http-request deny if from_china
    http-request deny if from_russia

    default_backend web_servers

This blocks traffic early, before it hits Apache or nginx.

c. GeoIP-Based Routing Across Different HAProxy Backends

Instead of blocking, you can route traffic differently:


acl eu_users src -m geoip DE FR NL
use_backend eu_backend if eu_users
default_backend global_backend

This is useful for:

  • Geo-based load balancing
  • Regional content
  • Legal compliance separation

d. GeoIP Logging config for HAProxy

Add country code to logs:

log-format "%ci:%cp [%t] %ft %b %s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC"

(%CC = country code)

This makes traffic analysis extremely efficient.

Keep in Mind!

Use HAProxy or web server level for enforcement, and firewall GeoIP only when absolutely necessary.

8. Fail2ban + GeoIP: Smarter Bans, Better Context

Fail2ban is excellent at reacting to abusive behavior, but by default it only sees IP addresses, not where they come from. Adding GeoIP allows you to:

  • Tag bans with country information
  • Apply different ban policies per region
  • Detect unusual behavior patterns

a. GeoIP-Enriched Fail2ban Logs

Fail2ban itself doesn’t natively evaluate GeoIP rules, but you can enrich logs post-ban.

Example action script (/etc/fail2ban/action.d/geoip-notify.conf):

 


[Definition]
actionban = echo "Banned <ip> from $(geoiplookup <ip> | cut -d: -f2)" >> /var/log/fail2ban-geoip.log

Enable it in a jail:

[sshd]
enabled = true
action = iptables[name=SSH]
         geoip-notify

Resulting log entry:

Banned 185.220.101.1 from Germany

This provides visibility without changing ban logic — a safe first step.
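
Once such entries pile up, a per-country ban summary is a single pipeline away. Below is a self-contained sketch, with hypothetical log lines in the same "Banned <ip> from <country>" format as above (cut -f4- keeps multi-word country names intact):

```shell
#!/bin/sh
# Hypothetical /var/log/fail2ban-geoip.log content
log='Banned 185.220.101.1 from Germany
Banned 203.0.113.5 from United States
Banned 198.51.100.7 from Germany'

# The country name is everything from the 4th space-separated field onward
summary=$(printf '%s\n' "$log" | cut -d' ' -f4- | sort | uniq -c | sort -nr)
printf '%s\n' "$summary"
```

On a real server, replace the sample variable with the actual log file as the input.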


b. Use GeoIP-Aware Ban Policies 

You can also adjust ban times based on country.

Example strategy:

  • Short ban for local country
  • Longer ban for known high-noise regions

This is usually implemented via multiple jails and post-processing scripts rather than direct GeoIP matching inside Fail2ban.

Best practice:
Let Fail2ban do behavior detection — let GeoIP provide context, not decisions.

9. GeoIP with nftables (Linux Modern Firewall Layer)

iptables + xt_geoip is considered legacy. On modern systems, nftables is the preferred approach.

a. Using GeoIP Sets in nftables

nftables does not natively include GeoIP, but you can integrate GeoIP via generated IP sets.

Workflow:

  1. Convert GeoIP country IP ranges into nftables sets

  2. Load them dynamically

Example set definition:


table inet filter {
    set geo_block {
        type ipv4_addr
        flags interval
    }
}

Populate the set using a script:

nft add element inet filter geo_block { 1.2.3.0/24, 5.6.0.0/16 }

Then apply it:


chain input {
    type filter hook input priority 0;
    ip saddr @geo_block drop
}

b. Automating GeoIP -> nftables

Typical automation pipeline:

GeoLite2 → country CSV → IP ranges → nftables set

Run this daily via cron.
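
nft -f applies a file as one transaction, which keeps the reload atomic. Here is a rough sketch that generates such a file from a CIDR list (the two ranges below are hypothetical stand-ins for the GeoLite2-derived output; the final nft call is commented out since it needs root and a loaded ruleset):

```shell
#!/bin/sh
# Hypothetical CIDR list extracted from the GeoLite2 country CSV
ranges='1.2.3.0/24
5.6.0.0/16'

out=/tmp/geo_block.nft
{
  echo 'flush set inet filter geo_block'
  printf 'add element inet filter geo_block { %s }\n' \
    "$(printf '%s\n' "$ranges" | paste -sd, -)"
} > "$out"

cat "$out"
# Apply the whole file as one atomic transaction (not executed here):
#   nft -f /tmp/geo_block.nft
```

Flushing and repopulating the set inside one nft -f run avoids the window where the firewall has an empty set.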

Warning:

  • Large country sets = memory usage
  • Firewall reloads must be atomic
  • Test on non-production systems first

10. GeoIP Dashboard: Turning Logs into Insight

Blocking is optional — insight is mandatory.

a. Simple GeoIP Log Dashboard (CLI-Based)

Apache example:

# awk '{print $NF}' /var/log/apache2/access.log | \
sort | uniq -c | sort -nr

Here $NF is the last field — the country code appended by the geoip LogFormat configured earlier.

Sample Result:

1243 US
987 DE
422 FR
310 CN

This already tells a story.
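
Raw counts become shares of total traffic with one more awk pass. A self-contained sketch fed with the sample counts above:

```shell
#!/bin/sh
counts='1243 US
987 DE
422 FR
310 CN'

# Sum everything first, then print each country's percentage of total requests
shares=$(printf '%s\n' "$counts" \
  | awk '{n[$2] = $1; tot += $1}
         END {for (k in n) printf "%s %.1f%%\n", k, 100 * n[k] / tot}' \
  | sort -k2 -nr)
printf '%s\n' "$shares"
```

Percentages make day-to-day comparisons easier than absolute counts, especially when total traffic volume fluctuates.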

b. Visual Dashboard with ELK / Grafana

For larger environments:

HAProxy / Apache -> JSON logs -> enrich logs with GeoIP

Send to:

  • ELK Stack
  • Loki + Grafana
  • Graylog

Metrics you want:

  • Requests per country
  • Errors per country
  • Bans per country
  • Login failures per country

This helps distinguish:

  • Marketing traffic
  • Legit users
  • Background Internet noise

11.  Create a Layered GeoIP Strategy

A sane, production-ready model using GeoIP would include something like:

  1. Logging first
    Apache / HAProxy logs with country codes

  2. Behavior detection
    Fail2ban reacts to abuse

  3. Traffic shaping
    HAProxy routes or rate-limits

  4. Firewall last
    nftables drops only obvious garbage

GeoIP is strongest when it supports decisions, not when it makes them alone.

12. Best Practices to consider

  • Prefer visibility over blocking
  • Avoid blanket country bans
  • Always log before denying
  • Keep GeoIP databases regularly updated
  • Test rules with real IPs before deploying

Combine GeoIP with:

  • Fail2ban
  • Rate limits
  • CAPTCHA or MFA

13. Common Mistakes to Avoid

  • Blocking entire continents
  • Using GeoIP as authentication
  • Applying firewall GeoIP without logs
  • Forgetting database updates
  • Assuming GeoIP accuracy

Close up

GeoIP is not a silver bullet against vampire attacks – but when used thoughtfully, it becomes a powerful signal enhancer and can give you a much broader understanding of what is going on inside your network traffic.

Whether you're using it to filter out segments of evil intruders based on logs, route traffic intelligently, or filter obvious abuse, GeoIP fits naturally into a layered security model and is used nowadays across corporations as well as mid-sized and even small businesses.

Used conservatively, GeoIP follows the classic Unix philosophy:

Small datasets, simple rules, real-world effectiveness. Combined with the rest of your tools, it gives you both the information and the means to better protect your networks and server infrastructure.

Digital Vigilance: Practical Cyber Defense for the New Era of All connected dependency on technology

Friday, October 24th, 2025

 

Introduction

There was a time when cybersecurity was mostly about erecting a firewall, installing antivirus software and hoping no one clicked a suspicious link. That era is steadily fading. Today, as more work moves to the cloud, as AI tools proliferate, and as threat actors adopt business-like models, the battlefield has shifted dramatically. According to analysts at Gartner, 2025 brings some of the most significant inflections in cybersecurity in recent memory. 

In this article we’ll cover the major trends, why they matter, and — importantly — what you as an individual or sysadmin can start doing today to stay ahead.

1. Generative AI: Weapon and Shield

AI / ML (Machine Learning) is now deeply ingrained in both the offence and defence sides of cybersecurity.

  • On the defence side: AI models help detect anomalies, process huge volumes of logs, and automate responses. 
  • On the offence side: Attackers use AI to craft more convincing phishing campaigns, automate vulnerability discovery, generate fake identities or even design malware. 
  • Data types are changing: It’s no longer just databases and spreadsheets. Unstructured data (images, video, text) used by AI models is now a primary risk.

What to do:

  • Make sure any sensitive AI-training data or inference logs are stored securely.
  • Build anomaly-detection systems that don’t assume “normal” traffic anymore.
  • Flag when your organisation uses AI tools: do you know what data the tool uses, where it stores it, who can access it?

2. Zero Trust Isn’t Optional Anymore

 

cyber-security-threats-to-watch-in-2025

The old model — trust everything inside the perimeter, block everything outside — is obsolete. Distributed workforces, cloud services, edge devices: they all blur the perimeter. Hence the rise of Zero Trust Architecture (ZTA) — “never trust, always verify.”

Key features:

  • Every device, every user, every session must be authenticated and authorised.
  • Least-privilege access: users should have the minimum permissions needed.
  • Micro-segmentation: limit lateral movement in networks.
  • Real-time monitoring and visibility of sessions and devices.

What to do:

  • Audit your devices and users: who has broad permissions? Who accesses critical systems?
  • Implement multifactor authentication (MFA) everywhere you can.
  • Review network segmentation: can a compromised device access everything? If yes, that’s a red flag.
     

3. Ransomware & RaaS – The Business Model of Cybercrime

Cybercriminals are organizing like businesses: they have supply chains, service models, profit centres. The trend of Ransomware‑as‑a‑Service (RaaS) continues to expand.

What’s changed:

  • Ransomware doesn’t just encrypt data. Attackers often steal data first, then threaten to release it. 
  • Attackers are picking higher-value targets and critical infrastructure.
  • The attack surface has exploded: IoT devices, cloud mis-configurations, unmanaged identity & access.

What to do:

  • Back up your critical systems regularly — test restores, not just backups.
  • Keep systems patched (though even fully patched systems can be attacked, so patching is necessary but not sufficient).
  • Monitor for abnormal behaviour: large data exfiltration, new admin accounts, sudden access from odd places.
  • Implement strong incident response procedures: when it happens, how do you contain it?

4. Supply Chains, IoT & Machine Identities

Modern IT is no longer just endpoints and servers. We have IoT devices, embedded systems, cloud services, machine-to-machine identities. According to Gartner, machine identities are expanding attack surfaces if unmanaged.

Key issues:

  • Devices (especially IoT) often ship with weak/default credentials.
  • Machine identities: software services, APIs, automation tools need their own identity/access management.
  • Supply chains: your vendor might be the weakest link — compromise of software or hardware upstream affects you.

What to do:

  • Create an inventory of all devices and services — yes all.
  • Enforce device onboarding processes: credentials changed, firmware up-to-date, network segmented.
  • Review your vendors: what security standards do they follow? Do they give you visibility into their supply chain risk?
     

5. Cloud & Data Privacy — New Rules, New Risks

As data moves into the cloud and into AI systems, the regulatory and technical risks converge. For example, new laws like the EU AI Act will start affecting how organisations handle AI usage and data (source: Gcore).
Cloud environments also bring misconfigurations, improper access controls, shadow IT and uncontrolled data sprawl.

What to do:
 

  • If using cloud services, check settings for major risk zones (e.g., S3 buckets, unsecured APIs).
  • Implement strong Identity & Access Management (IAM) controls for cloud resources.
  • Make data-privacy part of your security plan: what data you collect, where it is stored, for how long.
  • Perform periodic audits and compliance checks especially if you handle users from different jurisdictions.
     

6. Skills, Culture & Burn-out — The Human Factor

Often overlooked: no matter how good your tech is, people and culture matter. According to Gartner, security behaviour programs help reduce human-error incidents, and such programs are becoming more essential.
Also, the cybersecurity talent shortage and burnout among security teams are real.

What to do:

 

  • Invest in security awareness training: phishing simulation, strong password practices, device hygiene.
  • Foster a culture where security is everyone’s responsibility, not just the “IT team’s problem.”
  • For small teams: consider managed security services or cloud-based monitoring to lean on external support.

7. What This Means for Smaller Organisations & Individual Users

Often the big reports focus on enterprises. But smaller organisations (and individual users) are just as vulnerable — sometimes more so, because they have fewer resources and less mature security.
Here are some concrete actions:

  • Use strong, unique passwords and a password manager.
  • Enable MFA everywhere (email, online services, VPNs).
  • Keep your systems updated — OS, applications, firmware.
  • Be suspicious of unexpected communications (phishing).
  • Have an incident response plan: who do you call if things go wrong?
  • Backup your data offline and test restores.
  • If you run services (web-server, mail server): monitor logs, check for new accounts, stray network connections.
     

Conclusion

Cybersecurity in 2025 is not a “set once and forget” system. It’s dynamic, multi-layered and deeply integrated into business functions and personal habits. The trends above — generative AI, zero trust, supply chain risks, cloud data sprawl — are changing the rules of the game.
Thus for all of us, and especially sysadmins, system engineers, Site Reliability Engineers (SREs), developers and testers, this means we need to keep learning, be careful with the technology we use, and build security as a continuous practice rather than a one-off box to tick.

 

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

sysstast-no-such-file-or-directory-fix-Debian-Ubuntu-Linux-howto
I really love sysstat and as a console maniac I tend to install it on every server. However, by default sysstat needs some tuning once installed to make it work. For those unfamiliar with sysstat I warmly recommend checking it out; here in short is the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


and then you try to get some statistics with the sar command but get some ugly error output:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


If you wonder how to resolve it so the server can periodically log, in plain-text databases, the nice sar stats (load averages, %idle, %iowait, %system, %nice, %user), then to fix that "Cannot open /var/log/sysstat/sa20: No such file or directory" error:

You need to:

server:~# vim /etc/default/sysstat


By default you will find that sysstat stats collection is disabled, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"
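
For scripted installs the same flip can be done non-interactively with sed. Below is a sketch operating on a temporary stand-in file (point it at /etc/default/sysstat on a real server):

```shell
#!/bin/sh
conf=/tmp/sysstat-default-sample    # stand-in for /etc/default/sysstat
printf 'ENABLED="false"\n' > "$conf"

# Flip the flag in place
sed -i 's/^ENABLED="false"/ENABLED="true"/' "$conf"
grep '^ENABLED' "$conf"
```

This is handy in configuration-management runs where an interactive editor or dpkg-reconfigure is not an option.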


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart

However, for those who prefer to do things from Ncurses menu interfaces and are not familiar with Vi Improved, the easiest way is to run a dpkg reconfigure of sysstat:

server:~# dpkg-reconfigure sysstat


sysstat-reconfigure-on-gnu-linux

 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01 CPU %user %nice %system %iowait %steal %idle
00:15:01 all 24,32 0,54 3,10 0,62 0,00 71,42
01:15:01 all 18,69 0,53 2,10 0,48 0,00 78,20
10:05:01 all 22,13 0,54 2,81 0,51 0,00 74,01
10:15:01 all 17,14 0,53 2,44 0,40 0,00 79,49
10:25:01 all 24,03 0,63 2,93 0,45 0,00 71,97
10:35:01 all 18,88 0,54 2,44 1,08 0,00 77,07
10:45:01 all 25,60 0,54 3,33 0,74 0,00 69,79
10:55:01 all 36,78 0,78 4,44 0,89 0,00 57,10
16:05:01 all 27,10 0,54 3,43 1,14 0,00 67,79


Well, that's it — the sysstat error is resolved and text stats reporting works again. Hooray! 🙂

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

fix-solve-mysql-ibdata-file-size-ibdata1-file-growing-too-large-and-preventing-ibdata1-from-eating-all-your-disk-space-innodb-vs-myisam

If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large, eating all the server's disk space and triggering low-disk-space alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The incremental ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limit for ibdata1 except the maximum file size allowed by the filesystem (and there is no limiting option in my.cnf), meaning that under certain conditions ibdata1 can happily grow over time and fill up your server's LVM (storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (I found) is to set the innodb_file_per_table option in my.cnf to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb-based Linux servers the datadir is /var/lib/mysql, so the file lives at /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


Next step is to get some understanding about how many existing InnoDB tables are present within Database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to InnoDb variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use InnoDb Engine

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above cmd should return empty output (i.e. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).

4. Create Backup dump all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total

 

If you have free space on an external backup server or remotely mounted storage (NFS or SAN), it is a good idea to make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump). Copy the respective directory, which depends on the Linux distro and the install location of the SQL binary files (set in my.cnf).
To check where are MySQL binary stored database data (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM-based, substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep cmd line.

if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the existing MySQL users / passwords, access grants and host permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! – this might damage your user permissions to databases.

6. Stop MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Delete files taking up too much space – ibdata1 ib_logfile0 and ib_logfile1

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

The ps command should now show a running mysqld process (the first start after the rm recreates ibdata1 and the log files; the stop/start cycle just verifies MySQL comes up cleanly).
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import will go fine, and if no errors are experienced the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate SQL downtime, import the database instead with:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy over the SQL binary files from /root/mysql-data-backup/ to /var/lib/mysql/.
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, log in with the mysql cli and check whether the databases are there:

server:~# mysql -u root -p
SHOW DATABASES;

Next, let's see the current size of ibdata1, ib_logfile0 and ib_logfile1:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will grow, but only contain table metadata. Each InnoDB table will exist outside of ibdata1.
To better understand what I mean, lets say you have InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files
representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

Now the layout will be like that for each of the MySQL-stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer SQL releases.


Now, to make sure your websites are working, take a few of the hosted websites' URLs that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

Make MySQL existing users to have access from any or particular host after SQL migration

Tuesday, July 1st, 2014

make_mysql_existing_users_have-access-from-any-or-particular-host-after-SQL-migration
Recently I've done a migration of a MySQL server from host A (running and configured to serve requests on localhost – 127.0.0.1) to host B (server2.host.com).
There are already existing users in MySQL which are allowed to access the database server only from localhost, as until now the application was sending SQL queries straight to localhost. Now the architecture has to change to use the MySQL database remotely.

Hence I've migrated the MySQL server by dumping all the existing databases on MySQL host A with:

mysqldump -u root -p --all-databases > alldbs_dump.sql


And then importing the databases on host B with

mysql -u root -p < alldbs_dump.sql

Though this migrated the data from host A to host B, the application on host A was still failing to successfully use its data from the database on host B because of an inability to properly authenticate. It couldn't authenticate because the MySQL users on host B are not configured to have access from the IP address of host A; the application users were only allowed to connect from localhost.

I've used the following SQL query to check the hosts allowed to connect to MySQL (in this case localhost):

# mysql -u root -p
mysql> use mysql;
mysql> select * from user where user like '%eameiotest%' and Host='localhost';

 

To fix that I logged on MySQL server on Host B with mysql cli and issued for each of the users the application was using:

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest';
 

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest2';
 

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest3';

 

On execution, If you get errors like:
 

ERROR 1062 (23000): Duplicate entry '%-eameiotest' for key 'PRIMARY'


Don't think there is no solution. I've read some threads online claiming the only way around this issue is to dump the mysql database and re-import it; this is not necessary. There is a workaround for this MySQL bug.

To work around the error, you will first have to set the user's allowed access host to empty – '':

 

UPDATE mysql.user SET Host='' WHERE Host='localhost' AND User='eameiotest';
 

UPDATE mysql.user SET Host='' WHERE Host='localhost' AND User='eameiotest2';
 

UPDATE mysql.user SET Host='' WHERE Host='localhost' AND User='eameiotest3';


And re-issue again commands:
 

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest';
 

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest2';
 

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='eameiotest3';


You might want to also issue:
 

GRANT ALL PRIVILEGES ON yourdatabase-name.* TO 'eameiotest1'@'server-host';

GRANT ALL PRIVILEGES ON yourdatabase-name.* TO 'eameiotest2'@'server-host';

GRANT ALL PRIVILEGES ON yourdatabase-name.* TO 'eameiotest3'@'server-host';
 

Finally, issue FLUSH PRIVILEGES; so MySQL reloads the grant tables (direct UPDATEs to mysql.user don't take effect until then). This should have solved the app connection issues. Cheers 🙂

 

 

Mysql: How to disable single database without dropping or renaming it

Wednesday, January 22nd, 2014

mysql rename forbid disable database howto logo, how to disable single database without dropping it
A colleague of mine working on a MySQL database asked me how it is possible to disable a MySQL database. He is in a situation where the client has two databases and an application, and is not sure which of the two databases the application uses. Therefore the client asked that one of the databases be disabled, to wait a few hours and see if something breaks / stops working, and in that way determine which of the two databases the application uses.

My first guess was to back up both databases and drop one of them, then, if it was the wrong one, restore it from the SQL dump backup; however, this wasn't an acceptable solution. Second, I thought of a RENAME of the database to another name and then reverting it; however, as written in the MySQL documentation, the RENAME DATABASE function was removed from MySQL (found to be dangerous) since version 5.1.23 onwards. Anyhow, there is a quick hack to rename a MySQL database using a shell for loop like the one below:

mysql -e "CREATE DATABASE \`new_database\`;"
for table in $(mysql -B -N -e "SHOW TABLES;" old_database)
do
  mysql -e "RENAME TABLE \`old_database\`.\`$table\` TO \`new_database\`.\`$table\`;"
done
mysql -e "DROP DATABASE \`old_database\`;"

Another possible solution was to change the permissions of the username used by the application; however, this was also complicated from the mysql CLI, hence I thought of installing phpMyAdmin to make modifying db user permissions easier. But there was no Apache installed on this server, and MySQL is behind a firewall, accessible only via a Java Tomcat host.

Finally, after some pondering on what could be done, I came up with the solution of disabling the MySQL database by using chmod on its directory under /var/lib/mysql/, i.e.:

sql-server:~# chmod 0 /var/lib/mysql/databasename

Where databasename is the database name exactly as it is listed via the mysql CLI (show databases;).

After doing it this way, with no need to restart the MySQL server, the database stopped appearing in show databases; and the client confirmed the disabled database was no longer needed, so we proceeded to drop it.
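For illustration only, the permission flip can be rehearsed on a scratch directory first (the path below is made up; on a real server the directory lives under the MySQL datadir, e.g. /var/lib/mysql/databasename, and you must be root):

```shell
# rehearse the trick on a throwaway directory mimicking a datadir entry
mkdir -p /tmp/demo-datadir/databasename
chmod 0 /tmp/demo-datadir/databasename       # MySQL could no longer enter it
stat -c '%a' /tmp/demo-datadir/databasename  # prints: 0
chmod 700 /tmp/demo-datadir/databasename     # revert to re-enable the database
stat -c '%a' /tmp/demo-datadir/databasename  # prints: 700
rm -r /tmp/demo-datadir
```

The revert step matters: MyISAM/InnoDB per-database directories are normally mode 700 and owned by the mysql user, so restoring the original mode brings the database back without a restart.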

Hope this little article will help someone out there. Cheers 🙂

MySQL SSL Configure Howto – How to Make MySQL Communication Secure

Wednesday, January 15th, 2014

mysql-over-ssl-how-to-configure-logo how to configure ssl on mysql server

Recently I've been asked how to make communication to a MySQL database encrypted. The question was raised by a fellow developer who works on a standalone Desktop application in the Delphi programming language with DevArt (an SQL connection component capable of connecting Delphi applications to multiple databases like MySQL, Oracle, PostgreSQL, InterBase, Firebird, etc.).

Communicating with a MySQL database in encrypted form is not a common task, as MySQL usually talks to applications hosted on the same server or in a secured DMZ, or is administered via the phpMyAdmin web interface.

MySQL supports encrypted connections using Secure Sockets Layer (SSL) encryption. Setting up the MySQL db for encrypted communication is a must for standalone Desktop applications which have to extract / insert data via remote SQL.
SSL support is compiled in by default and easily configured on most standard Linux distributions (Debian, RHEL, Fedora) with no need to recompile.
1. Generate SSL Certificates

$ mkdir /etc/mysql-ssl && cd /etc/mysql-ssl

# Create CA certificate
$ openssl genrsa 2048 > ca-key.pem
$ openssl req -new -x509 -nodes -days 3600 \
         -key ca-key.pem -out ca-cert.pem

# Create server certificate, remove passphrase, and sign it
# server-cert.pem is the public certificate, server-key.pem is the private key
$ openssl req -newkey rsa:2048 -days 3600 \
         -nodes -keyout server-key.pem -out server-req.pem

$ openssl rsa -in server-key.pem -out server-key.pem
$ openssl x509 -req -in server-req.pem -days 3600 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

# Create client certificate, remove passphrase, and sign it
# client-cert.pem is the public certificate, client-key.pem is the private key
$ openssl req -newkey rsa:2048 -days 3600 \
         -nodes -keyout client-key.pem -out client-req.pem

$ openssl rsa -in client-key.pem -out client-key.pem
$ openssl x509 -req -in client-req.pem -days 3600 \
         -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem

After generating the certificates, verify them:

$ openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
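If you want to script the generation (e.g. for a test environment), the interactive subject prompts can be skipped with -subj. The sketch below is a condensed, non-interactive variant of the steps above covering only the CA and server certificate; the CN values are illustrative, and note the CA and server certificate must not share the same subject:

```shell
# non-interactive CA + server certificate, then verify the chain
cd "$(mktemp -d)"
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 3600 -key ca-key.pem \
        -subj "/CN=MySQL-Test-CA" -out ca-cert.pem
openssl req -newkey rsa:2048 -nodes \
        -subj "/CN=mysql-server" -keyout server-key.pem -out server-req.pem
openssl x509 -req -in server-req.pem -days 3600 \
        -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
openssl verify -CAfile ca-cert.pem server-cert.pem   # prints: server-cert.pem: OK
```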
 

2. Add SSL support variables to my.cnf

Once the SSL key pair files are generated, in order to activate SSL encryption support in the MySQL server, add the following to the [mysqld] section of the config (/etc/my.cnf, /etc/mysql/my.cnf, /usr/local/etc/my.cnf … wherever the config is, depending on distro):

# SSL
ssl-ca=/etc/mysql-ssl/ca-cert.pem
ssl-cert=/etc/mysql-ssl/server-cert.pem
ssl-key=/etc/mysql-ssl/server-key.pem

3. Restart MySQL server

/etc/init.d/mysqld restart
...

4. Create SQL user to require SSL login

Create a new user with access to a database:

GRANT ALL ON Sql_User_DB.* TO Sql_User@localhost;
FLUSH PRIVILEGES;

To create administrator privileges user:

GRANT ALL PRIVILEGES ON *.* TO 'ssluser'@'%' IDENTIFIED BY 'pass' REQUIRE SSL;
FLUSH PRIVILEGES;
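To confirm the SSL requirement took effect, the standard SHOW GRANTS statement lists the user's privileges (the account name matches the grant above):

```sql
SHOW GRANTS FOR 'ssluser'@'%';
-- the output line for the account should end with ... REQUIRE SSL
```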

5. Test SSL Connection with MySQL CLI client or with few lines of PHP

To use mysql cli for testing whether SSL connection works:

$ mysql -u ssluser -p'pass' --ssl-ca=/etc/mysql-ssl/ca-cert.pem --ssl-cert=/etc/mysql-ssl/client-cert.pem --ssl-key=/etc/mysql-ssl/client-key.pem

Once connected to MySQL to verify SSL connection works fine:

mysql> SHOW STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+

If you get this output (a non-empty cipher value), the MySQL SSL connection is working as it should.

An alternative way is to use a short test-mysqli-ssl.php script to test connectivity to MySQL over SSL.

// test-mysqli-ssl.php - connect to MySQL over SSL and print the cipher in use
$conn = mysqli_init();
// arguments: link, client key, client cert, CA cert, CA path, cipher list
mysqli_ssl_set($conn, '/etc/mysql-ssl/client-key.pem', '/etc/mysql-ssl/client-cert.pem', NULL, NULL, NULL);
if (!mysqli_real_connect($conn, '127.0.0.1', 'ssluser', 'pass')) { die('Connect failed'); }
$res = mysqli_query($conn, 'SHOW STATUS LIKE "Ssl_cipher"');
print_r(mysqli_fetch_row($res));
mysqli_close($conn);

Note: change the username / password in the script to your own before using it

That's all – now you have MySQL communicating query data over SSL

 

How to revoke user global privileges in MySQL

Saturday, August 14th, 2010

I've recently realized that one of the users I added to one of the MySQL servers I manage actually had a list of global privileges included.
These extra privileges had to be removed, since the user is not supposed to be able to list all existing databases on the MySQL server and the like:

You can see below the exact SQL queries I had to issue to revoke the granted global privileges for the username.
Note: change the username before you execute the queries below:

REVOKE ALL PRIVILEGES ON *.* FROM 'username'@'localhost';
REVOKE GRANT OPTION ON *.* FROM 'username'@'localhost';
GRANT USAGE ON *.* TO 'username'@'localhost' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
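To double-check the result afterwards, the standard SHOW GRANTS statement lists whatever the user still has:

```sql
SHOW GRANTS FOR 'username'@'localhost';
-- should now show only USAGE (i.e. no global privileges) plus any per-database grants
```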

 

WordPress blog MySQL data maintenance valuable plugin WP-OPTIMIZE

Tuesday, April 12th, 2011

The more my blog grows, the slower it becomes – a tendency I’ve noticed for a couple of weeks now.

Very fortunately while reading some random articles online I’ve came across a super valuable wordpress plugin called WP-OPTIMIZE

I think it’s best to show the plugin’s own description instead of taking the time to explain what WP-Optimize does for a WordPress-powered blog:

WP-Optimize is a WordPress 2.9+ database cleanup and optimization tool. It doesn’t require phpMyAdmin to optimize your database tables. It allows you to remove post revisions, comments in the spam queue, and un-approved comments within a few clicks.

Additionally you can rename any username to another username too.

For example: if you want to rename the default ‘admin’ username to ‘someothername’, just put ‘admin’ (without quotes) into the Old username field and the new username into the New username field, then click “PROCESS”.

Now, to rephrase the text above in short: over time, many databases start needing the so-called MySQL optimization. This operation becomes necessary especially for databases which are very actively used, and is related to the basic design of the MySQL server.

Very often many tables start accumulating a lot of garbage (fragmented, unused space), and therefore reads and writes from the SQL server get slower and slower day by day.

Thus the MySQL server has its famous OPTIMIZE TABLE command, which reclaims unused space and defragments the data stored in a table, and hence seriously speeds up later read/write table operations.
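For reference, the command can also be run by hand from the mysql CLI (the table name below is just an example):

```sql
-- reclaim unused space and defragment a single table (example name)
OPTIMIZE TABLE wp_comments;

-- or check fragmentation first: Data_free shows reclaimable bytes
SHOW TABLE STATUS LIKE 'wp_comments';
```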

Going back to WordPress: these same optimization issues are very often the cause of a performance bottleneck, and some smart guy came up with the great idea to create a dedicated WordPress plugin to fix such optimize-table issues

The WP-Optimize plugin has 5 major options which somehow could have a positive impact on blog’s performance, these are:
 

  • Remove all Post revisions
  • Remove all auto draft posts
  • Clean marked Spam comments
  • Clean Unapproved comments
  • Optimize database tables

Apart from the nice performance boost, the wp-optimize plugin has one super valuable function: it can change the default WordPress blog administrator user admin to some other desired username, for example rename the default blog administrator username “admin” to “john”.

From a security perspective this is another must-have feature in every WordPress blog, as it can kill many of the possible brute-force attacks carried out with web brute-force tools like Hydra

Installing and using wp-optimize is a real piece of cake; here are a few quick command lines to install it:

host:~# cd /var/www/blog/wp-content/plugins/
host:/var/www/blog/wp-content/plugins:# wget https://www.pc-freak.net/files/wp-optimize.0.9.2.zip
host:/var/www/blog/wp-content/plugins:# unzip wp-optimize.0.9.2.zip
...

To launch WP-OPTIMIZE and fix the many speed bottlenecks caused by unoptimized tables or old spam comments hanging around in the database, go and log in to WordPress with your administrator account.

In the left pane menu, starting with Dashboard, somewhere near the end of the vertical menu you will notice a button reading WP-Optimize.
Click on the WP-Optimize button and a screen like the one in the screenshot below will appear in your browser:

wp optimize plugin database optimization options screen

I have personally decided to use just 2 of the 5 available primary plugin options, selecting only:
 

  • Clean marked Spam comments
  • Optimize database tables

Scrolling below the PROCESS button, I could also see a number of tables which were missing optimization and hence required OPTIMIZE TABLE to be executed on each of them.
Further on I pressed the PROCESS button, and after a couple of minutes (2–3 minutes) of waiting, the wp-optimize plugin had completed its selected tasks:

In the screenshot below, you see all my blog tables which were optimized by the plugin:
WP-Optimize optimized blog tables screenshot

It’s interesting to say a few words about the Remove All Post revisions plugin option, as many might wonder what this option really means.

When writing posts, WordPress has an option to restore a post to a certain point of the write-up, and so keeps a series of versions in time of each of its written posts.

This facilitates later restoration if something gets deleted by mistake; that is the whole meaning of Remove All Post revisions

With time and the growing number of wp posts, the amount of post revisions keeps increasing, just taking up space and possibly making access to the database entries slower; thus it might be a good idea to remove them too, as it’s quite rare to actually use the wp post restoration function.
However, I’ve personally decided not to wipe out the old post revisions, as someday I might need them, and I’m a bit scared it might cause me some database issues.

I’ll be glad if somebody has tried the Post Revisions WP-Optimize function and is happy with the results.