Posts Tagged ‘close’

How to monitor whether a Postfix Mail server works correctly with a simple one-liner Zabbix user parameter script / Simple way to capture and report SMTP machine issues with a Zabbix template

Thursday, June 22nd, 2023

setup-zabbix-smtp-mail-monitoring-postfix-qmail-exim-with-easy-userparameter-script-and-template-zabbix-logo

In this article, I'm going to show you how to set up a very simple check that verifies a locally running SMTP server (Postfix / Qmail / Exim) is responding correctly to basic commands. The check helps you keep track of whether your configured Linux server's local MTA (Mail Transport Agent) is answering requests on TCP/IP port 25, and also checks for the existence of the master process (the main postfix process) as well as the usual postfix spawned sub-processes: qmgr (the postfix queue manager), tlsmgr (TLS session cache and PRNG manager) and pickup (Postfix local mail pickup), the mail receiving process.

 

Normally, a properly configured Postfix installation on whatever Linux distribution you like would look something like below:

#  ps -ef|grep -Ei 'master|postfix'|grep -v grep
root        1959       1  0 Jun21 ?        00:00:00 /usr/libexec/postfix/master -w
postfix     1961    1959  0 Jun21 ?        00:00:00 qmgr -l -t unix -u
postfix     4542    1959  0 Jun21 ?        00:00:00 tlsmgr -l -t unix -u
postfix  2910288    1959  0 11:28 ?        00:00:00 pickup -l -t unix -u

At times, during mail server restarts, the number of sub-processes spawned by postfix may vary, and if you do a postfix restart

# systemctl restart postfix

The number of spawned processes running under the postfix username might decrease, and only qmgr might be available for a second. Thus, in the Template shown below, the Zabbix process check that makes sure Postfix is properly operational on the Linux machine looks for the absolute minimum of:

1. the master (postfix) process that runs with uid root
2. and one process bound to the (postfix) username

If the number of processes on the host is less than this minimum and netcat is unable to simulate a "half-mail" send, the configured Postfix alarm Action (media and Email) will take place, and you will immediately get notified that the monitored Mail server has an issue!

The idea is to use a small one-liner connection with netcat and half-simulate a normal SMTP transaction, just like you would do by hand:

 

root@pcfrxen:/root # telnet localhost 25
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
220 This is Mail2 Pc-Freak.NET ESMTP
HELO localhost
250 This is Mail2 Pc-Freak.NET
MAIL FROM:<hipopo@pc-freak.net>
250 ok
RCPT TO:<hip0d@remote-smtp-server.com>

 

and then disconnect the connection.

1. Create new zabbix userparameter_smtp_check.conf file

The simple userparameter one liner script to do the task looks like this:

# vi /etc/zabbix/zabbix_agent.d/userparameter_smtp_check.conf

UserParameter=smtp.check,(if [[ $(echo -e "HELO localhost\n MAIL FROM: root@$HOSTNAME\n RCPT TO: report-email@your-desired-mail-server.com\n  QUIT\n" | /usr/bin/nc localhost 25 -w 5 2>&1 | grep -Ei '220\s.*\sESMTP\sPostfix|250\s\.*|250\s\.*\sOk|250\s\.*\sOk|221\.*\s\w'|wc -l) == '5' ]]; then echo "SMTP OK 1"; else echo "SMTP NOK 0"; fi)

Set the proper permissions so that the file is either owned by zabbix:zabbix or readable by all system users.
 

# chmod a+r /etc/zabbix/zabbix_agent.d/userparameter_smtp_check.conf
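Before assigning any template, you can quickly verify the new key straight from the shell. The agent has to be restarted first so it re-reads the UserParameter file (the service name may be zabbix-agent or zabbix-agent2 depending on your setup), then the key can be tested locally with the agent binary or queried remotely from the Zabbix server / proxy side (hostname and port below are just examples):

# systemctl restart zabbix-agent
# zabbix_agentd -t smtp.check
# zabbix_get -s your-monitored-host.com -p 10050 -k smtp.check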

2. Create a new Template for the Mail server monitoring
 


 

Just like any other template, name it however it fits you. I've called it PROD SMTP Monitoring, as the template is prepared specifically to monitor In Production Linux machines; a separate template is used to monitor the Quality Assurance (QA) as well as PreProd (Pre Production) ones.

3. Create the following Items and Dependent Item to process the zabbix-agent received data from the Userparameter script
 

Above is the list of basic Items and Dependent Item you will need to configure inside the SMTP Check zabbix Template.

The Items should have the following content and configurations:
 

/postfix-main-proc-service-item-zabbix-shot


*Name: postfix_main_proc.service
Type: Zabbix agent(active)
*Key: proc.num[master,root]
Type of Information: Numeric (unsigned)
*Update interval: 30s
Custom Intervals: Flexible
*History storage period: 90d
*Trend storage period: 365d
Show Value: as is
Applications: Postfix Checks
Populated host inventory field: -None-
Description: The item counts master daemon process that runs Postfix daemons on demand

The arguments passed to the proc.num[] key are:
  master is the process name that is looked up, and root is the username under which the postfix master daemon is running. If you need to adapt it for qmail or exim, that shouldn't be a big deal; you only have to check in advance which processes are normally running on the machine
and configure a similar process check for them.
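For example, a rough sketch of what the keys might look like for Exim on Debian or for qmail – the process and user names here are assumptions, so verify them with ps -ef on your own host first:

proc.num[exim4,root]          (Exim master daemon, Debian package naming)
proc.num[,Debian-exim]        (processes running under the Debian-exim user)
proc.num[qmail-send,qmails]   (qmail delivery daemon)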

*Name: postfix_sub_procs.service_cnt
Type: Zabbix agent(active)
*Key: proc.num[,postfix]
Type of information: Numeric (unsigned)
Update Interval: 30s
*History Storage period: Storage Period 90d
*Trend storage period: Storage Period 365d
Description: The item counts the postfix sub-processes (daemons spawned by the master) running under the postfix username.

The idea with this Item is to check the number of processes that are running with a user / group id of postfix. Again, for an SMTP server different from postfix, just set it to whatever user / group
you would like zabbix to look for in the Linux process list. As you can see, the check for an existing postfix MTA process is done every 30 seconds (for more critical environments you can lower that).

For simple Zabbix use this Dependent Item is not strictly required, but as we would like to process the output of the userparameter SMTP script more closely, for example to build a graphical representation by sending the data to Grafana, you have to set it up.

*Name: postfix availability check
Key: postfix_boolean_check[boolean]
Master Item: PROD SMTP Monitoring: postfix availability check
Type of Information: Numeric (unsigned)
*History storage period: Storage period 90d
*Trend storage period: 365d

Applications: Postfix Checks

Description: It returns boolean value of SMTP check
1 – True (SMTP is OK)
0 – False (SMTP does not respond)

Enabled: Tick
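Since the userparameter returns a string such as "SMTP OK 1" or "SMTP NOK 0", the dependent item needs a preprocessing step to extract the trailing digit into a clean numeric value. A minimal sketch of such a step (the exact regex is my assumption, adjust it to the real script output):

Preprocessing step: Regular expression
    parameters: SMTP\s+\w+\s+([01])
    output: \1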

*Name: postfix availability check
*Key: smtp.check
Custom intervals: Flexible
*Update interval: 30 m
History storage period: Storage Period 90d
Applications: Postfix Checks
Populates host inventory field: -None-
Description: This check tests whether the SMTP relay is reachable, without actually sending an email
Enabled: Tick

4. Configure following Zabbix Triggers

 

Note: You should have previously set the severity levels in Zabbix to your desired ones.

Name: postfix master root process is not running
*Problem Expression: {PROD SMTP Monitoring:proc.num[master,root].last()}<1

OK event generation: Recovery expression
*Recovery Expression: {PROD SMTP Monitoring:proc.num[master,root].last()}>=1
Allow manual close: Tick

Description: The item counts master daemon process that runs Postfix daemon on demand.
Enabled: Tick

I would like to have an AUTO RESOLVE for any detected mail issue once it gets resolved. That is useful especially if you don't have the time to put the Zabbix monitoring into Maintenance Mode during planned Operating System updates / system reboots, or during unexpected system reboots due to electricity power loss to the Data Center / Rack where the server is colocated.


*Name: postfix master sub processes are not running
*Problem Expression: {P09 PROD SMTP Monitoring:proc.num[,postfix].last()}<1
PROBLEM event generation mode: Single
OK event closes: All problems

*Recovery Expression: {P09 PROD SMTP Monitoring:proc.num[,postfix].last()}>=1
Problem event generation mode: Single
OK event closes: All problems
Allow manual close: Tick
Enabled: Tick

Name: SMTP connectivity check
Severity: WARNING
*Expression: {PROD SMTP Monitoring:postfix_boolean_check[boolean].last()}=0
OK event generation: Expression
PROBLEM event generation mode: Single
OK event closes: All problems

Allow manual close: Tick
Enabled: Tick

5. Configure respective Zabbix Action

 

zabbix-configure-Actions-screenshotpng
 

As the service is tagged with the 'pci service' tag, we define the respective conditions; according to your preferences, add as many conditions as you need for the Zabbix Action to take place.

NOTE! :
Assuming that the communication chain between Zabbix Server -> Zabbix Proxy (if a zabbix proxy is used) -> Zabbix Agent works correctly, you should start receiving data from the userparameter script in Zabbix via the configured smtp.check userparameter key every 30 minutes.

Note that this simple nc check will leave trail records inside /var/log/maillog for each netcat connection, so keep in mind that in /var/log/maillog on each host which has the SMTP Check zabbix template configured, you will have some records similar to:

# tail -n 50 /var/log/maillog
2023-06-22T09:32:18.164128+02:00 lpgblu01f postfix/smtpd[2690485]: improper command pipelining after HELO from localhost[127.0.0.1]:  MAIL FROM: root@your-machine-fqdn-address.com\n RCPT TO: your-supposable-receive-addr@whatever-mail-address.com\n  QUIT\n
2023-06-22T09:32:18.208888+02:00 lpgblu01f postfix/smtpd[2690485]: 32EB02005B: client=localhost[127.0.0.1]
2023-06-22T09:32:18.209142+02:00 lpgblu01f postfix/smtpd[2690485]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 quit=1 commands=4
2023-06-22T10:02:18.889440+02:00 lpgblu01f postfix/smtpd[2747269]: connect from localhost[127.0.0.1]
2023-06-22T10:02:18.889553+02:00 lpgblu01f postfix/smtpd[2747269]: improper command pipelining after HELO from localhost[127.0.0.1]:  MAIL FROM: root@your-machine-fqdn-address.com\n RCPT TO: your-supposable-receive-addr@whatever-mail-address.com\n  QUIT\n
2023-06-22T10:02:18.933933+02:00 lpgblu01f postfix/smtpd[2747269]: E3ED42005B: client=localhost[127.0.0.1]
2023-06-22T10:02:18.934227+02:00 lpgblu01f postfix/smtpd[2747269]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 quit=1 commands=4
2023-06-22T10:32:26.143282+02:00 lpgblu01f postfix/smtpd[2804195]: connect from localhost[127.0.0.1]
2023-06-22T10:32:26.143439+02:00 lpgblu01f postfix/smtpd[2804195]: improper command pipelining after HELO from localhost[127.0.0.1]:  MAIL FROM: root@your-machine-fqdn-address.com\n RCPT TO: your-supposable-receive-addr@whatever-mail-address.com\n  QUIT\n
2023-06-22T10:32:26.186681+02:00 lpgblu01f postfix/smtpd[2804195]: 2D7F72005B: client=localhost[127.0.0.1]
2023-06-22T10:32:26.186958+02:00 lpgblu01f postfix/smtpd[2804195]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 quit=1 commands=4
2023-06-22T11:02:26.924039+02:00 lpgblu01f postfix/smtpd[2860398]: connect from localhost[127.0.0.1]
2023-06-22T11:02:26.924160+02:00 lpgblu01f postfix/smtpd[2860398]: improper command pipelining after HELO from localhost[127.0.0.1]:  MAIL FROM: root@your-machine-fqdn-address.com\n RCPT TO: your-supposable-receive-addr@whatever-mail-address.com\n  QUIT\n
2023-06-22T11:02:26.963014+02:00 lpgblu01f postfix/smtpd[2860398]: EB08C2005B: client=localhost[127.0.0.1]
2023-06-22T11:02:26.963257+02:00 lpgblu01f postfix/smtpd[2860398]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 quit=1 commands=4
2023-06-22T11:32:29.145553+02:00 lpgblu01f postfix/smtpd[2916905]: connect from localhost[127.0.0.1]
2023-06-22T11:32:29.145664+02:00 lpgblu01f postfix/smtpd[2916905]: improper command pipelining after HELO from localhost[127.0.0.1]:  MAIL FROM: root@your-machine-fqdn-address.com\n RCPT TO: your-supposable-receive-addr@whatever-mail-address.com\n  QUIT\n
2023-06-22T11:32:29.184539+02:00 lpgblu01f postfix/smtpd[2916905]: 2CF7D2005B: client=localhost[127.0.0.1]
2023-06-22T11:32:29.184729+02:00 lpgblu01f postfix/smtpd[2916905]: disconnect from localhost[127.0.0.1] helo=1 mail=1 rcpt=1 quit=1 commands=4

 

 

That's all folks. Use the:
Configuration -> Host (menu)

and assign the new SMTP check template to as many of the Linux hosts where you have set up the Userparameter script, and enjoy the new mail server monitoring at hand.

Webserver farm behind a Load Balancer Proxy, or how to preserve the incoming Internet IP towards local net IP Apache webservers by adding an additional haproxy header with remoteip

Monday, April 18th, 2022

logo-haproxy-apache-remoteip-configure-and-check-to-have-logged-real-ip-address-inside-apache-forwarded-from-load-balancer

Having a Proxy server for Load Balancing is a common solution to assure High Availability of a Web Application service behind a proxy.
You can have, for example, one Apache HTTPD webserver actively serving traffic in one Location (i.e. one city or Country) and 3 more configured in the F5 LB or haproxy to silently wait for incoming connections as an (Active Failure) Backup solution.

Let's say the Webservers usually have local class C IPs such as 192.168.0.XXX or 10.10.10.XXX and live in an isolated, DMZed, well firewalled LAN network, while Haproxy is configured to receive traffic via an Internet IP address 109.104.212.13 and send the traffic in mode tcp via a NATted connection (e.g. due to the network address translation, the source IP of the incoming connections from Internet clients appears as the NATted IP 192.168.1.50).

The result is that all incoming connections from haproxy -> webservers get logged in the Webservers' /var/log/apache2/access.log wrongly as coming from source IP 192.168.1.50, meaning all information about the real Internet source IP gets lost.

load-balancer-high-availailibility-haproxy-apache
 

How to pass Real (Internet) Source IPs from Haproxy "mode tcp" to Local LAN Webservers  ?
 

Usually the normal way to work around this with Apache Reverse Proxies is to use the HTTP_X_FORWARDED_FOR variable in haproxy when the proxied application uses HTTP traffic (e.g. haproxy.cfg has mode http configured); you have to add to the listen listener_name directive or to the frontend Frontend_of_proxy:

option forwardfor
option http-server-close

However, unfortunately, IP header preservation with the X-Forwarded-For HTTP header is not possible when haproxy is configured to forward traffic using mode tcp.

Thus, when you're forced to use mode tcp to pass any traffic incoming to Haproxy straight through to the end side, the solution is to
 

  • Use the famous mod_remoteip module that is part of standard Apache installs, both with apache2 installed from a (.deb) package and with the httpd rpm (on RedHat / CentOS).

 

1. Configure Haproxies to send received connects as send-proxy traffic

 

The idea is very simple: all requests received from outside clients by Haproxy are to be sent via haproxy to the webserver with a PROXY protocol string prepended; this is done via send-proxy

             send-proxy  – send a PROXY protocol string

In raw form, my current /etc/haproxy/haproxy.cfg looks like this:
 

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        user haproxy
        group haproxy
        daemon
        maxconn 99999
        nbproc          1
        nbthread 2
        cpu-map         1 0
        cpu-map         2 1


defaults
        log     global
       mode    tcp


        timeout connect 5000
        timeout connect 30s
        timeout server 10s

    timeout queue 5s
    timeout tunnel 2m
    timeout client-fin 1s
    timeout server-fin 1s

                option forwardfor

    retries                 15

 

 

frontend http-in
                mode tcp

                option tcplog
        log global

                option logasap
                option forwardfor
                bind 109.104.212.130:80
    fullconn 20000
default_backend http-websrv
backend http-websrv
        balance source
                maxconn 3000

stick match src
    stick-table type ip size 200k expire 30m
        stick on src


        server ha1server-1 192.168.0.205:80 check send-proxy weight 254 backup
        server ha1server-2 192.168.1.15:80 check send-proxy weight 255
        server ha1server-3 192.168.2.30:80 check send-proxy weight 252 backup
        server ha1server-4 192.168.1.198:80 check send-proxy weight 253 backup
                server ha1server-5 192.168.0.1:80 maxconn 3000 check send-proxy weight 251 backup

 

 

frontend https-in
                mode tcp

                option tcplog
                log global

                option logasap
                option forwardfor
        maxconn 99999
           bind 109.104.212.130:443
        default_backend https-websrv
                backend https-websrv
        balance source
                maxconn 3000
        stick on src
    stick-table type ip size 200k expire 30m


                server ha1server-1 192.168.0.205:443 maxconn 8000 check send-proxy weight 254 backup
                server ha1server-2 192.168.1.15:443 maxconn 10000 check send-proxy weight 255
        server ha1server-3 192.168.2.30:443 maxconn 8000 check send-proxy weight 252 backup
        server ha1server-4 192.168.1.198:443 maxconn 10000 check send-proxy weight 253 backup
                server ha1server-5 192.168.0.1:443 maxconn 3000 check send-proxy weight 251 backup

listen stats
    mode http
    option httplog
    option http-server-close
    maxconn 10
    stats enable
    stats show-legends
    stats refresh 5s
    stats realm Haproxy\ Statistics
    stats admin if TRUE

 

After preparing your haproxy.cfg and reloading haproxy, you should have the Real Source IPs logged in /var/log/haproxy.log:
 

root@webserver:~# tail -n 10 /var/log/haproxy.log
Apr 15 22:47:34 pcfr_hware_local_ip haproxy[2914]: 159.223.65.16:58735 [15/Apr/2022:22:47:34.586] https-in https-websrv/ha1server-2 1/0/+0 +0 — 7/7/7/7/0 0/0
Apr 15 22:47:34 pcfr_hware_local_ip haproxy[2914]: 20.113.133.8:56405 [15/Apr/2022:22:47:34.744] https-in https-websrv/ha1server-2 1/0/+0 +0 — 7/7/7/7/0 0/0
Apr 15 22:47:35 pcfr_hware_local_ip haproxy[2914]: 54.36.148.248:15653 [15/Apr/2022:22:47:35.057] https-in https-websrv/ha1server-2 1/0/+0 +0 — 7/7/7/7/0 0/0
Apr 15 22:47:35 pcfr_hware_local_ip haproxy[2914]: 185.191.171.35:26564 [15/Apr/2022:22:47:35.071] https-in https-websrv/ha1server-2 1/0/+0 +0 — 8/8/8/8/0 0/0
Apr 15 22:47:35 pcfr_hware_local_ip haproxy[2914]: 213.183.53.58:42984 [15/Apr/2022:22:47:35.669] https-in https-websrv/ha1server-2 1/0/+0 +0 — 6/6/6/6/0 0/0
Apr 15 22:47:35 pcfr_hware_local_ip haproxy[2914]: 159.223.65.16:54006 [15/Apr/2022:22:47:35.703] https-in https-websrv/ha1server-2 1/0/+0 +0 — 7/7/7/7/0 0/0
Apr 15 22:47:36 pcfr_hware_local_ip haproxy[2914]: 192.241.113.203:30877 [15/Apr/2022:22:47:36.651] https-in https-websrv/ha1server-2 1/0/+0 +0 — 4/4/4/4/0 0/0
Apr 15 22:47:36 pcfr_hware_local_ip haproxy[2914]: 185.191.171.9:6776 [15/Apr/2022:22:47:36.683] https-in https-websrv/ha1server-2 1/0/+0 +0 — 5/5/5/5/0 0/0
Apr 15 22:47:36 pcfr_hware_local_ip haproxy[2914]: 159.223.65.16:64310 [15/Apr/2022:22:47:36.797] https-in https-websrv/ha1server-2 1/0/+0 +0 — 6/6/6/6/0 0/0
Apr 15 22:47:36 pcfr_hware_local_ip haproxy[2914]: 185.191.171.3:23364 [15/Apr/2022:22:47:36.834] https-in https-websrv/ha1server-2 1/1/+1 +0 — 7/7/7/7/0 0/0

 

2. Enable remoteip proxy protocol on Webservers

Log in to each Apache HTTPD and, to enable the remoteip module, run:
 

# a2enmod remoteip


On Debian, the command should produce a symlink in the mods-enabled/ directory:
 

# ls -al /etc/apache2/mods-enabled/*remote*
lrwxrwxrwx 1 root root 31 Mar 30  2021 /etc/apache2/mods-enabled/remoteip.load -> ../mods-available/remoteip.load

 

3. Modify remoteip.conf file and allow IPs of haproxies or F5s

 

Configure a RemoteIPTrustedProxy entry for every source IP of the haproxies, to allow them to pass the X-Forwarded-For header to Apache.

Here are a few examples from my working Apache config on Debian 11.2 (Bullseye):
 

webserver:~# cat remoteip.conf
RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy 192.168.0.1
RemoteIPTrustedProxy 192.168.0.205
RemoteIPTrustedProxy 192.168.1.15
RemoteIPTrustedProxy 192.168.0.198
RemoteIPTrustedProxy 192.168.2.33
RemoteIPTrustedProxy 192.168.2.30
RemoteIPTrustedProxy 192.168.0.215
#RemoteIPTrustedProxy 51.89.232.41

On RedHat / Fedora and other RPM based Linux distributions, you can do the same by including inside httpd.conf or the virtualhost configuration something like:
 

<IfModule remoteip_module>
      RemoteIPHeader X-Forwarded-For
      RemoteIPInternalProxy 192.168.0.0/16
      RemoteIPTrustedProxy 192.168.0.215/32
</IfModule>


4. Enable RemoteIP Proxy Protocol in apache2.conf / httpd.conf or Virtualhost custom config
 

Modify the haproxy config(s) as shown above, then enable the RemoteIP module on the Apache webservers and, either in the <VirtualHost> block (if such is used) or in the main http config, include:

RemoteIPProxyProtocol On
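For reference, here is a minimal sketch of how the directive could sit inside a VirtualHost – the ServerName and DocumentRoot are placeholders, and keep in mind that on name-based vhosts sharing the same IP:port the setting has to be consistent across all of them:

<VirtualHost *:80>
    # placeholder values, adapt to your site
    ServerName www.example.com
    DocumentRoot /var/www/html
    # expect the PROXY protocol preamble sent by haproxy's send-proxy
    RemoteIPProxyProtocol On
</VirtualHost>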


5. Change default configured Apache LogFormat

In Domain Vhost or apache2.conf / httpd.conf

Default logging Format will be something like:
 

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined


or
 

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

 

Once you find it in /etc/apache2/apache2.conf / httpd.conf or the Vhost config, comment out the default lines by adding a hash (#) in front of them and replace them so they look as follows:
 

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent


The changed LogFormat instructs Apache to log the client IP as recorded by mod_remoteip (%a) rather than the remote hostname / connection address (%h). For a full explanation of all the options check the official HTTP Server documentation page apache_mod_config on Custom Log Formats.

and reload each Apache server.
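Before reloading, it doesn't hurt to syntax-check the changed configuration first (Debian and RPM flavours respectively):

# apache2ctl configtest
# httpd -t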

on Debian:

# apache2ctl -k reload

On CentOS

# systemctl restart httpd


6. Check proxy protocol is properly enabled on Apaches

 

The remoteip module with the proxy protocol enabled makes Apache expect a PROXY protocol header on every connection; otherwise it will respond with Bad Request, because it detects a plain HTTP request instead of the PROXY protocol preamble. Here is the usual telnet test to fetch the index page:

root@webserver:~# telnet localhost 80
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1

HTTP/1.1 400 Bad Request
Date: Fri, 15 Apr 2022 19:04:51 GMT
Server: Apache/2.4.51 (Debian)
Content-Length: 312
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
</p>
<hr>
<address>Apache/2.4.51 (Debian) Server at grafana.pc-freak.net Port 80</address>
</body></html>
Connection closed by foreign host.

 

root@webserver:~# telnet localhost 80
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
HEAD / HTTP/1.1

HTTP/1.1 400 Bad Request
Date: Fri, 15 Apr 2022 19:05:07 GMT
Server: Apache/2.4.51 (Debian)
Connection: close
Content-Type: text/html; charset=iso-8859-1

Connection closed by foreign host.


To test it with telnet, you can follow the proxy CONNECT syntax and simulate that you're connecting from a proxy server, like that:
 

root@webserver:~# telnet localhost 80
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
CONNECT localhost:80 HTTP/1.0

HTTP/1.1 301 Moved Permanently
Date: Fri, 15 Apr 2022 19:13:38 GMT
Server: Apache/2.4.51 (Debian)
Location: https://zabbix.pc-freak.net
Cache-Control: max-age=900
Expires: Fri, 15 Apr 2022 19:28:38 GMT
Content-Length: 310
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://zabbix.pc-freak.net">here</a>.</p>
<hr>
<address>Apache/2.4.51 (Debian) Server at localhost Port 80</address>
</body></html>
Connection closed by foreign host.

You can test with curl, simulating the PROXY protocol header, with:

root@webserver:~# curl --insecure --haproxy-protocol https://192.168.2.30

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta name="generator" content="pc-freak.net tidy">
<script src="https://ssl.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-2102595-3";
urchinTracker();
</script>
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
var pageTracker = _gat._getTracker("UA-2102595-6");
pageTracker._trackPageview();
} catch(err) {}
</script>

 

      --haproxy-protocol
              (HTTP) Send a HAProxy PROXY protocol v1 header at the beginning of the connection. This is used by some load balancers and reverse proxies
              to indicate the client's true IP address and port.

              This option is primarily useful when sending test requests to a service that expects this header.

              Added in 7.60.0.


7. Check apache log if remote Real Internet Source IPs are properly logged
 

root@webserver:~# tail -n 10 /var/log/apache2/access.log

213.183.53.58 – – [15/Apr/2022:22:18:59 +0300] "GET /proxy/browse.php?u=https%3A%2F%2Fsteamcommunity.com%2Fmarket%2Fitemordershistogram%3Fcountry HTTP/1.1" 200 12701 "https://www.pc-freak.net" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0"
88.198.48.184 – – [15/Apr/2022:22:18:58 +0300] "GET /blog/iq-world-rank-country-smartest-nations/?cid=1330192 HTTP/1.1" 200 29574 "-" "Mozilla/5.0 (compatible; DataForSeoBot/1.0; +https://dataforseo.com/dataforseo-bot)"
213.183.53.58 – – [15/Apr/2022:22:19:00 +0300] "GET /proxy/browse.php?u=https%3A%2F%2Fsteamcommunity.com%2Fmarket%2Fitemordershistogram%3Fcountry
HTTP/1.1" 200 9080 "https://www.pc-freak.net" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0"
159.223.65.16 – – [15/Apr/2022:22:19:01 +0300] "POST //blog//xmlrpc.php HTTP/1.1" 200 5477 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"
159.223.65.16 – – [15/Apr/2022:22:19:02 +0300] "POST //blog//xmlrpc.php HTTP/1.1" 200 5477 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"
213.91.190.233 – – [15/Apr/2022:22:19:02 +0300] "POST /blog/wp-admin/admin-ajax.php HTTP/1.1" 200 1243 "https://www.pc-freak.net/blog/wp-admin/post.php?post=16754&action=edit" "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0"
46.10.215.119 – – [15/Apr/2022:22:19:02 +0300] "GET /images/saint-Paul-and-Peter-holy-icon.jpg HTTP/1.1" 200 134501 "https://www.google.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.75 Safari/537.36 Edg/100.0.1185.39"
185.191.171.42 – – [15/Apr/2022:22:19:03 +0300] "GET /index.html.latest/tutorials/tutorials/penguins/vestnik/penguins/faith/vestnik/ HTTP/1.1" 200 11684 "-" "Mozilla/5.0 (compatible; SemrushBot/7~bl; +http://www.semrush.com/bot.html)"

116.179.37.243 – – [15/Apr/2022:22:19:50 +0300] "GET /blog/wp-content/cookieconsent.min.js HTTP/1.1" 200 7625 "https://www.pc-freak.net/blog/how-to-disable-nginx-static-requests-access-log-logging/" "Mozilla/5.0 (compatible; Baiduspider-render/2.0; +http://www.baidu.com/search/spider.html)"
116.179.37.237 – – [15/Apr/2022:22:19:50 +0300] "GET /blog/wp-content/plugins/google-analytics-dashboard-for-wp/assets/js/frontend-gtag.min.js?ver=7.5.0 HTTP/1.1" 200 8898 "https://www.pc-freak.net/blog/how-to-disable-nginx-static-requests-access-log-logging/" "Mozilla/5.0 (compatible; Baiduspider-render/2.0; +http://www.baidu.com/search/spider.html)"

 

You can see from the above output that the remote Source IPs are properly logged, so the haproxy Cluster is correctly forwarding connections, passing on the real remote client IP in the Haproxy generated initial header.


Sum it up, What was done?


HTTP_X_FORWARDED_FOR is impossible to set when haproxy is used in mode tcp and all traffic is passed on exactly as received from the TCP IPv4 / IPv6 network stack, e.g. modifying anything inside the HTTP headers is not possible as this might break the data.

Thus Haproxy was configured to prepend an initial PROXY protocol header carrying the original Source IP (via the send-proxy functionality) to all the traffic it forwards, then the remoteip Apache module was used to make Apache receive and understand the haproxy sent header, and an example was given on how to test that remoteip on the Webserver is working correctly.

Finally, you've seen how to check that the configured haproxy and webserver are able to send and receive the end client data with the originator's real source IP, and that this Internet IP is properly logged inside both haproxy and the apaches.

Howto debug and remount a hung NFS filesystem on Linux

Monday, August 12th, 2019

nfsnetwork-file-system-architecture-diagram

If you're actively using NFS remote storage attached to your Linux server, it is very useful to check the number of dropped NFS connections, to make sure you don't have remote NFS server issues or network connectivity drops caused by a broken network switch, a Cisco hub or another network hop device that routes the traffic from the Source Host (SRC) to the Destination Host (DST). In the perfect case the NFS storage and the mounted Linux network filesystem should show zero (0) dropped connections, or their number should at least be low. Firewall connectivity between the Source NFS client host and the Destination NFS Server should be set up fine, proper permissions should be assigned on the server, the DST NFS server should not be experiencing I/O overloads, and no DNS issues should be present (if NFS is not accessed directly via IP address).
The article below, which is mostly aimed at NFS novice admins, shortly describes a few of the nuances of working with NFS.
 

1. Check nfsstat and portmap for issues

One indicator that everything is fine with a configured NFS mount is a zero (or at least very low) count of dropped NFS connections. To check it, if you happen to administer NFS, use:

nfsstat

 

linux:~# nfsstat -o net
Server packet stats:
packets    udp        tcp        tcpconn
0          0          0          0  


nfsstat is useful if you have to debug why occasionally NFS mounts are getting unresponsive.

As NFS is so dependent upon the portmap service for mapping the ports, another thing to check in case of hung NFS mounts is the portmap service, i.e. whether it has not crashed for some reason.

 

linux:~# service portmap status
portmap (pid 7428) is running…   [portmap service is started.]

 

linux:~# ps axu|grep -i rpcbind
_rpc       421  0.0  0.0   6824  3568 ?        Ss   10:30   0:00 /sbin/rpcbind -f -w


Useful commands to further debug RPC-caused issues are:

On client side:

 

rpcdebug -m nfs -c

 

On server side:

 

rpcdebug -m nfsd -c

 

It might also be useful to check whether the remote NFS permissions have not changed, with the good old showmount cmd:

linux:~# showmount -e rem_nfs_server_host


It is also useful to check whether the /etc/exports file was not modified somehow and whether the NFS did not hang due to an attempt of the NFS daemon to reload the new configuration from there. Another file to check while debugging is /etc/nfs.conf (are there group / permissions issues?), as well as the usual /var/log/messages and the kernel log, with the dmesg command, for weird NFS client / server or network messages.
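If you have access to the NFS server itself, the exportfs tool is handy both to list what is currently being exported and to re-apply /etc/exports after changes without a full NFS restart:

linux:~# exportfs -v          [lists active exports together with their options]
linux:~# exportfs -ra         [re-exports everything defined in /etc/exports]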

nfs-utils disabled serving NFS over UDP in version 2.2.1. Arch core updated to 2.3.1 on 21 Dec 2017 (skipping over 2.2.1.) If UDP stopped working then, add udp=y under [nfsd] in /etc/nfs.conf. Then restart nfs-server.service.

If the remote NFS server is also running Linux, it is useful to check its /etc/default/nfs-kernel-server configuration.

In some stall cases it might also be useful to remount the NFS, but as there might be a process on the Linux server trying to read / write data from the remote NFS mounted FS, it is a good idea to check whether a process / service on the server is doing I/O operations on the NFS, and if such exists, to kill the process in question with fuser:
 

linux:~# fuser -k [mounted-filesystem]
 

 

2. Diagnose the problem interactively with htop


    Htop should be your first port of call. The most obvious symptom will be a maxed-out CPU.
    Press F2, and under "Display options", enable "Detailed CPU time". Press F1 for an explanation of the colours used in the CPU bars. In particular, is the CPU spending most of its time responding to IRQs, or in Wait-IO (wio)?
 

3. Get more extensive Mount info with mountstats

 

The nfs-utils package contains the mountstats command, which is very useful in further debugging the issues identified:

$ mountstats
Stats for example:/tank mounted on /tank:
  NFS mount options: rw,sync,vers=4.2,rsize=524288,wsize=524288,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,proto=tcp,port=0,timeo=15,retrans=2,sec=sys,clientaddr=xx.yy.zz.tt,local_lock=none
  NFS server capabilities: caps=0xfbffdf,wtmult=512,dtsize=32768,bsize=0,namlen=255
  NFSv4 capability flags: bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=notconfigured
  NFS security flavor: 1  pseudoflavor: 0

 

NFS byte counts:
  applications read 248542089 bytes via read(2)
  applications wrote 0 bytes via write(2)
  applications read 0 bytes via O_DIRECT read(2)
  applications wrote 0 bytes via O_DIRECT write(2)
  client read 171375125 bytes via NFS READ
  client wrote 0 bytes via NFS WRITE

RPC statistics:
  699 RPC requests sent, 699 RPC replies received (0 XIDs not found)
  average backlog queue length: 0

READ:
    338 ops (48%)
    avg bytes sent per op: 216    avg bytes received per op: 507131
    backlog wait: 0.005917     RTT: 548.736686     total execute time: 548.775148 (milliseconds)
GETATTR:
    115 ops (16%)
    avg bytes sent per op: 199    avg bytes received per op: 240
    backlog wait: 0.008696     RTT: 15.756522     total execute time: 15.843478 (milliseconds)
ACCESS:
    93 ops (13%)
    avg bytes sent per op: 203    avg bytes received per op: 168
    backlog wait: 0.010753     RTT: 2.967742     total execute time: 3.032258 (milliseconds)
LOOKUP:
    32 ops (4%)
    avg bytes sent per op: 220    avg bytes received per op: 274
    backlog wait: 0.000000     RTT: 3.906250     total execute time: 3.968750 (milliseconds)
OPEN_NOATTR:
    25 ops (3%)
    avg bytes sent per op: 268    avg bytes received per op: 350
    backlog wait: 0.000000     RTT: 2.320000     total execute time: 2.360000 (milliseconds)
CLOSE:
    24 ops (3%)
    avg bytes sent per op: 224    avg bytes received per op: 176
    backlog wait: 0.000000     RTT: 30.250000     total execute time: 30.291667 (milliseconds)
DELEGRETURN:
    23 ops (3%)
    avg bytes sent per op: 220    avg bytes received per op: 160
    backlog wait: 0.000000     RTT: 6.782609     total execute time: 6.826087 (milliseconds)
READDIR:
    4 ops (0%)
    avg bytes sent per op: 224    avg bytes received per op: 14372
    backlog wait: 0.000000     RTT: 198.000000     total execute time: 198.250000 (milliseconds)
SERVER_CAPS:
    2 ops (0%)
    avg bytes sent per op: 172    avg bytes received per op: 164
    backlog wait: 0.000000     RTT: 1.500000     total execute time: 1.500000 (milliseconds)
FSINFO:
    1 ops (0%)
    avg bytes sent per op: 172    avg bytes received per op: 164
    backlog wait: 0.000000     RTT: 2.000000     total execute time: 2.000000 (milliseconds)
PATHCONF:
    1 ops (0%)
    avg bytes sent per op: 164    avg bytes received per op: 116
    backlog wait: 0.000000     RTT: 1.000000     total execute time: 1.000000 (milliseconds)


 

4. Check for firewall issues
 

If all else fails, make sure you don't have any kind of firewall issue. Sometimes firewall changes on the remote server or somewhere in the routing servers might lead to stalled NFS mounts.

 

To use NFS properly, as a minimum you need to have ports 111 (TCP and UDP) and 2049 (TCP and UDP) opened on the NFS server (side), as well as on any traffic-inspecting routers on the road from the SRC (Linux client host) to the NFS Storage destination DST server.

There are also ports for Cluster and client status (Port 1110 TCP for the former, and 1110 UDP for the latter) as well as a port for the NFS lock manager (Port 4045 TCP and UDP), but whether these need to be opened depends on how the NFS is configured. You can further determine which ports you need to allow depending on which services are needed cross-gateway.
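As a rough sketch (assuming a firewalld based NFS server; zones and any source restrictions are up to your own policy), opening the needed services would look something like:

linux:~# firewall-cmd --permanent --add-service=rpc-bind
linux:~# firewall-cmd --permanent --add-service=mountd
linux:~# firewall-cmd --permanent --add-service=nfs
linux:~# firewall-cmd --reload

On plain iptables setups the equivalent is to allow TCP and UDP destination ports 111 and 2049 from the NFS client hosts.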
 

5. How to Remount a Stalled unresponsive NFS filesystem mount

 

In many cases remounting a stalled NFS filesystem is not so easy, but if you're lucky a standard umount and mount should do the trick.

The simplest way to remount the NFS (once you're sure this will not disrupt any service – don't blame me if you break something) is:
 

umount -l /mnt/NFS_mnt_point
mount /mnt/NFS_mnt_point


Note that the lazy (-l) umount option is used here, as very often this is the only way to unmount a stalled NFS mount.

Sometimes, if you have a lot of NFS mounts and all are inaccessible, it is useful to remount all NFS mounts; if the remote NFS is responsive, this should be possible with a simple bash for loop:

for P in $(mount | awk '/type nfs / {print $3;}'); do echo $P; echo "sudo umount $P && sudo mount $P" && echo "ok :)"; done


If you cd to /mnt/NFS_mnt_point, try an ls and you get

$ ls
.: Stale File Handle

 

you will need to unmount the FS with the force umount flag:

umount -f /mnt/NFS_mnt_point
 

Sum it up


In this article, I've shown you a few simple ways to debug what is wrong with a stalled / hanged NFS filesystem exported by an NFS server and mounted on a Linux client server.
I explained the common issues caused by the NFS portmap (rpcbind) dependency and how to check that its status is fine, and pointed at some further diagnosis with htop and mountstats. I've listed the minimum set of TCP / UDP ports (2049 and 111) that need to be opened for the NFS communication to work, and finally explained how to remount a single stalled NFS mount, or all attached mounts, on an NFS client to restore normal operations.
As NFS is a whole ocean of things and the number of ways it is used is too extensive, this article is just general info useful for the NFS dummy admin. For more robust configs read a good book on NFS such as Managing NFS and NIS, 2nd Edition – O'Reilly Media, and for Kernel related NFS debugging make sure you check as a minimum ArchLinux's NFS troubleshooting guide and sourceforge's NFS Troubleshooting and Optimizing NFS Performance guides.

 

How to add (.srt , .sub) subtitles to .flv flash movie video on Linux

Friday, April 15th, 2011

how-to-add-srt-subtitles-to-flv-flash-movie-video-on-linux
If you're on Linux, questions like "how can I convert between video and audio formats", "how do I do photo editing" etc. have always been tough ones, as with its diversity Linux often allows too many ways to do the same thing.

In the spirit of questioning, I have recently been curious: how can subtitles be added to a flash video (.flv) file?

After some research online I've come up with the suggested solution below, which uses mplayer to do the inclusion of the subtitles file with the flash video.

mplayer your_flash_movie.flv -fs -subfont-text-scale 3

While including the subtitles into the .flv file, it's best to close all active browsers and whatever else is running on the desktop.
Note that the above mplayer example for (.srt and .sub) subtitle files is only appropriate for .flv movie files which already have third party published subtitle files.

What is interesting is that if you want to make custom subtitles for, let's say, a video downloaded from Youtube on Linux, the mplayer way pointed out above will often be useless. Why?

Well, the Linux programs that allow a user to add custom subtitles to a movie do not support the flv (flash video) file format.

My idea on how to create custom subtitles and embed them into a flv movie file is very simple and it goes like this:

1. Convert the .flv file format to let's say .avi or .mpeg
2. Use gnome-subtitles or subtitleeditor to create the subtitles for the .avi or .mpeg file
3. Convert back the .avi/.mpeg file with included subtitles to .flv (flash video format)

This methodology is really long and time consuming, but sadly, as far as my understanding goes, it's the only way to do it on Linux for now.

To make the conversions between the .flv and .avi formats you will need to use ffmpeg (the FFmpeg command line video converter), here is how:

– Convert .flv to .avi

debian:~# /usr/bin/ffmpeg -i input_flvfilename.flv output_avifilename.avi

– Convert .avi file to .flv

debian:~# /usr/bin/ffmpeg -y -i /path/to/your/avi/input_avifilename.avi -acodec mp3 -ar 22050 -f flv /path/to/your/flv/output_flvfilename.flv
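Putting the three steps together, a rough sketch of the whole round trip could look like the short script below (the file names are only examples, and the subtitle editing in the middle is of course a manual step):

#!/bin/bash
# 1. convert the flash video to avi
/usr/bin/ffmpeg -i movie.flv movie.avi
# 2. create movie.srt for movie.avi with gnome-subtitles or subtitleeditor (by hand)
# 3. convert the avi back to flash video
/usr/bin/ffmpeg -y -i movie.avi -acodec mp3 -ar 22050 -f flv movie-with-subs.flv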

The required overall tools which you will have to have installed on your Debian or Ubuntu Linux are:

1. ffmpeg
2. gnome-subtitles
3. subtitleeditor
4. mplayer

You will also have to spend some time to get to know gnome-subtitles or subtitleeditor, but it won't be that long until you get the idea on how to use them.

Ditaa convert ASCII diagrams into bitmap graphic (pictures)

Monday, May 12th, 2014

ditta_convert-ascii-art-diagram-to-png-jpg-picture
As part of my passion for ASCII art, I've found another interesting tool, useful to ASCII art maniacs like me. The tool is called ditaa and is able to convert manually drawn ASCII art diagrams to graphics; below is the tool description from my Debian apt-cache, as well as a screenshot:

 apt-cache show ditaa|grep -i ditaa -A 4

Package: ditaa
Priority: optional
Section: graphics
Installed-Size: 164
Maintainer: David Paleino <dapal@debian.org>

Filename: pool/main/d/ditaa/ditaa_0.9+ds1-2_all.deb
Size: 107270
MD5sum: 05ec52d9274b954b053f1835ca5d7a7f
SHA1: 792d91d05fff2a2a19c0ebce317351d138436c18
SHA256: c4319d32e7918aab782e2f38cdad745bc9023f9f09a999033d983095ee4f70d5

 DiTAA is a small command-line utility that can convert diagrams drawn using
 ASCII art ("drawings" that contain characters that resemble lines, like | /
 and -), into proper bitmap graphics.
 .
 DiTAA also uses special markup syntax to increase the possibilities of shapes
 and symbols that can be rendered.
Homepage: http://ditaa.org

 

To install ditaa on Debian and Ubuntu Linux:

debian:~# apt-get install --yes ditaa
...

The Ditaa text diagram to graphics converter is also available in Fedora Linux and as Source RPMs to be used on RedHat based RPM distributions.
To install it on most RPM based Linuxes:

[root@fedora:~]# yum install -y ditaa

For most people Ditaa will probably not be of any value except as a PoC and for its hack value, just like Ditaa's home page suggests. No matter that Ditaa is cool, it has just 2 drawbacks: it doesn't understand non-latin characters (i.e. Cyrillic) and it requires a Java Virtual Machine .. but if you're a real geek you will make the sacrifice to install the whole heavy Java bunch for the sake of some oldschool fun 🙂 Being written in Java makes Ditaa multi-platform, but you will need a Java VM version of at least 1.6 (it doesn't work with Java 1.5).

The format Ditaa understands is close to HTML:

<ditaa [optional parameters]>
 
... (some ditaa-code) ...
 
</ditaa>


There are also special tags understood by Ditaa which are automatically turned into shaped graphical buttons and forms.
 

Possible tags

Not all shape selector tags are documented on the ditaa site. A quick source scan revealed:

Tag    Description
{c}    decision (Choice)
{d}    document
{io}   input/output, parallelogram
{mo}   manual operation
{o}    ellipse, circle
{s}    storage
{tr}   trapezoid (looks like an inverted {mo})

Here is an example of ditaa code:
 

<ditaa round noedgesep right>
+--------+   +-------+    +-------+
|        | --+ ditaa +--> |       |
|  Text  |   +-------+    |diagram|
|Document|   |!magic!|    |       |
|     {d}|   |  c478 |    |       |
+---+----+   +-------+    +-------+
:                         ^
|       Lots of work      :
+-------------------------+
</ditaa>

This ditaa code will generate the following picture:

ditaa_convert-ascii-art-picture-to-graphic-png-jpg-bmp
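To actually render such a diagram from the command line, save the ASCII block (without the surrounding <ditaa> tags) into a plain text file and run the tool on it; the file names below are just examples:

debian:~# ditaa diagram.txt diagram.png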


To learn more about ditaa please check Ditaa's Project homepage on Sourceforge.
Many thanks to Cybercity's 30 Cool Open Source Software of 2013 for inspiring this post.

Restart hung Mac OS application – How to kill programs in Mac OS – alternative of Windows CTRL + ALT + DEL

Friday, May 23rd, 2014

mac-kill-hung-application-killing-freezed-application-on-mac-osx
If you happen to hit the rare case of a hung Mac OS X application and you're coming from a Linux / Windows background, you will certainly be wondering how to kill the hung Mac OS X application.
In Mac OS the 3 golden buttons to kill a crashed application are:

 

COMMAND + OPTION + ESCAPE

Command + Option + Escape

while pressed simultaneously is the Mac Computer equivalent of Windows CTRL + ALT + DEL

mac-os-force-kill-hung-application-force-program-exit-mac-OSX

Holding COMMAND + OPTION + ESCAPE together on Mac OS brings up the Force Quit Window, showing and letting you choose from the list of open applications. To close a frozen Mac application, choose it and press the Force Quit button; this will immediately kill that application.

To directly end an application without invoking the Force Quit Window menu, to force a hanging app to quit, right click on its icon in the Dock (CTRL + Click) and choose "Force Quit" from the context menu.

A little bit more on why applications hang in Mac OS. Each application in Mac OS has its own event queue. The event queue is created on initial application launch; it is a buffer that accepts input from the system (this could be user input from keyboard or mouse, messages passed from other programs etc.). A program is hanging when the system detects that queued events are not being consumed.

macOSX-EventQueue-reason-for-application-hung-in-macs
Another reason for a Mac OS program hanging is attaching / detaching new hardware peripherals (i.e. problems caused by improper mounts / unmounts); the same hang issues are often observed on BSD and Linux. Sometimes just re-connecting the device (mouse, external hdd etc.) resolves it.
Program hangs due to buggy software are much rarer on Macs, just like on iPhones and iPads, due to the fact that mac applications are very well tested before being published in the appstore.

Issues with program hangs in Mac sometimes happen after "sleep mode" during the "system wake" function – closing and opening the macbook. If a crashed program is of critical importance and you don't want to "Force Quit" it with COMMAND + OPTION + ESC, try sending the PC to sleep mode for a minute or 2 by pressing OPTION + COMMAND + EJECT together.

An alternative approach to solve a hanging app issue is to force-quit Finder and Dock. To try that, launch Terminal

And type there:

# killall Dock
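and, if it is Finder itself that misbehaves, the same trick works for it too (like the Dock, it is restarted automatically right after being killed):

# killall Finder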

Another useful to know Mac OS keyboard combination is COMMAND + OPTION + POWER. Hold Command and Option together and after a while press Power – this is a shortcut to instruct your Mac PC to reboot.

Ancient Christian Coptic Oriental Orthodox icons – The reason for asymmetric body members in early Christian iconography

Monday, July 30th, 2012

While checking some information on the Coptic Eastern Oriental faith, I've stumbled upon a very beautiful (and unique) ancient Orthodox Christian icon depicting Saint Menas and our Saviour the Lord Jesus Christ; below is the very beautiful icon

Saint Mena (Sv. Mina) and The Lord Jesus Christ icon from 6th century

Saint Mena (Sv. Mina) and The Lord Jesus Christ icon from 6th century

As you can see, the iconography is very interesting; the images differ from modern day iconography, the portraits do not look so serious but rather "childish". These childish forms and faces in early Christian iconography are not accidental; they express the childlike pure faith our devoted Christian ancestors had. This early Christian faith and spiritual life icon is obviously in conjunction with our Saviour Jesus Christ's words as read in the Gospel according to Matthew:
 

At that time the disciples came to Jesus, saying, "Who then is greatest in the kingdom of heaven?" Then Jesus called a little child to Him, set him in the midst of them, and said, "Assuredly, I say to you, unless you are converted and become as little children, you will by no means enter the kingdom of heaven. Therefore whoever humbles himself as this little child is the greatest in the kingdom of heaven. Whoever receives one little child like this in My name receives Me.

Matthew 18:1-5

This icon, as well as the other early Christian icons, is very different from nowadays iconography, probably for a reason;
the difference in the images, the seriousness and the lack of brightness in the faces of nowadays iconography is a clear sign of the great decay of Orthodox Christians as well as of the downfall of spiritual life worldwide.
I've seen similar childish looking icons in some Bulgarian ancient relics museums in my childhood years and always thought the depictions are so kiddish because the iconographers of that time did not have the painting knowledge and skills to draw better ones.
Now as I know Christianity much better than then, I understand my previous assumption about the reason for the kiddish looking images is wrong.
Saint Mena (Sv. Mina) and The Lord Jesus Christ icon from 6th century

Very interesting in early Christian iconography are the shapes. If you take a close look at the above icon, you will notice the disparity of the body members; the hands, head and eyes are unusually big. My guess for the lack of correspondence of body members is the attempt of early iconographers to put an accent on the most important members of our bodies;

– The head (holding the mind and thoughts of the saints)
– The hands, through which the daily food is raised, and the eyes, through which the world is comprehended, are much bigger than in a real person's portrait.
– The mouth is almost the size of the eyes; an obvious reference that for early Christians contemplating was a much more precious (important) thing than speech.
This is also in accordance with the New Testament holy scriptures, which say the following concerning the tongue:
 

8 But no one can tame the tongue; it is a restless evil and full of deadly poison.
9 With it we bless our Lord and Father, and with it we curse men, who have been made in the likeness of God;
10 from the same mouth come both blessing and cursing. My brethren, these things ought not to be this way.
11 Does a fountain send out from the same opening both fresh and bitter water?
12 Can a fig tree, my brethren, produce olives, or a vine produce figs?
Nor can salt water produce fresh.

Notice also the halos of the two saints: the size of the halos is almost one third of the whole body of the saints. The Gospel held by our and all humanity's Saviour Jesus Christ is also enormously sized, corresponding almost to the length of Christ's arm on the icon.
The size of the Gospel stresses the importance of the Holy Bible writings for early Christians. Nowadays the size of Gospels or the Holy Bible, especially among the protestant Churches' "tradition", is becoming smaller and smaller, following the spirit of the time proclaiming mobility …

Today the Orthodox Christian iconography "school" has severely changed and the icon images are much more complicated than in ancient times.
The complication of images and elements on Orthodox Icons is a "mirror" of the complicated internal world of us modern-time Christians. This over-complication of our internal spiritual world separates us from God instead of uniting us, as it is well known in the Holy Orthodox Christian tradition that God is best known through simplicity and pureness in life, thoughts and actions.

The Coptic Oriental Orthodox Church is the only Church where there are still iconographers drawing in the style of the ancient times' childish looking icons. The reason the Copts preserved this ancient iconography is that they have conserved a big portion of the ancient faith, rejecting the decisions of all 7 Orthodox Ecumenical Church Councils. Copts still accept only the ecumenical council decisions up to the III-rd ecumenical council. This is also the reason why Eastern Oriental Orthodox Christians are considered not in official communion with the rest of the Eastern Orthodox Churches. I had the opportunity, by God's grace, to meet a Coptic Orthodox Christian (a guy called Baky); from what I've seen and experienced within the few months with Baky, my conclusion is that Coptic Orthodox layman faith is much stronger than the one in most of the other Orthodox Christians I know. The official standpoint of our Eastern Orthodox Church concerning the Copts is that they're in heresy and not really orthodox. I'm not sure if this is really true, since I have spent a few months with this Coptic Christian brother this autumn and winter, and from what I've seen and heard, as well as researched, on Coptic Orthodoxy, it seems their overall Church teaching, Holy Liturgies and everything is very much orthodox (with very little service and faith differences). Here are a few beautiful Coptic Orthodox Christian icons still being drawn in the spirit of early days Christianity.

Saint Abba Anthony the Great Coptic Oriental orthodox Icon

Abba (saint) Anthony the Great the father of Orthodox Christian Monastic Life

Coptic Orthodox Oriental Icon Abba Anthony and saint Paul

Coptic Orthodox Oriental Icon of Saint Anothony the Great – "the founder" of Monastic life

Coptic Oriental Orthodox Icon Tobias old testamential Book story

Tobias Old Testamential Story coptic icon

Holy Family Flight into Egypt Coptic Orthodox Icon

Holy Family – Flight into Egypt Coptic Orthodox oriental icon

Christ the Saviour Coptic Oriental Orthodox icon

Christ the Saviour – Coptic Oriental Orthodox icon

Holy Theotokos Coptic Oriental Orthodox icon

Holy Theotokos Coptic Oriental icon

Saint Athanasius defender of pure orthodoxy Oriental Orthodox icon

Saint Athanasius coptic orthodox icon

The Dormition of Holy Theotokos Mother Mary Coptic Orthodox Oriental Icon

The Dormition of Virgin Mary Coptic icon

How to delete your linkedin account – I don’t want to be LINKED IN!

Tuesday, April 24th, 2012

I don't want to be linkedin, Linkedin is a fake and non-sense time wasting anti-social network

I've decided to delete my linkedin account as I don't see any good in constant connectedness and being part of many "social" networks which, if one thinks about it deeply, are not social but anti-social.

You just stay at home staring at a screen, and it will be like this until the end of your days, and even worse for the generations to come. The computer revolution or digital revolution is in reality a huge devolution (devil-ution).

To delete the linkedin account I used a short tutorial provided by this post

How to delete your linkedin account picture

To reach your Profile settings, use the upper right corner of your browser and follow the menus:

Settings -> Account -> Close your account

Once you try to delete your account, linkedin will try to manipulate you into staying in Linkedin by showing some of your contacts, pointing out how you will get disconnected from them.
I'm amazed how impudent these guys can be; actually, it's not just them. If you have tried to delete your facebook account before, you will have faced exactly the same thing. A profile (person's picture) which was recently browsed by you will be shown to you and you will be told you will be unable to connect with him any more. Well, who cares; if it is God's will we will connect again 🙂

The problem with us modern people is we're so deluded that we have started relying more on technology and human knowledge than on God. For most people who are atheists, relying more on technology than on God for their lives seems reasonable. However for us Christians, putting more trust in technology than in God's providence for us is sinful and deadly.
I'm starting to come to the conclusion that non-technological societies are happier than technological ones. In that sense, we the Bulgarians are blessed, because technology is not so widely spread.

Barcodes are dangerous for human freedom! Technology not trustable!

Monday, April 23rd, 2012

This post will be short as I'm starting to think long posts are mostly nonsense. Have you people ever wondered about barcoding?
All stores around the world now have barcoding. Barcode number regulations are orchestrated by certain bodies we people have no control over. Barcoding makes us dependent on technology, as only technology can be used to read and store barcodes. It is technology that issues the barcodes. We have come to a point where we humans trust technology more than our physical fellows. Trusting technology more than the people close to us is very dangerous. What if technology is not working as we expect it to?
What if there are hidden ways to control technology that we're not aware of?

Technology concepts are getting more and more crazy and abstract.
Think about virtualization for a while. Virtualization is being praised loudly these days and everyone is turning to it, thinking it is cheap and reliable? The facts I've seen and the little experience I had with it were far less than convincing.
Who came up with this stupid idea? Oh yes, I remember, IBM came up with this insane idea some 40 years ago … We had sanity for a while, not massively adopting IBM's bulk virtualization ideas, and now people have gone crazy again, using a number of virtualization technologies.
If you think about it for a while, Virtualization is unreality (non-existence) of matter over another unreality. The programs that make computers "run" are not existent in practice, they only exist in some form of electricity. It's just a sort of electric field, if you think about it on a conceptual level …
As we trust all our lives nowadays to technology, how do we know this technologically stored information is not altered by other fields? How can we be sure it always acts as we think it does and should? Was it tested for at least 40 years before adoption, as any new advancement should be?
Well, of course not! Everything new is just placed in our society without too much thinking. Someone gives the money for production, someone else buys it and installs it, and it's ready to go. Or at least that's how the consumers think, and we have all become consumers. This is a big LIE we're constantly being convinced of!
It is not ready to work, it is not tested, and we don't know what the consequences of it will be!
Technology and Genetically Modified Food are not so different, in that they both can produce unexpected results in our lives. And they're already producing the bad fruits, as you have surely seen.
You can see more and more people getting sick, more people going to the doctor, more people having to live daily on medication, living a miserable, unhealthy, I wouldn't even say life but a "poor" existence …
Next time they tell you new technology is good for you and will make your life better, don't believe them! This is not necessarily true.
Though today's technology can do you good, in my view the harm seriously exceeds the good.

The last two days

Saturday, March 17th, 2007

The last few days went smoothly in general. No drastic problems at work, which was a great blessing for me. Praise the Lord, O My Soul :] We lost two afternoons with Narf writing (actually translating) a text for a presentation we have in Business Communication, scheduled for Tuesday with Mrs. Svetla Stoyanova (a vice rector) at the college. Eating rice with vegetables in the new Chinese restaurant has become something like a tradition. I watched two very intriguing films about Quantum Physics/Mechanics, which is on its way to proving that the spiritual exists. On a low level the Universe is even more magnificent than on the surface. I also watched a very interesting film about "The Secret of the Water", which was based on the scientifically researched claim that water is "alive" and has memory just like computer RAM. Yesterday night we went out for just 20 or 30 minutes and met a friend of Nomen who invited us for a beer. He told us a very interesting fact: that in Dubai, Arabian men who are usually close relatives walk over the streets holding their hands like gays (for their traditions this is something like a great honor); even in the pubs these close relatives sit close to one another :]]. After that we were just on our way home when we met Paco; we walked for some time and spoke about our Faith in Christ. Paco said he didn't feel spiritually okay the last days, so I tried to comfort him with some truths that flowed out of my heart. Today I was out for a coffee with Lily and she got depressed again (I really hate this thing). I heard a great Christian Industrial band on http://christianindustrial.net/ (Dead Turns Alive); what I personally like in the band is the old school sound they were able to put into their music – actually they play EBM. Here is a link to their video you can enjoy: http://www.deadturnsalive.com/video/influence.avi . I was at my grandmother's this evening; I really feel bad about this old sweet granny. She is so good hearted and like a model for what we the believers should be; sadly she has real problems with her ears and her eyesight, and the diabetes is making her suffer badly… I hope she'll be better. I prayed the Lord, laying hands on her, to help her at least a little. During the day and in the mornings and the evenings I prayed for a group of people and the World in general. There is also good news. I spoke with Slavcho (a brother in Christ); a month ago he suffered an amputation of his right leg. Slavcho is in a really terrible material/financial state. Every month he is living on the edge, living on 120 lv. per month, of which 40 lv. he has to give for the rent of the apartment he lives in with his aunt. But God knows our needs and did a glorious thing for him. Slavcho told me his testimony today. He was in two protestant Churches in Plovdiv, where he currently is for an examination of his health state after the amputation. He told his story: how he grew up as an orphan, how he believed in and received Christ and how hard his life was. The members of one of the Churches decided they would collect a tithe and donate him the money. Yes, praise be to the Lord Sabaoth, The Lord of Hosts.. HalleluYah. Seems like the world still has some real servants, whilst the Devil is taking inch after inch after inch of the earth, leading so many blind people to his Satanic Kingdom. I looked at http://adsx.com – the probable technology matching the Prophecies we can read about in The Book of Revelation. The technology now serves 500 big Hospitals in the USA.
In only 5 or 6 months they put it into 200 new Hospitals … But I guess this is just the beginning of the integration of the "Number of the Beast" into the World Economy/Trade/HealthCare/Structure …END—–