Archive for the ‘Various’ Category

How to enable HAProxy logging to a separate log file /var/log/haproxy.log / prevent HAProxy duplicate messages from appearing in /var/log/messages

Wednesday, February 19th, 2020

HAProxy logging can be managed in different ways. The most straightforward is to write directly to /dev/log; alternatively, you can configure HAProxy to use a log management service such as syslog or rsyslogd for that.

If you don't use rsyslog yet, install it: 

# apt install -y rsyslog

Then, to activate logging via rsyslogd, either add the configuration directly to /etc/rsyslog.conf, or create a separate file that gets included from /etc/rsyslog.conf, with the following content:
 

Enable HAProxy logging via rsyslogd


To log HAProxy messages to a separate log file, you can use one of the usual syslog local0 to local7 locally-used facility descriptors inside the conf (be aware that if you try to use a wrong value like local8 or local9 as a logging facility, you will end up with an empty haproxy.log, even though the permissions of /var/log/haproxy.log are readable and it is owned by the haproxy user).

When logging to a local Syslog service, writing to a UNIX socket can be faster than targeting the TCP loopback address. Generally, on Linux systems, a UNIX socket listening for Syslog messages is available at /dev/log, because this is where the syslog() function of the GNU C library sends messages by default. To address the UNIX socket in haproxy.cfg use:

log /dev/log local2 


If you want to log each of multiple running haproxy instances (each with its own haproxy*.cfg) into a separate log, add to /etc/rsyslog.conf lines like:

local2.* -/var/log/haproxylog2.log
local3.* -/var/log/haproxylog3.log
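
For example, each instance's haproxy*.cfg would then log to its own facility (the file names here are just illustrative):

# in haproxy2.cfg
log /dev/log local2

# in haproxy3.cfg
log /dev/log local3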


One important note to make here: if haproxy is configured to log to rsyslogd over the network (e.g. to 127.0.0.1:514), you need to have the imudp module enabled in rsyslogd and a UDP port listener on the machine.

E.g. somewhere in rsyslog.conf, or via an rsyslog include file from /etc/rsyslog.d/*.conf, the following lines need to be defined:

$ModLoad imudp
$UDPServerRun 514


I prefer to use an external /etc/rsyslog.d/20-haproxy.conf include file that is loaded by rsyslogd via /etc/rsyslog.conf:

# vim /etc/rsyslog.d/20-haproxy.conf

$ModLoad imudp
$UDPServerRun 514
local2.* -/var/log/haproxy2.log
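
After creating the include file, restart rsyslog so it picks the new config up:

# systemctl restart rsyslog.service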


It is also possible to produce different haproxy log output based on the severity, to differentiate between important and less important messages; to do so you'll need to add to rsyslog.conf something like:
 

# Creating separate log files based on the severity
local0.* /var/log/haproxy-traffic.log
local0.notice /var/log/haproxy-admin.log

 

Prevent HAProxy duplicate messages from appearing in /var/log/messages

If you use local2 with a default rsyslog configuration, then the messages coming from haproxy towards the local2 facility will produce doubled simultaneous records in both your pre-defined /var/log/haproxy.log and /var/log/messages; on proxy servers that receive a few thousand simultaneous connections per second this is a real problem, since doubling the log produces too much data, and on systems with a smaller /var/ partition you will quickly run out of space. In addition, having haproxy requests logged to /var/log/messages makes the file quite unreadable for the normal system events which are so important for tracking clearly what is happening on the server daily.

To prevent the haproxy duplicate messages you need to add local2.none to the facility list of the rsyslogd line (usually in /etc/rsyslog.conf) that is configured to log to /var/log/messages:

*.info;mail.none;authpriv.none;cron.none;local2.none     /var/log/messages

This configuration should work, but it is more rarely used, as most people prefer not to have haproxy write directly to /dev/log, which is used by other services such as syslogd / rsyslogd.
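
An alternative way to stop the duplication (assuming a reasonably recent rsyslog) is to discard haproxy messages right after they are written to their own file, by placing these lines above the *.info … /var/log/messages rule:

local2.* /var/log/haproxy.log
& stop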

To output haproxy logs to /dev/log, use a config like this in the global section:
 

global
        log /dev/log local2 debug
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

The log global directive basically says: use the log line that was set in the global section for the whole config until the end of the file. Putting a log global directive into the defaults section is equivalent to putting it into all of the subsequent proxy sections.

Using global logging rules is the most common HAProxy setup, but you can put them directly into a frontend section instead. It can be useful to have a different logging configuration as a one-off. For example, you might want to point to a different target Syslog server, use a different logging facility, or capture different severity levels depending on the use case of the backend application. 
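
A minimal sketch of such a one-off frontend logging override (the address, facility and names below are hypothetical):

frontend ft_special_app
    bind :8080
    # this frontend logs to its own syslog target / facility instead of the global one
    log 192.168.0.10:514 local3 notice
    option httplog
    default_backend bk_special_app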

Instead of using the /dev/log interface, which on many distributions is heavily used by systemd to store / manage and distribute logs, many haproxy server sysadmins nowadays prefer to use rsyslogd as the default logging facility that will manage haproxy logs.
Admins prefer to use some kind of mediator service such as rsyslogd or syslog to manage log writing; the reasons behind this might vary, but perhaps the most important one is that with rsyslogd it is possible to write logs locally on disk and simultaneously forward them to a remote logging server running the rsyslogd service.

Logging is defined in /etc/haproxy/haproxy.cfg (or the respective configuration file) through the global section, but separate logging could also be configured per each of the defined frontend / backend or defaults sections. 
A sample excerpt from this section looks something like:

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    #option                  dontlognull
    #option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 7
    #timeout http-request    10s
    timeout queue           10m
    timeout connect         30s
    timeout client          20m
    timeout server          10m
    #timeout http-keep-alive 10s
    timeout check           30s
    maxconn                 3000

 

 

# HAProxy Monitoring Config
#---------------------------------------------------------------------
listen stats 192.168.0.5:8080                #Haproxy Monitoring run on port 8080
    mode http
    option httplog
    option http-server-close
    stats enable
    stats show-legends
    stats refresh 5s
    stats uri /stats                            #URL for HAProxy monitoring
    stats realm Haproxy\ Statistics
    stats auth hproxyauser:Password___          #User and Password for login to the monitoring dashboard

 

#---------------------------------------------------------------------
# frontend which proxies to the backends
#---------------------------------------------------------------------
frontend ft_DKV_PROD_WLPFO
    mode tcp
    bind 192.168.233.5:30000-31050
    option tcplog
    log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
    default_backend bk_DKV_PROD_WLPFO


#---------------------------------------------------------------------
# balancing between the various backends
#---------------------------------------------------------------------
backend bk_DKV_PROD_WLPFO
    mode tcp
    # (0) Load Balancing Method.
    balance source
    # (4) Peer Sync: a sticky session is a session maintained by persistence
    stick-table type ip size 1m peers hapeers expire 60m
    stick on src
    # (5) Server List
    # (5.1) Backend
    server Backend_Server1 10.10.10.1 check port 18088
    server Backend_Server2 10.10.10.2 check port 18088 backup


The log directive in the above config instructs HAProxy to send logs to the Syslog server listening at 127.0.0.1:514. Messages are sent with facility local2, which is one of the standard, user-defined Syslog facilities. It's also the facility that our rsyslog configuration is expecting. You can add more than one log statement to send output to multiple Syslog servers.
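
For example, a global section sending output to two Syslog targets at once (the second, remote address is hypothetical) could look like:

global
    log 127.0.0.1:514 local2 info
    log 10.10.5.5:514 local2 notice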

Once rsyslog and haproxy logging are configured, at minimum you need to restart rsyslog (assuming that the haproxy config is already properly loaded):

# systemctl restart rsyslog.service

To make sure rsyslog reloaded successfully:

# systemctl status rsyslog.service


Restarting HAProxy

If the rsyslogd logging to 127.0.0.1 port 514 was recently added, a HAProxy restart should also be run; you can do it with:
 

# /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)


Or restart via the systemctl script (if haproxy is not used in a cluster with corosync / heartbeat):

# systemctl restart haproxy.service

You can control how much information is logged by adding a Syslog severity level to the log line:

    log         127.0.0.1 local2 info


The accepted values are the standard syslog severity levels:

Value  Severity       Keyword  Deprecated  Description
0      Emergency      emerg    panic       System is unusable (a panic condition)
1      Alert          alert                Action must be taken immediately (e.g. a corrupted system database)
2      Critical       crit                 Critical conditions (e.g. hard device errors)
3      Error          err      error       Error conditions
4      Warning        warning  warn        Warning conditions
5      Notice         notice               Normal but significant conditions that may require special handling
6      Informational  info                 Informational messages
7      Debug          debug                Debug-level messages, normally of use only when debugging a program

 

Note that if rsyslog is configured to listen on a different port for some reason, you should not forget to set the proper port in the log directive as well, e.g.:

  log         127.0.0.1:514 local2 info

Logging only errors / timeouts / retries is done with:

option dontlog-normal

in the defaults or frontend section. You most likely want to enable this only during certain times, such as when performing benchmarking tests.

To define a custom log format, use the log-format (or log-format-sd for structured-data syslog) directive in your defaults or frontend section.
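
Putting the pieces together, a minimal sketch of a defaults section carrying these directives (mirroring the defaults style used earlier) might be:

defaults
    log     global
    mode    http
    option  httplog
    option  dontlog-normal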
 

HAProxy logging shortly explained


The type of logging you'll see is determined by the proxy mode that you set within HAProxy. HAProxy can operate either as a Layer 4 (TCP) proxy or as a Layer 7 (HTTP) proxy. TCP mode is the default. In this mode, a full-duplex connection is established between clients and servers, and no Layer 7 examination is performed. When in TCP mode, which is set by adding mode tcp, you should also add option tcplog. With this option, the log format defaults to a structure that provides useful information like Layer 4 connection details, timers, byte count and so on.

Below is an example of configured logging with some explanations:

log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"

[Image: haproxy logged fields explained]
Example of a log-format configuration for HTTP mode, as output by an haproxy config:

log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

[Image: haproxy HTTP log format fields explained]

To understand the meaning of these abbreviations you'll have to closely read haproxy-log-format.txt. More in-depth info is to be found in the HTTP Log format documentation

 

Logging HTTP request headers

HTTP request headers can be logged via the http-request capture directive:

frontend website
    bind :80
    http-request capture req.hdr(Host) len 10
    http-request capture req.hdr(User-Agent) len 100
    default_backend webservers


The log will show the captured headers between curly braces, separated by pipe symbols. Here you can see the Host and User-Agent headers for a request:

192.168.150.1:57190 [20/Dec/2018:22:20:00.899] website~ webservers/server1 0/0/1/0/1 200 462 – – —- 1/1/0/0/0 0/0 {mywebsite.com|Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/71.0.3578.80 } "GET / HTTP/1.1"

 

HAProxy Stats Monitoring Web interface


HAProxy has a simplistic stats interface which, if enabled, produces some generally useful information like in the screenshot below; through it you can get very basic in-browser statistics and track potential issues with the proxied traffic for all configured backends / frontends: incoming / outgoing network packets, configured nodes, experienced downtimes, etc.

[Screenshot: haproxy statistics report page]

The basic configuration to make the stats interface accessible, like pointed out in the above config, for example enabling a network listener on address
 

http://192.168.0.5:8080/stats


with the hproxyauser / Password___ pair configured, would be:

# HAProxy Monitoring Config
#---------------------------------------------------------------------
listen stats 192.168.0.5:8080                #Haproxy Monitoring run on port 8080
    mode http
    option httplog
    option http-server-close
    stats enable
    stats show-legends
    stats refresh 5s
    stats uri /stats                            #URL for HAProxy monitoring
    stats realm Haproxy\ Statistics
    stats auth hproxyauser:Password___          #User and Password for login to the monitoring dashboard

 

 

Session states and disconnect errors on a new application setup

Both TCP and HTTP logs include a termination state code that tells you the way in which the TCP or HTTP session ended. It’s a two-character code. The first character reports the first event that caused the session to terminate, while the second reports the TCP or HTTP session state when it was closed.

Here are some essential termination codes to track for in the log, most commonly seen on TCP connection establishment errors:

Two-character code    Meaning
--    Normal termination on both sides.
cD    The client did not send nor acknowledge any data and eventually the timeout client expired.
SC    The server explicitly refused the TCP connection.
PC    The proxy refused to establish a connection to the server because the process' socket limit was reached while attempting to connect.


To get all non-properly exited sessions, the easiest way is to just grep for anything that is different from the normal termination code --, like that:

tail -f /var/log/haproxy.log | grep -v ' -- '


This should output in real time every TCP connection that is exiting improperly.
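
To get a quick idea which abnormal termination states dominate, you can also count occurrences of an individual code from the table above, e.g. for cD:

# grep -c ' cD ' /var/log/haproxy.log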

There's a wide variety of reasons a connection may have been closed. Detailed information about all possible termination codes can be found in the HAProxy documentation.
To get a better understanding, a very useful read for debugging haproxy errors is haproxy-logging.txt; in that small file are collected all the cryptic error message codes you might find in your logs when you're configuring the haproxy frontend / backend and the backend application behind it for the first time.

Another useful analysis tool which can be used to analyze Layer 7 HTTP traffic is halog; for more on it just google around.

Procedure / Instructions to safely upgrade CentOS / RHEL Linux 7 Core to the latest release

Thursday, February 13th, 2020


Generally, upgrading both RHEL and CentOS can be done straight with the yum tool, and mostly anyone could do the update; but it is a good idea to do some steps in advance: to back up any old basic files that might help us debug what is wrong in case the Operating System fails to boot after the routine machine OS restart, and to make sure that the machine is still bootable after the upgrade.

This procedure can be shortened or maybe extended depending on the needs of the custom case, but the general framework should be useful to someone anyway; that's why I decided to post this.

Before you go, let's prepare a small status script which we'll use to report the status of installed and enabled systemctl services, as well as the netstat connections state and the configured IP addresses and routing on the system.

The script show_running_services_netstat_ips_route.sh to be used during our different upgrade stages:
 

# script status ###
LOG=/root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
echo "STARTED: $(date '+%Y-%m-%d_%H-%M-%S'):" | tee $LOG
systemctl list-unit-files --type=service | grep enabled
systemctl | grep ".service" | grep "running"
netstat -tulpn
netstat -r
ip a s
/sbin/route -n
echo "ENDED $(date '+%Y-%m-%d_%H-%M-%S'):" | tee -a $LOG
####

 

– Save the script in any file, e.g. /root/status.sh

– Make the /root/logs directory.
 

[root@redhat: ~ ]# mkdir /root/logs
[root@redhat: ~ ]# vim /root/status.sh
[root@redhat: ~ ]# chmod +x /root/status.sh

 

1. Get a dump of the CentOS installed release version and the grub-mkconfig generated os-prober

 

[root@redhat: ~ ]# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
[root@redhat: ~ ]# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

2. Clear old versionlock marked RPM packages (if there are such)

 

On servers maintained by a multitude of system administrators, just like the case is inside global corporations and generally in the corporate world, where people access the systems via LDAP and more than a single person has superuser privileges, it is a good preventive measure to use the yum package management functionality for RPM-based Linux distributions called versionlock.
versionlock, for those who hear of it for the first time, locks the versions of the installed RPM packages, so that if someone by mistake or on purpose decides to do something like:

[root@redhat: ~ ]# yum install packageversion

Having the versionlock set will prevent the package from being updated to a different branch package version.

Also, it will prevent a playful unknowing person, who just wants to upgrade the system without any deep knowledge, from being able to run:

[root@redhat: ~ ]# yum upgrade

and leave the system in an unbootable state that will only be revealed during the next system reboot.

If you haven't used versionlock before and you want to use it, install it with:

[root@redhat: ~ ]# yum install yum-plugin-versionlock

To lock all the packages for compiling C code and all the interdependent packages, you can do something like:

 

[root@redhat: ~ ]# yum versionlock gcc-*

If you want to clear up the versionlock, once it is in use run:

[root@redhat: ~ ]#  yum versionlock clear
[root@redhat: ~ ]#  yum versionlock list

 

3.  Check RPC enabled / disabled

 

This step is not necessary, but it is a good idea to check whether rpcbind is running on the system, because sometimes after a package upgrade and reboot rpcbind gets automatically started. 
If we find it running, we'll need to stop and mask the service (see the commands after the status output below).

 

# check if rpc enabled
[root@redhat: ~ ]# systemctl list-unit-files|grep -i rpc
var-lib-nfs-rpc_pipefs.mount                                      static
auth-rpcgss-module.service                                        static
rpc-gssd.service                                                  static
rpc-rquotad.service                                               disabled
rpc-statd-notify.service                                          static
rpc-statd.service                                                 static
rpcbind.service                                                   disabled
rpcgssd.service                                                   static
rpcidmapd.service                                                 static
rpcbind.socket                                                    disabled
rpc_pipefs.target                                                 static
rpcbind.target                                                    static

[root@redhat: ~ ]# systemctl status rpcbind.service
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

 

[root@redhat: ~ ]# systemctl status rpcbind.socket
● rpcbind.socket - RPCbind Server Activation Socket
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.socket; disabled; vendor preset: enabled)
   Active: inactive (dead)
   Listen: /var/run/rpcbind.sock (Stream)
           0.0.0.0:111 (Stream)
           0.0.0.0:111 (Datagram)
           [::]:111 (Stream)
           [::]:111 (Datagram)
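
If rpcbind turns out to be running and is not needed, stopping and masking both the service and its socket would look like:

[root@redhat: ~ ]# systemctl stop rpcbind.service rpcbind.socket
[root@redhat: ~ ]# systemctl mask rpcbind.service rpcbind.socket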

 

4. Check any previously existing downloaded / installed RPMs (check yum cache)

 

yum install package-name / yum upgrade keeps the downloaded packages inside its cache directory structure under /var/cache/yum/*.
Hence it is a good idea to check what the previously downloaded packages were and their count.

 

[root@redhat: ~ ]# cd /var/cache/yum/x86_64/;
[root@redhat: ~ ]# find . -iname '*.rpm'|wc -l

 

5. List RPM repositories set on the server

 

 [root@redhat: ~ ]# yum repolist
Loaded plugins: fastestmirror, versionlock
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
repo id                                                                                 repo name                                                                                                            status
!atos-ac/7/x86_64                                                                       Atos Repository                                                                                                       3,128
!base/7/x86_64                                                                          CentOS-7 - Base                                                                                                      10,019
!cr/7/x86_64                                                                            CentOS-7 - CR                                                                                                         2,686
!epel/x86_64                                                                            Extra Packages for Enterprise Linux 7 - x86_64                                                                          165
!extras/7/x86_64                                                                        CentOS-7 - Extras                                                                                                       435
!updates/7/x86_64                                                                       CentOS-7 - Updates                                                                                                    2,500

 

This step is mandatory to make sure you're upgrading to the latest packages from the right repositories; for more concrete details check what is inside the configs /etc/yum.repos.d/ and /etc/yum.conf 
 

6. Clean up any old rpm yum cache packages

 

This step is again not mandatory, but it is good to follow just to have some more clarity on what packages our upgrade is downloading (so as not to mix up the old upgrades / installs with our newest one).
For documentation purposes, the list of all deleted packages, if any, is to be kept under the /root/logs/yumcleanall-*.out file

[root@redhat: ~ ]# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

7. List the upgradeable packages' latest repository-provided versions

 

[root@redhat: ~ ]# yum check-update |tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

Then, to be aware of how many packages we'll be updating:

 

[root@redhat: ~ ]#  yum check-update | wc -l
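
Note that the raw wc -l count includes header and wrapped lines too; a slightly cleaner approximation is to count only the lines that begin with a package name:

[root@redhat: ~ ]#  yum -q check-update | grep -c '^[[:alnum:]]'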

 

8. Apply the actual RPM package upgrades listed

 

[root@redhat: ~ ]# yum update |tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

Again, the output is logged to /root/logs/yumupdate-*.out 

 

9. Monitor downloaded packages count real time

 

To make sure yum upgrade is not in some hanging state, and to get a general idea which stage of the upgrade it is in (e.g. Download / Pre-Update / Install / Upgrade / Post-Update etc.), you can monitor, in the meantime while yum upgrade is running, how many packages yum has downloaded from the remote RPM repositories:

 

[root@redhat: ~ ]#  watch "ls -al /var/cache/yum/x86_64/7Server/…OS-repository…/packages/|wc -l"

 

10. Run status script to get the status again

 

[root@redhat: ~ ]# sh /root/status.sh |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

11. Add back versionlock for all RPM packs

 

Set versionlock for all RPM packages installed on the RHEL / CentOS system.

 

#==if needed
# yum versionlock \*

 

 

12. Check whether old software configuration was not messed up during the package upgrade (look up the logs for .rpmsave and .rpmnew)

 

During the upgrade, old RPM configuration files may have been changed, and yum automatically saves .rpmsave / .rpmnew copies of them; thus it is a good idea to grep the prepared logs for any matches of these 2 strings:
 

[root@redhat: ~ ]#   grep -i ".rpm" /root/logs/yumupdate-server-host-2020-01-20_14-30-41.out
[root@redhat: ~ ]#  grep -i ".rpmsave" /root/logs/yumupdate-server-host-2020-01-20_14-30-41.out
[root@redhat: ~ ]#  grep -i ".rpmnew" /root/logs/yumupdate-server-host-2020-01-20_14-30-41.out


If the above commands return output, it is usually fine if there is .rpmnew output; but if you get grep output of .rpmsave, it is a good idea to review the files, compare the .rpmsave originals with the substituted config files, and attune the differences with the changes manually made for some program functionality.

What are the .rpmsave / .rpmnew files?
These files are created by the RPM install / upgrade, triggered by procedures pre-written at RPM build time.

 

If a file was installed as part of an rpm and it is a config file (i.e. marked with the %config tag), you've edited the file afterwards and you now update the rpm, then the new config file (from the newer rpm) will replace your old config file (i.e. become the active file).
The latter will be renamed with the .rpmsave suffix.

If a file was installed as part of an rpm and it is a noreplace-config file (i.e. marked with the %config(noreplace) tag), you've edited the file afterwards and you now update the rpm, then your old config file will stay in place (i.e. stay active) and the new config file (from the newer rpm) will be copied to disk with the .rpmnew suffix.
See e.g. this table for all the details. 

In both cases, you or some program has edited the config file(s), and that's why you see the .rpmsave / .rpmnew files after the upgrade: rpm upgrades config files silently and without backup files if the local file is untouched.

After a system upgrade it is a good idea to scan your filesystem for these files and make sure that correct config files are active and maybe merge the new contents from the .rpmnew files into the production files. You can remove the .rpmsave and .rpmnew files when you're done.


If you need to get a list of all .rpmnew .rpmsave files on the server do:

[root@redhat: ~ ]#  find / -print | egrep "rpmnew$|rpmsave$"
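
To review what actually changed, diff the saved copy against the active file; the file names below are just a hypothetical example:

[root@redhat: ~ ]#  diff -u /etc/ssh/sshd_config.rpmsave /etc/ssh/sshd_config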

 

13. Reboot the system 

To check whether on the next hang-up or power outage the system will boot normally after the upgrade, reboot to test it.

 

You can run:

 

[root@redhat: ~ ]#  reboot

 

or

[root@redhat: ~ ]#  shutdown -r now


or, if on a newer Linux with systemd in use, the below systemctl reboot.target:

[root@redhat: ~ ]#  systemctl start reboot.target

 

14. Get the system status again with our status script after reboot

[root@redhat: ~ ]#  sh /root/status.sh |tee /root/logs/status-after-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

15. Clean up any versionlocks if earlier set

 

[root@redhat: ~ ]# yum versionlock clear
[root@redhat: ~ ]# yum versionlock list

 

16. Check services and logs for problems

 

After the reboot, check closely all running services on the system; make sure every process / listening port and service is running fine, just like before the upgrade.
If the system had a firewall, check whether the firewall rules are not broken, e.g. some NAT is not missing, and that anything earlier configured to automatically start via /etc/rc.local or some other custom scripts was run and has done what was expected. 
Go through all the logs in /var/log that are most essential – /var/log/boot.log, /var/log/messages, yum.log etc. – that could reveal any issues after the boot. In case you're running some application server or mail server, check /var/log/mail.log or wherever it is configured to log.
If the system runs apache, closely check the logs /var/log/httpd/error.log or php_errors.log for any strange errors that occurred due to issues caused by the newer installed packages.
Usually in most of the cases all this should be flawless, but a multiple check over your work is a stake for good results.
 

How to install jcmd on CentOS 7 to diagnose crashing applications in a running Java Virtual Machine

Tuesday, January 14th, 2020


The jcmd utility is well known in the Brave New wonderful world of Java, but if you're like me a classical old-school sysadmin and non-Java developer, you've probably never heard of it; hence, before going straight into how to install it on CentOS 7 Linux servers, I'll shortly say a few words on what it is.
jcmd is used to send diagnostic requests to a running Java Virtual Machine (JVM); it is available both in Oracle Java and in OpenJDK.
The requests jcmd sends to the VM are addressed by the running Java process PID and are pretty useful for controlling Java Flight Recordings, and for troubleshooting and diagnosing the JVM and Java applications. It must be used on the same machine where the JVM is running.

 Used without arguments or with the -l option, jcmd prints the list of running Java processes with their process id, their main class and their command line arguments.
When a main class is specified on the command line, jcmd sends the diagnostic command request to all Java processes for which the command line argument is a substring of the Java process' main class.
jcmd could be useful if JConsole – the JMX (Java Management Extensions) console – can't be used for some reason on the server, or together with Java VisualVM (a visual interface for viewing detailed info about a Java app).

In most Linux distributions, as of year 2020, jcmd is found in java-*-openjdk-headless.

To have jcmd on, let's say, Debian GNU / Linux, you're up to something like:

 

apt-get install --yes openjdk-12-jdk-headless
 

or

apt-get install openjdk-11-jdk-headless

However, in CentOS 7, jcmd is not found in java*openjdk*headless; it took me a while to look up where it actually is, and after reading in some online post that it is part of the java*openjdk*devel* package, to make sure this is true I've used the --downloadonly option:
 

 yum install --downloadonly --downloaddir=/tmp java-1.8.0-openjdk-devel

 


So the next question was how to inspect the downloaded rpm package (placed in /tmp). This is usually easy via Midnight Commander (mc), which can view package contents; however, as this server did not have mc installed due to security policies, I had to do it differently. After pondering a while on how to list the RPM package file content, I came up with the following command:

 

 

 rpm2cpio java-1.8.0-openjdk-devel-1.8.0.232.b09-0.el7_7.x86_64.rpm |cpio -idmv|grep -i jcmd|less
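
A simpler alternative, which only lists the file names inside the rpm without extracting anything, is rpm's own package query mode:

 rpm -qlp /tmp/java-1.8.0-openjdk-devel-1.8.0.232.b09-0.el7_7.x86_64.rpm | grep -i jcmd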

 


Then, to install java-1.8.0-openjdk-devel-1.8.0.232.b09-0.el7_7.x86_64.rpm, run:

 

 

 

yum -y install  java-1.8.0-openjdk-devel-1.8.0*


Once jcmd was in place, I've created the following bash script that was tracking for application errors and checking (through a jboss-cli.sh query) whether the JBoss application server pool-available-count is not filled up, making JBoss refuse to serve connections; the script automatically launches jcmd to get various diagnostic data about the Java Virtual Machine (e.g. a running snapshot) – think of it like the UNIX top for debugging, or Windows System Monitor, but run one time. 

 

 

 

# PID_OF_JAVA=$(pgrep -o java)   # -o prints only the PID of the oldest java process (pgrep -l would also print the process name and break jcmd)
# jcmd $PID_OF_JAVA GC.heap_dump /tmp/GC.heap_dump_file-$(date '+%Y-%m-%d_%H-%M-%S').hprof
# jcmd $PID_OF_JAVA Thread.print > Thread.print-$(date '+%Y-%m-%d_%H-%M-%S').txt
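
A minimal sketch of how such a watcher script could be glued together – the JBoss CLI path, the datasource name and the CLI output parsing are assumptions here, so adapt them to your installation:

#!/bin/bash
# watch_jboss_pool.sh – sketch: dump JVM diagnostics when the datasource pool runs dry
JBOSS_CLI=/opt/jboss/bin/jboss-cli.sh                                  # hypothetical CLI path
DS_PATH='/subsystem=datasources/data-source=ExampleDS/statistics=pool' # hypothetical datasource
OUT_DIR=/var/tmp/jvm-diag
mkdir -p "$OUT_DIR"

# PID only; pgrep -o picks the oldest matching java process
PID_OF_JAVA=$(pgrep -o java)

# read the pool's available connection count via the JBoss CLI
# (the attribute name may differ between JBoss / WildFly versions)
AVAILABLE=$("$JBOSS_CLI" --connect --command="$DS_PATH:read-attribute(name=AvailableCount)" 2>/dev/null \
            | awk '/"result"/ {gsub(/[^0-9]/, "", $NF); print $NF}')

if [ -n "$PID_OF_JAVA" ] && [ "${AVAILABLE:-1}" -eq 0 ]; then
    TS=$(date '+%Y-%m-%d_%H-%M-%S')
    jcmd "$PID_OF_JAVA" Thread.print > "$OUT_DIR/Thread.print-$TS.txt"
    jcmd "$PID_OF_JAVA" GC.heap_dump   "$OUT_DIR/heap_dump-$TS.hprof"
fi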


The produced log files can then be used by the developer to visualize some Java-specific stuff – "Flight Recordings" – like in the below screenshot:

[Screenshot: jcmd JFR flight recording dump visualized]

If you're interested in some other interesting tools that can be used to monitor and debug a running Java VM, take a look at Java's official documentation on Monitoring Tools.
So that's all, Mission Accomplished 🙂 Now the Java application developer could observe the log and tell why exactly the application crashed after the multitude of thrown Exceptions in the JBoss server.log.

Linux show largest sized packages / Which Deb, RPM installed Linux package uses most disk space and how to free space for critical system updates

Sunday, January 12th, 2020


A very common problem that happens on both Linux servers and Linux desktops is a filling-up / (root) partition. This could happen due to several reasons; to point out a few out of my experience, low disk space (running out of free space) could be due to:

– Improper initial partitioning / bad space planning / or an OS install made in a hurry (due to time constraints)
– Linux installed on an old laptop machine with a low Hard Disk Drive capacity (e.g. 80 GB / 160 GB)
– Custom user partitioning at install time, aiming for a small root partition originally, with space requirements changing over time
– Increasing space taken by Linux updates / user-stored files etc. / distribution OS-level upgrades (dist-upgrades)
– Improperly assigned install-time partitions, because of lack of knowledge of how partitioning is managed

– Linux OS installed on a Cloud-based VPS (e.g. running in a Cloud instance hosted at Amazon EC2, Linode, Digital Ocean, Hostgator etc.)

So here is a real-life situation that has happened to me many times: you're launching an apt-get upgrade / apt-get dist-upgrade or yum upgrade, the packages are about to start downloading or are downloaded, and suddenly you get a message of not enough disk space to apply the OS package updates …
That's nasty stuff, mostly irritating, and here are a few approaches to take.

a. perhaps the easiest: you can of course extend the partition (into a free-space other primary or extended partition) with something like:

parted (the disk partitioning manipulator for Linux) or gparted (in case of a Desktop with GUI / X.Org server running)

b. if there is not enough space on the Hard Disk Drive or SSD (Solid State Drive), and you have the budget to buy one plus a free laptop / PC slot to place another physical HDD, clone it to a larger-sized HDD using some kind of partition clone tool, such as dd or Clonezilla, or any of the other multiple clone tools available in Linux.

But what if you don't have the option, for some reason, to extend the partition; how can you apply the critical security errata updates issued to patch security vulnerabilities reported by well-known CVEs?
Well, you can start with the obvious: remove unnecessary stuff from the system (if home is also stored on the / root partition), delete something from there, even delete the /usr/local/man pages if you don't plan to read them, and free some space by archiving / purging logs from /var/log/* …

But if this is not possible, a better approach is simply to try to remove / purge any .deb / .rpm (whatever your distro package manager) packages that are not really used and are just hanging around. That is often the case especially on Linux installed on notebooks for personal home use, where over the years you have installed a growing number of packages which you don't actively use, but installed just to take a look – while hunting for cool Linux games you wanted to give a try to Battle for Wesnoth / FreeCiv / Alien Arena / SuperTuxKart / Tux Racer etc., or some GUI-heavy programs like Krita / Inkscape / Audacity etc.

To select which package might not be needed and just takes space, you need to list all installed packages on the system ordered by their size. This is done differently in Debian-based Linuxes (e.g. Debian GNU / Linux / Ubuntu / Mint etc.) and RPM-based ones (Fedora / CentOS / OpenSuSE).

 

1. List all RPM installed packages by Size on CentOS / SuSE
 

Finding how much space each of the installed rpm packages takes on the HDD, and displaying them in sorted order, is done with:

rpm -qa --queryformat '%10{size} - %-25{name} \t %{version}\n' | sort -n

From the command above, the '%10{size}' option aligns the size of the package to the right with a padding of 10 characters. The '%-25{name}' aligns the name of the package to the left, padded to 25 characters. The '%{version}' indicates the version, and 'sort -n' sorts the packages according to size, from the smallest to the largest in bytes.
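
If you only care about the biggest offenders, the same query can be flipped to print the top 10 largest packages first:

rpm -qa --queryformat '%{size} %{name}\n' | sort -rn | head -10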

 

2. List all installed RPM packages sorted by size on Fedora

Fedora has introduced the dnf package manager instead of yum; to get how much space an individual rpm package occupies on the system:

dnf info samba
Available Packages
Name        : samba
Arch        : x86_64
Epoch       : 2
Version     : 4.1.20
Release     : 1.fc21
Size        : 558 k
Repo        : updates
Summary     : Server and Client software to interoperate with Windows machines
URL         : http://www.samba.org/
License     : GPLv3+ and LGPLv3+
Description : Samba is the standard Windows interoperability suite of programs
            : for Linux and Unix.

 

To get a list of all packages on the system with their size:

dnf info "*" | grep -i "Installed size" | sort -n

 

3. List all installed DEB packages on Debian / Ubuntu / Mint etc. with dpkg / aptitude / apt-get and wajig

 

The simplest way to get a list of the largest packages is through dpkg:

 

# dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n
        brscan4
6       default-jre
6       libpython-all-dev
6       libtinfo-dev
6       python-all
6       python-all-dev
6       task-cinnamon-desktop
6       task-cyrillic
6       task-desktop
6       task-english
6       task-gnome-desktop
6       task-laptop
6       task-lxde-desktop
6       task-mate-desktop
6       task-print-server
6       task-ssh-server
6       task-xfce-desktop
8       mysql-client
8       printer-driver-all



207766    libwine
215625    google-chrome-stable
221908    libwine
249401    frogatto-data
260717    linux-image-4.19.0-5-amd64
262512    linux-image-4.19.0-6-amd64
264899    mame
270589    fonts-noto-extra
278903    skypeforlinux
480126    metasploit-framework


The above cmd displays packages in size order, largest package last; but the output will also include the size of packages that used to exist and have been removed but were not purged. Thus, if you find a package that is shown as very large by size, but a further dpkg -l | grep -i package-name shows the package as purged (e.g. package state is not 'ii' but 'rc'), the quickest workaround is to purge all removed packages that are still not purged and have some configuration remains and other chunks of data that just take up space for nothing:

# dpkg --list | grep "^rc" | cut -d " " -f 3 | xargs sudo dpkg --purge


Be cautious when you execute the above command: if for some reason you uninstalled a package with the idea of keeping its old configuration files only (in case you decide to reuse the already custom-made configs sometime in the future), running the above purge command will make all such kept package configs disappear.
People who don't want to mess up uninstalled-but-present packages can use this to filter out packages in a ready-to-be-purged state:

# dpkg-query -Wf '${db:Status-Status} ${Installed-Size}\t${Package}\n' | sed -ne 's/^installed //p'|sort -n


aptitude (a high-level, ncurses-interface-like tool for package management) can also easily be used to list the largest packages eating up your hard drive, in either interactive or cli mode, like so:

 

# aptitude search --sort '~installsize' --display-format '%p %I' '~i' | head
metasploit-framework 492 MB
skypeforlinux 286 MB
fonts-noto-extra 277 MB
mame 271 MB
linux-image-4.19.0-6-amd64 269 MB
linux-image-4.19.0-5-amd64 267 MB
frogatto-data 255 MB
libwine 227 MB
google-chrome-stable 221 MB
libwine:i386 213 MB

 

  • --sort is the package sort order, and ~installsize specifies a package sort policy.
  • installsize means 'sort on (estimated) installed size', and the preceding ~ means sort descending (since the default for all sort policies is ascending).
  • --display-format changes the <you guessed :->. The format string '%p %I' tells aptitude to output the package name, then the installed size.
  • '~i' tells aptitude to search only installed packages.

How much disk space a certain .deb package's removal will free up can be seen with apt-get as well; for example, for the texlive packages:

apt-get --assume-no --purge remove "texlive*" | grep "be freed" | awk '{print $4, $5}'

Perhaps the easiest to remember, with a more human-readable output of the biggest packages' occupied space on disk, is to install and use a little proggie called wajig; to do so:

 

# apt install --yes wajig

 

Here is how to pick up the 10 biggest-size packages:

root@jeremiah:/home/hipo# wajig large|tail -n 10
fonts-noto-cjk-extra               204,486      installed
google-chrome-stable               215,625      installed
libwine                            221,908      installed
frogatto-data                      249,401      installed
linux-image-4.19.0-5-amd64         260,717      installed
linux-image-4.19.0-6-amd64         262,512      installed
mame                               264,899      installed
fonts-noto-extra                   270,589      installed
skypeforlinux                      278,903      installed
metasploit-framework               480,126      installed


As the above example lists a short package name and no description, those who want to get more in-depth knowledge on what exactly a package bundle is used for can use:

# aptitude search --sort '~installsize' --display-format '%30p %I %r %60d' '~i' | head


The format string %30p %I %r %60d displays more information and changes the field widths – an enhanced format string.

Meaning of parameters is:

  • %30p : package name in field width=30 char
  • %I : estimated install size
  • %r : 'reverse depends count': approximate number of other installed packages which depend upon this package
  • %60d : package's short description in field width=60 char

wajig is written in python, and the idea is to ease Debian console package management (so you don't have to remember all the time when and with which arguments to use apt-get / apt-cache etc.); below is the list of commands it accepts.

 

root@jeremiah:/home/hipo# wajig commands
addcdrom           Add a Debian CD/DVD to APT's list of available sources
addrepo            Add a Launchpad PPA (Personal Package Archive) repository
aptlog             Display APT log file
autoalts           Mark the Alternative to be auto-set (using set priorities)
autoclean          Remove no-longer-downloadable .deb files from the download cache
autodownload       Do an update followed by a download of all updated packages
autoremove         Remove unused dependency packages
build              Get source packages, unpack them, and build binary packages from them.
builddeps          Install build-dependencies for given packages
changelog          Display Debian changelog of a package
clean              Remove all deb files from the download cache
contents           List the contents of a package file (.deb)
dailyupgrade       Perform an update then a dist-upgrade
dependents         Display packages which have some form of dependency on the given package
describe           Display one-line descriptions for the given packages
describenew        Display one-line descriptions of newly-available packages
distupgrade        Comprehensive system upgrade
download           Download one or more packages without installing them
editsources        Edit list of Debian repository locations for packages
extract            Extract the files from a package file to a directory
fixconfigure       Fix an interrupted install
fixinstall         Fix an install interrupted by broken dependencies
fixmissing         Fix and install even though there are missing dependencies
force              Install packages and ignore file overwrites and depends
hold               Place packages on hold (so they will not be upgraded)
info               List the information contained in a package file
init               Initialise or reset wajig archive files
install            Package installer
installsuggested   Install a package and its Suggests dependencies
integrity          Check the integrity of installed packages (through checksums)
large              List size of all large (>10MB) installed packages
lastupdate         Identify when an update was last performed
listall            List one line descriptions for all packages
listalternatives   List the objects that can have alternatives configured
listcache          List the contents of the download cache
listcommands       Display all wajig commands
listdaemons        List the daemons that wajig can start, stop, restart, or reload
listfiles          List the files that are supplied by the named package
listhold           List packages that are on hold (i.e. those that won't be upgraded)
listinstalled      List installed packages
listlog            Display wajig log file
listnames          List all known packages; optionally filter the list with a pattern
listpackages       List the status, version, and description of installed packages
listscripts        List the control scripts of the package of deb file
listsection        List packages that belong to a specific section
listsections       List all available sections
liststatus         Same as list but only prints first two columns, not truncated
localupgrade       Upgrade using only packages that are already downloaded
madison            Runs the madison command of apt-cache
move               Move packages in the download cache to a local Debian mirror
new                Display newly-available packages
newdetail          Display detailed descriptions of newly-available packages
news               Display the NEWS file of a given package
nonfree            List packages that don't meet the Debian Free Software Guidelines
orphans            List libraries not required by any installed package 
policy             From preferences file show priorities/policy (available)
purge              Remove one or more packages and their configuration files
purgeorphans       Purge orphaned libraries (not required by installed packages)
purgeremoved       Purge all packages marked as deinstall
rbuilddeps         Display the packages which build-depend on the given package
readme             Display the README file(s) of a given package
recdownload        Download a package and all its dependencies
recommended        Display packages installed as Recommends and have no dependents
reconfigure        Reconfigure package
reinstall          Reinstall the given packages
reload             Reload system daemons (see LIST-DAEMONS for available daemons)
remove             Remove packages (see also PURGE command)
removeorphans      Remove orphaned libraries
repackage          Generate a .deb file from an installed package
reportbug          Report a bug in a package using Debian BTS (Bug Tracking System)
restart            Restart system daemons (see LIST-DAEMONS for available daemons)
rpm2deb            Convert an .rpm file to a Debian .deb file
rpminstall         Install an .rpm package file
search             Search for package names containing the given pattern
searchapt          Find nearby Debian package repositories
show               Provide a detailed description of package
sizes              Display installed sizes of given packages
snapshot           Generates a list of package=version for all installed packages
source             Retrieve and unpack sources for the named packages
start              Start system daemons (see LIST-DAEMONS for available daemons)
status             Show the version and available versions of packages
statusmatch        Show the version and available versions of matching packages
stop               Stop system daemons (see LISTDAEMONS for available daemons)
tasksel            Run the task selector to install groups of packages
todo               Display the TODO file of a given package
toupgrade          List versions of upgradable packages
tutorial           Display wajig tutorial
unhold             Remove listed packages from hold so they are again upgradeable
unofficial         Search for an unofficial Debian package at apt-get.org
update             Update the list of new and updated packages
updatealternatives Update default alternative for things like x-window-manager
updatepciids       Updates the local list of PCI ids from the internet master list
updateusbids       Updates the local list of USB ids from the internet master list
upgrade            Conservative system upgrade
upgradesecurity    Do a security upgrade
verify             Check package's md5sum
versions           List version and distribution of given packages
whichpackage       Search for files matching a given pattern within packages

 

4. List installed packages ordered by size in Arch Linux

Arch Linux uses the funny-named package manager pacman (a nice pun on the good old arcade game).
What is distinctive about pacman is that it uses libalpm (the Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions.

 

# pacman -Qi | awk '/^Name/{name=$3} /^Installed Size/{print $4$5, name}' | sort -hr | head -25
296.64MiB linux-firmware
144.20MiB python
105.43MiB gcc-libs
72.90MiB python2
66.91MiB linux
57.47MiB perl
45.49MiB glibc
35.33MiB icu
34.68MiB git
30.96MiB binutils
29.95MiB grub
18.96MiB systemd
13.94MiB glib2
13.79MiB coreutils
13.41MiB python2-boto
10.65MiB util-linux
9.50MiB gnupg
8.09MiB groff
8.05MiB gettext
7.99MiB texinfo
7.93MiB sqlite
7.15MiB bash
6.50MiB lvm2
6.43MiB openssl
6.33MiB db


There is another means to list packages by size, using an Arch Linux tool called pacgraph:
 

 

# pacgraph -c | head -25

Autodetected Arch.
Loading package info
Total size: 1221MB
367MB linux
144MB pacgraph
98MB cloud-init
37MB grub
35MB icu
34MB git
31698kB binutils
19337kB pacman
11029kB man-db
8186kB texinfo
8073kB lvm2
7632kB nano
7131kB openssh
5735kB man-pages
3815kB xfsprogs
3110kB sudo
3022kB wget
2676kB tar
2626kB netctl
1924kB parted
1300kB procps-ng
1248kB diffutils

 

 

 

5. Debian Goodies

 

 

Most Debian users have perhaps never heard of the debian-goodies package, but I thought it worthy to mention, as sooner or later, as a sysadmin or a .deb-based Desktop user, it might help you somewhere.
 

Debian-goodies is a small set of toolbox-style utilities for Debian systems.
 These programs are designed to integrate with standard shell tools,
 extending them to operate on the Debian packaging system.

 .
  dglob  – Generate a list of package names which match a pattern
           [dctrl-tools, apt*, apt-file*, perl*]
  dgrep  – Search all files in specified packages for a regex
           [dctrl-tools, apt-file (both via dglob)]
 .
 These are also included, because they are useful and don't justify
 their own packages:
 .
  check-enhancements
 
           – find packages which enhance installed packages [apt,
                dctrl-tools]
  checkrestart
 
           – Help to find and restart processes which are using old versions
               of upgraded files (such as libraries) [python3, procps, lsof*]
  debget     – Fetch a .deb for a package in APT's database [apt]
  debman     – Easily view man pages from a binary .deb without extracting
               [man, apt* (via debget)]
  debmany    – Select manpages of installed or uninstalled packages [man |
               sensible-utils, whiptail | dialog | zenity, apt*, konqueror*,
               libgnome2-bin*, xdg-utils*]
  dhomepage  – Open homepage of a package in a web browser [dctrl-tools,
               sensible-utils*, www-browser* | x-www-browser*]
  dman       – Fetch manpages from online manpages.debian.org service [curl,
               man, lsb-release*]
  dpigs      – Show which installed packages occupy the most space
               [dctrl-tools]
  find-dbgsym-packages
             – Get list of dbgsym packages from core dump or PID [dctrl-tools,
               elfutils, libfile-which-perl, libipc-system-simple-perl]
  popbugs    – Display a customized release-critical bug list based on
               packages you use (using popularity-contest data) [python3,
               popularity-contest]
  which-pkg-broke
             – find which package might have broken another [python3, apt]
  which-pkg-broke-build
             – find which package might have broken the build of another
               [python3 (via which-pkg-broke), apt]

Even simpler than that is to use dpigs, a shell script part of the debian-goodies package, which will automatically print out the largest packages.

The dpigs command output is exactly the same as 'dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -nr | head', but it is useful because you don't have to remember that complex syntax.
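
By default dpigs prints the 10 biggest packages; assuming the debian-goodies version in your repos ships the -n flag, you can ask for more, e.g. the 15 largest:

# dpigs -n 15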

 

6. Checking where your space has gone in a SpaceSniffer-like GUI manner with Baobab


In my prior article Must have software on a freshly installed Windows, 2 of the precious tools to set up are SpaceSniffer and WinDirStat.
Windows users will be highly delighted to know that a SpaceSniffer equivalent is already present on Linux – say hello to baobab.
Baobab is a simple but useful graphical disk usage overview program for those who don't want to mess too much with the console / terminal to find out which directory might be the possible candidate for removal. It is very simplistic, but it does well what it is aimed for. To install it on Debian or a .deb-based OS:

# apt install --yes baobab


[Screenshot: baobab entry screen on Debian GNU / Linux]

baobab – the Linux Hard Disk Usage Analyzer for GNOME. It can easily scan either the whole filesystem or a specific user-requested branch (local or remote)

 

[Screenshot: baobab – directories taking most space, rings chart]

Baobab – / (root) directory statistics rings chart (pie)

 

[Screenshot: baobab – disk space by size visualized]

baobab – Treemap Chart for directory usage sorted by size on disk 

!!! Note that before removing any files found as taking up too much space with baobab, make sure these files are not essential parts of a .deb package first; otherwise you might break your system !!!

KDE (Plasma) Qt library users could use QDirStat instead of baobab 

[Screenshot: QDirStat on GNU / Linux – checking where the disk space bottleneck is]


7. Use the ncdu or durep tools to generate directory disk usage as an ASCII bar chart

ncdu and durep are basically the same, except that ncdu uses ncurses and is interactive, with a very simplistic interface reminiscent of midnight commander.
 

# apt install --yes ncdu
# ncdu /root


[Screenshot: ncdu running on Debian GNU / Linux]

 

# apt-get install --yes durep
# durep -td 1 /usr

[ /usr    14.4G (0 files, 11 dirs) ]
   6.6G [#############                 ]  45.54% lib/
   5.5G [###########                   ]  38.23% share/
   1.1G [##                            ]   7.94% bin/
 552.0M [#                             ]   3.74% local/
 269.2M [                              ]   1.83% games/
 210.4M [                              ]   1.43% src/
  88.9M [                              ]   0.60% libexec/
  51.3M [                              ]   0.35% sbin/
  41.2M [                              ]   0.28% include/
   8.3M [                              ]   0.06% lib32/
 193.8K [                              ]   0.00% lib64/

 

 

Conclusion


In this article, I've shortly explained the few approaches you can take to handle low disk space that prevents you from applying regular security updates on Linux.
The easiest one is to clone your drive to a bigger (larger) sized SATA HDD or SSD drive, or to use free space left on a hard drive to extend the filling-up root partition. 

Further, I looked through the common reasons for ending up with a low-spaced disk and a quick workaround to free disk space through listing and purging the largest-sized packages; this is done differently in different Linux distributions, because different Linuxes have different package managers. As I'm primarily using Debian, I explained thoroughly how this is achieved with apt-get / dpkg-query / dpkg / aptitude and the little-known debian-goodies .deb package manager helper pack. For GUI Desktop users there are baobab / QDirStat; ASCII lovers could enjoy durep and ncdu.

That's all folks hope you enjoyed and learned something new. If you know of other cool tools or things this article is missing please share.

Check when Windows Active Directory user expires and set user password expire to Never

Thursday, January 9th, 2020

micorosoft-windows-10-logo-net-user-command-check-expiry-dates

If you're working for a company that is following high security / PCI security standards and you're using an m$ Windows OS that belongs to the domain, it is useful to know when your user is set to expire
and how many days are left until you'll be forced to change your Windows AD password.
In this short article I'll explain how to check the Windows AD password last set date / password expiry date, how you can list expiry dates for other users, and finally how to set your expiry date to Never
to get rid of the annoying password change every 90 days.

 

1. Query domain Username for Password set / Password Expires set dates

To know this info you need the password expiration date for the Active Directory user account; to check it just open the Command Line Prompt cmd.exe

And run command:
 

 

NET USER Your-User-Name /domain


net-user-domain-command-check-AD-user-expiry
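
If you only care about the password dates, you can filter the NET USER output with findstr (a small sketch using the stock cmd.exe findstr utility; substitute your own username):

NET USER Your-User-Name /domain | findstr /C:"Password last set" /C:"Password expires"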

Note that many companies only connect you to AD, for security reasons, over a VPN connection with something like Cisco AnyConnect Secure Mobility Client or whatever VPN tool is used to encrypt the traffic between you and the corporate DMZ-ed network.

Below are the basic NET USER command usage args:

Net User Command Options
 

Item          Explanation

net user    Execute the net user command alone to show a very simple list of every user account, active or not, on the computer you're currently using.

username    This is the name of the user account, up to 20 characters long, that you want to make changes to, add, or remove. Using username with no other option will show detailed information about the user in the Command Prompt window.

password    Use the password option to modify an existing password or assign one when creating a new username. The minimum characters required can be viewed using the net accounts command. A maximum of 127 characters is allowed.
*    You also have the option of using * in place of a password to force the entering of a password in the Command Prompt window after executing the net user command.

/add    Use the /add option to add a new username on the system.
options    See Additional Net User Command Options below for a complete list of available options to be used at this point when executing net user.

/domain    This switch forces net user to execute on the current domain controller instead of the local computer.

/delete    The /delete switch removes the specified username from the system.

/help    Use this switch to display detailed information about the net user command. Using this option is the same as using the net help command with net user: net help user.
/?    The standard help command switch also works with the net user command but only displays the basic command syntax. Executing net user without options is equal to using the /? switch.

 

 

2. Listing all Active Directory users' password last set / never expires / expiration dates


If you have the respective Active Directory rights and you have the Remote Server Administration Tools for Windows (RSAT Tools) installed, you can also do other interesting stuff,

 

such as

– using PowerShell to list all user last set dates, to do so use Open Power Shell and issue:
 

get-aduser -filter * -properties passwordlastset, passwordneverexpires |ft Name, passwordlastset, Passwordneverexpires


get-aduser-properties-passwordlastset-passwordneverexpires1

This should show you info such as the password last set date and whether password expiration is set for the account.

– Using PS to get only the password expirations for all existing AD users:

 

Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False} –Properties "DisplayName", "msDS-UserPasswordExpiryTimeComputed" |
Select-Object -Property "Displayname",@{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}


If you need the output data stored in a CSV delimited file format you can append to the above PS commands:
 

| export-csv YOUR-OUTPUT-FILE.CSV
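
For instance, a complete pipeline (a sketch; the output file name here is hypothetical) that dumps all users' expiry dates into a CSV without the type header:

Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False} -Properties "DisplayName", "msDS-UserPasswordExpiryTimeComputed" |
Select-Object -Property "Displayname",@{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}} |
Export-Csv -NoTypeInformation AD-PASSWORD-EXPIRY.CSV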

 

3. Setting a user password to never expire

 

If the user was created with the NET USER command, by default it will have been created with a password expiration. 
However if you need to create new users for yourself (assuming you have the rights), with passwords that never expire, on let's say Windows Server 2016 – (if you don't care about security so much), use:
 

NET USER "Username" /Add /Active:Yes

WMIC USERACCOUNT WHERE "Name='Username'" SET PasswordExpires=False
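
Alternatively, if the RSAT ActiveDirectory PowerShell module from point 2 is available, the same flag can presumably be flipped per user with Set-ADUser (a sketch, assuming you have sufficient AD rights):

Set-ADUser -Identity Username -PasswordNeverExpires $true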

 

NET-USER-ADD_Active-yes-Microsoft-Windows-screenshot

NET-USER-set-password-policy-to-Never-expiry-MS-Windows

To view the general password policies, type the following:
 

NET ACCOUNTS


NET-ACCOUNTS-view-default-Microsoft-Windows-password-policy
 

 

Howto Pass SSH traffic through a Secured Corporate Proxy server with corkscrew, using sshd as a standalone proxy service with no proxy installed on remote Linux server or VPS

Tuesday, November 19th, 2019

howto pass ssh traffic through proxy to remote server use remote machine as a proxy for connecting to the Internet

Working in the big bad corporate world (being employed in any of the Fortune 500 companies), especially in an IT delivery company, is a nasty thing in terms of user personal data privacy, because usually when employed by a corporation, the company ships you a personal computer with some kind of pre-installed OS (most often this is Windows), and the computer is not a standalone one but joined into an Active Directory (AD) Windows Domain and centrally administered by whoever.

As part of the default deployed configuration of this pre-installed OS and software, part or all of your network traffic and files is being monitored in some kind of manner, as the pre-installed Windows or Linux notebook given by the Corporation is running a set of standard software in the background. Even though you have Windows Administrator rights, there are many things you have zero control over, and even if you have changed something, once the Domain Policy is triggered your custom made changes / installed programs that happen to be against the company policy are automatically deleted, any registry changes made are rewinded etc. Sometimes, even by trying to manually clean up your PC from the corporate crapware, you might break access to the corporate DMZ firewalled network. As a common way to secure their employees' PC data, large companies have a network separation: your PC when not connected to the Corporate VPN has a certain IP configuration, and once connected to the Demilitarized Zone VPN that configuration changes and the PC has access to internal company infrastructure servers / routers / switches / firewalls / SANs etc. Access to corporate infrastructure is handled via an encrypted VPN client such as Cisco AnyConnect Secure Mobility Client, which is perhaps one of the most used ones out there.

Part of the common software installed to monitor your PC for threats / viruses / trojans is McAfee / EMET (Enhanced Mitigation Experience Toolkit), so the PC is often prebundled with some kind of anti-malware (crapware) :). But the tip of the iceberg in user surveillance, where most of the surveillance happens, is the default installed proxy on the PC, which usually keeps track of all your remotely accessed HTTP website URLs in plain text – traffic flowing on port 80, and the encrypted one on the standard (SSL) port 443. This web traffic is handled by the central corporate proxy that is being deployed via some kind of Domain Policy every time the computer joins the Windows domain. 

This of course is a terrible thing for your browsing security, and together with the good security practice of running your browser in Incognito mode (which makes all your browsing activity, such as accessed URL history or saved cookies data, be cleared up on browser close), it is important to make sure you run your own personal traffic via a separate browser which you will use only for your own private browsing – such as accessing your bank money accounts to check your monthly salary, or purchasing things online via Amazon.com / Ebay.com – while all the rest of the company-related traffic goes via the default set corporate central proxy.
This is relatively easy in companies where security is not a top concern, but in corporations with tightened security, accessing a remote proxy, or accessing even common daily news and public email websites or social media sites such as Gmail.com / Twitter / Youtube, will be filtered, so the only way to reach them will be via some kind of proxy, and often this proxy is the only way out to the free world from the corporate jail.

Here is where the good old SSH comes as a saving grace, as it turns out SSH traffic can be tunnelled over a proxy. In the below article I will give you a short insight on how proxying through SSH can be achieved, to secure your daily web traffic and use SSH to reach your own server on the Internet, as well as how you can copy data securely via SSH through a corporate proxy. 
 

1. How to view your corporate used (default) proxy / Check Proxy.pac file definitions

 

To get an idea what proxy is used on your corporate PC (as most corporate employee notebooks run some kind of M$ Windows), you can go to:

Windows Control Panel -> Internet Options -> Connections -> LAN Settings


internet-properties-microsoft-windows-screenshot

Under the field Proxy server (check out the configured proxy Address and Port number)

local-area-network-lan-settings-screenshot-windows-1
 

As browsers honour the so-called Proxy.pac file, to get roughly aware of some general company proxy configured definitions you can access the proxy itself in a browser, fetching the proxy.pac file, for example:

 

http://your-corporate-firewall-proxy-url:8080/proxy.pac

 

This is helpful as some companies' proxies have proxy rules that reveal some things about their Internet architecture, and some even have badly configured proxy.pac files which could be used to fool the proxy under some circumstances 🙂
 

2. A few of the reasons corporations proxy all their employees' work PC web traffic

 

The corporate proxying of traffic has a number of goals, some of which are good hearted and others mostly for spying on the users.

 

1. Protect corporate employees from malicious Viruses / Trojan Horses / Malware / Badware / Whatever ware – EXCELLENT
2. Prevent users from accessing a set of sources that due to the corporate policy are considered harmful (e.g. certain addresses of information or disinformation of competitors, any Internet source that might preach against the corporation, hacking related websites etc.) – NOT GOOD (for the employee / user) and GOOD for the company
3. Spy on the users' activity and be able to have evidence against the employee in case he decided to do anything harmful to the company; evidence from the proxy could even later be used in court if some kind of corporate infringement occurs due to misbehaviour of the employee. – PERFECT FOR THE COMPANY and a complete breach of user privacy, and IMHO totally against European Union privacy legislation such as GDPR
4. In companies that are into the field of Artificial Intelligence, users' behavior could even be used to advance self-learning bots and mechanisms – NASTY ! YAECKES

 

3. Run SSH Socks proxy to remote SSHd server running on common SSL 443 port

 

Luckily sysadmins who were ordered by the big bosses to sniff on your web behaviour and preferences can be outsmarted with some hacks.

To protect your browsing habits and secure your privacy, perhaps the best option is to use the old but gold practice of securing your network traffic using SSH over proxy and an SSH dynamic tunnel as a proxy, as explained in my previous article here.

how-to-use-sshd-server-as-a-proxy-without-a-real-proxy-ssh-socks5_proxy_linux
 

In short, the quickest way to have your free of charge SOCKS remote proxy to your home based Linux server / VPS with a public Internet address is to use ssh like so:

 

ssh -D 3128 UserName@IP-of-Remote-SSHD-Host -p 443

 

This will start the SOCKS Proxy tunnel from Corporate Work PC to your Own Home brew server.
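
To quickly verify the tunnel works, point curl at the local SOCKS port (a quick sketch; ifconfig.me is just one of many what-is-my-IP services) – the printed address should be your remote server's public IP and not the corporate gateway:

curl --socks5-hostname 127.0.0.1:3128 https://ifconfig.me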

For some convenience it is useful to set up an alias in the .bashrc file (for Cygwin / Linux users):

 

alias proxy='ssh -D 3128 UserName@IP-of-Remote-SSHD-Host -p 443';

 

To start using the proxy from a browser, I use a plugin called FoxyProxy in the Chrome and Firefox browsers,
set up to connect to localhost – 127.0.0.1:3128 for All Protocols as a SOCKS v5 Proxy.

The sshd SOCKS proxy can be used for multiple other purposes; for example, using it you can also pass traffic from a mail client such as Thunderbird to your email server, if you're behind a firewall prohibiting access to the common POP3 port 110 or IMAP port TCP 143. 

4. How to access SSH through Proxy using jumphost SSH hop


If you're like me and you have only one Internet address on your home Linux machine, and you have already set up an SSL enabled service (let's say webmail) to listen on that public Internet IP, and you don't have the possibility to run another instance of /usr/bin/sshd on port 443, either via configuration or manually one time by issuing:

 

/usr/sbin/sshd -p 443

 

Then you can use another Linux server as an SSH jump host to your own home Linux sshd server. This can be done even by purchasing a cheap VPS server for, let's say, 3 dollars a month, or even better, if you have a friend with another Linux home server, you can ask him to run sshd for you on TCP port 443 and add you an ssh account.
Once you have the second Linux machine as a JumpHost, to reach out to your own machine use:

 

ssh -J Your-User@Your-jump-host.com:443 hipo@your-home-server.com -v

 

To ease typing this rather long line it is handy to use some kind of alias like:

 

alias sshhome='ssh -J Your-User@Your-jump-host.com:443 hipo@your-home-server.com -v'

 

The advantage here is that just by issuing this ssh tunnel and keeping it open in a terminal, or setting it up as a Plink Putty tunnel, you have all your web traffic secured
between your work corporate PC and your home brew server, keeping the curious eyes of your company security officers away from your own web traffic, hence
separating the corporate privacy from your own personal privacy. You use the just established own SSH proxy tunnel to home for your non-work browsing habits,
and the corporate systems are accessed by switching back, with a button click in FoxyProxy, to the default proxy settings.
 

5. How to get around a paranoid corporate setup where only remote access to the corporate proxy on TCP port 80 and TCP port 443 is available, and only from a browser

 

Using straight ssh to create the proxy will work in most of the cases, but it requires SSH access to your remote SSH running server / VPS on TCP port 22. However, in some Fort-Knox like financial institutions and companies, for the sake of tightened security, it is common that all outbound TCP ports are prohibited except TCP port 80 and SSL 443, as prior said. So what can you do then to get around this awful firewall and access the Internet via your own server proxy? 
The hack of running the SSH server either on TCP port 80 or TCP port 443 on the remote host and using 443 / 80 to access SSHD should work, but then, even for the most paranoid corporations – the ones who are PCI compliant (PCI stands for Payment Card Industry, e.g. works with Debit and Credit Card data etc.) – accessing even ports 80 or 443 with something like a telnet client or netcat will be impossible. 
Once connected to the corporate VPN, these two port firewall exceptions will only be accessible via the corporate proxy server defined in a web browser (Firefox / IE / Chrome etc.), as prior explained in this article.

The remedy here is to use 3rd party tools such as httptunnel or corkscrew, which are able to TUNNEL SSH TRAFFIC VIA THE CORPORATE PROXY SERVER and access your own resource out of the DMZ.

Both httptunnel and corkscrew are installable on most Linux distros, or for Windows users via Cygwin (also for those who use MobaXterm).

Just to give you a better idea of what corkscrew and (hts) httptunnel do, here are the Debian package descriptions.

# apt-cache show corkscrew
" corkscrew is a simple tool to tunnel TCP connections through an HTTP
 proxy supporting the CONNECT method. It reads stdin and writes to
 stdout during the connection, just like netcat.
 .
 It can be used for instance to connect to an SSH server running on
 a remote 443 port through a strict HTTPS proxy.
"

 

# apt-cache show httptunnel|grep -i description -A 7
Description-en: Tunnels a data stream in HTTP requests
 Creates a bidirectional virtual data stream tunnelled in
 HTTP requests. The requests can be sent via a HTTP proxy
 if so desired.
 .
 This can be useful for users behind restrictive firewalls. If WWW
 access is allowed through a HTTP proxy, it's possible to use
 httptunnel and, say, telnet or PPP to connect to a computer outside the firewall.

Description-md5: ed96b7d53407ae311a6c5ef2eb229c3f
Homepage: http://www.nocrew.org/software/httptunnel.html
Tag: implemented-in::c, interface::commandline, interface::daemon,
 network::client, network::server, network::vpn, protocol::http,
 role::program, suite::gnu, use::routing
Section: net
Priority: optional
Filename: pool/main/h/httptunnel/httptunnel_3.3+dfsg-4_amd64.deb

Windows cygwin users can install the tools with:
 

apt-cyg install --yes corkscrew httptunnel


Linux users respectively with:

apt-get install --yes corkscrew httptunnel

or 

yum install -y corkscrew httptunnel

 

You will then need to have the following configuration in your user home directory $HOME/.ssh/config file
 

Host host-addrs-of-remote-home-ssh-server.com
ProxyCommand /usr/bin/corkscrew your-corporate-firewall-proxy-url 8080 %h %p
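
With that ProxyCommand in place, a plain ssh invocation to the host is transparently tunnelled through the corporate proxy (assuming corkscrew is installed at /usr/bin/corkscrew as above), and the same Host entry is honoured by scp and sftp as well, since they run over ssh:

ssh UserName@host-addrs-of-remote-home-ssh-server.com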

 

howto-transfer-ssh-traffic-over-proxy

Picture Copyright by Daniel Haxx

The best picture of how ssh traffic is proxied is the one found on Daniel Haxx's website, a great quick tutorial which originally helped me get the idea of how corkscrew works in proxying traffic. I warmly recommend you take a quick look at his SSH Through or over Proxy article.

Host-addrs-of-remote-home-ssh-server.com could also be an IP, if you don't have your own domain name, in case of using some cheap VPS Linux server with SSH; alternatively,
if you don't want to spend money on buying a domain for the SSH server (assuming you don't have one yet), you can use Dyn DNS or NoIP.

Another thing is to set up the proper http_proxy / https_proxy / ftp_proxy variable exports in $HOME/.bashrc; in my setup I have the following:
 

export ftp_proxy="http://your-corporate-firewall-proxy-url:8080"
export https_proxy="https://your-corporate-firewall-proxy-url:8080"
export http_proxy="http://your-corporate-firewall-proxy-url:8080"
export HTTP_PROXY="http://your-corporate-firewall-proxy-url:8080"
export HTTPS_PROXY="http://your-corporate-firewall-proxy-url:8080"
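
To check that the exported variables are actually picked up, curl honours http_proxy / https_proxy out of the box (a trivial test sketch – any public URL will do):

curl -sI https://www.google.com | head -n 1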


 

6. How to Transfer Files / Data via SSH Protocol through  Proxy with SCP and SFTP


The next logical question is how to transfer your own personal encrypted files (that contain no corporate sensitive information) between your work laptop and your home brew Linux ssh server or cheap VPS.

It took me quite a lot of try-outs until I finally got how the Secure Copy (scp) command can be used to transfer files between my work computer and my home brew server using a JumpHost; here is how:
 

scp -o 'ProxyJump Username@Jump-Host-or-IP.com:443' ~/file-or-files-to-copy* Username@home-ssh-server.com:/path/where/to/copy/files


I love using the sftp (Secure FTP) command line Linux client to copy files and rarely use scp, so I had a lot of try-outs to connect interactively via the corporate proxy server over a Jump-Host:443 to my destination home machine.

 

I've tried using netcat, as pointed out in many articles online, to route my sftp traffic via my localhost bound SSH SOCKS proxy on :3128 together with netcat, as shown in the prior example in this article, using the following line:
 

sftp -oProxyCommand='/bin/nc -X connect -x 127.0.0.1:3128 %h %p' Username@home-ssh-server.com 22

 

I also tried a proxy connect like this:

 

sftp -o ProxyCommand="proxy-connect -h localhost -p 3128 %h %p" Username@home-ssh-server.com

 

Moreover, I tried to use the ssh command's (-s) argument capability to invoke the SSH protocol subsystem feature, which is used to facilitate use of the SSH secure transport for other applications:
 

ssh -v -J hipo@Jump-Host:443 -s sftp root@home-ssh-server.com -v

open failed: administratively prohibited: open failed

 

Finally I decided to give a try to the same option arguments as in scp, and thank God it worked – I can even access my home server interactively via Secure FTP, through the corporate proxy and the SSH Jump Host 🙂

!! THE FINAL WORKING SFTP THROUGH PROXY VIA SSH JUMPHOST !!
 

sftp -o 'ProxyJump Username@Jump-Host-or-IP.com:443' Username@home-ssh-server.com


To save time from typing this long line every time, I've set up the following alias in ~/.bashrc:
 

alias sftphome="sftp -o 'ProxyJump Username@Jump-Host-or-IP.com:443' Username@home-ssh-server.com"

 

Conclusion

Of course using your own proxy via your home brew SSH machine, as well as transferring your data securely from your work PC (notebook) to home, does not completely make you surveillance free, as the corporate Windows installed OS image is perhaps prebundled with its own integrated keylogger, and the Windows domain administrators certainly have access to connect to your PC and run various commands. So this kind of security is just an attempt to make the company have less control and know less about your browsing habits. The best solution, where possible, to secure your privacy and separate your personal space from work space is to use a second computer (if having the ability to work from home) with a KVM switch device and switch between your work PC and home PC via it, or, in some cases where the company allows it, to set up something like a VNC server (TightVNC / RealVNC) on the work PC, leave it running in the office all the time, and connect remotely with vncviewer from your own controlled secured computer.

In this article I've shortly explained a common scenario found in corporate work computer proxy setups, designed to surveil your every move, mentioned a few common programs running by default to protect from viruses and malicious hacking tools, explained how to view your work notebook's configured proxy, shortly mentioned Proxy.pac and hinted how to view the proxy.pac config, as well as gave a few of the reasons why all web traffic is being routed over a central proxy.

That's all folks, Enjoy the Freedom to be less surveilled !

A Concise and Complete Strategy to Earn Microsoft MCSE: Core Infrastructure Certification

Wednesday, August 21st, 2019

microsoft-certification-mcse-infrastructure-azure-mcse-boot-camp-499x330

This article is going to be a bit astray from Linux, but as recently there are so many jobs offered for Windows administrators, I believe it will be useful for sysadmins who are more interested in a Windows sysadmin job. So let's go through some of the essential Microsoft certificates, to give you an idea what kind of certificate you might want in order to enter the world of Windows.


In recent months, Microsoft has been altering its certification program bit by bit. But how does this affect the certification track as a whole? It creates a new breed of Microsoft credentials that are specifically aligned to certain job roles like administrator, solution architect, developer, and functional consultant.

Further, the incorporation of role-based certifications means the phasing out of old certifications tracks like MCSA: Cloud Platform, MCSA: Linux on Azure, MCSE: Mobility and the list continues. All the retired certifications and certification exams are pensioned off to reflect the newest technologies and advancements, which are highly needed by different IT job roles.

But even with the changes, Microsoft hasn’t totally ditched some of their previous certification tracks―simply because these are still significant up to the present time. And one of the limited expert-level Microsoft validations that deserve a mention is, without a doubt, MCSE: Core Infrastructure.

https://www.examsnap.com/microsoft-certification-training.html


microsoft-certified-solutions-master-main-qimg-82c85948f30e27f6eb3f8d5c4eda9915

The Past and the Present Days of MCSE: Core Infrastructure

MCSE: Core Infrastructure is certainly the best way to certify your expertise in managing more complex and modern IT technologies, including data center, system and identity management, storage, virtualization, and networking.

To get you ready, see the functional preparation guide that shows three main steps to earn this MCSE endorsement.

  1. Acquire your MCSA certification

The very first step is to arm yourself with an entry-level credential that declares your foundational understanding of specific IT technologies. This means that you can’t just jump directly to the expert tier without gaining valuable groundwork, which for this case is either the MCSA: Windows Server 2012 or the MCSA: Windows Server 2016. Both these certifications are aimed at giving you a significant footing in specific Microsoft infrastructure in an enterprise setting, to further improve the business worth and abate unnecessary expenses.

  2. Choose your preferred MCSE certification exam

Next step is to pick from given five MCSE certifications exams: 70-744, 70-745, 70-413, 70-414, and 70-537. Though there are five listed options, only four are available since exam 70-537 hasn’t been released up to now.

  • Exam 70-744

Dubbed as the exam for Securing Windows Server 2016, 70-744 tests how well you utilize various technologies and methodologies relating to server hardening environments and virtual and network machines infrastructure.

https://www.microsoft.com/en-us/learning/certification-overview.aspx

Featuring topics such as Active Directory, Enhanced Security Administrative Environment, Local Administrator Password Solution, Threat Detection Solutions, Privileged Access Workstations, and such, the exam serves a remarkable way to fully take a grasp of the security needed in Windows Server 2016.

  • Exam 70-745

If securing Windows Server 2016 does not entice you, there’s another option―exam 70-745, which is implementing a Software-Defined Datacenter. This test is suitable for both analysts and data scientists who’ve got a thing for complex processes and data sets as well as virtual machine manager.

Software-defined networking, software-defined data center, and software-defined storage are three main subjects expounded in this exam. You will learn how to implement, manage, secure, and maintain these various solutions. Accordingly, it’s recommended to have background skills in data structures, programming concepts, R functions, and statistical methods for you to easily take up and pass this exam.

  • Exam 70-413

Next on the list is the test that corroborates your capability in designing and implementing a Windows Server 2012 infrastructure. Exam 70-413 is part one of a two-series test that revolves around key functions of a server environment.

If you pass this exam, this means that you are fully-furnished with abilities in core topics related to Windows Server 2012, including network access services, server virtualization, deployment, and infrastructure. This is because your skills in creating and implementing both logical and physical active directory infrastructures will be put into test.

  • Exam 70-414

70-414 is the second test of the two-part series exam about Windows Server 2012. This means that you have to complete and pass both exams 70-413 and 70-414 to earn your MCSE.

In comparison to the first exam, this refers to a more complicated server infrastructure in a highly virtualized setting. The exam sets the seal in your command in managing and maintaining advanced server infrastructure. Furthermore, you get to mug up your skills in planning and implementing highly available enterprise and server virtualization infrastructures along with designing and executing identity and access solutions.

   3. Start practicing the exam

Once you’ve decided what exam/s you’ll take, you need to start gathering essential exam materials. Start with books and Microsoft exam guides so that you’ll acquire a deeper understanding of each topic. Training courses are other imperative resources you shouldn’t miss. These are relevant in mounting your knowledge―in a more stimulating and less stressful manner. Either in an instructor-led or self-paced format, these training courses are carved to give you a more advanced yet highly engaging type of learning. And luckily, there’s no need for you to look further because Microsoft provides candidates with official and vital training courses for every exam.

And to accompany your exam preparation, get assistance from Examsnap’s series of practice tests. Featuring the most updated test questions with answers, the practice tests offered by Examsnap are not just limited to one but a lot of files per exam. They have all the MCSE required and current exams, which are 70-744, 70-745, 70-413, and 70-414. With the various files on offer, these give you several options to expand your knowledge bank before the exam day. Since the tests are offered in .ete format, you can train them with the help of the ETE Simulator. This will give you the insight of what is waiting for you at the exam. Moreover, you can practice the file unlimited times, track your results, improve them, thus you’ll be confident in your skills and knowledge and escape nervousness.

Conclusion

And when you pass the required exam/s, you’ll be rewarded with the ever-famous MCSE: Core Infrastructure to your profile. More than that highly-distinguished international credential, you are now qualified for various job roles like information security specialist, computer support analyst, IT administrator, architect, and such. So, keep the ball rolling and tighten your preparation stage for you to earn this amazing Microsoft validation.

 

The Non Hand made Image of God “Mandylion” – The Saviour not made by hands icon in Eastern Orthodox Church

Friday, August 16th, 2019

Non-Hand-Made-Image-of-Our_Lord-and-Saviour_Jesus-Christ-Eastern-Orthodox-icon

 

Short History of Non Hand Made Image of God

The question of how the Lord Jesus Christ really looked during his earthly life is a question actively intriguing billions of people that have lived over the centuries since year 33 A.D., in which Jesus Christ rose from the dead as the Gospel testifies. Even for the non Christian religions in the pagan world before the Incarnation of Christ, and for the Old Hebrew Testament, this question has been of interest, although the Abrahamic religions such as the Jewish faith and Islam (as emerged in the 7th century) make it clear that an image of God should not be made. However these monotheistic religions did not go through the revelation given in the Christian faith that God is Three in Faces (Hypostases) but One in Essence – The Holy Trinity. Thus the icon image depicted in Christianity is an image showing us a close copy of the image of God The Son (The second Hypostasis / Face of God) during his Saviour mission in humanity, as he was living and seen by our ancestors while still being in a fleshly earthly body on Earth, before his crucifixion / execution on the Cross of Golgotha in 33 A.D. in Jerusalem.

Hence the veneration of the image of the Lord Jesus Christ is not in reality a breach of any of the 10 Commandments, which say "Thou shalt not make unto thee any graven image" – (Hebrew: לֹא-תַעֲשֶׂה לְךָ פֶסֶל, וְכָל-תְּמוּנָה) – as Jewish, Muslim and other people of the Abrahamic religions claim, but a close visual depiction of how Jesus (The Begotten Son of God before all Ages) looked; there was no other way, as photography was not yet discovered, so Jesus as the Son of God used this miracle to leave us a memory of his physical and spiritual likeness. The Non-Hand Made image of God is just an image, very much like a photo: you see an image of someone, but you cannot know exactly who he is unless you already know him, and the photo is just a remembrance of him. In the same manner the Non-Hand Made image of God is a remembrance of God for those who believed in Him and already know Him with their spirits, e.g. Christians, and a testimony memory of his physical existence in the world 2000 years ago.


The Non Hand made image (icon – a word rooted in the Greek εικών), also known as Acheiropoieta (Byzantine Greek: αχειροποίητα, "made without hand"), is a painted copy, surviving over the centuries, of another image, the so called Image of Edessa, known also under the term Mandylion – a square or rectangle of cloth upon which a miraculous image of the face of Jesus had been imprinted, as Jesus took a simple cloth (used for painting pictures), put it on his face and miraculously imprinted an exact copy of his face.

This imprinted image has been very famous through the last 21 centuries, has been an object of interest to an innumerable number of Christians and scholars, and is known to have made multiple miracles, been used as a shield for cities, and cured the incurably sick king Abgar V (The King of Edessa), who had attempted to find a cure for his terrible skin disease across all the known and best physicians, pharmacists, herbalists, magicians and other healers with various methods, but none helped.
 

King Abgar healed by Jesus's Non Hand Made Image of God Mandylion

 

The most ancient known reference to the Non-Hand-Made story in History has been recorded by Eusebius of Caesarea in his Ecclesiastical History.

It was retold in elaborated form by Ephrem the Syrian in the fifth-century Syriac Doctrine of Addai.
An early version of the Abgar legend exists in the Syriac Doctrine of Addai, an early Christian document from Edessa. The Epistula Abgari is a Greek recension of the letter of correspondence exchanged between Jesus Christ and Abgar V of Edessa, known as the Acts of Thaddaeus. The letters were likely composed in the early 4th century.
The legend became relatively popular in the Middle Ages (In the West), even though it was well known in the East for centuries. The letters were translated from Syriac into the Greek, Armenian, Latin, Coptic and Arabic languages.

The Letters tell us the story of King Abgar, who, being seriously desperate, heard about Jesus, whose fame had been growing as a healer who can heal any disease, because of the already famous multiple miracles done by him: healing people who suffered diseases from birth, healing lepers and giving sight to people blind from birth, and resurrecting dead people, such as Lazarus, who was dead for 4 days until Jesus resurrected him from the grave.

Hearing about these great things done by this new healer, King Abgar, due to his inability to travel long distances, sent his servant, the portrait-painter Ananias, with a letter addressed to Jesus in which he was begging him to come to Edessa and heal him from his leprosy. Ananias was told by prince Abgar that in case Jesus was unable to come to Edessa for some reason, Ananias should paint a picture of Jesus's face and bring it back to the king's palace, as Abgar firmly hoped and believed that merely seeing the image would immediately heal him.

The servant went to search for Jesus and found him preaching the Good news of Salvation being fulfilled in himself; he stayed, kept himself to one side, and tried to paint Jesus's face a number of times, as each time he started painting the face of Jesus, in a very short while the face changed, so he couldn't complete it and had to start again.
After some time Jesus called him, took the cloth (napkin) and placed it on his face, and an image depicting his face was imprinted on the cloth.

The image of Christ on the cloth was brought to the King, and as Abgar kissed the image on the napkin with love and faith, the sickness disappeared and he was almost completely cured; only a small part of his face still bore the trace of the disease. After this glorious miracle King Abgar ordered that the idols on the Entry Gate of the City, which were believed to protect the city from evil, should be removed, and in their place the Non-Hand made image of Jesus Christ was put on top of the city Entry Gate, with the image stuck onto wood, surrounded with a gold frame and ornamented with pearls. Prince Abgar also wrote above the icon on the gateway:
 

"O Christ our God, no-one who hopes in Thee will be put to shame" !

 

, to let everyone know about the great power of the one depicted on the Mandylion cloth, as well as a protection sign for the city from all evil and invaders, as he was convinced the image of Christ had the power not only to heal him but also to protect his city from the numerous barbarian raids that were so common during the 1st century.

In the coming centuries The Non Hand Made Image of God have been said to have cured multiple people from all kind of diseases.

Abgar-with-image-of-Edessa10thcentury-Monastery-Saint_Ekaterina-Sinai

Abgar receiving the Non Hand Made Image of Christ cloth icon – Saint Catherin (Ekaterina) Monastery Sinai 10th century

According to the Eastern Tradition, Jesus told the servant to convey a message to the King that this image of Him would cure most of the king's disease, but the complete healing would be accomplished by Jesus's disciple Thaddaeus (who was one of the 70 Apostles). This letter was brought to King Abgar too. The Orthodox Tradition held in the Church is that the words of Jesus were fulfilled: the biggest part of King Abgar's leprosy was healed, and only a small part on his face remained; this too was healed by the disciple Thaddaeus, who, after the Crucifixion and Resurrection of Christ on the 3rd day after his execution, went to the King, preached the Good news of the Gospel that Christ has resurrected, asked the king to repent, and shortly baptized him in the new Christian faith In the Name of the Father, The Son and The Holy Spirit. Immediately after Baptism King Abgar became completely healed. More on the story of the Image Not Made By Hands can be read here.

 

08.21_saint_ap_Thaddeus_rex_Abgar_ikona_Sinai_Sv_Ekaterina
Saint Apostle Thaddeus arriving to King Abgar V of Eddessa and healing – icon X-th Saint Catherin's Monastery Sinai

The Jesus Image of Edessa – a Syrian city in upper Mesopotamia on the banks of the Euphrates (today the city of Urfa in Turkey) – has had painted copies made on a number of occasions through history, before the original cloth image disappeared mysteriously in the 13th century: it was stolen by the Roman Catholic Crusaders during the IV-th Crusade, whose goal was the liberation of the Holy Lands (Jerusalem and the other biblical cities) from the Muslims, in year 1204. After the hungry and mad crusaders destroyed (sacked) Constantinople, they took the Mandylion, which reappeared as a relic in King Louis IX of France's Sainte-Chapelle Church in Paris and is said to have been there until its total disappearance in the French Revolution (during which many holy relics were stolen by revolutionists, many of whom are known to have been Masons, and many relics were destroyed).

This Non-Hand Made image of Jesus Christ, known in Church Slavonic under the term (Убрус / Ubrus), is considered the first icon ("image") in the Eastern Orthodox Church (Bulgarian, Russian, Greek, Serbian, Macedonian, Moldovan, Syrian, Antiochian, Jerusalem's Churches) and the rest of the Oriental Orthodox Churches (Armenian, Copts, Syriacs, Jacobites etc.), according to Church Tradition.

Christos_Acheiropoietos-Non-hand-made-image-of-Jesus-Christ-given-to-King-Abgar

Christos Acheiropoietos (Non-hand-made-image)  of Jesus Christ given to King Abgar of Edessa
(Novgorodian Russian Icon circa 1100)

 

The Eastern Orthodox Church observes a feast for this icon on August 16 (August 29 in N.S.), which commemorates its translation from Edessa to Constantinople and is being observed with a Church service Holy Liturgy.

Ancha_Icon_of_the_Savior_(Art_Museum_of_Georgia,_Tbilisi)

Ancha Icon of the Savior ანჩისხატი (Traditionally considered to have been the tile Kiramidion a tile copy of the Mandylion) – "holy tile" imprinted with the face of Jesus Christ miraculously transferred by contact with the Image of Edessa (Mandylion).

Ancha icon is dated to the 6th-7th century, it was covered with a chased silver riza and partly repainted in the following centuries. The icon derives its name from the Georgian monastery of Ancha in what is now Turkey, whence it was brought to Tbilisi in 1664.

Holy Face of Genoa

The Mandylion image is also a point of high interest in the Roman Catholic Church, as well as for some more conservative Protestant denominations (though perhaps most of them reject the story as a pious myth).
In the Western tradition, among the Ubrus's copies there is the famous Holy Face of Genoa (a donation by the Byzantine Emperor John V Palaiologos to the Doge of Genoa Leonardo Montaldo in the 14th century); it was carefully studied by Colette Dufour Bozzo, who found that the image is on a cloth laid on a wooden board (the wooden board is a typical plane on which icons are painted) and dated it also to the 14th century.

Jesus_Christ_Face_Holy-face-of-Genoa-_Genes

Holy face of Genoa 14th Century with face made more visible

 

Holy Face of San Silvestro

 

The Holy Face of San Silvestro (Saint Sylvester) image was kept in Rome's Church San Silvestro in Capite and is now
kept in the Matilda chapel of the Vatican Palace. Its earliest known existence is from 1517 (in that year the nuns were forbidden to exhibit it), as people have often mixed it up with the Veronica Veil (another famous Jesus cloth face imprint, often called Volto Santo – holy face).

The_San_Silvestro-image-imprint-of-Jesus-non-hand-made-Mandylion_visage

 

Veronica Veil

 

The Western Roman Catholic tradition recounts that Saint Veronica from Jerusalem encountered Jesus along the Via Dolorosa on the way to Calvary. When she paused to wipe the blood and sweat (Latin sudor) off his face with her veil, his image was imprinted on the cloth. The Veronica Veil tradition however is legendary and not accepted in the Eastern Orthodox Churches, as this tradition is quite new, first occurring somewhere in the middle ages, around the 16th century, and was not known at all in Christendom prior to that.

The act of Saint Veronica wiping the face of Jesus with her veil is celebrated in the sixth Station of the Cross in many Anglican, Catholic, Lutheran rites, Methodist and Western Orthodox churches.

 

Image of the Saviour Other traditional Orthodox Copy icons

It is notable to mention a few very famous Russian iconography interpretations; many of the interpretations are derived by Russian iconography from the sample Novgorodian Russian Icon, which itself has been an exact copy the Russian and Greek monks made of the original Non Hand Made cloth image that was hanging on the entry gate of Constantinople, the capital of the Byzantine Eastern Empire.

Saviour-Wet-Beard-Spas_Mokraya_Brada_ikona-Russian-icon


Спас Мокрая Брада (Spas Mokraja Brada – The Saviour Wet Beard) Russian Orthodox Icon 16th Century

Even today, many Eastern Orthodox Churches have a copy interpretation of the icon hanging on top of the Dveri (the inner part Altar Church Doors) – see the below picture for reference:

Ubrus-Mandylion-Non-Hand-Made-icon-Hanging-upper-to-Church-Alter-Walls-Dveri-Eastern-Orthodox-Church-Vlaherna-Tsarigrad-Istanbul-ex-Konstantinopol

Non-Hand Made Image copy of the Saviour Jesus Christ hanging on top of the Church Altar Walls (Dveri), above the priest's head

Image-of-the-Saviour-traditional-Orthodox-Iconography-Simon_Ushakov_Nerukotvorniy

 

The Mandylion by Simeon Ushakov – year 1658.

As well as the somewhat newer but very beautiful Russian iconographic interpretation of the Image of the Saviour from Harkov.

Harkovskij-Spas-Saviour-Orthodox-icon-18th-century-Harkov

Harkovskij Spas icon Harkov Saviour from 18th century Russian Orthodox Icon

How to make Samba smbfs / cifs mount share location with user / pass credentials authenticate via file stored credentials

Friday, July 19th, 2019

how-to-use-username-and-password-to-authenticate-to-samba-share-server-or-linux-share-server-linux-samba-logo
That's pretty trivial, and perhaps if you have had to manage a samba server or cifs on a Linux host you already know it, but for beginners it might be interesting.

So in this short article I will explain how to configure smbfs / cifs authentication from Linux host A (the client) to Linux host B (the server) running the smbd and nmbd samba server (which is the smbfs / cifs share server), by using an external authentication file, either for the mount command or, if /etc/fstab is used, to automatically authenticate a preconfigured samba share mount via /etc/fstab.

Before you start to do anything with samba on the Linux host A client machine, you will need as a minimum to have installed cifs-utils or smbfs (assuming you're on Debian Linux); you can check with dpkg -l, and if missing, install it via:

 

 

apt-get install cifs-utils

 

Or on older systems or for smbfs support

 

apt-get install smbfs

 

The general mount smbfs share command without a specified external credentials file would look like so:

 

mount //mynetworksharename/ /shares/data -o username=myusername,password=mypassword


So how to use an external auth file to prevent samba share users and passwords from being stored in the root user's shell history all the time?

To do so is pretty straightforward: all you need to do is create a file with the user / pass credentials variables defined, let's say a file called .smbcredentials or .cifs under some directory, for instance /root/.smbcredentials.

One note here: many people prefer to store the password under /root for security reasons, as the root directory is usually readable only by the administrator, which prevents a non-privileged user from reading the user / pass that are stored in plain text.

.smbcredentials is described in the mount.cifs man page; here is what it says about the credentials variable understood by the mount / mount.cifs command, and the file syntax:
 

 

credentials=filename
    specifies a file that contains a username and/or password. The format of the file is:

         username=value
         password=value
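
So a minimal way to create the credentials file (a sketch with obviously hypothetical values), together with the restrictive permissions discussed below:

# cat > /root/.smbcredentials << 'EOF'
username=myusername
password=mypassword
EOF
# chmod 0600 /root/.smbcredentials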


For CIFS (Common Internet File System), which is a newer implementation of the old Windows Share (SMB) protocol available in newer Windows XP / 7 / 10 machines, to do the cifs mount manually:
 

mount -v -t cifs //WINSHARESERVER/topsecretfiles /mnt/network/ -o credentials=/mnt/creds-file

or use 

 

mount.cifs //WINSSHARE/topsecretfiles /mnt/network/ -o credentials=/root/.creds-file

 

For the old smbfs protocol, for backward compatibility with older Win 2000 or Winblows XP server PCs configured to also access the Linux samba mount:

mount -t smbfs //WINHARESERVER/topsecretfiles /mnt/network/ -o credentials=/mnt/.smbcredentials


Once you have the defined .smbcredentials file, be sure to also protect it with properly set permissions like 0600 (rw), readable only by the root user. 

chmod 0600 /root/.smbcredentials

Note that in that example .smbcredentials is set to be a hidden file on purpose; as this is a hidden file, it will be slightly less visible if an intruder breaks into the server (an example of security through obscurity).

 

Next let's see how to mount the Windows Samba Share permanently with a predefined user / pass server login

For many non secured Windows shares one can use an /etc/fstab line definition as simple as:
 

//server-share-name/sharename  /mnt/shares/sharename  cifs  guest,uid=1000,iocharset=utf8  0  0


For password protected Win Share mounts however, the simplest way to do it is via an /etc/fstab line added like so:

//servername/sharename  /mnt/shares/sharename  cifs  username=msusername,password=mspassword,iocharset=utf8,sec=ntlm  0  0


Note that sec=ntlm is optional, the remote samba server or Windows Share server version has to support this kind of authentication, and in some cases you could safely remove sec=ntlm – use it only when you know what you're doing. iocharset is good to have properly defined so that Cyrillic (e.g. Russian / Bulgarian), Chinese, Indian and other exotic languages and encodings are supported and properly shown on the mounted share.

A good permission set would be:

chmod 600 ~/.smbcredentials

To use the external /root/.smbcredentials password file, its content should look like so:

# cat /root/.smbcredentials

username=msusername
password=mssecretpassword

 

 

Finally the /etc/fstab record using the /root/.smbcredentials file should look like so:
 

//share-server-name/sharename /mnt/shares/windowsshare cifs credentials=/home/ubuntuusername/.smbcredentials,iocharset=utf8,sec=ntlm 0 0


Note: you should already have /mnt/shares/windowsshare created on server B (the mount client) with:

mkdir -p /mnt/shares/windowsshare


To mount the /etc/fstab defined filesystem right away (it will also be auto mounted on the next server boot), do:

mount /mnt/shares/windowsshare


or completely mount / remount all /etc/fstab defined filesystems with the common

mount -a


(but be careful here, as this might cause you trouble if other NFS or whatever FS is already mounted and being read by clients):

Also, the remote Samba Share server (mount location) should be reachable with the ping command and traceroute, and the remote server ports 139, 445 etc. should be up, running, open and connectable from server B.
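
A couple of quick reachability checks from the client side (a sketch, assuming a netcat build that supports the -z scan flag):

ping -c 3 share-server-name
nc -vz share-server-name 445
nc -vz share-server-name 139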

If you face some issues when trying to mount the remote share with mount -t smbfs / mount.cifs, then you can use smbclient with a debug option to find out more about the connectivity / authentication issue, by using the smb share server IP address instead of the hostname and, let's say, a debug level of 3, like so:

smbclient -d3 -L //10.5.8.118/Files -A /root/.smbcredentials

[0] smbclient -d3 -L //10.2.3.111/Files -A /home/acteam/.smbcredentials
lp_load_ex: refreshing parameters
Initialising global parameters
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[global]"
WARNING: The "syslog" option is deprecated
added interface eth0 ip=10.2.3.127 bcast=10.2.3.255 netmask=255.255.255.0
Client started (version 4.3.11-Ubuntu).
Connecting to 10.2.3.111 at port 445
Doing spnego session setup (blob length=120)
got OID=1.3.6.1.4.1.311.2.2.30
got OID=1.2.840.48018.1.2.2
got OID=1.2.840.113554.1.2.2
got OID=1.2.840.113554.1.2.2.3
got OID=1.3.6.1.4.1.311.2.2.10
got principal=not_defined_in_RFC4178@please_ignore
GENSEC backend 'gssapi_spnego' registered
GENSEC backend 'gssapi_krb5' registered
GENSEC backend 'gssapi_krb5_sasl' registered
GENSEC backend 'spnego' registered
GENSEC backend 'schannel' registered
GENSEC backend 'naclrpc_as_system' registered
GENSEC backend 'sasl-EXTERNAL' registered
GENSEC backend 'ntlmssp' registered
GENSEC backend 'ntlmssp_resume_ccache' registered
GENSEC backend 'http_basic' registered
GENSEC backend 'http_ntlm' registered
GENSEC backend 'krb5' registered
GENSEC backend 'fake_gssapi_krb5' registered
Got challenge flags:
Got NTLMSSP neg_flags=0x62898215
NTLMSSP: Set final flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal – Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal – Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
Domain=[TMGRID] OS=[Windows Server 2012 R2 Standard 9600] Server=[Windows Server 2012 R2 Standard 6.3]

 

        Sharename       Type      Comment
        ———       —-      ——-
        ADMIN$          Disk      Remote Admin
        C$              Disk      Default share
        Files           Disk
        IPC$            IPC       Remote IPC
        MappedDrive     Disk
Connecting to 10.2.3.111 at port 139
Connecting to 10.2.3.111 at port 139
Connection to 10.2.3.111 failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)
NetBIOS over TCP disabled — no workgroup available

 

Sum it up

Lets Summarize a bit, here I described how to mount smbfs and cifs mount shares with mount command, how to define the auto mount on server boot via /etc/fstab, how to mount manually /etc/fstab defined mount and what should be the syntax of .smbcredentials user / pass file and also pointed how to debug problems on samba / windows server location share mounts with smbclient command.