Posts Tagged ‘access’

Configure your own minidlna Linux media streaming server to access data from your Smart TV

Friday, February 18th, 2022

dlna-media-minidlna-server-linux-logo

If you happen to buy, already own, or just have to install a Smart TV connected over a LAN network to a Linux based custom built NAS (Network Attached Storage) server, you can benefit from the Smart TV to share and watch the Pictures, Music and Video files stored on the NAS disk using the Media Server protocol.

You have certainly already come across Media Servers many times in stores and mall buildings, because virtually any recurring advertisement, movie projected on the TVs, kids entertainment clip, floor and building room location schedule or promotion timeline has been streamed via the Media Server protocol for many years now. Thus having a brief idea about the existence of the Media Server protocol is fundamental knowledge for sysadmins and programmers.

Briefly about the DLNA UPnP Media Streaming Protocol

Assuming that your Smart TV is already connected to your Wireless Router's 2.4GHz or 5GHz WiFi, one would think that the easiest way to share the files with the Smart TV is something like a simple Samba Linux server via the smb:// / cifs:// protocols, or the good old NFS Server. However, most Samsung Smart TVs and many others in year 2022 do not have embedded support for the Samba SMB / CIFS protocol, but instead support DLNA (Digital Living Network Alliance) streaming. DLNA is part of the UPnP (Universal Plug and Play) protocol family; UPnP is also known to those using and familiar with the Windows Operating Systems realm simply as UPnP AV Media server or Windows Media server.
Windows Media server, for those who have never heard of or used it,
allows you to build playlists with Video and Audio media files, which can then be played remotely over the local LAN or even over a long distance TCP / IP connected Internet network.
 

1. Set up and stream data via Media server on a Windows PC / notebook with the integrated Windows Media server

Windows Media server on Windows 7, 10 and 11 is relatively easy to configure via:

Network and Sharing Center -> Media Streaming Options -> Turn on Media Streaming 


Then you have to define the name of the Media Library, configure whether the Media server should be visible on the Local Network to other connected devices, and Allow or Block access from the other devices present on the network.


2. Using a more advanced Media Server to get around the limitations of the DLNA set of supported file codecs
 

The Windows default embedded DLNA server is the easiest and fastest one to set up, but it’s not necessarily the best option.
Due to the way DLNA works, you can only stream the types of media codecs supported by the server. If you have other types of media not supported and defined by default by the DLNA Windows server, it just won't work.

Thankfully, other DLNA servers have been developed that improve on this by offering real-time transcoding.
If you try to play an unsupported file, they'll transcode it on-the-fly, streaming the video in a supported format to your DLNA device.
Just to name a few of the DLNA Media Streaming servers that have support for a larger set of MPG Video, MP3 / MP4 and other Audio format encodings,
you can try Plex or Universal Media Server, both of which are free to use under a freeware license and have versions for Linux and Mac OS.


Universal_media_server-windows-screenshot-stream-media-data-on-network

 

3. Setting up a free as in freedom DLNA server MiniDLNA (ReadyMedia) on GNU / Linux


ReadyMedia (formerly known as MiniDLNA) is a simple media server software, with the aim of being fully compliant with DLNA/UPnP-AV clients. It was originally developed by a NETGEAR employee for the ReadyNAS product line.

The minidlna daemon serves media files (music, pictures, and video) to clients on a network. There are multiple Linux media client applications you can use to test or scan your network for existing Media servers; perhaps the most famous ones are totem (the GNOME media player) and Kodi.
The devices that can be used with minidlna include portable media players (iPod), Smartphones, Televisions, Tablets, and gaming systems (such as PS3 and Xbox 360) etc.
 

ReadyMedia is simple and lightweight; the downside of it is that it does not have a web interface for administration and must be configured by editing a text file. But for simple Video streaming it does a great job in most cases.


3.1 Install the minidlna software package 

Minidlna is available out of the box on most linux distributions (Fedora / CentOS / Debian / Ubuntu etc.) as of year 2022.

  • Install on Debian Linux (Deb based distro)

media-server:~# apt install minidlna --yes

  • Install on Fedora / CentOS (other RPM based distro)

media-server:~# yum install -y minidlna


3.2 Configure minidlna

– /etc/minidlna.conf – main config file
Open it with a text editor and set the user= , media_dir= , port= , friendly_name= and network_interface= variables as a minimum.
To make minidlnad follow symlinks to external file locations, also set wide_links=yes

media-server:~# vim /etc/minidlna.conf

#user=minidlna
user=root
media_dir=/var/www/owncloud/data
network_interface=eth0,eth1

# Port number for HTTP traffic (descriptions, SOAP, media transfer).
# This option is mandatory (or it must be specified on the command-line using
# "-p").
port=8200
# Name that the DLNA server presents to clients.
# Defaults to "hostname: username".
#friendly_name=
friendly_name=DLNAServer Linux
# set this to yes to allow symlinks that point outside user-defined media_dirs.
wide_links=yes
# Automatic discovery of new files in the media_dir directory.
#inotify=yes

Keep in mind that it is supported to provide separate media_dir entries for different USB / External Hard Drive or SD Card sources, separated by content type, be it Video, Audio or Pictures, short-named in the config as (V,A,P).

media_dir=P,/media/usb/photos
media_dir=V,/media/external-disk/videos
media_dir=A,/media/sd-card/music

You might want to disable / enable inotify depending on your liking; if you don't plan to add new files to the NAS automatically and don't care whether they get indexed and streamed by the Media server, you can disable it with inotify=no, otherwise keep it on.

– /etc/default/minidlna – additional startup config to set minidlnad (daemon) options, such as running it as the admin superuser root:root
(usually it is safe to leave it empty and set user=root, where needed, straight from /etc/minidlna.conf).
That's all, now go on and launch minidlna and enable it to automatically start on Linux boot.

media-server:~# systemctl start minidlna
media-server:~# systemctl enable minidlna
media-server:~# systemctl status minidlna

 

3.3 Rebuild the minidlna database of indexed data files

If you need to re-generate minidlna's database,
first stop the minidlna server with the
 

media-server:~# systemctl stop minidlna


 command, then issue the following command (both commands should be run as root):

media-server:~# minidlna -R

Since this command might stay in the background and keep the minidlna server running with incorrect flags, after a minute or two kill the minidlna process and relaunch the server via systemctl.

media-server:~#  killall -9 minidlna
media-server:~#  systemctl start minidlna

 

3.4 Permission Issues / Scanning issues

If you plan to place files in the /home directory, you'd better have a separate partition or a folder *outside* your "home" directory devoted to your media. The default user minidlna runs with is minidlna; this could prevent some files owned by root or other users from being read. So either run the minidlna daemon as root or as another user that has read access to all media files.
If the service runs as root:root and you are still getting some scanning issues, check the permissions on your files and remove special characters from file names.
 

media-server:~# tail -10 /var/log/minidlna/minidlna.log 
[2022/02/17 22:51:36] scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/Videos/Family-Videos/FILE006.MPG
[2022/02/17 22:52:08] scanner.c:819: warn: Scanning /var/www/owncloud/data finished (10637 files)!
[2022/02/17 22:52:08] playlist.c:135: warn: Parsing playlists…
[2022/02/17 22:52:08] playlist.c:269: warn: Finished parsing playlists.
minidlna.c:1126: warn: Starting MiniDLNA version 1.3.0.
minidlna.c:1186: warn: HTTP listening on port 8200
scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/admin/files/origin/External SD card/media/Viber Images/IMG-4477de7b1eee273d5e6ae25236c5c223-V.jpg
scanner.c:489: warn: Unsuccessful getting details for /var/www/owncloud/data/Videos/Family-Video/FILE006.MPG
playlist.c:135: warn: Parsing playlists…
playlist.c:269: warn: Finished parsing playlists.

 

3.5. Fix minidlna Inotify errors

In /etc/sysctl.conf 

Add:

fs.inotify.max_user_watches=65536

on a blank line at the end of the file and then apply it with

media-server:~# sysctl -p
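If you want to double check that the new limit is in effect, you can read the parameter back (an optional sanity check):

media-server:~# sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 65536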

Debugging minidlna problems, index errors, warnings etc

minidlna writes by default to /var/log/minidlna/minidlna.log; inspect the log closely and most of the time you will find what is wrong.
Note that some files might not get indexed because minidlna doesn't support unusual file codecs such as SWF encoding; if you have some important files to stream that are not indexed by minidlna, then install and try one of the more sophisticated free software Media Servers for Linux:

plex-media-streaming-server-screenshot

Note that, from my quick research, MediaTomb seems to be the preferred feature-rich Open Source Linux Media Server of choice for most people.

mediatomb-linux-media-streaming-server-picture.jpg.webp
 

 

4. Test that the minidlna Linux server works, and get information about other DLNA Servers on the network

media-server:~# lynx -dump  http://127.0.0.1:8200
MiniDLNA status

  Media library

   Audio files 0
   Video files 455
   Image files 10182

  Connected clients

   ID Type                   IP Address    HW Address        Connections
   0  Samsung Series [CDEFJ] 192.168.1.11  7C:0A:3D:88:A6:FA 0
   1  Generic DLNA 1.5       192.168.0.241 00:16:4E:1D:48:05 0
   2  Generic DLNA 1.5       192.168.1.18  00:16:3F:0D:45:05 0
   3  Unknown                127.0.0.1     FF:FF:FF:FF:FF:FF 0

   -1 connections currently open
 

Note that there are -1 connections (no active connections) currently open to the server.
The 2 Generic DLNA 1.5 IPs are other DLNA servers provided by OpenXEN hosted Windows 7 Virtual machines, which are also broadcasting their existence on the network. The Samsung Series [CDEFJ] is the DLNA client found on the Samsung TV, used to detect and stream data from the just configured Linux dlna server.

As you can see, DLNA Protocol enabled devices on a network are quite easy to access; querying localhost on port 8200 dumps what minidlna knows, and the other connecting IPs should not be able to receive this info. But since minidlna has no special security layer for access control, and the only way to restrict it is to filter port 8200, it is a very good idea to put a good iptables firewall on the machine that allows only the devices that should have access to the data.
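A minimal iptables sketch of such a restriction might look like the one below; the 192.168.1.11 address (the TV) and the eth0 interface are assumptions you should adapt to your own LAN. Note that DLNA discovery (SSDP) also uses UDP port 1900, so it is restricted in the same way:

media-server:~# iptables -A INPUT -i eth0 -p tcp --dport 8200 -s 192.168.1.11 -j ACCEPT
media-server:~# iptables -A INPUT -i eth0 -p tcp --dport 8200 -j DROP
media-server:~# iptables -A INPUT -i eth0 -p udp --dport 1900 -s 192.168.1.11 -j ACCEPT
media-server:~# iptables -A INPUT -i eth0 -p udp --dport 1900 -j DROP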

Furthermore, if you happen to need to access the Media files on Linux from a GUI, you might use some client such as the above mentioned totem, VLC, or, if you need something more feature rich, the Java based eezUPnP.

eeZUPnP-screenshot-java-client-for-media-server

That's all folks !
Enjoy your media on the TV 🙂

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

If you haven't tried it yet, Redhat, CentOS and other RPM based Linux operating systems that use the anaconda installer generate a kickstart file under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg} immediately after the OS installation completes. Using this Kickstart file template you can automate the installation of Redhat with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without need for any intervention from the user. This is especially useful when deploying a Redhat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general it is pretty useful if you're into the field of so-called "DevOps" system administration and you need to provision a certain set of OS-es to a multitude of physical servers, or to easily create or recreate virtual machines with a certain set of configuration.
 

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the Hypervisor host, which will hold the future created virtual machines, is to create the location where they will be stored:

[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view what is the situation with Logical Volumes and  VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:0

 

  — Logical volume —
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time main.hostname.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have that amount of space available on a SAS / SSD Hard Drive physically connected to the Hypervisor Host.

To keep the Virtual Machines storage location directory permanently mounted, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab

 

2. Install the following set of RPM packages on the Hypervisor Hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-install virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd

4. Configure network bridging br0 interface on Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 you need to include:

NM_CONTROLLED=no

Next use nmcli, the Redhat network configurator, to create the bridge (you could use the ip command instead), but since the tool is the Redhat way to do it, let's do it their way ..

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
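To quickly verify that the bridge is up and carries the expected address before proceeding, you can check it with (an optional verification step):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# ip addr show br0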

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a working fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa) .

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda
# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS
# Use text mode install
text
#graphical
# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8
# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted
# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain
# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
# skipx
skipx
# Firewall configuration
firewall --disabled
# System timezone
timezone Europe/Berlin
# Clear the Master Boot Record
##zerombr
# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos     --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos       --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream    --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS
# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug
# reboot automatically
reboot
###
%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

# post-install shell commands belong in a %post section, not inside %packages
%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improving overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf
# Disable kdump
systemctl disable kdump.service
%end

An important note to make here is the encrypted root password string in the (rootpw) line; this string can be generated with the openssl or mkpasswd commands:

Method 1: use the openssl cmd to generate an (md5, sha256, sha512) encrypted password string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 password, -5 a SHA256 encryption and -6 SHA512 encrypted string (logically recommended for better security)

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
Theoretically there is also a kickstart file generator web interface on Redhat's site here, however I have never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


To roll out the new preconfigured VM based on the above ks template file, use a one liner command line like the one below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For some clarity and better automation, in case you plan to repeat the VM creation, you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram="$RAM" --vcpus="$CPUS" --location="$ISO_LOCATION" --disk path="$VM_IMG_FILE_LOC",bus=virtio,size="$VM_IMG_SIZE" --graphics none --initrd-inject=/root/"$KS_FILE" --extra-args "console=ttyS0 ks=file:/$KS_FILE"


A copy of virt-install.sh script can be downloaded here

Wait for the installation to finish; it will be shown on the console, and if the installation goes smoothly you should get a login prompt. Use the password generated with the openssl tool and test the login, then disconnect from the machine by pressing CTRL + ] and try to log in again via the VM console with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
—————————————
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing: I recommend you check the official documentation on Kickstart2 from the CentOS official website.

In case you later need to destroy the VM and the respective created Image file, you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine

Don't forget to celebrate the success and give this nice article some credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

Install and configure rkhunter for improved security on PCI DSS Linux / BSD servers with no access to the Internet

Wednesday, November 10th, 2021

install-and-configure-rkhunter-with-tightened-security-variables-rkhunter-logo

rkhunter or Rootkit Hunter scans systems for known and unknown rootkits. The tool is not new and most system administrators that have to maintain reasonably secure servers perhaps already use it in their daily sysadmin tasks.

It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, suspicious strings in kernel modules, common backdoors, sniffers and exploits, as well as other special tests, mostly for Linux and FreeBSD, though ports for other UNIX operating systems like Solaris etc. are perhaps available. rkhunter is notable due to its inclusion in popular mainstream FOSS operating systems (CentOS, Fedora, Debian, Ubuntu etc.).

Even though rkhunter has not been rapidly improved over the last 3 years (its last official version release was on the 20th of February 2018), it is a good tool that helps to strengthen security even further, and it is often a requirement for Unix server systems that should follow the PCI DSS Standards (Payment Card Industry Data Security Standards).

Configuring rkhunter is pretty straight forward if you don't have too many requirements, but I decided to write this article because there are a few interesting options that you might want to adopt in the configuration to whitelist any files that are reported as Warnings, as well as to set a configuration with stricter security checks than the installation defaults.

1. Install rkhunter .deb / .rpm package depending on the Linux distro or BSD

  • If you have to place it on a Redhat based distro CentOS / Redhat / Fedora

[root@Centos ~]# yum install -y rkhunter

 

  • On Debian based distros the package name is the same; to install it, execute the usual:

root@debian:~# apt install --yes rkhunter

  • On FreeBSD / NetBSD or other BSD forks you can install it from the BSD "World" ports system or install it from a precompiled binary.

freebsd# pkg install rkhunter

One important note to make here is that, to have fully functional alarming from rkhunter, you will have to have a fully configured postfix / exim / qmail or whatever mail server, relaying via an official SMTP, so that the Warning Alarm emails are able to reach your preferred Alarm email address. If you haven't installed and configured postfix for example, you might do:

– On RPM based distros

[root@Centos ~]# yum install postfix


– On Deb based distros

root@debian:~# apt-get install --yes postfix


and, as a minimum, further on configure some functional Email Relay server within /etc/postfix/main.cf
 

# vi /etc/postfix/main.cf
relayhost = [relay.smtp-server.com]

2. Prepare rkhunter.conf initial configuration


Depending on what kind of files are present on the filesystem, it could be that for some reason some standard package binaries have to be excluded from verification, because they possess unusual permissions due to manual sysadmin modification; this is done with the rkhunter variable PKGMGR_NO_VRFY.

If remote logging is configured on the system via something like rsyslog, you will want to specifically tell rkhunter about it, so this check is skipped as a possible security issue, via ALLOW_SYSLOG_REMOTE_LOGGING=1.

In case remote root login via the SSH protocol is disabled via the /etc/ssh/sshd_config
PermitRootLogin no variable, the variable to include is ALLOW_SSH_ROOT_USER=no

It is also useful to increase the hashing check algorithm; from the default SHA256 you might want to change to SHA512, which is done via the rkhunter.conf variable HASH_CMD=SHA512

Triggering of new email Warnings has to be configured so that you receive new mails at a preconfigured mailbox of your choice via the variable
MAIL-ON-WARNING=SetMailAddress

 

# vi /etc/rkhunter.conf

PKGMGR_NO_VRFY=/usr/bin/su

PKGMGR_NO_VRFY=/usr/bin/passwd

ALLOW_SYSLOG_REMOTE_LOGGING=1

# Needed for corosync/pacemaker since update 19.11.2020

ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# enabled ssh root access skip

ALLOW_SSH_ROOT_USER=no

HASH_CMD=SHA512

# Email address to sent alert in case of Warnings

MAIL-ON-WARNING=Your-Customer@Your-Email-Server-Destination-Address.com

MAIL-ON-WARNING=Your-Second-Peronsl-Email-Address@SMTP-Server.com

DISABLE_TESTS=os_specific
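After editing the file it is worth letting rkhunter validate the configuration syntax before running any scan, using its built-in configuration check option:

# rkhunter --config-check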


Optionally if you're using something specific such as corosync / pacemaker High Availability cluster or some specific software that is creating /dev/ files identified as potential Risks you might want to add more rkhunter.conf options like:
 

# Allow PCS/Pacemaker/Corosync
ALLOWDEVFILE=/dev/shm/qb-attrd-*
ALLOWDEVFILE=/dev/shm/qb-cfg-*
ALLOWDEVFILE=/dev/shm/qb-cib_rw-*
ALLOWDEVFILE=/dev/shm/qb-cib_shm-*
ALLOWDEVFILE=/dev/shm/qb-corosync-*
ALLOWDEVFILE=/dev/shm/qb-cpg-*
ALLOWDEVFILE=/dev/shm/qb-lrmd-*
ALLOWDEVFILE=/dev/shm/qb-pengine-*
ALLOWDEVFILE=/dev/shm/qb-quorum-*
ALLOWDEVFILE=/dev/shm/qb-stonith-*
ALLOWDEVFILE=/dev/shm/pulse-shm-*
ALLOWDEVFILE=/dev/md/md-device-map
# Needed for corosync/pacemaker since update 19.11.2020
ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# tomboy creates this one
ALLOWDEVFILE="/dev/shm/mono.*"
# created by libv4l
ALLOWDEVFILE="/dev/shm/libv4l-*"
# created by spice video
ALLOWDEVFILE="/dev/shm/spice.*"
# created by mdadm
ALLOWDEVFILE="/dev/md/autorebuild.pid"
# 389 Directory Server
ALLOWDEVFILE=/dev/shm/sem.slapd-*.stats
# squid proxy
ALLOWDEVFILE=/dev/shm/squid-cf*
# squid ssl cache
ALLOWDEVFILE=/dev/shm/squid-ssl_session_cache.shm
# Allow podman
ALLOWDEVFILE=/dev/shm/libpod*lock*

 

3. Set the proper mirror database URL location to internal network repository

 

Usually the file /var/lib/rkhunter/db/mirrors.dat contains the Internet server address from where the latest version of mirrors.dat can be fetched; below is how it looks by default on Debian 10 Linux.

root@debian:/var/lib/rkhunter/db# cat mirrors.dat 
Version:2007060601
mirror=http://rkhunter.sourceforge.net
mirror=http://rkhunter.sourceforge.net

As you can guess, a machine in a paranoid Demilitarized Zone (DMZ) network with many firewalls has no access to the Internet, neither directly nor via some kind of secure proxy. What you can do then is set up another Mirror server (Apache / Nginx) within the local PCI secured LAN that regularly fetches the database from the official one at http://rkhunter.sourceforge.net/ (by installing rkhunter and running the rkhunter --update command on the Mirror WebServer and copying the data under some directory structure on that locally accessible LAN server; to keep the DB up to date you might want to set up a cron job to periodically copy the latest available rkhunter database towards the http://mirror-url/path-folder/)

# vi /var/lib/rkhunter/db/mirrors.dat

local=http://rkhunter-url-mirror-server-url.com/rkhunter/1.4/
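On the internal mirror host itself, a small cron job can keep the published copy fresh; the sketch below is only an illustration, where the web root /var/www/html/rkhunter/1.4/ and the daily schedule are my assumptions:

# cat /etc/cron.d/rkhunter-db-sync
# refresh rkhunter data files once a day and publish them on the local mirror
30 3 * * * root rkhunter --update && cp -a /var/lib/rkhunter/db/*.dat /var/www/html/rkhunter/1.4/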


A mirror copy of entire db files from Debian 10.8 ( Buster ) ready for download are here.

Update the entire file property db and check for rkhunter db updates

 

# rkhunter --update && rkhunter --propupd

[ Rootkit Hunter version 1.4.6 ]

Checking rkhunter data files…
  Checking file mirrors.dat                                  [ Skipped ]
  Checking file programs_bad.dat                             [ No update ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ No update ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ No update ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ No update ]
  Checking file i18n/tr.utf8                                 [ No update ]
  Checking file i18n/zh                                      [ No update ]
  Checking file i18n/zh.utf8                                 [ No update ]
  Checking file i18n/ja                                      [ No update ]

 

rkhunter-update-propupdate-screenshot-centos-linux


4. Initiate a first time check and see whether something triggers Warnings

# rkhunter --check

rkhunter-checking-for-rootkits-linux-screenshot

As you might have to run rkhunter multiple times, there is an annoying Press Enter prompt between checks. The idea of it is that you're able to inspect what went on, but since inspecting /var/log/rkhunter/rkhunter.log is usually much easier, I prefer to skip it with the --skip-keypress option.

# rkhunter --check --skip-keypress
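Once the reports are clean, you will probably want the check itself to run periodically from cron as well; a minimal sketch, assuming your rkhunter build supports the --cronjob and --report-warnings-only options (most packaged versions do) and picking an arbitrary 04:00 schedule:

# cat /etc/cron.d/rkhunter-check
0 4 * * * root /usr/bin/rkhunter --cronjob --report-warnings-only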


5. Whitelist additional files and devices triggering false warning alerts


You have to keep in mind that many files which are considered to not be officially PCI compliant and are potentially dangerous, such as the lynx browser, curl, telnet etc., might trigger a Warning. After checking them thoroughly with some AntiVirus software such as ClamAV, and comparing their MD5 checksums against a cleanly installed .deb / .rpm package on another RootKit, Virus, Spyware etc. clean system (be it a virtual machine or a Testing / Staging machine), you might want to simply whitelist the files which are incorrectly detected as dangerous for the system security.

Again this can be achieved with

PKGMGR_NO_VRFY=

Some cluster software that prepares its own /dev/ temporary files, such as Pacemaker / Corosync, might also trigger alarms, so you might want to suppress this as well with ALLOWDEVFILE

ALLOWDEVFILE=/dev/shm/qb-*/qb-*


If Warnings are found, check what the issue is and, if necessary, whitelist the files reported due to incorrect permissions in /etc/rkhunter.conf .

rkhunter-warnings-found-screenshot

Re-run the check until all appears clean as in below screenshot.

rkhunter-clean-report-linux-screenshot

Fixing Checking for a system logging configuration file [ Warning ]

If you happen to get a message like the one below, which appears when rkhunter -C is done on legacy CentOS release 6.10 (Final) servers:

[13:45:29] Checking for a system logging configuration file [ Warning ]
[13:45:29] Warning: The 'systemd-journald' daemon is running, but no configuration file can be found.
[13:45:29] Checking if syslog remote logging is allowed [ Allowed ]

To fix it, you will have to disable the SYSLOG_CONFIG_FILE check altogether:
 

SYSLOG_CONFIG_FILE=NONE

How to move / transfer binary files encoded with base64 on Linux with copy / paste of a text ASCII encoded string

Monday, October 25th, 2021

base64-encode-decode-binary-files-to-transfer-between-servers-base64-artistic-logo

If you have to work on servers in protected environments that are accessed via multiple VPNs, jump hosts or Web Citrix, and you have no means to copy binary files to or from your computer because all kinds of FTP / SFTP or other data copy clients are disabled on the remote jump host or CITRIX server side, you might still be looking for a way to copy files between your PC and the remote server side.
Or, for example, if you have 2 or more servers in special Demilitarized Network Zones ( DMZ ) and the machines do not have SFTP / FTP / a WebServer or any other copy protocol service that can be used to move files between the hosts, and you still need to copy some files between the 2 or more machines in a slow but still functional way, then you might not know an old school hackers trick you can employ to complete the copy of files between, let's say, DMZ-ed Server Host A with IP address (192.168.50.5) -> Server Host B (192.168.30.7). The way to complete the binary file copy is to encode the binary on Server Host A, then use the cat command to display the encoded string and copy the whole encoded cat command output to your local PC buffer (from where you access the remote side via SSH, via the CITRIX or jump host). Then paste and decode the encoded content back into a file with a tool such as base64 or uudecode. In this article, I'll show how this is done with base64 and uuencode. The base64 binary is pretty standard on most Linux / Unix OS-es today; on most Linux distributions it is part of the coreutils package.
The main use of base64 encoding is to encode non-text attachment files for Electronic Mail, but for our case it fits perfectly.
Keep in mind that this hack to copy the binary from Machine A to Machine B of course depends on the Copy / Paste buffer being enabled both on the remote jump host or Citrix from where you reach the servers, as well as on your own PC laptop from where you access the remote side.

base64-character-encoding-string-table

Base64 Encoding and Decoding text strings legend

The file copy process to the highly secured PCI host goes like this:
 

1. On Server Host A, take the MD5 checksum of the file with the md5sum command

[root@serverA ~]:# md5sum -b /tmp/inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

 

As you see, one good location to encode the file would be /tmp, as this is a temporary home, or you can alternatively use your HOME dir, but you have to be quite careful not to run out of space wherever you produce it 🙂

 

2. Encode the binary file with base64 encoding

[root@serverA ~]:# base64 -w0 /tmp/inputbinfile-to-encode > /tmp/outputbin-file.base64

The -w0 option is given to disable line wrapping, which is not needed since you will copy paste the data as one single chunk.

base64-encoded-binary-file-text-string-linux-screenshot

Base64 Encoded string chunk with line wrapping

For a complete list of possible accepted arguments check here.

3. Cat the outputbin-file.base64 just generated to display the text encoded file in your SecureCRT / Putty / SuperPutty etc. remote ssh access client

[root@serverA ~]:# cat /tmp/outputbin-file.base64
f0VMRgIBAQAAAAAAAAAAAAMAPgABAAAAMGEAAAAAAABAAAAAAAAAACgXAgAAAAAAAAAAA
EAAOAALAEAAHQAcAAYAAAAEAAA ……………………………………………………………… cTD6lC+ViQfUCPn9bs

 

4. Select the cat-ted string and copy it to your PC Copy / Paste buffer


If the bin file is not few kilobytes, but few megabytes copying the file might be tricky as the string produced from cat command would be really long, so make sure the SSH client you're using is configured to have a large buffer to scroll up enough and be able to select the whole encoded string until the end of the cat command and copy it to Copy / Paste buffer.
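If the base64 string is simply too long to select reliably, one possible workaround (my own addition, not part of the original trick) is to cut the encoded file into fixed-size chunks on Host A, paste them one by one on Host B and glue them back together before decoding:

[root@serverA ~]:# split -b 100000 /tmp/outputbin-file.base64 /tmp/outputbin-part_
[root@serverA ~]:# ls /tmp/outputbin-part_*

Then on Host B, after pasting each part into its own file:

[root@serverB ~]:# cat outputbin-part_* > outputbin-file.base64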

 

5. On Server Host B paste the base64 encoded binary inside a newly created file

Open a new file with a text editor, vim / mc or whatever is available:

[root@serverB ~]:# vi outputbin-file.base64

Some very paranoid Linux / UNIX systems might not even have a normal text editor like 'vi'; if you happen to need to copy files on such a machine, a useful trick is to use a simple cat on the remote side to open a new file descriptor buffer, like this:

[root@serverB ~]:# cat >> outputbin-file.base64 <<'EOF'
Paste the string here
EOF

 

6. Decode the encoded binary with base64 cmd again

[root@serverB ~]:# base64 --decode outputbin-file.base64 > inputbinfile-to-encode

 

7. Set proper file permissions (the same as on Host A)

[root@serverB ~]:#  chmod +x inputbinfile-to-encode

 

8. Check again that the binary file checksum on Host B is identical to the one on Host A

[root@serverB ~]:# md5sum -b inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

As you can see, the md5sums match on both sides, so the file should be OK.

 

9. Encoding and decoding files with uuencode


If you are lucky and have uuencode installed (the sharutils package) on the remote machine, to encode, let's say, an archived set of binary files in .tar.gz format do:

Prepare the archive of all the files you want to copy with tar on Host A:

[root@Machine1 ~]:# tar -czvf archived-binaries-and-configs.tar.gz /bin/whatever /usr/local/bin/htop /usr/local/bin/samhain /etc/hosts

[root@Machine1 ~]:# uuencode archived-binaries-and-configs.tar.gz archived-binaries-and-configs.tar.gz > archived-binaries-and-configs.tar.gz.uu

Cat / Copy / paste the encoded content as usual to a file on Host B:

Then on Machine 2 decode:

[root@Machine2 ~]:# uudecode archived-binaries-and-configs.tar.gz.uu

 

Conclusion


In this short article I've shown you a hack that is often used by script kiddies to copy files between pwn3d machines, a method which however is very precious and useful for sysadmins like me who have to administer paranoid secured servers placed in very hard to access environments.

With the same method you can encode or decode not only binary files but also any standard input/output file content. base64 encoding is also quite useful in bash or perl scripts where you want the script to carry a file in plain text format. Data is encoded and decoded to make the transmission and storing process easier. Always keep in mind that encoding and decoding are not the same as encryption and decryption, as encryption / decryption adds a special security layer to the encoded data. Encoded data can be easily revealed by decoding, so if you need to copy very sensitive data between servers, like SSL certificates or private RSA / DSA keys, this command line utility tool is better not used for the copy.

 

 

Apache: disable certain requests from being logged to the access.log logfile through SetEnvIf and the dontlog httpd variable

Monday, October 11th, 2021

apache-disable-certain-strings-from-logging-to-access-log-logo

Logging to Apache's access.log is mostly useful, as it is a great way to keep a record of who visited your website and to generate periodic statistics with tools such as Webalizer or AWStats, to keep track of your visitors, see the number of new visitors and find the most visited web pages (the pages which attract most of your web visitors). Once the log analysis tool generates its statistics, it can also help you understand better which web spiders visit your website the most (as spiders have predefined IP addresses), which can give you insight into site indexation statistics on Google, Yahoo, Bing etc. Sometimes however, either due to bugs in web spider algorithms or inconsistencies in your website structure, some of the web pages get double visit records inside the logs; this could happen for example if your website includes iframes.

Having web pages accessed once but logged as accessed twice is hence erroneous and unwanted, and though that usually has to be fixed by the website programmers, if such a fix is not easily doable at the moment and the website is running on a critical production system, the double logging of requests can be omitted thanks to a small Apache log hack with the SetEnvIf Apache config directive. Even if there is no double logging happening inside the Apache log, it could be that some cron job, automated monitoring script or tool such as monit is making periodic requests to Apache and is garbling your log statistics results.

In this short article I'll hence explain how to prevent certain request strings from being logged inside /var/log/httpd/access.log.

1. Check SetEnvIf is Loaded on the Webserver
 

On CentOS / RHEL Linux:

# /sbin/apachectl -M |grep -i setenvif
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message
 setenvif_module (shared)


On Debian / Ubuntu Linux:

/usr/sbin/apache2ctl -M |grep -i setenvif
AH00548: NameVirtualHost has no effect and will be removed in the next release /etc/apache2/sites-enabled/000-default.conf:1
 setenvif_module (shared)


2. Using SetEnvIf to omit certain strings from being logged inside apache access.log


SetEnvIf can be used either in a certain domain's VirtualHost configuration (if the website is configured so), or it can be set as a global Apache rule from /etc/httpd/conf/httpd.conf

To use SetEnvIf, you have to place it inside a <Directory …></Directory> configuration block if it has to be enabled only for a certain Apache configured directory; otherwise you have to place it in the global apache config section.

To be able to use SetEnvIf only in certain directories and subdirectories via .htaccess, you will have to define inside <Directory>:

AllowOverride FileInfo


The general syntax to omit a certain repeating Apache string from being logged with SetEnvIf is as follows:
 

SetEnvIf Request_URI "^/WebSiteStructureDirectory/ACCESS_LOG_STRING_TO_REMOVE$" dontlog


General syntax for SetEnvIf is as follows:

SetEnvIf attribute regex env-variable

SetEnvIf attribute regex [!]env-variable[=value] [[!]env-variable[=value]] …

Below are the possible attributes to pass, as described in the mod_setenvif official documentation.
 

  • Host
  • User-Agent
  • Referer
  • Accept-Language
  • Remote_Host: the hostname (if available) of the client making the request.
  • Remote_Addr: the IP address of the client making the request.
  • Server_Addr: the IP address of the server on which the request was received (only with versions later than 2.0.43).
  • Request_Method: the name of the method being used (GET, POST, etc.).
  • Request_Protocol: the name and version of the protocol with which the request was made (e.g., "HTTP/0.9", "HTTP/1.1", etc.).
  • Request_URI: the resource requested on the HTTP request line – generally the portion of the URL following the scheme and host portion without the query string.

Next locate inside the configuration the line:

CustomLog /var/log/apache2/access.log combined


To enable filtering out the marked requests, you'll have to append env=!dontlog to the end of the line.

 

CustomLog /var/log/apache2/access.log combined env=!dontlog
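Put together, a minimal sketch of how the two directives cooperate inside a VirtualHost could look like the one below; the example.com domain and the /healthcheck monitoring URI are hypothetical placeholders:

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com

    # mark requests to the monitoring URI with the dontlog variable
    SetEnvIf Request_URI "^/healthcheck$" dontlog

    # log everything that does not carry the dontlog variable
    CustomLog /var/log/apache2/access.log combined env=!dontlog
</VirtualHost>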

 

You might be using something such as cronolog for log rotation, to prevent your WebServer logs from becoming too big in size and hard to manage; you can append env=!dontlog to it in the same way.

If you haven't used cronolog, it is perhaps best to show you the package description.

server:~# apt-cache show cronolog|grep -i description -A10 -B5
Version: 1.6.2+rpk-2
Installed-Size: 63
Maintainer: Debian QA Group <packages@qa.debian.org>
Architecture: amd64
Depends: perl:any, libc6 (>= 2.4)
Description-en: Logfile rotator for web servers
 A simple program that reads log messages from its input and writes
 them to a set of output files, the names of which are constructed
 using template and the current date and time.  The template uses the
 same format specifiers as the Unix date command (which are the same
 as the standard C strftime library function).
 .
 It intended to be used in conjunction with a Web server, such as
 Apache, to split the access log into daily or monthly logs:
 .
   TransferLog "|/usr/bin/cronolog /var/log/apache/%Y/access.%Y.%m.%d.log"
 .
 A cronosplit script is also included, to convert existing
 traditionally-rotated logs into this rotation format.

Description-md5: 4d5734e5e38bc768dcbffccd2547922f
Homepage: http://www.cronolog.org/
Tag: admin::logging, devel::lang:perl, devel::library, implemented-in::c,
 implemented-in::perl, interface::commandline, role::devel-lib,
 role::program, scope::utility, suite::apache, use::organizing,
 works-with::logfile
Section: web
Priority: optional
Filename: pool/main/c/cronolog/cronolog_1.6.2+rpk-2_amd64.deb
Size: 27912
MD5sum: 215a86766cc8d4434cd52432fd4f8fe7

If you're using cronolog to daily rotate the access.log and you need to filter out the strings out of the logs, you might use something like in httpd.conf:

 

CustomLog "|/usr/bin/cronolog –symlink=/var/log/httpd/access.log /var/log/httpd/access.log_%Y_%m_%d" combined env=!dontlog


 

3. Disable Apache access.log logging for a certain USERAGENT (browser)
 

You can do much more with SetEnvIf; for example you might want requests from a certain UserAgent (browser) to end up in /dev/null (nowhere), e.g. prevent any website requests originating from Internet Explorer (MSIE) from being logged.

SetEnvIf User-Agent "(MSIE)" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


4. Disable Apache logging of requests coming from a certain FQDN (Fully Qualified Domain Name), localhost 127.0.0.1 or a concrete IP / IPv6 address

SetEnvIf Remote_Host "dns.server.com$" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


Of course for this to work, your website should have functioning DNS servers and Apache should be configured to resolve remote IPs back to their respective DNS defined Hostnames.

SetEnvIf also recognizes Perl PCRE Regular Expressions, if you want to filter out of the Apache access log requests incoming from multiple subdomains starting with a certain domain hostname.

 

SetEnvIf Remote_Host "^example" dontlog

– To not log anything coming from localhost.localdomain address ( 127.0.0.1 ) as well as from some concrete IP address :

SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog

SetEnvIf Remote_Addr "192\.168\.1\.180" dontlog

– To disable logging of IPv6 requests that may show up in the log even though you don't happen to use IPv6 at all

SetEnvIf Remote_Addr "::1" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog


– Note here it is obligatory to escape the dots '.'


5. Disable robots.txt Web Crawlers requests from being logged in access.log

SetEnvIf Request_URI "^/robots\.txt$" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog

Using SetEnvIfNoCase to match incoming useragent / Host / file requests case insensitively

SetEnvIfNoCase is to be used if you want to treat incoming originator strings as case insensitive; this is useful to avoid extra regular expression SetEnvIf rules for lower / upper case symbols.

SetEnvIFNoCase User-Agent "Slurp/cat" dontlog
SetEnvIFNoCase User-Agent "Ask Jeeves/Teoma" dontlog
SetEnvIFNoCase User-Agent "Googlebot" dontlog
SetEnvIFNoCase User-Agent "bingbot" dontlog
SetEnvIFNoCase Remote_Host "fastsearch.net$" dontlog

Omit from access.log logging some standard web files .css , .js, .ico, .gif, .png and referrals from your own domain

Sometimes your own site scripts refer to resources on your own domain that just generate junk entries in the access.log; to keep them out:

SetEnvIfNoCase Request_URI "\.(gif)|(jpg)|(png)|(css)|(js)|(ico)|(eot)$" dontlog

 

SetEnvIfNoCase Referer "www\.myowndomain\.com" dontlog

CustomLog /var/log/apache2/access.log combined env=!dontlog

 

6. Disable Apache request logging to access.log and error.log completely


Sometimes, in rare cases, the produced Apache access and error logs are really big and you already have the requests logged in another F5 Load Balancer or Haproxy in front of the Apache WebServer, or alternatively the logging is not interesting at all, as the served Web Application (written in Perl / Python / Ruby) handles the logging itself.
I've earlier described how this is done in a good amount of details in previous article Disable Apache access.log and error.log logging on Debian Linux and FreeBSD

To disable it you will have to comment out CustomLog, or point it together with ErrorLog to /dev/null in apache2.conf / httpd.conf (depending on the distro)
 

CustomLog /dev/null combined
ErrorLog /dev/null


7. Restart Apache WebServer to load settings
 

An important thing to mention is that, in case you have a Webserver with multiple complex configurations and there are specific log patterns to omit from the logs, it might be a very good idea to:

a. Create /etc/httpd/conf/dontlog.conf / /etc/apache2/dontlog.conf
and add inside it all your custom dontlog configurations
b. Include dontlog.conf from /etc/httpd/conf/httpd.conf / /etc/apache2/apache2.conf (a minimal sketch follows below)
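A minimal sketch of that split, using the Debian paths (swap in the /etc/httpd/conf/ equivalents on RPM based distros):

# /etc/apache2/dontlog.conf
SetEnvIf Request_URI "^/robots\.txt$" dontlog
SetEnvIfNoCase User-Agent "Googlebot" dontlog
SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog

and in /etc/apache2/apache2.conf:

Include /etc/apache2/dontlog.conf
CustomLog /var/log/apache2/access.log combined env=!dontlog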

Finally, to make the changes take effect, you will of course need to restart the Apache webserver, depending on the distro and whether it is with systemd or System V:

For systemd RPM based distro:

systemctl restart httpd

or for Deb based Debian etc.

systemctl restart apache2

On old System V scripts systems:

On RedHat / CentOS etc. restart Apache with:
 

/etc/init.d/httpd restart


On Deb based SystemV:
 

/etc/init.d/apache2 restart


What we learned ?
 

We have learned how SetEnvIf can be used to prevent certain request strings from getting logged into access.log through the dontlog variable, how to completely stop a certain browser, based on its useragent, from being logged to access.log, how to omit from logging requests incoming from certain IP addresses / IPv6 or FQDNs, and how to stop robots.txt requests from being logged to the httpd log.


Finally, we have learned how to completely disable Apache logging if logging is handled by another external application.
 

Install and enable the Sysstat IO / Disk / CPU / Network monitoring console suite on Redhat 8.3, and a few useful sar command examples

Tuesday, September 28th, 2021

linux-sysstat-monitoring-logo

 

Why monitor CPU, Memory, Hard Disk, Network usage etc. with the sysstat tools?
 

Using system monitoring tools such as Zabbix, Nagios or Monit is a good approach, however sometimes due to Zabbix server interruptions you might not be able to track certain aspects of system performance on time. Thus it is always a good idea to gain more insight on system performance from the command line. Of course there are cmd tools such as iostat, top, free and vnstat that provide plenty of useful info on system performance issues or bottlenecks. However, from my experience, to have better historical data that is systematized and accessible from the console at all times, it is a great thing to have the sysstat package in place. For many years, on mostly every server I administer, I've been using sysstat to monitor what is going on over short time frames and I'm quite happy with it. In the current company we're using Redhat and CentOS-es and I had to install sysstat on Redhat 8.3. I've done it earlier multiple times on Debian / Ubuntu Linux, and while on some .deb distributions I've faced complications making sysstat collect statistics, I've come up with an article on Howto fix "sysstat Cannot open /var/log/sysstat/sa no such file or directory" on Debian / Ubuntu Linux
 

Sysstat contains the following tools related to collecting I/O and CPU statistics:

  • iostat – displays an overview of CPU utilization, along with I/O statistics for one or more disk drives.
  • mpstat – displays more in-depth CPU statistics.

Sysstat also contains tools that collect system resource utilization data and create daily reports based on that data. These tools are:

  • sadc – known as the system activity data collector, sadc collects system resource utilization information and writes it to a file.
  • sar – producing reports from the files created by sadc, sar reports can be generated interactively or written to a file for more intensive analysis.

My experience with installing sysstat on CentOS 7 and Fedora was pretty straight forward: I just had to install it via yum install sysstat, wait for some time and use the sar (System Activity Reporter) tool to report the collected system activity info stats over time.
Unfortunately it seems that on RedHat 8.3, as well as on CentOS 8.XX, installing sysstat does not work out of the box.

To complete a successful installation of it on RHEL 8.3, I had to:

[root@server ~]# yum install -y sysstat


To make sysstat enabled on the system and make it run, I've enabled its systemd service:

[root@server ~]# systemctl enable sysstat


Running the sar command immediately afterwards, I've faced the error:


"Cannot open /var/log/sysstat/sa18:
No such file or directory. Please check if data collecting is enabled"

 

Once installed, I've waited for about 5 minutes hoping that sysstat would somehow manage it automatically, but it didn't.

To solve it, I've had to additionally create the file /etc/cron.d/sysstat (weirdly, the RPM's post install scripts do not create it automatically):

[root@server ~]# vim /etc/cron.d/sysstat

# collect system activity data every minute (59 samples of 60 seconds, started at the top of each hour)
0 * * * * root /usr/lib64/sa/sa1 60 59 &
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A &

 

  • /usr/lib64/sa/sa1 is a shell script that we can schedule from cron to create the daily binary log file.
  • /usr/lib64/sa/sa2 is a shell script that converts the binary log file to human-readable form.

 

[root@server ~]# chmod 600 /etc/cron.d/sysstat

[root@server ~]# systemctl restart sysstat


After a while, if sysstat is working correctly, you should see its data history logs produced inside /var/log/sa

[root@server ~]# ls -al /var/log/sa 


Note that the standard sysstat history files on Debian and other modern .deb based distros such as Debian 10 (in y. 2021) are stored under /var/log/sysstat

Here are a few useful uses of the sysstat cmds


1. Check the machine's SWAP and RAM memory use history with sysstat


To check, let's say, the first 10 records of today's SWAP memory use:

[hipo@server yum.repos.d] $ sar -W | head -n 10
 

Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM  pswpin/s pswpout/s
12:00:01 AM      0.00      0.00
12:01:01 AM      0.00      0.00
12:02:01 AM      0.00      0.00
12:03:01 AM      0.00      0.00
12:04:01 AM      0.00      0.00
12:05:01 AM      0.00      0.00
12:06:01 AM      0.00      0.00

[root@ccnrlb01 ~]# sar -r | tail -n 10
14:00:01        93008   1788832     95.06         0   1357700    725740      9.02    795168    683484        32
14:10:01        78756   1803084     95.81         0   1358780    725740      9.02    827660    652248        16
14:20:01        92844   1788996     95.07         0   1344332    725740      9.02    813912    651620        28
14:30:01        92408   1789432     95.09         0   1344612    725740      9.02    816392    649544        24
14:40:01        91740   1790100     95.12         0   1344876    725740      9.02    816948    649436        36
14:50:01        91688   1790152     95.13         0   1345144    725740      9.02    817136    649448        36
15:00:02        91544   1790296     95.14         0   1345448    725740      9.02    817472    649448        36
15:10:01        91108   1790732     95.16         0   1345724    725740      9.02    817732    649340        36
15:20:01        90844   1790996     95.17         0   1346000    725740      9.02    818016    649332        28
Average:        93473   1788367     95.03         0   1369583    725074      9.02    800965    671266        29
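sar can also restrict a report to a given time window with the -s (start) and -e (end) options, which is handy when you are only interested in a particular incident; the time values below are just examples:

# SWAP activity between 14:00 and 15:00 from today's data file
[root@server ~]# sar -W -s 14:00:00 -e 15:00:00

# RAM utilization for the same window, taken from an older daily file
[root@server ~]# sar -r -f /var/log/sa/sa27 -s 14:00:00 -e 15:00:00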

 

2. Check the system load: are my processes waiting too long to run on the CPU?

[root@server ~ ]# sar -q |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
12:00:01 AM         0       272      0.00      0.02      0.00         0
12:01:01 AM         1       271      0.00      0.02      0.00         0
12:02:01 AM         0       268      0.00      0.01      0.00         0
12:03:01 AM         0       268      0.00      0.00      0.00         0
12:04:01 AM         1       271      0.00      0.00      0.00         0
12:05:01 AM         1       271      0.00      0.00      0.00         0
12:06:01 AM         1       265      0.00      0.00      0.00         0
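To flag only the samples where the 1-minute load average crossed a threshold, the sar -q output can be filtered with awk; a rough sketch that assumes the 12-hour timestamp format shown above (ldavg-1 is then the 5th field) and an arbitrary threshold of 4:

[root@server ~]# sar -q | awk '$2 ~ /^(AM|PM)$/ && $5+0 > 4 { print $1, $2, "ldavg-1:", $5 }'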


3. Show various CPU statistics per CPU use
 

On a multiprocessor, multi-core server it is sometimes useful, e.g. for scripting, to fetch per-processor usage history data;
this can be obtained with:

 

[hipo@server ~ ] $ mpstat -P ALL
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

06:08:38 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
06:08:38 PM  all    0.17    0.02    0.25    0.00    0.05    0.02    0.00    0.00    0.00   99.49
06:08:38 PM    0    0.22    0.02    0.28    0.00    0.06    0.03    0.00    0.00    0.00   99.39
06:08:38 PM    1    0.28    0.02    0.36    0.00    0.08    0.02    0.00    0.00    0.00   99.23
06:08:38 PM    2    0.27    0.02    0.31    0.00    0.06    0.01    0.00    0.00    0.00   99.33
06:08:38 PM    3    0.15    0.02    0.22    0.00    0.03    0.01    0.00    0.00    0.00   99.57
06:08:38 PM    4    0.13    0.02    0.20    0.01    0.03    0.01    0.00    0.00    0.00   99.60
06:08:38 PM    5    0.14    0.02    0.27    0.00    0.04    0.06    0.01    0.00    0.00   99.47
06:08:38 PM    6    0.10    0.02    0.17    0.00    0.04    0.02    0.00    0.00    0.00   99.65
06:08:38 PM    7    0.09    0.02    0.15    0.00    0.02    0.01    0.00    0.00    0.00   99.70
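For scripting it is often handier to extract just the fields you need; a small sketch that prints the averaged idle percentage per core (field positions follow the mpstat output format shown above):

[hipo@server ~ ] $ mpstat -P ALL 1 1 | awk '$1 == "Average:" && $2 ~ /^[0-9]+$/ { print "CPU" $2, "idle:", $NF "%" }'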


 

sar-sysstat-cpu-statistics-screenshot

Monitor processes and threads currently being managed by the Linux kernel.

[hipo@server ~ ] $ pidstat

pidstat-various-random-process-statistics

[hipo@server ~ ] $ pidstat -d 2


pidstat-show-processes-with-most-io-activities-linux-screenshot

This report tells us that there are a few processes with heavy I/O use: the filesystem journaling daemon jbd2, apache, mysqld and supervise; in the 3rd column you can see their respective PIDs.

To show the threads running inside a process (similar to pressing SHIFT + H inside the Linux top command):

[hipo@server ~ ] $ pidstat -t -p 10765 1 3

Linux 4.19.0-14-amd64 (server)     28.09.2021     _x86_64_    (10 CPU)

21:41:22      UID      TGID       TID    %usr %system  %guest   %wait    %CPU   CPU  Command
21:41:23      108     10765         –    1,98    0,99    0,00    0,00    2,97     1  mysqld
21:41:23      108         –     10765    0,00    0,00    0,00    0,00    0,00     1  |__mysqld
21:41:23      108         –     10768    0,00    0,00    0,00    0,00    0,00     0  |__mysqld
21:41:23      108         –     10771    0,00    0,00    0,00    0,00    0,00     5  |__mysqld
21:41:23      108         –     10784    0,00    0,00    0,00    0,00    0,00     7  |__mysqld
21:41:23      108         –     10785    0,00    0,00    0,00    0,00    0,00     6  |__mysqld
21:41:23      108         –     10786    0,00    0,00    0,00    0,00    0,00     2  |__mysqld

10765 is the Process ID whose threads you would like to list.

With pidstat, you can further monitor processes for memory leaks with:

[hipo@server ~ ] $ pidstat -r 2
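A process whose resident memory keeps growing between samples is a good leak suspect. To keep a longer trace for one specific process, you can log its memory counters over time; the PID below is just the mysqld example from earlier and the log path is arbitrary:

# sample the memory statistics of PID 10765 every 60 seconds, 60 times (one hour), appending to a log
[hipo@server ~ ] $ pidstat -r -p 10765 60 60 >> ~/mysqld-mem.log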

 

4. Report paging statistics for an older period

 

[root@server ~ ]# sar -B -f /var/log/sa/sa27 |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/27/2021      _x86_64_        (8 CPU)

15:42:26     LINUX RESTART      (8 CPU)

15:55:30     LINUX RESTART      (8 CPU)

04:00:01 PM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
04:01:01 PM      0.00     14.47    629.17      0.00    502.53      0.00      0.00      0.00      0.00
04:02:01 PM      0.00     13.07    553.75      0.00    419.98      0.00      0.00      0.00      0.00
04:03:01 PM      0.00     11.67    548.13      0.00    411.80      0.00      0.00      0.00      0.00

 

5. Monitor received (RX) and transmitted (TX) network traffic per network interface in real time
 

To print out received and sent traffic per network interface 4 times in a row:

sar-sysstats-network-traffic-statistics-screenshot
 

[hipo@server ~ ] $ sar -n DEV 1 4


To continuously monitor all network interfaces' I/O traffic:

[hipo@server ~ ] $ sar -n DEV 1


To monitor only a certain network interface, let's say the loopback interface (127.0.0.1), received / transmitted bytes:

[hipo@server yum.repos.d] $  sar -n DEV 1 2|grep -i lo
06:29:53 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:29:54 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
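To watch a single physical interface and keep only the throughput columns, the output can again be post-processed with awk; a sketch that assumes an interface named eth0 and the column layout shown above (rxkB/s and txkB/s are the 6th and 7th fields):

[hipo@server ~ ] $ sar -n DEV 1 10 | awk '$3 == "eth0" { print $1, $2, "rx kB/s:", $6, "tx kB/s:", $7 }'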


6. Monitor block device use
 

To check block device use 3 times in a row:
 

[hipo@server yum.repos.d] $ sar -d 1 3


sar-sysstats-blockdevice-statistics-screenshot
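By default sar -d identifies disks as devMAJOR-MINOR; on most sysstat versions the -p (pretty-print) option maps them to their kernel names such as sda, sdb, which is easier to read:

[hipo@server yum.repos.d] $ sar -d -p 1 3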
 

7. Output server monitoring data in a CSV database-structured format


To prepare nice graphs with Excel from a CSV structured file, you can dump the collected data like so:

[root@server yum.repos.d]# sadf -d /var/log/sa/sa27 -- -n DEV | grep -v lo | head -n 10
server-name-fqdn;-1;2021-09-27 13:42:26 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;-1;2021-09-27 13:55:30 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth1;19.42;16.12;1.94;1.68;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth0;7.18;9.65;0.55;0.78;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth2;5.65;5.13;0.42;0.39;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth1;18.90;15.55;1.89;1.60;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth0;7.15;9.63;0.55;0.74;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth2;5.67;5.15;0.42;0.39;0.00;0.00;0.00;0.00

To graph the output data you can use Excel (or LibreOffice's equivalent, Calc), or, if you need to dump the CSV sar output and generate graphs on the fly from a script, use gnuplot.
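As a rough sketch of the gnuplot idea: dump one interface to a semicolon-separated file with sadf and feed it straight to gnuplot. The interface name, file paths and column numbers follow the sample output above and are only an example:

[root@server ~]# sadf -d /var/log/sa/sa27 -- -n DEV | grep ';eth0;' > /tmp/eth0.csv
[root@server ~]# gnuplot -persist << 'EOF'
set datafile separator ";"
set xdata time
set timefmt "%Y-%m-%d %H:%M:%S UTC"
set format x "%H:%M"
set title "eth0 received traffic"
plot "/tmp/eth0.csv" using 3:7 with lines title "rxkB/s"
EOF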


What have we learned?


How to install and enable sysstat with its cron jobs on RedHat and CentOS 8 Linux.
How to continuously monitor CPU, disk and network use, block devices, paging, and the processes and threads managed by the kernel, per process.
As well as how to export previously collected data to CSV, to import into a database or to later generate a graphical presentation of the data.
Cheers ! 🙂

 

Change Windows 10 default lock screen image via win registry LockScreenImage key change

Tuesday, September 21st, 2021

fix-lock-screen-missing-change-option-on-windows-10-windows-registry-icon

If you do work for a corporation on a Windows machine that is part of a Windows Active Directory domain or a Microsoft 365 environment, your Domain administrator may, after one of the scheduled updates, have enforced a Windows lock screen image change.
You might be surprised to find some annoying corporate logo shown as the default Lock Screen image on your computer after the next reboot. Perhaps for some people it doesn't matter, but for a person who seriously likes customization and values freedom, having an enforced logo picture appear each time I press CTRL + L (to lock my computer) is really annoying.

The logical question hence was how to bring back my desired image as the default lock screen. Some would enjoy a relaxing picture of woods, a cave or some other natural landscape; I personally prefer simplicity, so I simply use a plain black background.

To do this you will in any case need some kind of superuser access to the computer. At the company where I'm employed it is possible to temporarily request Administrator access via a piece of software installed on the machine, so once I request it I become Administrator of the machine for 20 minutes. In that time I used a 'Run as Administrator' command prompt (cmd.exe) and made the following change in the Windows Registry.

The first logical thing to do is to try to manually set the picture via:
 

Settings ->  Lock Screen

But unfortunately, as you can see in the screenshot below, there was no way to change the Lock Screen background image from there.

Windows-settings-lockscreen-screenshot

In Windows Registry Editor

I had to go to registry path


[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Personalization]

And from there (creating the Personalization subkey first, if it does not exist yet) create a new "String Value" named
 

"LockScreenImage"


so the full registry key path should be equal to:


[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Personalization\LockScreenImage]

The value to set is:

C:\Users\a768839\Desktop\var-stuff\background\Desired-background-picture.jpg

windows-registry-change-lock-screen-background-picture-from-registry-screenshot
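If you prefer to make the whole change from an elevated cmd.exe instead of clicking through regedit, a single reg add command can create both the key and the value; the image path below is only an example and should point to your own picture:

C:\> reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Personalization" /v LockScreenImage /t REG_SZ /d "C:\Users\Public\Pictures\Desired-background-picture.jpg" /f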

If you want to set a black background picture for LockScreen like me you can download my black background picture from here.

That's all press CTRL + L  key combination and the black screen background lock screen picture will appear !

Hopefully the Domain admin will not soon enforce some policy that updates the registry keys, or restore the old registry database from a backup, which would break the configuration you have just set.

To test whether the setting will survive the next scheduled update of the policies enforced by the Active Directory (AD) sysadmin, run manually from CMD.EXE:

C:\> gpupdate /force


The command will download the latest policies from the Windows Domain. Try to lock the screen once again with CTRL + L; if the background picture is still there, the picture change will most likely stay in place for a long time.
If you get the corporation's preset domain background again instead, you're out of luck and will have to follow the same steps every now and then after a domain policy update.

Enjoy your new smooth LockScreen Image 🙂

 

Remove “Windows 7 PC is out of Support” annoying recurring warning popup alert

Friday, September 10th, 2021

Windows-7-End-of-life-pc-is-out-of-support-removal-rip-win-7

Since January 15th, 2020, Windows 7 has reached its End of Life (EOL) and is no longer supported. Windows 7 Service Pack 1 Starter, Home Basic, Home Premium, and Professional installations will display the message


"Your Windows 7 PC is out of support".

The use of Windows 7 has been steadily declining since 2020, but some hard-core maniacs who refuse to be in tune with the latest fashion still run Windows 7 on dedicated VPS servers (running on Xen / VMware etc.).
With end of support reached, people who still run Windows 7 no longer get the usual operating system maintenance:

  • No security updates
  • No software updates
  • No tech support

Even though running an end-of-support system is quite dangerous and you might easily get hacked by an automated bot, for some custom uses, and provided the Windows 7 machine runs behind a solid firewall, it could still be considered relatively safe.

Microsoft hence made Windows (a remotely controlled system, after all) show an annoying pop-up window with the "YOUR WINDOWS 7 PC IS OUT OF SUPPORT" message, as shown in the screenshot below:

windows-7-disable-pc-is-out-of-support-popup-annoying-message-screenshot.

For those who don't plan to migrate from Windows 7 to Windows 10, this message quickly becomes very annoying, especially if you access your Windows 7 VPS remotely and use it for simple things such as browsing a few news websites, or if you're a marketer and use the Windows machine to access Amazon / eBay from a different country, as many marketers do to reach general webstores while emulating access from a remote location.


Disable "Your Windows 7 PC is out of support" popup alert
 

Luckily it is possible to disable this annoying Windows 7 pop-up alert by setting the registry value
DiscontinueEOS to 1.

To do so, launch regedit from an Administrator command line prompt (cmd.exe), or start it from the Windows start menu:

regedit

1. Open Windows Registry Editor and navigate to

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\EOSNotify.

 You will need to set the DWORD DiscontinueEOS value to 1 in the Windows Registry
 

windows-7-disable-pc-is-out-of-support-popup-eosnotify-dword-registry-03-600x366

windows-7-disable-pc-is-out-of-support-popup-discontinueEOS-registry-modify

In case the EOSNotify key is not available, right-click the CurrentVersion key, select New > Key and name it EOSNotify.

windows-7-disable-pc-is-out-of-support-popup-EOSNotify-create-new-key-600x367

2. Right click anywhere in the right pane and select New > DWORD (32-bit) Value and name it DiscontinueEOS.

3. Set Value data to 1 and click OK.

windows-7-disable-pc-is-out-of-support-popup-edit-dword-32-bit-value-regedit-screenshot.

4. When the new value has been set, restart the Windows 7 computer / virtual machine to make sure the registry setting takes effect.

windows-7-disable-pc-is-out-of-support-popup-discontninueEOS-reg-dword-0x000000001-600x248

 

To automate the procedure in large environments, you can create a small script using the reg command to load the registry key, or use a Windows GPO (Group Policy Object) to enforce the setting across all Active Directory PC members.
 

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\EOSNotify]
"DiscontinueEOS"=dword:00000001
Save the above two lines as a .reg file and import it on each machine, or enforce the same value via a custom GPO in Active Directory.
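For a script-based rollout, the same value can also be set with a single reg add command run in the user's context (the key lives under HKCU); a minimal sketch:

C:\> reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\EOSNotify" /v DiscontinueEOS /t REG_DWORD /d 1 /f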

To avoid potential issues from using an unsupported OS, you should start planning to upgrade your Windows 7 clients to Windows 10.

That's it! The out-of-support Windows 7 alert should no longer bug you 🙂

Adding custom user-based host IP aliases – loading a custom prepared /etc/hosts as a non-root user on Linux – a script to map IPs that don't have DNS records to user-preferred hostnames

Wednesday, April 14th, 2021

adding-custom-user-based-host-aliases-etc-hosts-logo-linux

Say you have access to a remote Linux / UNIX / BSD server, i.e. a jump host, and from it you have to access via ssh a bunch of other servers
that have existing IP addresses, but whose DNS-resolvable hostnames (through the resolvers listed in /etc/resolv.conf) are long and hard to remember, and you have no way to add a new alias to /etc/hosts because you don't have superuser privileges on the hop station.
To make your life easier you would hence want to add a simple host alias so you can easily telnet, ssh or curl to some aliased name like s1, s2, s3 … etc.


The question then is: how can you make these IPs resolvable via easily rememberable names, using a custom, user-specific, /etc/hosts-like definition file?

Expanding the predefined resolvable host records from /etc/hosts is pretty simple, as most UNIX / Linux systems support the HOSTALIASES environment variable.
HOSTALIASES plugs into the common technique for translating host names into IP addresses via either getaddrinfo(3) or the obsolete gethostbyname(3). As mentioned in hostname(7), you can set the HOSTALIASES environment variable to point to an alias file, and you've got per-user aliases.

Create a ~/.hosts file:

linux:~# vim ~/.hosts

with some content like:
 

g google.com
localhostg 127.0.0.1
s1 server-with-long-host1.fqdn-whatever.com 
s2 server5-with-long-host1.fqdn-whatever.com
s3 server18-with-long-host5.fqdn-whatever.com

linux:~# export HOSTALIASES=$HOME/.hosts

The caveat of HOSTALIASES you should know about is that it only works for aliases that point to DNS-resolvable hostnames.
So if you want to be able to reach hosts that are not resolvable via DNS,
you can use a normal shell alias for each hostname you want in ~/.bashrc, with records like:

alias server-hostname="ssh username@10.10.10.18 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"
alias server-hostname1="ssh username@10.10.10.19 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"
alias server-hostname2="ssh username@10.10.10.20 -v -o stricthostkeychecking=no -o passwordauthentication=yes -o UserKnownHostsFile=/dev/null"

then to access server-hostname1 simply type it in terminal.

The more elegant solution is to use a bash wrapper function like the one below. Note that the function substitutes the first column of the matching line in ~/.hosts, so for this approach keep the file in /etc/hosts-style layout (IP address first, then the alias), for example: 10.10.10.18 s1

# include the below code in your ~/.bashrc
function resolve {
        hostfile=~/.hosts
        if [[ -f "$hostfile" ]]; then
                # walk over the positional arguments of the wrapped command
                for arg in $(seq 1 $#); do
                        # skip options (arguments starting with "-")
                        if [[ "${!arg:0:1}" != "-" ]]; then
                                # look the argument up in ~/.hosts and grab the first column of the matching line
                                ip=$(sed -n -e "/^\s*\(\#.*\|\)$/d" -e "/\<${!arg}\>/{s;^\s*\(\S*\)\s*.*$;\1;p;q}" "$hostfile")
                                if [[ -n "$ip" ]]; then
                                        # re-run the calling command (ping, ssh, ...) with the alias replaced by the resolved value
                                        command "${FUNCNAME[1]}" "${@:1:$(($arg-1))}" "$ip" "${@:$(($arg+1)):$#}"
                                        return
                                fi
                        fi
                done
        fi
        command "${FUNCNAME[1]}" "$@"
}

function ping {
        resolve "$@"
}

function traceroute {
        resolve "$@"
}

function ssh {
        resolve "$@"
}

function telnet {
        resolve "$@"
}

function curl {
        resolve "$@"
}

function wget {
        resolve "$@"
}

 

Now, after reloading the bash login session configuration $HOME/.bashrc with:

linux:~# source ~/.bashrc

ssh / curl / wget / telnet / traceroute and ping will work against the aliases defined in ~/.hosts, just as if they had been defined system-wide in /etc/hosts.
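For example, assuming ~/.hosts carries an /etc/hosts-style line such as 10.10.10.18 s1, the wrappers transparently swap the alias for the address:

linux:~$ ssh s1          # the ssh() wrapper silently rewrites this to: ssh 10.10.10.18
linux:~$ ping -c 3 s1    # the same substitution applies to ping, traceroute, telnet, curl and wget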

Enjoy
 

How to add a local user to admin access via /etc/sudoers with sudo su - root / Create a sudo admin group to enable users belonging to the group to become superuser

Friday, January 15th, 2021

sudo_logo-how-to-add-user-to-sysadmin-group

Did you ever need to have local users on a server and to define an Admins group for all system administrators, so that any local user on the system that belongs to that group can become root with a command such as sudo su - root / su -l root / su - root?
If so, below is an example /etc/sudoers file that will allow users belonging to a local group sysadmins (with some assigned group number) to do exactly that.

Here is how to create the sysadmins group as a starter

linux:~# groupadd -g 800 sysadmins

Let's create a new local user georgi and append the user as a member of the sysadmins group, which will be our local system administrator (superuser) access group.

To create the user with a specific desired user ID, let's first check in /etc/passwd that it is free and then create it:

linux:~# grep :811: /etc/passwd || useradd -u 811 -g 800 -c 'Georgi hip0' -d /home/georgi -m georgi

Next let's edit /etc/sudoers (if you need to copy-paste the content of the file, check here) and paste the below configuration:

linux:~# mcedit /etc/sudoers

## Updating the locate database
# Cmnd_Alias LOCATE = /usr/bin/updatedb

 

## Storage
# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount

## Delegating permissions
# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp

## Processes
# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall

## Drivers
# Cmnd_Alias DRIVERS = /sbin/modprobe

Cmnd_Alias PASSWD = /usr/bin/passwd [a-zA-Z][a-zA-Z0-9_-]*, \
!/usr/bin/passwd root

Cmnd_Alias SU_ROOT = /bin/su root, \
                     /bin/su - root, \
                     /bin/su -l root, \
                     /bin/su -p root


# Defaults specification

#
# Refuse to run if unable to disable echo on the tty.
#
Defaults   !visiblepw

#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults    always_set_home
Defaults    match_group_by_gid

Defaults    env_reset
Defaults    env_keep =  "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults    env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults    env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults    env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults    env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"

#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults   env_keep += "HOME"
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin

## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
##      user    MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL

## Allows members of the 'sys' group to run networking, software,
## service management apps and more.
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL

## Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

## Allows members of the users group to mount and unmount the
## cdrom as root
# %users  ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
## Allows members of the users group to shutdown this system
# %users  localhost=/sbin/shutdown -h now

%sysadmins            ALL            = SU_ROOT, \
                                   NOPASSWD: PASSWD

## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d

zabbix  ALL=(ALL) NOPASSWD:/usr/bin/grep
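Since a syntax error in /etc/sudoers can lock you out of sudo entirely, it is worth validating the file before relying on it; visudo can check an existing file without opening an editor:

linux:~# visudo -c -f /etc/sudoers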


Save the config and give it a try now to become root with the sudo su - root command:

linux:~$ id
uid=811(georgi) gid=800(sysadmins) groups=800(sysadmins)
linux:~$ sudo su - root
linux:~#

w00t! Voila, your user now has superuser rights! Enjoy 🙂