Archive for the ‘Linux and FreeBSD Desktop’ Category

How to do a port redirect to a localhost service with the socat or ncat commands, to open temporary access to a service not exposed on the network

Friday, February 23rd, 2024

socat-simple-redirect-tcp-port-on-linux-bsd-logo

You know, sometimes it is necessary to easily and temporarily redirect network TCP ports, so a service becomes reachable from an internal DMZ-ed network via some local network IP connection, or, if the computer system is Internet-facing and has an external "real" Internet class A / B address, so it is reachable directly from the Internet, let's say via a modern web browser such as Mozilla Firefox / Google Chrome etc.

Such things are easy to do with iptables if you need the IP redirect to be permanent, with firewall rule changes on a Linux router.
One way to create a TCP port redirect with the firewall would include a few iptables rules, for example:

1. Redirect port traffic from external TCP port source to internal one

# iptables -t nat -I PREROUTING -p tcp --dport 10000 -j REDIRECT --to-ports 80
# iptables -t nat -I OUTPUT -p tcp -o lo --dport 10000 -j REDIRECT --to-ports 80
# iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 80 -j DNAT  --to-destination 192.168.0.50:10000
# iptables -t nat -I OUTPUT --source 0/0 --destination 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 10000


Then you will have a 192.168.0.50:10000 listener (assuming that the IP is already configured on one of the host's network interfaces plugged into the network).
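
Since these firewall rules are meant to be temporary, it is handy to know how to list them with their rule numbers and delete them again once you're done; a quick sketch (the rule number 1 below is just an example, pick whichever number the listing shows for your redirect rule):

# iptables -t nat -L -n -v --line-numbers
# iptables -t nat -D PREROUTING 1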

But as messing up with the firewall is not the best thing to do, especially if you just need to temporarily redirect an external listener port to a service configured on the server to run only on a TCP port on the loopback address 127.0.0.1, you can instead do it with another tool or command for simplicity.

One simple way to do a port redirect on the fly on GNU / Linux or FreeBSD / OpenBSD is with the socat command.

Let's say you have a running statistics interface of a web server (Apache / Nginx / HAProxy frontend / backend statistics) or whatever kind of web TCP service on port 80 on your server, and this interface is on purpose configured to be reachable only on the localhost interface port 80. You can then either access it by creating an SSH tunnel towards the service on 127.0.0.1, or access it by redirecting the traffic towards another external TCP port, let's say 10000.

Here is how you can achieve that.

2. Redirect Local network accessible IP on all configured Server network interfaces port 10000 to 127.0.0.1 TCP 80 with socat

# socat tcp-l:10000,fork,reuseaddr tcp:127.0.0.1:80
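
If you'd rather expose the temporary listener on just one specific interface instead of all of them, socat can bind the listening socket to a single address; a minimal sketch (the 192.168.0.50 address and the trailing & to background it are just an example, adjust to your setup):

# socat tcp-l:10000,bind=192.168.0.50,fork,reuseaddr tcp:127.0.0.1:80 &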

If you need to access the redirected port later in a browser, pick the machine's first configured IP and open it in the browser (assuming there is no firewall filter prohibiting access to the redirected port).

root@pcfreak:~# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 109.104.212.130  netmask 255.255.255.0  broadcast 109.104.212.255
        ether 91:f8:51:03:75:e5  txqueuelen 1000  (Ethernet)
        RX packets 652945510  bytes 598369753019 (557.2 GiB)
        RX errors 0  dropped 10541  overruns 0  frame 0
        TX packets 619726615  bytes 630209829226 (586.9 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Then in a browser open http://109.104.212.130:10000 or https://109.104.212.130:10000 (depending on whether the remote service has SSL encryption enabled or not) and you're done, the configured listener server service should pop up on the screen.

3. Redirect IP Traffic from External IP to Localhost loopback interface with netcat ( ncat ), the swiss army knife tool of hackers and sysadmins

If you need to redirect, let's say, TCP / IP port 8000 to a server locally bound service on TCP 80 with ncat instead of socat (if, let's say, socat is not pre-installed on the machine), you can do it by simply running these two commands:

[root@server ~]# mkfifo svr1_to_svr2
[root@server ~]# ncat -vk -l 8000 < svr1_to_svr2 | ncat 127.0.0.1 80 > svr1_to_svr2
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Listening on 0.0.0.0:8000
Ncat: Connection from 10.10.258.39.
Ncat: Connection from 10.10.258.39:51813.
Ncat: Connection from 10.10.258.39.
Ncat: Connection from 10.10.258.39:23179.

 

If you don't care to log what is going on in the background of the connection and you simply want to background the process with a one-liner command, you can achieve that with:


[root@server /tmp]# cd /tmp; mkfifo svr1_to_svr2; (ncat -vk -l 8000 < svr1_to_svr2 | ncat 127.0.0.1 80 > svr1_to_svr2 &)
 

Then you can open the internal machine's port 80 TCP service on port 8000 in a browser as usual.
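
To double check that the temporary ncat listener is really up before opening it in a browser, you can list the listening TCP sockets (ss comes with the iproute2 package on most modern Linux systems):

[root@server ~]# ss -tlnp | grep 8000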

For those who want a bit more sophisticated proxy-like script, I would suggest you take a look at using netcat and a few lines of shell script loop, which can simulate a raw and very primitive proxy with netcat; this is exampled in my previous article Create simple proxy server with netcat ( nc ) based utility.

Hope this article is helpful to someone; there are plenty of other ways to do a port redirect with, let's say, perl, python and perhaps other micro tools. If you know of one-liners or small scripts that do it, please share them in the comments, so we can learn from each other ! 

Enjoy ! 🙂
 

How to turn On or Off Screen Reader ORCA on Linux Desktop, enabled by a mistype or a kid smashing the keyboard

Wednesday, November 22nd, 2023

orca-screen-reader-communication-services-logo

For those who type quite fast and use Microsoft Windows, it's quite common to accidentally start the annoying NARRATOR (the Windows screen reading program) by mistyping and pressing Windows key + Control + Enter together.
This enables Narrator to read stuff on the screen here and there, and to turn it off you just have to press Windows key + Control + Enter again to TURN OFF NARRATOR.

Linux does not have a Narrator, but it also has an embedded assistive technology for the visually impaired, called ORCA.

Orca works with applications and toolkits that support the Assistive Technology Service Provider Interface (AT-SPI), which is the primary assistive technology infrastructure for Linux and Solaris. Applications and toolkits supporting the AT-SPI include the GNOME Gtk+ toolkit, the Java platform's Swing toolkit, LibreOffice, Gecko, and WebKitGtk. AT-SPI support for the KDE Qt toolkit is being pursued.

ORCA is nowadays installed and integrated into many, if not most, Linux distributions out there. Accidentally enabling ORCA is not such a common thing on Linux, so today I got quite puzzled when I came back to the computer, after leaving the 3.7-month-old kid near the keyboard, and found out that I had enabled an aloud screen reader that reads whatever Window / Menu / Program or object I select with the mouse on my Linux MATE Desktop GUI environment running on top of Debian Linux.

After a quick look-up in Google on what exactly the Linux program reading my screen is, I came across ORCA, which was also visible as running in my process list:

hipo@jeremiah:~/Downloads$ ps -ef|grep -i orca
hipo     1068376    7960 17 18:48 tty2     00:00:01 orca

After a quick check online I found out the following.

To start (Turn On) the Orca Screen Reader using the keyboard, press:

Windows logo button (Super Key) key + Alt + S 

Of course, it is possible to shut off the annoying reader by simply killing its process with:

pkill -9 orca

 

Ubuntu users could start Orca using a mouse and keyboard as follows:

Open the Activities overview and start typing Accessibility.

Click Accessibility to open the panel.

Select Screen Reader to open it.

Switch the Screen Reader switch to on.

Problem solved, the Screen Reader on Linux is now disabled. Maybe it is time to also disable Orca's key-press shortcut, to prevent the kid from enabling it again, since thankfully I don't actively need it. That can be done with

xmodmap -e 'keycode <value>='
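
To find out which keycode a given key produces (so you know what to put in place of <value> above), you can use the xev utility from the x11-utils package; press the key inside the small window it opens and watch the terminal output:

xev | grep keycode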

or simply removing the orca package with apt:

# apt remove orca

Resize KVM .img QCOW Image file and Create new LVM partition and ext4 filesystem inside KVM Virtual Machine

Friday, November 10th, 2023

LVM-add-space-to-RHEL-Linux-on-KVM_Virtual_machine-howto

Part of a migration project for a customer I'm working on is the migration of a couple of KVM-based guest virtual machine servers. The old machines have a backup strategy using IBM's TSM, and the new machines should use the cheaper solution adopted by the customer company, the CommVault backup solution (an enterprise software that is used for data backup and recovery, not only to a local Tape Library / data blobs on central backup server infrastructure, but also to Cloud infrastructure).

To install the CommVault software on the RedHat Linux machines, the official install documentation (prepared by the team who set up the CommVault infrastructure for the customer) recommends having a separate partition for the CommVault backups under the /opt directory (/opt/commvault), and the partition should be at least 10 Gigabytes in size. 

Unfortunately, on our newly prepared KVM VM guest machines it was forgotten to have the separate 10GB /opt prepared in advance, and we ended up with virtual machines that have a / (root directory) of 68GB and separate /var and /home LVM partitions. Thus, to correct the issue, it was required to find a way to add another separate LVM partition inside the KVM VirtualMachine.img (QCOW image file). 

This seemed to be an easy task at first, as it might be possible with a simple .img partition mount using the losetup and kpartx commands and a simple lvreduce command, in some way such as:

# mount /dev/loop0 /mnt/test/

# kpartx -a /dev/loop0
# kpartx -l /dev/loop0
# ls -al /dev/mapper/*

… 

# lvreduce 

etc. However, unfortunately, although kpartx did not return an error, it did not provide the new /dev/mapper devices to be used with the LVM tools, and this approach seems not to be possible on RHEL 8.8, as kpartx couldn't list them.

 

A colleague of mine, Mr. Paskalev, suggested that we could perhaps try to mount the partition with the default KVM tool for mounting .img partitions, which is guestmount,
with a command like:
 

# guestmount -a /kvm/VM.img -i --rw /mnt/test/

But unfortunately this mounted the filesystem as a FUSE filesystem, and the LVM /dev/mapper devices of the VM could not be seen, so we decided to abandon this method.

After some pondering with Dimitar Paskalev and Dimitar Hristov, thanks to joint efforts we found the way to do it; below are the steps we followed to succeed in creating the new LVM ext4 partition required.
 

1. Check that enough space is available on the Hypervisor (HV) machine

 

The VMs are held under /kvm so in this case:

[root@hypervisor-host ~]# df -h|grep -i /kvm
/dev/mapper/vg00-vmprivate  206G   27G  169G  14% /kvm

 

2. Shutdown the running VM and make sure it is stopped
 

[root@hypervisor-host ~]# virsh shutdown vm-host

 

[root@hypervisor-host ~]# virsh list --all
 Id   Name       State
————————–
 4    lpdkv01f   running
 5    vm-host   shut off

 

3. Check current Space status of VM

 

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img       
image: /kvm/vm-host.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 8.62 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false

 

4. Resize (extend) the VM image with whatever extra size you want    
 

[root@hypervisor-host ~]# qemu-img resize /kvm/vm-host.img +10G
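
If you want to confirm the image really grew before booting the VM again, re-running the info sub-command from step 3 should now report the increased virtual size (110 GiB in this example):

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img | grep 'virtual size'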

 

5. Start VM    
 

[root@hypervisor-host ~]# virsh start vm-host


7. Check the LVM and block devices on the HV (not necessary, but good for an overview)
 

[root@hypervisor-host ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda2  vg00 lvm2 a–  277.87g 19.87g
  
[root@hypervisor-host ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg00   1  11   0 wz–n- 277.87g 19.87g

 

[root@hypervisor-host ~]# lsblk 
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk 
├─sda1               8:1    0     1G  0 part /boot
└─sda2               8:2    0 277.9G  0 part 
  ├─vg00-root      253:0    0    15G  0 lvm  /
  ├─vg00-swap      253:1    0     1G  0 lvm  [SWAP]
  ├─vg00-var       253:2    0     5G  0 lvm  /var
  ├─vg00-spool     253:3    0     2G  0 lvm  /var/spool
  ├─vg00-audit     253:4    0     3G  0 lvm  /var/log/audit
  ├─vg00-opt       253:5    0     2G  0 lvm  /opt
  ├─vg00-home      253:6    0     5G  0 lvm  /home
  ├─vg00-tmp       253:7    0     5G  0 lvm  /tmp
  ├─vg00-log       253:8    0     5G  0 lvm  /var/log
  ├─vg00-cache     253:9    0     5G  0 lvm  /var/cache
  └─vg00-vmprivate 253:10   0   210G  0 lvm  /vmprivate

  
8 . Check logical volumes on Hypervisor host
 

[root@hypervisor-host ~]# lvdisplay 
  — Logical volume —
  LV Path                /dev/vg00/swap
  LV Name                swap
  VG Name                vg00
  LV UUID                3tNa0n-HDVw-dLvl-EC06-c1Ex-9jlf-XAObKm
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:1
   
  — Logical volume —
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                JBerim-fxVv-jU10-nDmd-figw-4jVA-8IYdxU
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:2
   
  — Logical volume —
  LV Path                /dev/vg00/spool
  LV Name                spool
  VG Name                vg00
  LV UUID                nFlmp2-iXg1-tFxc-FKaI-o1dA-PO70-5Ve0M9
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:3
   
  — Logical volume —
  LV Path                /dev/vg00/audit
  LV Name                audit
  VG Name                vg00
  LV UUID                e6H2OC-vjKS-mPlp-JOmY-VqDZ-ITte-0M3npX
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:4
   
  — Logical volume —
  LV Path                /dev/vg00/opt
  LV Name                opt
  VG Name                vg00
  LV UUID                oqUR0e-MtT1-hwWd-MhhP-M2Y4-AbRo-Kx7yEG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:5
   
  — Logical volume —
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                ehdsH7-okS3-gPGk-H1Mb-AlI7-JOEt-DmuKnN
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:6
   
  — Logical volume —
  LV Path                /dev/vg00/tmp
  LV Name                tmp
  VG Name                vg00
  LV UUID                brntSX-IZcm-RKz2-CP5C-Pp00-1fA6-WlA7lD
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:7
   
  — Logical volume —
  LV Path                /dev/vg00/log
  LV Name                log
  VG Name                vg00
  LV UUID                ZerDyL-birP-Pwck-yvFj-yEpn-XKsn-sxpvWY
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:8
   
  — Logical volume —
  LV Path                /dev/vg00/cache
  LV Name                cache
  VG Name                vg00
  LV UUID                bPPfzQ-s4fH-4kdT-LPyp-5N20-JQTB-Y2PrAG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:9
   
  — Logical volume —
  LV Path                /dev/vg00/root
  LV Name                root
  VG Name                vg00
  LV UUID                mZr3p3-52R3-JSr5-HgGh-oQX1-B8f5-cRmaIL
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:0
   
  — Logical volume —
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                LxNRWV-le3h-KIng-pUFD-hc7M-39Gm-jhF2Aj
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-09-18 11:54:19 +0200
  LV Status              available
  # open                 1
  LV Size                210.00 GiB
  Current LE             53760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:10

9. Check Hypervisor existing partitions and space
 

[root@hypervisor-host ~]# fdisk -l
Disk /dev/sda: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0581e6e2

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/mapper/vg00-root: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-spool: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-audit: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-opt: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-tmp: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-log: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-cache: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-vmprivate: 210 GiB, 225485783040 bytes, 440401920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

10. List block devices on VM
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:2    0   10G  0 lvm  /home
│ └─vg00-var       253:3    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 

 

 

11. Create new LVM partition with fdisk or cfdisk
 

If there is no cfdisk, the newly resized space added with qemu-img can be set up with fdisk, though I personally always prefer to use cfdisk.

[root@vm-host ~]# fdisk /dev/vda
# > p (print)
# > m (help menu)
# > n
# … follow on screen instructions to select start and end blocks
# > t (change partition type)
# > select and set to 8e
# > w (write changes)

[root@vm-host ~]# cfdisk /dev/vda


Set up the new partition from the Free space as a [ primary ] partition and choose it to be of type LVM.
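
If you prefer a non-interactive tool over fdisk / cfdisk, parted can create the same LVM-type partition in the free space; a rough sketch only (the 90GiB start offset is an assumption based on the 100 GiB disk layout shown here, adjust it to where your free space actually begins):

[root@vm-host ~]# parted -s /dev/vda mkpart primary 90GiB 100%
[root@vm-host ~]# parted -s /dev/vda set 4 lvm on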


12. List partitions to make sure new LVM partition is present
 

[root@vm-host ~]# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7b2d9fd

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048   2099199   2097152   1G 83 Linux
/dev/vda2         2099200 186646527 184547328  88G 8e Linux LVM
/dev/vda3       186646528 188743679   2097152   1G 82 Linux swap / Solaris
/dev/vda4       188743680 209715199  20971520  10G 8e Linux LVM

The extra added 10 GB is seen under /dev/vda4. (The pvdisplay output below reflects the state after the physical volume and volume group are created in the next steps; right after partitioning, /dev/vda4 will not show up in pvdisplay yet.)
  — Physical volume —
  PV Name               /dev/vda4
  VG Name               vg01
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               0
  Allocated PE          2559
  PV UUID               yvMX8a-sEka-NLA7-53Zj-fFdZ-Jd2K-r0Db1z
   
  — Physical volume —
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8
   
[root@vm-host ~]# 

 

13. List LVM Physical Volumes
 

[root@vm-host ~]# pvdisplay 
  — Physical volume —
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8

 


  
Notice that /dev/vda4 is not seen in pvdisplay (the Physical Volume display command) because it is not initialized as a physical volume yet, so let's create it.
 

14. Initialize new Physical Volume to be available for use by LVM
 

[root@vm-host ~]# pvcreate /dev/vda4
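
A quick way to confirm the new physical volume registered fine is the short-format pvs listing:

[root@vm-host ~]# pvs /dev/vda4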


15. Inform the OS of partition table changes
 

If partprobe is not available as a command on the host, the obscure command below should do the trick.
 

[root@vm-host ~]# echo "- - -" | tee /sys/class/scsi_host/host*/scan

However, it is usually better to use partprobe to inform the operating system of the partition table changes:

[root@vm-host ~]# partprobe


16. Use lsblk again to see that the new /dev/vda4 LVM partition is listed under the "vda" root block device
 

[root@vm-host ~]# 
[root@vm-host ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0  100G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
├─vda2        252:2    0   88G  0 part 
│ ├─vg00-root 253:0    0   68G  0 lvm  /
│ ├─vg00-home 253:1    0   10G  0 lvm  /home
│ └─vg00-var  253:2    0   10G  0 lvm  /var
├─vda3        252:3    0    1G  0 part [SWAP]
└─vda4        252:4    0   10G  0 part 
[root@vm-host ~]# 


17. Create new Volume Group (VG) on /dev/vda4 block device
 

Before creating a new VG, list what VGs already exist on the machine, to be sure the newly created one's name is not already in use.
 

[root@vm-host ~]# vgdisplay 
  — Volume group —
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60

Only vg00 exists, so we can use vg01 as the Volume Group name for the new volume group where the fresh 10GB LVM partition will lay:

[root@vm-host ~]# vgcreate vg01 /dev/vda4
  Volume group "vg01" successfully created

 

18. Create new Logical Volume (LV) and extend it to occupy the full space available on Volume Group vg01

 

 

[root@vm-host ~]# lvcreate -n commvault -l 100%FREE vg01
  Logical volume "commvault" created.

An alternative way to create the same LV is by running:

lvcreate -n commvault -L 10G vg01

(Note that as the VG here is actually slightly below 10 GiB, the -l 100%FREE variant above is the safer choice.)


19. Re-list block devices with lsblk to make sure the newly created Logical Volume commvault is really present and seen; in case it is missing, re-run the partprobe command again
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:1    0   10G  0 lvm  /home
│ └─vg00-var       253:2    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 
  └─vg01-commvault 253:3    0   10G  0 lvm  

 

As it is not mounted yet, the new volume will not be seen in the df free space output, but it will be seen as part of a volume group with vgdisplay:
 

[root@vm-host ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                    2.8G     0  2.8G   0% /dev
tmpfs                       2.8G   33M  2.8G   2% /dev/shm
tmpfs                       2.8G   17M  2.8G   1% /run
tmpfs                       2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/vg00-root        67G  2.4G   61G   4% /
/dev/mapper/vg00-var        9.8G 1021M  8.3G  11% /var
/dev/mapper/vg00-home       9.8G   24K  9.3G   1% /home
/dev/vda1                   974M  242M  665M  27% /boot
tmpfs                       569M     0  569M   0% /run/user/0

 

[root@vm-host ~]# vgdisplay 
  — Volume group —
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       2559 / <10.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nYP0tv-IbFw-fBVT-slBB-H1hF-jD0h-pE3V0S
   
  — Volume group —
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Snwz-utO7-Aw8c60
  


20. Create new ext4 filesystem on the just created vg01-commvault   
 

[root@vm-host ~]# mkfs.ext4 /dev/mapper/vg01-commvault 
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 1491d8b1-2497-40fe-bc40-5faa6a2b2644
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


21. Mount vg01-commvault into /opt directory
 

[root@vm-host ~]# mkdir -p /opt/

[root@vm-host ~]# mount /dev/mapper/vg01-commvault /opt/


22. Check that the mount is present on the VM guest OS
 

[root@vm-host ~]# mount|grep -i /opt
/dev/mapper/vg01-commvault on /opt type ext4 (rw,relatime)
[root@vm-host ~]# 

[root@vm-host ~]# df -h|grep -i opt
/dev/mapper/vg01-commvault  9.8G   24K  9.3G   1% /opt
[root@vm-host ~]# 
 

23. Add vg01-commvault to /etc/fstab, to be auto-mounted on the next Virtual Machine reboot
 

[root@vm-host ~]# echo '/dev/mapper/vg01-commvault /opt         ext4            defaults        1        2' >> /etc/fstab
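
Before rebooting, it is a good idea to make sure the new fstab line parses fine, by remounting everything from /etc/fstab and checking the /opt mount point (both are standard util-linux commands):

[root@vm-host ~]# mount -a
[root@vm-host ~]# findmnt /opt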

24. Install the Commvault backup client RPM in the new LVM mounted under /opt

[root@vm-host ~]# rpm -ivh commvault-fs.Instance001-11.0.0-80.240.0.3589820.240.4083067.el8.x86_64.rpm

[root@vm-host ~]# systemctl status commvault
● commvault.Instance001.service – commvault Service
   Loaded: loaded (/etc/systemd/system/commvault.Instance001.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 15:13:59 CET; 27s ago
  Process: 9972 ExecStart=/opt/commvault/Base/Galaxy start direct -focus Instance001 (code=exited, status=0/SUCCESS)
    Tasks: 54
   Memory: 155.5M
   CGroup: /system.slice/commvault.Instance001.service
           ├─10132 /opt/commvault/Base/cvlaunchd
           ├─10133 /opt/commvault/Base/cvd
           ├─10135 /opt/commvault/Base/cvfwd
           └─10137 /opt/commvault/Base/ClMgrS

Nov 10 15:13:57 vm-host.ffm.de.int.atosorigin.com systemd[1]: Starting commvault Service…
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Cleaning up /opt/commvault/Base/Temp …
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Starting Commvault services for Instance001 …
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: [22B blob data]
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com systemd[1]: Started commvault Service.
[root@vm-host ~]# 

 

That's it, the Commvault backup client is now installed and running from the new LVM volume mounted under /opt. Enjoy !

How to make a 27 inch monitor work at 2560×1440 with VirtualBox on Linux

Wednesday, October 4th, 2023

make-virtualbox-with-linux-work-on-2k-2560x1440-howto

I've bought a "second hand" refurbished EIZO FlexScan EV2760 S1 K1 monitor, an awesome monitor, from Kvant Serviz, a second-hand electronics reseller company located on the territory of the Bulgarian Academy of Sciences (BAN / BAS) and created by BAS people originally for the BAS people, and I am pretty happy with it for doing my daily job as a system administrator, especially as the monitor has seen very little screen time, only 256 hours of use (which is less than a year of full-time use), whereas EIZO guarantees their monitors for up to 5 full years of use time.

Those who deal with graphics, such as designers and people doing art on computers, have known the EIZO brand of monitors for quite some time now, and as most of those people are using Windows or Macintoshes, these monitors have mainly been created to work optimally with Windows / Mac computers at a higher resolution.
My work PC, a Dell Latitude 5510, has been running perfectly with the EIZO over its HDMI cable under Windows 10; however, as I'm using VirtualBox virtual machines with CentOS Linux, the VM did not automatically detect the highest 2K resolution this monitor supports, 2560×1440 at 60 Hz, which is the best one to use to fit more things on the screen and hopefully is also good for the eyes. The EcoView mode should also be a good idea for the eyes, as EcoView by EIZO tries to adjust the monitor brightness to lower levels according to the light in the room, to try to minimize eye strain. The EcoView mode is, I guess, a little bit like the famous Eye-Care feature of BenQ monitors. 
I'm talking about all these display specifics as I spent quite a lot of time learning the very basics about monitors, since my old 24 inch EIZO FlexScan 2436W monitor started to wear off with time and doesn't support HDMI cable input, so I had to use a special cable connector that converts the signal from HDMI to DVI (and I'm not sure how this really affects the eyes); plus the DVI quality is said to be a little bit worse than HDMI, as far as I have read on the topic online.

Well anyways currently I'm a happy owner of the EIZO EV2760 Monitor which has a full set of inputs of:

  • 27" In-Plane Switching (IPS) Panel
  • DisplayPort | HDMI | DVI-D | 3.5mm Audio
  • 2560 x 1440 Native Resolution
  • 1000:1 Typical Contrast Ratio
     

I've tried to make the monitor work with Linux, and my first assumption from what I had read was that I have to reinstall the Guest Additions tools in the VirtualBox VM, by adding the Guest Additions ISO via the VBox GUI interface:

Devices -> Insert Guest Additions CD Image

virtualbox-resolutions-screenshot

But I got an error that the Guest Additions ISO is missing,
so eventually I resolved it by remounting and reinstalling the Guest Additions with the following set of commands:

[root@localhost test]# yum install perl gcc dkms kernel-devel kernel-headers make bzip2
[root@localhost test]# cd /mnt/cdrom/
[root@localhost cdrom]# ls
AUTORUN.INF  runasroot.sh                       VBoxSolarisAdditions.pkg
autorun.sh   TRANS.TBL                          VBoxWindowsAdditions-amd64.exe
cert         VBoxDarwinAdditions.pkg            VBoxWindowsAdditions.exe
NT3x         VBoxDarwinAdditionsUninstall.tool  VBoxWindowsAdditions-x86.exe
OS2          VBoxLinuxAdditions.run

 


[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Verifying archive integrity… All good.
Uncompressing VirtualBox 6.1.34 Guest Additions for Linux……..
VirtualBox Guest Additions installer
Removing installed version 6.1.34 of VirtualBox Guest Additions…
Copying additional installer modules …
Installing additional modules …
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel
3.10.0-1160.80.1.el7.x86_64.
ERROR: Can't map '//etc/selinux/targeted/policy/policy.31':  Invalid argument

ERROR: Unable to open policy //etc/selinux/targeted/policy/policy.31.
libsemanage.semanage_read_policydb: Error while reading kernel policy from /etc/selinux/targeted/active/policy.kern. (No such file or directory).
OSError: No such file or directory
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted

 

 

The solution to that was that reinstalling the SELinux targeted security policy was necessary:

[root@localhost test]# yum reinstall selinux-policy-targeted


And of course re-run the install of the Guest Additions afterwards:

[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Unfortunately that still didn't resolve it, and even shutting down the VM and reloading it again with raised video memory for the emulated hardware (from 16 MB to 128 MB in the VM settings) does not give the option in the VirtualBox interface to set the resolution from
 

View -> Virtual Screen 1 (Resize to 1920×1200)

to any higher than that.

After a bit of googling I found that some newer monitor resolutions don't seem to be offered by the xrandr command, and a few extra xrandr commands need to be run to make the 2K resolution 2560×1440 @ 60 Hz work under the Linux virtual machine.

These are the extra xrandr commands that make it happen:

# xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
# xrandr --addmode Virtual1 2560x1440_60.00
# xrandr --output Virtual1 --mode "2560x1440_60.00"
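
In case you wonder where the long row of numbers (the modeline) comes from, a matching modeline for any resolution and refresh rate can be generated with the cvt utility shipped with the Xorg server utilities, and the values it prints can then be fed to xrandr --newmode (the numbers may differ slightly from the ones above depending on the timing formula used):

# cvt 2560 1440 60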

As these settings need to be re-run the next time the Virtual Machine starts, it is a good idea to place the commands in a tiny shell script:

[test@localhost ~]$ cat xrandr-set-resolution-to-2560×1440.sh 
#!/bin/bash
xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
xrandr --addmode Virtual1 2560x1440_60.00
xrandr --output Virtual1 --mode "2560x1440_60.00"


You can Download  the xrandr-set-resolution-to-2560×1440.sh script from here
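
To avoid running the script by hand after every reboot of the VM, one option (assuming an Xorg session started through a display manager that sources ~/.xprofile, which most do) is to make the script executable and call it from there; the path below is just an example of wherever you saved the script:

[test@localhost ~]$ chmod +x ~/xrandr-set-resolution-to-2560×1440.sh
[test@localhost ~]$ echo '~/xrandr-set-resolution-to-2560×1440.sh' >> ~/.xprofile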

Once the commands are run, to make VirtualBox take advantage of the new resolution you can simply put it in full-screen mode via


View -> Full-Screen Mode (can be triggered from the keyboard by pressing Right CTRL + F together)

[test@localhost ~]$ xrandr --addmode Virtual1 2560x1440_60.00
[test@localhost ~]$ xrandr --output Virtual1 --mode "2560x1440_60.00"
[test@localhost ~]$ xrandr 
Screen 0: minimum 1 x 1, current 2560 x 1440, maximum 8192 x 8192
Virtual1 connected primary 2560×1440+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1920×1200     60.00 +  59.88  
   2560×1600     59.99  
   1920×1440     60.00  
   1856×1392     60.00  
   1792×1344     60.00  
   1600×1200     60.00  
   1680×1050     59.95  
   1400×1050     59.98  
   1280×1024     60.02  
   1440×900      59.89  
   1280×960      60.00  
   1360×768      60.02  
   1280×800      59.81  
   1152×864      75.00  
   1280×768      59.87  
   1024×768      60.00  
   800×600       60.32  
   640×480       59.94  
   2560x1440_60.00  60.00* 
Virtual2 disconnected (normal left inverted right x axis y axis)
Virtual3 disconnected (normal left inverted right x axis y axis)
Virtual4 disconnected (normal left inverted right x axis y axis)
Virtual5 disconnected (normal left inverted right x axis y axis)
Virtual6 disconnected (normal left inverted right x axis y axis)
Virtual7 disconnected (normal left inverted right x axis y axis)
Virtual8 disconnected (normal left inverted right x axis y axis)

Tadadadam ! That's all folks, enjoy having your 27 Inch monitor running at 2560×1440 @ 60 Hz 🙂
 

 

How to set up Notify by email for expiring local UNIX user accounts on Linux / BSD with a bash script

Thursday, August 24th, 2023

password-expiry-linux-tux-logo-script-picture-how-to-notify-if-password-expires-on-unix

If you have already configured Linux local user account password security policy hardening (set password expiry, password quality, limit repeated access attempts, add dictionary checks, increase logged history command size) and you want your configured local user accounts on a Linux / UNIX / BSD system to not expire before the user is reminded that it is in his benefit to change his password on time, so he does not completely lose access to his account, then you might use a small script that just checks the upcoming expiry for a predefined set of users and emails kept in an array, using the lslogins command, like the one you will learn about in this article.

The script below is written by a colleague, Lachezar Pramatarov (credit for the script goes to him), in order to solve this annoying expiry problem that we had all the time, as me and my colleagues often ended up with expired accounts and had to bother to ask for password resets and sometimes even the clearing of account locks. Hopefully this little script will help some other legacy UNIX sysadmins to get rid of the account expiry problem.

For the script to work you will need to have a properly configured SMTP (mail server), with or without a relay, to be able to send to the email addresses predefined in the script that will get notified. 

Here is an example of a user whose account is about to expire in a couple of days, and who will benefit from getting the alert that he should hurry up and change his password before it is too late 🙂

[root@linux ~]# date
Thu Aug 24 17:28:18 CEST 2023

[root@server~]# chage -l lachezar
Last password change                                    : May 30, 2023
Password expires                                        : Aug 28, 2023
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 90
Number of days of warning before password expires       : 14

Here is the user_passwd_expire.sh script that will report the expiring users:

# vim  /usr/local/bin/user_passwd_expire.sh

#!/bin/bash

# This script will send warning emails for password expiration 
# on the participants in the following list:
# 20, 15, 10 and 0-7 days before expiration
# ! Script sends expiry Alert only if day is Wednesday – if (( $(date +%u)==3 )); !

# email to send if expiring
alert_email='alerts@pc-freak.net';
# the users that are admins added to belong to this group
admin_group="admins";
notify_email_header_customer_name='Customer Name';

declare -A mails=(
# list below accounts which will receive account expiry emails

# syntax to define uid / email
# ["account_name_from_etc_passwd"]="real_email_addr@fqdn";

#    ["abc"]="abc@fqdn.com"
#    ["cba"]="bca@fqdn.com"
    ["lachezar"]="lachezar.user@gmail.com"
    ["georgi"]="georgi@fqdn-mail.com"
    ["acct3"]="acct3@fqdn-mail.com"
    ["acct4"]="acct4@fqdn-mail.com"
    ["acct5"]="acct5@fqdn-mail.com"
    ["acct6"]="acct6@fqdn-mail.com"
#    ["acct7"]="acct7@fqdn-mail.com"
#    ["acct8"]="acct8@fqdn-mail.com"
#    ["acct9"]="acct9@fqdn-mail.com"
)

declare -A days

while IFS="=" read -r person day ; do
  days["$person"]="$day"
done < <(lslogins --noheadings -o USER,GROUP,PWD-CHANGE,PWD-WARN,PWD-MIN,PWD-MAX,PWD-EXPIR,LAST-LOGIN,FAILED-LOGIN  --time-format=iso | awk '{print "echo "$1" "$2" "$3" $(((($(date +%s -d \""$3"+90 days\")-$(date +%s)))/86400)) "$5}' | /bin/bash | grep -E " $admin_group " | awk '{print $1 "=" $4}')

#echo ${days[laprext]}
for person in "${!mails[@]}"; do
     echo "$person ${days[$person]}";
     tmp=${days[$person]}

#     echo $tmp
# each person will receive mails only if 20th days / 15th days / 10th days remaining till expiry or if less than 7 days receive alert mail every day

     if  (( (${tmp}==20) || (${tmp}==15) || (${tmp}==10) || ((${tmp}>=0) && (${tmp}<=7)) )); 
     then
         echo "Hello, your password for $(hostname -s) will expire after ${days[$person]} days.” | mail -s “$notify_email_header_customer_name $(hostname -s) server password expiration”  -r passwd_expire ${mails[$person]};
     elif ((${tmp}<0));
     then
#          echo "The password for $person on $(hostname -s) has EXPIRED before ${days[$person]} days. Please take an action ASAP." | mail -s "EXPIRED password of  $person on $(hostname -s)"  -r EXPIRED ${mails[$person]};

# ==3 meaning day is Wednesday the day on which OnCall Person changes

        if (( $(date +%u)==3 ));
        then
             echo "The password for $person on $(hostname -s) has EXPIRED. Please take an action." | mail -s "EXPIRED password of  $person on $(hostname -s)"  -r EXPIRED $alert_email;
        fi
     fi  
done

 


To make the script notify about expiring user accounts, place the script under some directory, let's say /usr/local/bin/user_passwd_expire.sh, make it executable and configure a cron job that will schedule it to run regularly.

# cat /etc/cron.d/passwd_expire_cron

# /etc/cron.d/pwd_expire
#
# Check password expiration for users
#
# 2023-01-16 LPR
#
02 06 * * * root /usr/local/bin/user_passwd_expire.sh >/dev/null

The cron job will execute the script every morning at 06:02. On every run it sends warning emails to the listed users whose passwords expire in 20, 15 or 10 days, and once only 7 days (or fewer) are left it keeps alerting them every single day (7th, 6th … down to 0 days) until the password expires. For accounts whose password has already expired, an alert is sent to the admin email, but only if the day is Wednesday (the 3rd day of the week).
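
Before relying on the cron job, it is worth running the script once by hand to see whom it would notify; the echo inside the loop prints each monitored account together with the days left until its password expires:

# /bin/bash /usr/local/bin/user_passwd_expire.sh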

If you don't have any expiring accounts and you want to force a specific account to have an expiry date, you can do it with:

# chage -E 2023-08-30 someuser


Or set it for newly created system users with:

# useradd -e 2023-08-30 username


That's it, the script will now notify you of user password expiry.

If you need to, for example, set a single account to expire 90 days from now (3 months), which is a kind of standard password expiry policy admins use, first compute the date with:

# date -d "90 days" +"%Y-%m-%d"
2023-11-22
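
The computed date can then be passed straight to chage in one go; a small sketch (someuser is a placeholder account name):

# chage -E $(date -d "90 days" +"%Y-%m-%d") someuser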


Ideas for user_passwd_expire.sh script improvement
 

The downside of the script is that, if you have too many local user accounts, you have to hardcode into it each username and the email address attached to it, and that would be a tedious task if you have 100+ accounts. 

However, if you already have a multitude of accounts in /etc/passwd within a given UID range, it is pretty easy to loop over them in a small shell loop and build the array from that. Of course, for a solution like this to work, you will have to have the user's email defined as GECOS data, with a command like chfn.
 

[georgi@server ~]$ chfn
Changing finger information for test.
Name [test]: 
Office []: georgi@fqdn-mail.com
Office Phone []: 
Home Phone []: 

Password: 

[root@server test]# finger georgi
Login: georgi                       Name: georgi
Directory: /home/georgi                   Shell: /bin/bash
Office: georgi@fqdn-mail.com
On since чт авг 24 17:41 (EEST) on :0 from :0 (messages off)
On since чт авг 24 17:43 (EEST) on pts/0 from :0
   2 seconds idle
On since чт авг 24 17:44 (EEST) on pts/1 from :0
   49 minutes 30 seconds idle
On since чт авг 24 18:04 (EEST) on pts/2 from :0
   32 minutes 42 seconds idle
New mail received пт окт 30 17:24 2020 (EET)
     Unread since пт окт 30 17:13 2020 (EET)
No Plan.

Then it should be relatively easy to add the GECOS data for multiple accounts, if you have them predefined in a text file, for each existing local user account, and to build the mails array from it with something like the short sketch below.
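
As a rough illustration of that idea, here is a minimal sketch only, assuming the email address is stored in the GECOS "Office" field exactly as in the chfn example above and that regular local accounts start at UID 1000; it builds the mails array dynamically instead of hardcoding it:

declare -A mails
while IFS=: read -r user _ uid _ gecos _; do
    # the second comma-separated GECOS field is the "Office" entry holding the email (set via chfn)
    email=$(echo "$gecos" | cut -d, -f2)
    # only regular local accounts (UID >= 1000) with something that looks like an email address
    if [ "$uid" -ge 1000 ] && [[ "$email" == *@* ]]; then
        mails["$user"]="$email"
    fi
done < /etc/passwd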

Hope this script will help some sysadmin out there, many thanks to Lachezar for allowing me to share the script here.
Enjoy ! 🙂

Fix ruby: /usr/lib/libcrypt.so.1: version `XCRYPT_2.0' not found in apt upgrade on Debian Linux 10

Saturday, August 5th, 2023

I have an old legacy ThinkPad laptop that, for simplicity, is running the Window Maker (wmaker) window manager; it had been lying on my home desk for almost a year, and since I'm spending a few days at my parents' home in Dobrich, I remembered it would be a good idea to update its software to the latest Debian packages to patch security issues. Thus, if you're like me and you tried to update your Debian 10 Linux to the latest stable release Debian packages and you end up with a critical error that is preventing apt from resolving conflicts, you may try to fix it with commands like:

# apt-get update --fix-missing

# apt --fix-broken install

As usual I looked in Google for a solution and found a few articles claiming to have scripts that fix it, but in the end nothing worked.
The shitty error occurred during the standard:

# apt-get update && apt-get upgrade

ruby: /usr/lib/libcrypt.so.1: version `XCRYPT_2.0' not found

Hence the cause and the workaround turned out to be very unexpected.
For some reason Debian makes a link

root@noah:/lib# ls -al /lib/libcrypt.so.1
lrwxrwxrwx 1 root root 19 Aug  3 16:53 /lib/libcrypt.so.1 ->
libcrypt.so.1.bak

root@noah:/lib# ls -al /lib/libcrypt.so.1.bak
lrwxrwxrwx 1 root root 16 Jun 15  2017 /lib/libcrypt.so.1.bak -> libcrypt-2.24.so

Thus, to resolve it and force the .deb package upgrade to continue, it comes down to simply deleting the strange symlink and re-running

# apt-get update && apt-get upgrade

Setting up libc6:i386 (2.31-13+deb11u6) …
/usr/bin/perl: /lib/libcrypt.so.1: version `XCRYPT_2.0' not found (required by /usr/bin/perl)
dpkg: error processing package libc6:i386 (--configure):
 installed libc6:i386 package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 libc6:i386

a few more times. If you still get some critical apt failures, each time make sure to re-run the command after again removing the strange symbolic link with:

# rm -f /lib/libcrypt.so.1
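
Once the upgrade has gone through, you can double check which libcrypt the dynamic linker now resolves by querying the ldconfig cache (just a verification step, not part of the fix itself):

# ldconfig -p | grep libcrypt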

That's all folks, after a short while your Debian will be updated to the latest packages. Enjoy folks ! 🙂

Generate and Add UUID for every existing Redhat / CentOS / RHEL network interface to configuration if missing howto

Saturday, August 5th, 2023

linux-fix-missing-uid-on-redhat-centos-fedora-networking-logo

If you manage old Linux machines, you might find that, either due to an update mess or because the old system administrators who manually maintained the configs forgot to include it, the Universally Unique IDentifier (UUID), a 128-bit label, is missing from the present network configuration in /etc/sysconfig/network-scripts/ifcfg-*. To fix that, I used a small one-liner after listing all the existing configured LAN interfaces reported by the iproute2 network stack with the ip command. As this might be useful to someone out there, here is the simple command; it returns a number of commands to copy-paste to the console later, once you have verified with grep that the UUIDs are not already present in the server configuration.

In overall to correct the configs and reload the network with the proper UUIDs here is what I had to do:


# grep -rli UUID /etc/sysconfig/network-scripts/ifcfg-*

No output from the recursive grep means UUIDs are not present on any existing interface, so we can step further, check all the machine's existing network interfaces and generate the missing UUIDs with the uuidgen command:

# ip a s |grep -Ei ': <'|sed -e 's#:##g' |grep -v '\.' |awk '{ print $2 }'
ifcfg-venet0
ifcfg-eth0
ifcfg-eth1

ifcfg-eth2
ifcfg-eth3

I've stumbled on that case on some legacy Linux machines inherited from other sysadmins, and in order to place the correct UUIDs I used the following one-liner: 

# for i in $(ip a s |grep -Ei ': <'|sed -e 's#:##g' |grep -v '\.' |awk '{ print $2 }'); do echo "echo UUID=$(uuidgen $i)"" >> ifcfg-$i"; done|grep -v '\-lo' 
echo UUID=26819d24-9452-4431-a9ca-176d87492b75 >> ifcfg-venet0
echo UUID=3c7e8848-0232-436f-a52a-46db9a03eb33 >> ifcfg-eth0
echo UUID=1fc0454d-bf23-417d-b960-571fc04754d2 >> ifcfg-eth1
echo UUID=5793c1e5-4481-4f09-967e-2cceda85c35f >> ifcfg-eth2
echo UUID=65fdcaf6-d271-4845-a8f1-0ec478c375d1 >> ifcfg-eth3


As you can see, I exclude the loopback interface lo from the output, as it is not necessary to have a UUID for it.
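
Once the generated echo lines look sane, apply them by changing into the network-scripts directory first (the generated redirections use relative ifcfg-* paths) and pasting them, then make NetworkManager re-read the ifcfg files (nmcli is present on RHEL / CentOS 7 and later):

# cd /etc/sysconfig/network-scripts/
# nmcli connection reload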
That's all folks problem solved. Enjoy

Analyze disk space usage in Linux / BSD with du / find and filelight /qdirstat / baobab GUI disk usage analyzers to check what takes up your disk space on Unix like OSes

Friday, April 21st, 2023

linux-how-to-find-out-what-files-and-directories-has-occupied-all-your-disk-space-partition-from-console-and-GUI_du-find-filelight-baobab-qdirstat-duff-linux-450x450

If you're a desktop Linux or BSD UNIX user and your hard disk / external SSD / flash drive etc. space starts to mysteriously disappear due to whatever reason, such as a crashing application rapidly producing log error / warning messages that quickly fill up the disk, or out of a sudden you have some disk space lost without knowing what kind of data filled up the disk, or you're downloading some big-sized bittorrent files forgotten in your bittorrent client, or completely mirroring a large website, and you suddenly get the result of the root directory ( / ) getting fully or nearly filled up, then you definitely would want to check out what disk activity has eaten up your disk space and is leading to OS and application slow responsiveness.

For the regular Linux *nix user, finding out what is filling the disk is a trivial task with find / du -hsc *, but as people have different habits for using find and du, I'll show you the most common ways I use these two command line tools to identify low disk space issues, for the sake of comparison.
Others who have better or easier ways to do it are very welcome to share them with me in the comments.
 

1. Finding large files on hard disk with find Linux command tool
 

host:~# find /home -type f -printf "%s\t%p\n" | sort -n | tail -10
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-6.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-7.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-8.bin
2100000000    /home/hipo/Downloads/MameUIfx incl. ROMs/MameUIfx incl. ROMs-9.bin
2815424080    /home/hipo/.thunderbird/h3dasfii.default\
/ImapMail/imap.gmail.com/INBOX
2925584895    /home/hipo/Documents/.git/\
objects/pack/pack-8590b069cad26ac0af7560fb42b51fa9bfe41050.pack
4336918067    /home/hipo/Games/Mames_4GB-compilation-best-arcade-games-of-your-14_04_2021.tar.gz
6109003776    /home/hipo/VirtualBox VMs/CentOS/CentOS.vdi
23599251456    /home/hipo/VirtualBox VMs/Windows 7/Windows 7.vdi
33913044992    /home/hipo/VirtualBox VMs/Windows 10/Windows 10.vdi

I use find more rarely on desktops and more often when I have to do some kind of data usage analysis on servers; for my Linux home computer and any other Linux desktop machine, or just for a small, quick analysis, the du command is much more appropriate to use.


2. Finding large files Megabyte occupying space files sorted in Megabytes and Gigas with du
 

  • Check the top 10 largest files and directories, sized in megabytes, hanging in a directory

pcfkreak:~# du -hsc /home/hipo/*|grep 'M\s'|sort -rn|head -n 10
956M    /home/hipo/last_dump1.sql
711M    /home/hipo/hipod
571M    /home/hipo/from-thinkpad_r61
453M    /home/hipo/ultimate-edition-themes
432M    /home/hipo/metasploit-framework
355M    /home/hipo/output-upgrade.txt
333M    /home/hipo/Плот
209M    /home/hipo/Work-New.tar.gz
98M    /home/hipo/DOOM64
90M    /home/hipo/mp3

  • Get the top 10 largest files and directories in gigabytes that are space hungry and eating up your space

pcfkreak:~# du -hsc /home/hipo/*|grep 'G\s'|sort -rn|head -n 10
156G    total
60G    /home/hipo/VirtualBox VMs
37G    /home/hipo/Downloads
18G    /home/hipo/Desktop
11G    /home/hipo/Games
7.4G    /home/hipo/ownCloud
7.1G    /home/hipo/Документи
4.6G    /home/hipo/music
2.9G    /home/hipo/root
2.8G    /home/hipo/Documents
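
If you don't want to grep separately for megabytes and gigabytes, sort's -h flag understands human-readable sizes, so a single command can rank everything under a directory regardless of the unit:

pcfkreak:~# du -xh --max-depth=1 /home/hipo 2>/dev/null | sort -rh | head -n 10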


If you want to keep working in the console terminal but you don't want to type too much, you can use the ncdu (ncurses) text tool; install it with:

# apt install --yes ncdu


https://www.pc-freak.net/images/ncdu-gnu-linux-debian-screenshot.png

For the laziest ones or complete Linux newbies that don't want to spend time typing / learning or using text commands or software, you can also check what has eaten up your full disk space with GUI tools as well.

There are at least 3 tools I'm aware of to check from a Graphical Interface what has occupied your disk space on Linux / BSD:

3. Filelight GUI disk usage analysis Linux tool

For those using KDE or preferring a shiny GUI interface that will capture the eye, perhaps filelight would be the tool of choice to get an analysis summary of your directory structures and file use on the laptop or desktop *nix OS.

unix-desktop:~# apt-cache show filelight|grep -i description-en -A 7
Description-en: show where your diskspace is being used
 Filelight allows you to understand your disk usage by graphically
 representing your filesystem as a set of concentric, segmented rings.
 .
 It is like a pie-chart, but the segments nest, allowing you to see both
 which directories take up all your space, and which directories
 and files inside those directories are the real culprits.
Description-md5: 397ff9a469e07a772f22460c66b66875


To use it simply go ahead and install it with apt or yum / dnf or whatever Linux package manager your distro uses:

unix-desktop:~# apt-get install --yes filelight

filelight-show-where-disk-space-is-being-used-graphically-tool-linux
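Filelight should also accept a directory (or URL) argument, so you can point it straight at the suspected folder; the path here is just an illustration:

unix-desktop:~$ filelight /home/hipo &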

4. GNOME Disk Usage Analyzer Baobab GUI tool

For those who are GNOME / Mate / Budgie / Cinnamon Graphical interface users, baobab should be the program to use, as it is a native GTK / GNOME application.

unix-desktop:~# apt-cache show baobab|grep -i description-en -A10
Description-en: GNOME disk usage analyzer
 Disk Usage Analyzer is a graphical, menu-driven application to analyse
 disk usage in a GNOME environment. It can easily scan either the whole
 filesystem tree, or a specific user-requested directory branch (local or
 remote).
 .
 It also auto-detects in real-time any changes made to your home
 directory as far as any mounted/unmounted device. Disk Usage Analyzer
 also provides a full graphical treemap window for each selected folder.
Description-md5: 5f6072b89ebb1dc83433fa7658814dc6
Homepage: https://wiki.gnome.org/Apps/Baobab

 

gnome-disk-analyzer-baobab-tool-screenshot-of-hard-disk-directory-locations-sorted-by-size
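Like filelight, baobab takes an optional directory argument, so the scan can be started on a specific branch right away (the path is only an example):

unix-desktop:~$ baobab /home/hipo &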

5. Qdirstat graphical application to show where your disk space has gone on Linux

Qdirstat is perhaps the best known tool to track disk space issues on Linux desktop hosts, loved by hardcore KDE / LXDE / LXQT / DDE GUI environment users, and as a Qt-based tool it uses the infamous Qt library. I personally don't like it and don't put it on machines I use, because I never use KDE and don't want to waste my disk space with additional libraries such as Qt, which historically was not totally free in terms of licensing and even now is shipped under both free and non-free terms (GPL / LGPL and the Qt Commercial License).

unix-desktop:~# apt-cache show qdirstat|grep -i description-en -A10
Description-en: Qt-based directory statistics
 QDirStat is a graphical application to show where your disk space has gone and
 to help you to clean it up.
 .
 QDirStat has a number of new features compared to KDirStat. To name a few:
  * Multi-selection in both the tree and the treemap.
  * Unlimited number of user-defined cleanup actions.
  * Properly show errors of cleanup actions (and their output, if desired).
  * File categories (MIME types) and their treemap color are now configurable.
  * Exclude rules for directories are easily configurable.
  * Desktop-agnostic; no longer relies on KDE or any other specific desktop.


qdirstat-linux-screenshot-show-what-directory-uses-most-hard-disk-space

That shiny fused graphic is actually a representation of all directories: the bigger the block, the larger the directory, and if you hover over the colorful map, a text with the directory or file name and its size will appear. Though the graphical representation is really cool, to me it is a bit unreadable, thus I prefer and recommend the other two GUI tools, filelight or baobab, instead.

6. Finding duplicate files on Linux system with duff command tool

Talking about big unknown left-over files on your hard drives, it is appropriate to mention here one tool that, although console based, is very useful to anyone willing to get rid of old duplicate files hanging around on the disk. Sometimes such copies are produced while copying large amounts of files from place to place, or simply by mistake while copying Photo / Video files from your Smart Phone to the Linux desktop etc.

This is where the duff command line utility might be super beneficial for you.

unix-desktop:~# apt-cache show duff|grep -i description-en -A3
Description-en: Duplicate file finder
 Duff is a command-line utility for identifying duplicates in a given set of
 files.  It attempts to be usably fast and uses the SHA family of message
 digests as a part of the comparisons.

Using the duff tool is very straightforward; to see all the duplicate files hanging in a directory, let's say your home folder:

unix-desktop:~#  duff -rP /home/hipo

/home/hipo/music/var/Quake II Soundtrack – Kill Ratio.mp3
/home/hipo/mp3/Quake II Soundtrack – Kill Ratio.mp3
2 files in cluster 44 (7913472 bytes, digest 98f38be49e2ffcbf90927f9357b3e24a81d5a649)
/home/hipo/music/var/HYPODIL_01-Scakauec.mp3
/home/hipo/mp3/HYPODIL_01-Scakauec.mp3
2 files in cluster 45 (2807808 bytes, digest ce9067ce1f132fc096a5044845c7fac73e99c0ed)
/home/hipo/music/var/Quake II Suondtrack – March Of The Stoggs.mp3
/home/hipo/mp3/Quake II Suondtrack – March Of The Stoggs.mp3
2 files in cluster 46 (3506176 bytes, digest efcc401b4ebda9b0b2367aceb8e334c8ba1a357d)
/home/hipo/music/var/Quake II Suondtrack – Quad Machine.mp3
/home/hipo/mp3/Quake II Suondtrack – Quad Machine.mp3
2 files in cluster 47 (7917568 bytes, digest 0905c1d790654016c2ecf2949f78d47a870c3822)
/home/hipo/music/var/Cyberpunk Group – Futureshock!.mp3
/home/hipo/mp3/Cyberpunk Group – Futureshock!.mp3

-r (Recursively search into all specified directories.)

-P (Don't follow any symbolic links. This overrides any previous -H or -L option. This is the default. Note that this only applies to directories, as symbolic links to files are never followed.)
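duff also takes any number of files or directories as arguments, so you can restrict the comparison to just the folders you suspect hold the copies, for instance the two music directories from the listing above:

unix-desktop:~# duff -r /home/hipo/music /home/hipo/mp3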

7. Deleting duplicate files with duff

If you're absolutely sure you know what you're doing and you have a backup in case something messes up during the duplicate deletions, then to get rid of, let's say, any duplicate Picture files found by duff, run something like:

# duff -e0 -r /home/hipo/Pictures/ | xargs -0 rm
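A safer dry run (a sketch using the same flags, not an official duff recipe) is to feed the excess list to ls first, eyeball what would go away, and only then swap ls -lh for rm:

# duff -e0 -r /home/hipo/Pictures/ | xargs -0 ls -lh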

!!! Please note that using duff for deletions is only for those who absolutely know what they're doing and have a recent backup of their data. If you delete the wrong data by mistake with the tool, you'll be the only one to blame  🙂 !!!

Wrap it Up

Filling up the disk with unknown large files is a problem that has to be resolved quite often. For the unlazy on Linux / BSD / Mac OS and other UNIX-like OS-es, the easiest way is to use find or du with some one-liner command. For the lazy, Windows-addicted Graphical users, the filelight, qdirstat or baobab GUI disk usage analysis tools are there.
If you have a lot of files and many of them are duplicates, you can use duff to check them out and remove all unneeded duplicates to save space.
Hope this article was helpful for someone.
That's all folks, enjoy your preventive disk housekeeping, and if you know any other good and easy command line or GUI tools or hints for keeping drive disk space in check, please share.

Megaraid SAS software installation on CentOS Linux

Saturday, October 20th, 2012

With a standard el5 on a new Dell server, it may be necessary to install the Dell RAID driver, otherwise OMSA always reports an error and hardware monitoring is therefore useless:

The package previously called megaraid_sys is now called mptlinux.

For this we need the following packages in advance:

# yum install gcc kernel-devel
Now the driver stuff:

# yum install dkms mptlinux
That should have built the new module; better test it:

# modinfo mptsas

# dkms status
After a kernel update it may be necessary to build the driver for the new version:

# dkms build -m mptlinux -v 4.00.38.02

# dkms install -m mptlinux -v 4.00.38.02
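
To double check that the rebuilt module is registered with dkms and visible to the running kernel, generic module / dkms commands such as these should be enough (nothing Dell-specific about them):

# dkms status | grep mptlinux
# modinfo -F version mptsas
# lsmod | grep -i mpt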