How to turn On or Off the ORCA Screen Reader on a Linux Desktop, enabled by a mistype or a kid smashing the keyboard


November 22nd, 2023

orca-screen-reader-communication-services-logo

For those who type quite fast and use Microsoft Windows, it's quite common to accidentally start the annoying NARRATOR (the Windows screen-reading program) by mistyping and pressing together Windows key + Control + Enter.
This enables Narrator to read stuff on the screen here and there, and to turn it off you just have to press Windows key + Control + Enter again to TURN OFF NARRATOR.

Linux does not have Narrator, but it does have an embedded assistive technology for the visually impaired called ORCA.

Orca works with applications and toolkits that support the Assistive Technology Service Provider Interface (AT-SPI), which is the primary assistive technology infrastructure for Linux and Solaris. Applications and toolkits supporting the AT-SPI include the GNOME Gtk+ toolkit, the Java platform's Swing toolkit, LibreOffice, Gecko, and WebKitGtk. AT-SPI support for the KDE Qt toolkit is being pursued.

ORCA is nowadays installed and integrated into many, if not most, Linux distributions out there. Enabling ORCA is not such a common thing on Linux, so today I got quite puzzled when I came back to the computer, having left my 3.7-month-old kid near the keyboard, and found out that I had enabled a screen reader that reads aloud every Window / Menu / Program or object I select with the mouse on my Linux MATE Desktop GUI environment, running on top of Debian Linux.

After a quick lookup in Google on what exactly is the Linux program that was reading my screen, I came across ORCA, which was also visible as running in my process list:

hipo@jeremiah:~/Downloads$ ps -ef|grep -i orca
hipo     1068376    7960 17 18:48 tty2     00:00:01 orca

After a quick check online I found out that,

To start (Turn On) the Orca Screen Reader using the keyboard, press:

Super key (Windows logo button) + Alt + S

Of course, it is possible to shut off the annoying reader by simply killing it (note that kill expects a PID, so to kill by process name use pkill or killall):

# pkill -9 orca

 

Ubuntu users could start Orca using a mouse and keyboard:

Open the Activities overview and start typing Accessibility.

Click Accessibility to open the panel.

Select the Seeing section to open it.

Switch the Screen Reader switch to on.
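Alternatively, on GNOME (and on desktops shipping the GNOME accessibility schemas) the same toggle is exposed as a gsettings key, so the reader can be switched off from a terminal as well; a minimal sketch, assuming the org.gnome.desktop.a11y.applications schema is present:

gsettings set org.gnome.desktop.a11y.applications screen-reader-enabled false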

Problem solved, the Screen Reader on Linux is now disabled. Maybe it is time to also disable the Orca key-press shortcut to prevent the kid from enabling it again, since thankfully I don't actively need it. That can be done by unbinding the respective key with:

xmodmap -e 'keycode <value>='
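To find the keycode value to disable, one approach is to run xev from the X session, press the key in question and note the "keycode NN" number it prints in the KeyPress events (keycode 39 below is only an example, often the 'S' key on PC layouts; verify with xev first):

xev
xmodmap -e 'keycode 39='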

or simply removing the orca package with apt:

# apt remove orca

Resize KVM .img QCOW Image file and Create new LVM partition and ext4 filesystem inside KVM Virtual Machine


November 10th, 2023

LVM-add-space-to-RHEL-Linux-on-KVM_Virtual_machine-howto

Part of a migration project for a customer I'm working on is the migration of a couple of KVM-based Guest virtual machine servers. The old machines have a backup strategy using IBM's TSM, and the new machines should use the cheaper solution adopted by the customer company, the CommVault backup solution (an enterprise software that is used for data backup and recovery, not only to a local Tape Library / data blobs on central backup server infra, but also in Cloud infrastructure).

To install the CommVault software on the Redhat Linux-es, the official install documentation (prepared by the team who prepared the CommVault infrastructure for the customer) recommends having a separate partition for the CommVault backups under the /opt directory (/opt/commvault), and the partition should be at least 10 Gigabytes in size.

Unfortunately, on our newly prepared KVM VM guest machines it was forgotten to have the separate 10GB /opt prepared in advance, and we ended up with Virtual Machines that have a / (root directory) of 68GB and separate /var and /home LVM partitions. Thus, to correct the issue, it was required to find a way to add another separate LVM partition inside the KVM VirtualMachine.img (QCOW image file).

This seemed to be an easy task at first, as it might be possible with a simple loop mount of the .img with the losetup and kpartx commands plus the LVM tools, in some way such as:

# losetup /dev/loop0 /kvm/VM.img   # attach the .img to a loop device first
# mount /dev/loop0 /mnt/test/

# kpartx -a /dev/loop0
# kpartx -l /dev/loop0
# ls -al /dev/mapper/*

… 

# lvreduce 

etc. However, unfortunately kpartx, though not returning an error, did not provide the new /dev/mapper devices to be used with the LVM tools, and this approach seems not to be possible on RHEL 8.8, as kpartx couldn't list anything.

 

A colleague of mine, Mr. Paskalev, suggested that we could perhaps try to mount the partition with the default KVM tool to mount .img partitions, which is guestmount, with a command like:

# guestmount -a /kvm/VM.img -i --rw /mnt/test/

But unfortunately this mounted the filesystem as a FUSE filesystem, and the LVM /dev/mapper devices of the VM couldn't be seen, so we decided to abandon this method.

After some pondering with Dimitar Paskalev and Dimitar Hristov, thanks to joint efforts we found the way to do it; below are the steps we followed to succeed in creating the new LVM ext4 partition required.
 

1. Check enough space is available on the HV machine

 

The VMs are held under /kvm so in this case:

[root@hypervisor-host ~]# df -h|grep -i /kvm
/dev/mapper/vg00-vmprivate  206G   27G  169G  14% /kvm

 

2. Shutdown the running VM and make sure it is stopped
 

[root@hypervisor-host ~]# virsh shutdown vm-host

 

[root@hypervisor-host ~]# virsh list --all
 Id   Name       State
--------------------------
 4    lpdkv01f   running
 5    vm-host    shut off

 

3. Check current Space status of VM

 

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img       
image: /kvm/vm-host.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 8.62 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false

 

4. Resize (extend) the VM image with whatever extra size you want
 

[root@hypervisor-host ~]# qemu-img resize /kvm/vm-host.img +10G
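To double-check that the resize took effect before booting the VM, the virtual size can be queried again (for this example it should now report 110 GiB):

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img | grep 'virtual size'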

 

5. Start VM    
 

[root@hypervisor-host ~]# virsh start vm-host


6. Check the LVM and block devices on the HV (not necessary, but good for an overview)
 

[root@hypervisor-host ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda2  vg00 lvm2 a--  277.87g 19.87g
  
[root@hypervisor-host ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg00   1  11   0 wz--n- 277.87g 19.87g

 

[root@hypervisor-host ~]# lsblk 
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk 
├─sda1               8:1    0     1G  0 part /boot
└─sda2               8:2    0 277.9G  0 part 
  ├─vg00-root      253:0    0    15G  0 lvm  /
  ├─vg00-swap      253:1    0     1G  0 lvm  [SWAP]
  ├─vg00-var       253:2    0     5G  0 lvm  /var
  ├─vg00-spool     253:3    0     2G  0 lvm  /var/spool
  ├─vg00-audit     253:4    0     3G  0 lvm  /var/log/audit
  ├─vg00-opt       253:5    0     2G  0 lvm  /opt
  ├─vg00-home      253:6    0     5G  0 lvm  /home
  ├─vg00-tmp       253:7    0     5G  0 lvm  /tmp
  ├─vg00-log       253:8    0     5G  0 lvm  /var/log
  ├─vg00-cache     253:9    0     5G  0 lvm  /var/cache
  └─vg00-vmprivate 253:10   0   210G  0 lvm  /vmprivate

  
7. Check logical volumes on the Hypervisor host
 

[root@hypervisor-host ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg00/swap
  LV Name                swap
  VG Name                vg00
  LV UUID                3tNa0n-HDVw-dLvl-EC06-c1Ex-9jlf-XAObKm
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                JBerim-fxVv-jU10-nDmd-figw-4jVA-8IYdxU
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg00/spool
  LV Name                spool
  VG Name                vg00
  LV UUID                nFlmp2-iXg1-tFxc-FKaI-o1dA-PO70-5Ve0M9
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/vg00/audit
  LV Name                audit
  VG Name                vg00
  LV UUID                e6H2OC-vjKS-mPlp-JOmY-VqDZ-ITte-0M3npX
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/vg00/opt
  LV Name                opt
  VG Name                vg00
  LV UUID                oqUR0e-MtT1-hwWd-MhhP-M2Y4-AbRo-Kx7yEG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
   
  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                ehdsH7-okS3-gPGk-H1Mb-AlI7-JOEt-DmuKnN
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/vg00/tmp
  LV Name                tmp
  VG Name                vg00
  LV UUID                brntSX-IZcm-RKz2-CP5C-Pp00-1fA6-WlA7lD
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/vg00/log
  LV Name                log
  VG Name                vg00
  LV UUID                ZerDyL-birP-Pwck-yvFj-yEpn-XKsn-sxpvWY
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8
   
  --- Logical volume ---
  LV Path                /dev/vg00/cache
  LV Name                cache
  VG Name                vg00
  LV UUID                bPPfzQ-s4fH-4kdT-LPyp-5N20-JQTB-Y2PrAG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9
   
  --- Logical volume ---
  LV Path                /dev/vg00/root
  LV Name                root
  VG Name                vg00
  LV UUID                mZr3p3-52R3-JSr5-HgGh-oQX1-B8f5-cRmaIL
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                LxNRWV-le3h-KIng-pUFD-hc7M-39Gm-jhF2Aj
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-09-18 11:54:19 +0200
  LV Status              available
  # open                 1
  LV Size                210.00 GiB
  Current LE             53760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

8. Check existing Hypervisor partitions and space
 

[root@hypervisor-host ~]# fdisk -l
Disk /dev/sda: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0581e6e2

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/mapper/vg00-root: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-spool: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-audit: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-opt: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-tmp: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-log: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-cache: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-vmprivate: 210 GiB, 225485783040 bytes, 440401920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

9. List block devices on the VM
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:2    0   10G  0 lvm  /home
│ └─vg00-var       253:3    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 

 

 

10. Create new LVM partition with fdisk or cfdisk
 

If cfdisk is not present, the new space resized with qemu-img can be partitioned with fdisk, though I personally always prefer to use cfdisk:

[root@vm-host ~]# fdisk /dev/vda
# > p (print the partition table)
# > m (print the help menu)
# > n (add a new partition)
# … follow the on-screen instructions to select start and end blocks
# > t (change partition type)
# > select the new partition and set it to type 8e (Linux LVM)
# > w (write changes)

[root@vm-host ~]# cfdisk /dev/vda


Set up the new partition from the Free space as a [ primary ] partition and choose it to be of type LVM.
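For scripted setups the same partitioning can also be done non-interactively; below is a minimal sketch with parted, assuming (as in this example) that the free space added with qemu-img begins at the 90GiB mark and the new partition will be number 4:

# parted -s /dev/vda mkpart primary 90GiB 100%
# parted -s /dev/vda set 4 lvm on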


11. List partitions to make sure the new LVM partition is present
 

[root@vm-host ~]# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7b2d9fd

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048   2099199   2097152   1G 83 Linux
/dev/vda2         2099200 186646527 184547328  88G 8e Linux LVM
/dev/vda3       186646528 188743679   2097152   1G 82 Linux swap / Solaris
/dev/vda4       188743680 209715199  20971520  10G 8e Linux LVM

The extra added 10 Gigabytes are seen under /dev/vda4.

 

12. List LVM Physical Volumes
 

[root@vm-host ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8

 


  
  Notice that /dev/vda4 is not seen in pvdisplay (the Physical Volume display command) because it is not initialized yet, so let's create it.
 

13. Initialize the new Physical Volume to be available for use by LVM
 

[root@vm-host ~]# pvcreate /dev/vda4
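A quick sanity check that LVM now knows about the new PV (sizes in the output will match your setup):

[root@vm-host ~]# pvs /dev/vda4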


14. Inform the OS of partition table changes
 

If partprobe is not available as a command on the host, the below obscure command should do the trick:
 

[root@vm-host ~]# echo "- - -" | tee /sys/class/scsi_host/host*/scan

However, it is usually better to use partprobe to inform the Operating System of partition table changes:

[root@vm-host ~]# partprobe


15. Use lsblk again to see the new /dev/vda4 LVM partition listed under the "vda" root block device
 

[root@vm-host ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0  100G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
├─vda2        252:2    0   88G  0 part 
│ ├─vg00-root 253:0    0   68G  0 lvm  /
│ ├─vg00-home 253:1    0   10G  0 lvm  /home
│ └─vg00-var  253:2    0   10G  0 lvm  /var
├─vda3        252:3    0    1G  0 part [SWAP]
└─vda4        252:4    0   10G  0 part 
[root@vm-host ~]# 


16. Create new Volume Group (VG) on the /dev/vda4 block device
 

Before creating a new VG, list the existing VGs on the machine to be sure the name of the newly created one is not already taken.
 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60

Only vg00 exists, so we can use vg01 as the Volume Group name for the new volume group where the fresh 10GB LVM partition will lay:

[root@vm-host ~]# vgcreate vg01 /dev/vda4
  Volume group "vg01" successfully created

 

17. Create new Logical Volume (LV) and extend it to occupy the full space available on Volume Group vg01

 

 

[root@vm-host ~]# lvcreate -n commvault -l 100%FREE vg01
  Logical volume "commvault" created.

  An alternative way to create the same LV is by running:

lvcreate -n commvault -L 10G vg01

(though with this VG reporting a size of <10.00 GiB, -L 10G may be refused as slightly too big, so the -l 100%FREE variant above is the safer choice)


18. Relist block devices with lsblk to make sure the newly created Logical Volume commvault is really present and seen; in case it is missing, re-run the partprobe cmd again
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:1    0   10G  0 lvm  /home
│ └─vg00-var       253:2    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 
  └─vg01-commvault 253:3    0   10G  0 lvm  

 

As it is not mounted yet, the new volume will not be seen in the df free space output, but it will be seen as part of a volume group with vgdisplay:
 

[root@vm-host ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                    2.8G     0  2.8G   0% /dev
tmpfs                       2.8G   33M  2.8G   2% /dev/shm
tmpfs                       2.8G   17M  2.8G   1% /run
tmpfs                       2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/vg00-root        67G  2.4G   61G   4% /
/dev/mapper/vg00-var        9.8G 1021M  8.3G  11% /var
/dev/mapper/vg00-home       9.8G   24K  9.3G   1% /home
/dev/vda1                   974M  242M  665M  27% /boot
tmpfs                       569M     0  569M   0% /run/user/0

 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       2559 / <10.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nYP0tv-IbFw-fBVT-slBB-H1hF-jD0h-pE3V0S
   
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Snwz-utO7-Aw8c60
  


19. Create new ext4 filesystem on the just created vg01-commvault
 

[root@vm-host ~]# mkfs.ext4 /dev/mapper/vg01-commvault 
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 1491d8b1-2497-40fe-bc40-5faa6a2b2644
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


20. Mount vg01-commvault on the /opt directory
 

[root@vm-host ~]# mkdir -p /opt/

[root@vm-host ~]# mount /dev/mapper/vg01-commvault /opt/


21. Check the mount is present on the VM guest OS
 

[root@vm-host ~]# mount|grep -i /opt
/dev/mapper/vg01-commvault on /opt type ext4 (rw,relatime)
[root@vm-host ~]# 

[root@vm-host ~]# df -h|grep -i opt
/dev/mapper/vg01-commvault  9.8G   24K  9.3G   1% /opt
[root@vm-host ~]# 
 

22. Add vg01-commvault to be auto-mounted via /etc/fstab on next Virtual Machine reboot
 

[root@vm-host ~]# echo '/dev/mapper/vg01-commvault /opt         ext4            defaults        1        2' >> /etc/fstab
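Before relying on the new fstab entry on next reboot, it is a good idea to test that it parses correctly by unmounting and letting mount -a remount everything listed in /etc/fstab:

[root@vm-host ~]# umount /opt && mount -a && df -h /opt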

23. Install the CommVault backup client RPM in the new mounted LVM under /opt

[root@vm-host ~]# rpm -ivh commvault-fs.Instance001-11.0.0-80.240.0.3589820.240.4083067.el8.x86_64.rpm

[root@vm-host ~]# systemctl status commvault
● commvault.Instance001.service - commvault Service
   Loaded: loaded (/etc/systemd/system/commvault.Instance001.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 15:13:59 CET; 27s ago
  Process: 9972 ExecStart=/opt/commvault/Base/Galaxy start direct -focus Instance001 (code=exited, status=0/SUCCESS)
    Tasks: 54
   Memory: 155.5M
   CGroup: /system.slice/commvault.Instance001.service
           ├─10132 /opt/commvault/Base/cvlaunchd
           ├─10133 /opt/commvault/Base/cvd
           ├─10135 /opt/commvault/Base/cvfwd
           └─10137 /opt/commvault/Base/ClMgrS

Nov 10 15:13:57 vm-host.ffm.de.int.atosorigin.com systemd[1]: Starting commvault Service…
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Cleaning up /opt/commvault/Base/Temp …
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Starting Commvault services for Instance001 …
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: [22B blob data]
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com systemd[1]: Started commvault Service.
[root@vm-host ~]# 

 


Simple bash shell script to easily and quickly automate deployment of RHEL Linux Virtual Machines on KVM Virtualization


November 3rd, 2023

how-to-install-a-kvm-guest-os-from-the-commandline-easily-with-script-virt-install-logo

Earlier I've blogged about how to build a KVM Virtual Machine RHEL 8.3 Linux install on a Redhat 8.3 Linux Hypervisor with a custom tailored kickstart.cfg. In one of the projects I was involved in as part of my work duty as a system administrator, we had the task to build
KVM virtual machines and build a High Availability Linux PCS Corosync / Pacemaker / haproxy cluster on them, to save the company some money on purchasing VMWare licenses.

The setup of a KVM Virtual machine at first glimpse is relatively simple, and I thought this could be done in just 2 / 3 days, but it turned out to take 2 weeks or so to properly prepare the kickstart file, learn a bit about the basic virt-install KVM options, and experiment with them until
we could produce normal working Virtual machines. 

The original simple script that I used to bring up new KVM virtual machines a bit more easily, virt-install-kvm.sh, looked like so:
 

#!/bin/sh
# Script to build a new VM based on a kickstart.cfg file template
# with virt-install
# Author Georgi Georgiev hip0
# hipo@Pc-freak.net

KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='70';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VMNAME" –description "$VM_DESCR" –os-type=Linux –os-variant=rhel8.3 –ram=8192 –vcpus=8 –location="$ISO_LOCATION" –disk path=$VM_IMG_FILE,bus=virtio,size=$IMG_VM_SIZE –graphics none –initrd-inject=/root/$KS_FILE –extra-args "console=ttyS0 ks=file:/$KS_FILE"

 

The script basically did what it was aimed for, but modifying the kickstart ks.cfg every time with multiple additional parameters, like network configuration, as well as some additional modifications to ks.cfg for the VM parameters, was annoying.

Recently, as we had to repeat the same task again in order to migrate an old customer's Linux OpenVZ Virtual machines to a newer OS with RHEL and KVM virtual machines, I took some hours and scripted a small script that would ease our task of building new KVM Virtual Machines
from scratch relatively easily, should we have to repeat the Linux OS build operation again and again.

Before proceeding to use the script, of course, one has to use LVM and set up the partitions on the Host which will be the Hypervisor where the KVM VMs will be installed. You also need to download an ISO image of Redhat Enterprise Linux / CentOS / Fedora or whatever kind of RPM-based Linux
you would like to set up, do the basic configuration regarding the emulated hardware "power" of the new Virtual Machine (CPU / Memory / Disk Partition), and choose a proper meaningful VM name (preferably following a well-crafted naming convention that tells a bit about what is inside the KVM VM container).

The script that automates the KVM VM installation a bit, which you will find below, you can also download from here: virt-install.sh

 

#!/bin/bash
virt_install_path=$(which virt-install);
vm_host='vmname01';
boot_iso='/vmprivate/rhel-8.7-x86_64-dvd.iso';
os_version='rhel8.7';
vm_cpus_count='4';
vm_ram='6144';
vm_img_location='/vmprivate/NAME-of-VM.img';
vm_machine_description='Name Production system';
ks_file_location_template='/root/ks.cfg.templ';
ks_file_location='/root/ks.cfg';
ks_vm_read_loc=$(echo $ks_file_location |sed -e 's#root/##g');
vm_main_ip='192.168.23.52';
vm_netmask='255.255.255.192';
vm_gateway='192.168.23.33';
vm_nameserver='172.30.50.2';
#--ip=192.168.233.52 --netmask=255.255.255.192 --gateway=192.168.233.33 --nameserver=172.20.88.2

echo "Checking if defined $vm_host image $vm_img_location is not already present";
if [ ! -f $vm_img_location ]; then
echo
else
echo "Exiting, $vm_img_location is present";
echo "To destroy the existing VM image $vm_img_location run manually cmds:";
echo "virsh list; virsh destroy $vm_host; virsh undefine $vm_host --remove-all-storage";
exit 1;
fi

alias cp='cp -f';
/usr/bin/cp -rpf $ks_file_location_template $ks_file_location;

echo "Setting  VM Main IP: $vm_main_ip";
echo "Setting  VM netmask: $vm_netmask";
echo "Setting  VM Gateway: $vm_gateway";
echo "Setting  VM Nameserver $vm_nameserver";
/usr/bin/perl -pi -e "s/ip=a1.a2.a3.a4/ip=$vm_main_ip/" $ks_file_location
/usr/bin/perl -pi -e "s/netmask=b1.b2.b3.b4/netmask=$vm_netmask/" $ks_file_location
/usr/bin/perl -pi -e "s/gateway=c1.c2.c3.c4/nameserver=$vm_nameserver/" $ks_file_location
/usr/bin/perl -pi -e "s/nameserver=d1.d2.d3.d4/gateway=$vm_gateway/" $ks_file_location
/usr/bin/perl -pi -e "s/vm-hostname/$vm_host/" $ks_file_location

echo "Running VM install:";
echo $virt_install_path -n $vm_host --description "$vm_machine_description" --os-type=Linux --os-variant=$os_version --ram=$vm_ram --vcpus=$vm_cpus_count --location=$boot_iso --disk path=$vm_img_location,bus=virtio,size=90 --graphics none --initrd-inject=$ks_file_location --extra-args "console=ttyS0 ks=file:$ks_vm_read_loc"

$virt_install_path -n $vm_host --description "$vm_machine_description" --os-type=Linux --os-variant=$os_version --ram=$vm_ram --vcpus=$vm_cpus_count --location=$boot_iso --disk path=$vm_img_location,bus=virtio,size=90 --graphics none --initrd-inject=$ks_file_location --extra-args "console=ttyS0 ks=file:$ks_vm_read_loc"
 

 

Modify the script to insert the required parameters of the new VM in the script header section; you will have to provide the below options.

 

vm_host='vmname01';
boot_iso='/vmprivate/rhel-8.7-x86_64-dvd.iso';
os_version='rhel8.7';
vm_cpus_count='4';
vm_ram='6144';
vm_img_location='/vmprivate/NAME-of-VM.img';
vm_machine_description='Name Production system';
ks_file_location_template='/root/ks.cfg.templ';
ks_file_location='/root/ks.cfg';
ks_vm_read_loc=$(echo $ks_file_location |sed -e 's#root/##g');
vm_main_ip='192.168.23.52';
vm_netmask='255.255.255.192';
vm_gateway='192.168.23.33';
vm_nameserver='172.30.50.2';

The automatic Build virtual machine script is tested and works with Redhat Enterprise Linux, and of course it is pretty primitive, as there is so much available online that would do similar, but still I like it because it works for me 🙂

You will need also the ks.cfg.templ file, which has the basic kickstart configuration that will bring up the Virtual machines according to the predefined configuration.
Here are the ks.cfg.templ and ks.cfg files as well; you will have to place them under /root/ or some other directory.
Of course this script is just a basic one; it can easily be updated to accept its variable options as arguments, if you need to bring up a multitude of virtual machines relatively quickly, with a few minor modifications like the sketch below.
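For example, a minimal sketch of such a modification with bash getopts (the flag names below are just an assumption, pick whatever suits you) would let you override the hardcoded defaults per invocation:

# parse optional flags: -n VM name, -i main IP, -r RAM in MB, -c vCPU count
while getopts "n:i:r:c:" opt; do
  case $opt in
    n) vm_host=$OPTARG ;;
    i) vm_main_ip=$OPTARG ;;
    r) vm_ram=$OPTARG ;;
    c) vm_cpus_count=$OPTARG ;;
    *) echo "Usage: $0 [-n vm_name] [-i ip] [-r ram_mb] [-c cpus]"; exit 1 ;;
  esac
done

Invoked e.g. as ./virt-install.sh -n vmname02 -i 192.168.23.53, it would fall back to the in-script defaults for anything not passed.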

Hope the script is helpful to some sysadmin out there. If so don't forget to donate me for a beer in my Patreon account found in the widgets section 🙂

Check the Type and Model of available installed Memory on Linux / Unix / BSD Server howto


October 30th, 2023

how-linux-kernel-manages-memory-picture

As a system administrator, one of the common tasks one has to do is to Add / Remove or Replace (in case of a broken or failing bank of RAM memory) an additional memory bank on a Linux / BSD / Unix server. Let's say you need to fulfill the new RAM purchase and provide some information to the SDM (Service Delivery Manager) of the company you're hired in, or you need to place the purchase yourself. Then you need to know the exact speed and type of RAM currently installed on the server.

In this article I'll shortly explain how I find out RAM (SDRAM) information via an ordinary remote SSH shell session command prompt. In short, it will be shown how one can check the RAM speed configured and detected by the Linux / Unix kernel,
as well as how to check the type of memory (whether it is DDR / DDR2 / DDR3 / DDR4) or ECC, with no access to the hardware console. Please note this article will definitely be boring for the experienced sysadmins, but it might help starter sysadmins to get on board with well-known basic stuff.

There are several approaches; of course the easiest one is to use the statistics web interface of the remote hardware access interface, e.g. the iLO (on HP machines), the iDRAC (on Dell Servers) or Fujitsu's iRMC. However, access to the remote hardware management interface is not always available to the admin. Linux comes with a few commands that can do the trick, available in most Linux distributions straight from the default package repositories.

Since ECC was mentioned a bit further up: most old-school admins and computer users know pretty well about the DDR generations, as they have been present over time, but ECC has been actively used on servers for perhaps the last 10 / 15 years, and for those who haven't dealt with it, below is a short description of what ECC RAM memory is.

ECC RAM, short for Error Correcting Code Random Access Memory, is a kind of RAM that can detect most common kinds of memory errors and correct a subset of them. ECC RAM is common in enterprise deployments and most server-class hardware. Above a certain scale and memory density, single-bit errors, which up to that point were sufficiently statistically unlikely, begin to occur with enough frequency that they can no longer be ignored. At certain scales and densities of memory, arbitrary memory errors that are literally "one in a million chances" (or rarer) may in fact occur several times throughout a system's operational life.

With some basics put down, let's proceed and check the RAM speed and type (like DDR or DDR2 or DDR3 or DDR4) without having to physically go to the numbered rack in the Data Center that contains the server.


The most famous and well-known tools (also mentioned on a few occasions in my previous articles) are dmidecode and lshw.

The quickest way to get an overview of the installed server memory is with:
 

root@server:~# dmidecode -t memory | grep -E "Speed:|Type:" | sort | uniq -c
      4     Configured Memory Speed: 2133 MT/s
     12     Configured Memory Speed: Unknown
      4     Error Correction Type: Multi-bit ECC
      2     Speed: 2133 MT/s
      2     Speed: 2400 MT/s
     12     Speed: Unknown
     16     Type: DDR4

 

To get more specifics on the exact type of memory installed on the server, the respective slots that are already taken and the free ones:

root@server:~# dmidecode --type 17 | less

Usually, the typical output the command would produce for, let's say, 4 installed banks of RAM memory on the server will be like:

Handle 0x002B, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0029
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 16 GB
        Form Factor: RIMM
        Set: None
        Locator: CPU1 DIMM A1
        Bank Locator: A1_Node0_Channel0_Dimm1
        Type: DDR4
        Type Detail: Synchronous
        Speed: 2400 MT/s
        Manufacturer: Micron
        Serial Number: 15B36358
        Asset Tag: CPU1 DIMM A1_AssetTag
        Part Number: 18ASF2G72PDZ-2G3B1 
        Rank: 2
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown

Handle 0x002E, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0029
        Error Information Handle: Not Provided
        Total Width: Unknown
        Data Width: Unknown
        Size: No Module Installed
        Form Factor: RIMM
        Set: None
        Locator: CPU1 DIMM A2
        Bank Locator: A1_Node0_Channel0_Dimm2
        Type: DDR4
        Type Detail: Synchronous
        Speed: Unknown
        Manufacturer: NO DIMM
        Serial Number: NO DIMM
        Asset Tag: NO DIMM
        Part Number: NO DIMM
        Rank: Unknown
        Configured Memory Speed: Unknown
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown

 

Handle 0x002D, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0029
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 16 GB
        Form Factor: RIMM
        Set: None
        Locator: CPU1 DIMM B1
        Bank Locator: A1_Node0_Channel1_Dimm1
        Type: DDR4
        Type Detail: Synchronous
        Speed: 2400 MT/s
        Manufacturer: Micron
        Serial Number: 15B363AF
        Asset Tag: CPU1 DIMM B1_AssetTag
        Part Number: 18ASF2G72PDZ-2G3B1 
        Rank: 2
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown

Handle 0x0035, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0031
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 16 GB
        Form Factor: RIMM
        Set: None
        Locator: CPU1 DIMM D1
        Bank Locator: A1_Node0_Channel3_Dimm1
        Type: DDR4
        Type Detail: Synchronous
        Speed: 2133 MT/s
        Manufacturer: Micron
        Serial Number: 1064B491
        Asset Tag: CPU1 DIMM D1_AssetTag
        Part Number: 36ASF2G72PZ-2G1A2  
        Rank: 2
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown

Handle 0x0033, DMI type 17, 40 bytes
Memory Device
        Array Handle: 0x0031
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 16 GB
        Form Factor: RIMM
        Set: None
        Locator: CPU1 DIMM C1
        Bank Locator: A1_Node0_Channel2_Dimm1
        Type: DDR4
        Type Detail: Synchronous
        Speed: 2133 MT/s
        Manufacturer: Micron
        Serial Number: 10643A5B
        Asset Tag: CPU1 DIMM C1_AssetTag
        Part Number: 36ASF2G72PZ-2G1A2  
        Rank: 2
        Configured Memory Speed: 2133 MT/s
        Minimum Voltage: Unknown
        Maximum Voltage: Unknown
        Configured Voltage: Unknown

 

The populated banks of memory are the ones reporting a Size, Manufacturer and Serial Number (the empty slots report "NO DIMM").

The Speed: and Configured Memory Speed: fields indicate, respectively, the maximum speed at which a plugged-in RAM bank can operate and the actual speed the Linux kernel has it configured at and uses.

It is useful for the admin to check the complete number of available RAM slots on a server; this can be done with a command like:

root@server:~#  dmidecode --type 17 | grep -i Handle | grep 'DMI'|wc -l
16


As you can see, in this specific case 16 memory slots are available (4 are already occupied and configured to work on the machine at 2133 MT/s, and 12 are empty and can have memory banks installed in them).
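The number of free slots alone can also be extracted directly, since dmidecode reports the empty banks with "Size: No Module Installed":

root@server:~# dmidecode --type 17 | grep -c 'Size: No Module Installed'
12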


Perhaps the most interesting information for the RAM replacement order is the data communication speed at which the memory works on the server, interacting with the kernel and processor. To find it out:

root@server:~#  dmidecode --type 17 | grep -i "speed"|grep -vi unknown
    Speed: 2400 MT/s
    Configured Memory Speed: 2133 MT/s
    Speed: 2400 MT/s
    Configured Memory Speed: 2133 MT/s
    Speed: 2133 MT/s
    Configured Memory Speed: 2133 MT/s
    Speed: 2133 MT/s
    Configured Memory Speed: 2133 MT/s

 

If you're too lazy to remember the exact dmidecode memory type number 17, you can use the memory keyword as well:

root@server:~# dmidecode --type memory | more

For servers that have the lshw command installed, a quick overview of the RAM installed and the slots still free for memory placement can be done with:
 

root@server:~#  lshw -short -C memory
H/W path                 Device        Class          Description
=================================================================
/0/0                                   memory         64KiB BIOS
/0/29                                  memory         64GiB System Memory
/0/29/0                                memory         16GiB RIMM DDR4 Synchronous 2400 MHz (0.4 ns)
/0/29/1                                memory         RIMM DDR4 Synchronous [empty]
/0/29/2                                memory         16GiB RIMM DDR4 Synchronous 2400 MHz (0.4 ns)
/0/29/3                                memory         RIMM DDR4 Synchronous [empty]
/0/29/4                                memory         16GiB RIMM DDR4 Synchronous 2133 MHz (0.5 ns)
/0/29/5                                memory         RIMM DDR4 Synchronous [empty]
/0/29/6                                memory         16GiB RIMM DDR4 Synchronous 2133 MHz (0.5 ns)
/0/29/7                                memory         RIMM DDR4 Synchronous [empty]
/0/29/8                                memory         RIMM DDR4 Synchronous [empty]
/0/29/9                                memory         RIMM DDR4 Synchronous [empty]
/0/29/a                                memory         RIMM DDR4 Synchronous [empty]
/0/29/b                                memory         RIMM DDR4 Synchronous [empty]
/0/29/c                                memory         RIMM DDR4 Synchronous [empty]
/0/29/d                                memory         RIMM DDR4 Synchronous [empty]
/0/29/e                                memory         RIMM DDR4 Synchronous [empty]
/0/29/f                                memory         RIMM DDR4 Synchronous [empty]
/0/43                                  memory         768KiB L1 cache
/0/44                                  memory         3MiB L2 cache
/0/45                                  memory         30MiB L3 cache

Now once we know the exact model, RAM Serial and Part Number, you can google it online and purchase more of the same RAM Model and Type you need, so the newly installed memory works at the same speed as the installed ones.
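A quick way to pull out just the part numbers of the populated banks for the purchase order (filtering out the "NO DIMM" placeholders that mark the empty slots):

root@server:~# dmidecode --type 17 | grep -i 'Part Number' | grep -v 'NO DIMM'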
 

How to create PCS / Corosync High Availability Cluster config backup and migrate to new Virtual Machines


October 26th, 2023

pcs-pcmk-internals-explained-picture

The aim of this article is to illustrate how to literally migrate an Haproxy PCS Pacemaker / Corosync Cluster configuration from old Virtual Machines, that due to time passed have become unsupported (the Operating System has reached End Of Life (EOL)), to new ones. 
This is quite a complex task, especially as you usually need to set up the Hypervisor hosts with VMWare / Xen / KVM / OpenVZ or whatever kind of virtualization is to be used, then set up the correct network interface IPs and failover heartbeat lines over which the cluster will work to prevent Split Brain scenarios, and the Network Bonding interfaces to guarantee a higher amount of availability, as well as physically install and update all the cluster software on the newly built Linux hosts that will be members of the new cluster being set up. 

All this configuration from scratch of a PCS Corosync cluster is a very lengthy topic, which I'll try to cover in some of my next articles. In short, to migrate the cluster from old machines to new ones, once all these pre-described steps are in line,
you will need to:


1. Create backup of old cluster configuration
2. Migrate the backup to a new built VM Machine hosts
3. Import the cluster configuration into the PCS Cluster.


Bear in mind that this article discusses a migration of CentOS Linux release 7.9.2009 with its shipped versions of corosync / pacemaker and pcs 

How to create cluster config backup and migrate to new VM

1. Dump the cluster config (assuming this is a Quality Assurance or Pre-Production host) to create a full cluster config backup

[root@old-cluster-machine ~]# pcs config backup old-cluster-machine.pcs.config.bak

2. Dump the Production cluster full configuration

[root@old-cluster-machine1 ~]# pcs config backup old-cluster-machine1.pcs.config.bak

This command will output a backup of 

old-cluster-machine1.pcs.config.bak.tar.bz2

3. Migrate an identical cluster config backup to the new Virtual machines

Usually this moving of the backup files produced with the pcs config backup command can be done with something like FTP / SFTP or an SSL-ed / TLS-ed protocol. However, you may have to move the configuration files from a paranoid Citrix environment that doesn't allow you any SFTP / SSH or FTP kind of transfer protocol from the location where the old config lies to the new one. 
In that case, a simple encoding of the binary-format dumped configuration to plain text files can be done, and the files can then be moved via a simple copy / paste operation (a bit of a hack) 🙂

Encode the cluster config to be able to migrate the configuration in plain text via a simple Copy / Paste operation:

 

[root@old-cluster-machine ~]# base64 old-cluster-machine.pcs.config.bak.tar.bz2 > old-cluster-machine.pcs.config.bak.tgz.b64

[root@old-cluster-machine1 ~]# base64 old-cluster-machine1.pcs.config.bak.tar.bz2 > old-cluster-machine1.pcs.config.bak.tgz.b64
[root@old-cluster-machine ~]# cat  old-cluster-machine.pcs.config.bak.tgz.b64

(Copy the output and paste it on the new host VM into /root/haproxy-cluster-backup)

[root@old-cluster-machine1 ~]# cat old-cluster-machine1.pcs.config.bak.tgz.b64 


(Copy the output and paste it on the new host VM into /root/haproxy-cluster-backup)
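Since a copy / paste over a terminal can silently truncate or wrap data, a prudent extra step is to compare a checksum of the .b64 file on both sides before decoding it:

# run on the old machine and again on the new VM after pasting; the sums must match
md5sum old-cluster-machine1.pcs.config.bak.tgz.b64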

Login to the new hosts where the configs have to be migrated, and restore the files with base64:

For QA / Preprod to restore backup config

[root@dkv-newqa-vm ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# base64 -d old-cluster-machine.pcs.config.bak.tgz.b64 > old-cluster-machine.pcs.config.bak.tar.bz2
[root@dkv-newqa-vm ~]#  tar -jxvf old-cluster-machine.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/

 

For Prod to restore backup config

[root@dkv-newprod-vm  ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# base64 -d old-cluster-machine1.pcs.config.bak.tgz.b64 > old-cluster-machine1.pcs.config.bak.tar.bz2
[root@dkv-newprod-vm ~]# tar -jxvf old-cluster-machine1.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/
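
With the archive extracted, the promised step 3 (importing the configuration into the new PCS cluster) can be done with pcs itself, as pcs config restore reads the tar.bz2 produced earlier by pcs config backup (assuming the new hosts run a compatible pcs version):

[root@dkv-newqa-vm ~]# pcs config restore old-cluster-machine.pcs.config.bak.tar.bz2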


N!B! A useful hint: RHEL 8 Linux's shipped pcs command version also has a very useful option with which you can simply dump the complete config of the cluster as plain commands, which you can run directly on the new VM machines where you have migrated.

The command to print out the commands that would recreate the existing cluster resources on Redhat 8:

# pcs resource config --output-format=cmd

Another useful command for cluster migration is cibadmin,

i.e. to dump the cluster xml config:

# cibadmin --query > cluster.xml

Later you can import the prior xml dump with it:

# cibadmin --replace --xml-file cib.xml

 

How to make a 27 inch monitor work at 2560x1440 with Virtualbox Linux


October 4th, 2023

make-virtualbox-with-linux-work-on-2k-2560x1440-howto

I've bought a new "second hand" refurbished EIZO Flexscan EV2760 monitor, an awesome monitor, from Kvant Serviz, a reseller company for second-hand electronics located on the territory of the Bulgarian Academy of Sciences (BAN / BAS) that was created by BAS people originally for the BAS people, and I am pretty happy with it for doing my daily job as a system administrator, especially as the monitor has seen very short screen time of only 256 use hours (which is less than a year of full-time use), while EIZO guarantees their monitors to be able to serve up to 5 full years of use time.

Those who deal with graphics, such as designers and people into art working with computers, have known the EIZO brand of monitors for quite some time now, and as most of those people are using Windows or Macintoshes, these monitors have mainly been created to work optimally with Windows / Mac computers at a higher resolution.
My work PC, a Dell Latitude 5510, has been running perfectly over its HDMI cable with the EIZO under Windows 10. However, as I'm using VirtualBox virtual machines with CentOS Linux, the VM did not automatically detect the highest resolution this monitor supports: 2560x1440 at 60 Hz, which is the best one can use to fit more things onto the screen and is hopefully also good for the eyes. The EcoView feature should also be a good idea for the eyes, as EcoView by EIZO tries to adjust the monitor brightness to lower levels according to the light in the room, to try to minimize the strain on the eyes. The EcoView mode is, I guess, a little bit like the famous BenQ monitors' Eye-Care.
I'm talking about all these display specifics as I spent quite a lot of time learning the very basics about monitors when my old 24 inch EIZO Flexscan model 2436W monitor started to wear off with time; it doesn't support HDMI cable input, so I had to use a special cable connector that converts the signal from HDMI to DVI (and I'm not sure how this really affects the eyes), plus DVI quality is said to be a little bit worse than HDMI, as far as I've read on the topic online.

Well anyways currently I'm a happy owner of the EIZO EV2760 Monitor which has a full set of inputs of:

  • 27" In-Plane Switching (IPS) Panel
  • DisplayPort | HDMI | DVI-D | 3.5mm Audio
  • 2560 x 1440 Native Resolution
  • 1000:1 Typical Contrast Ratio
     

I've tried to make the monitor work with Linux, and my first assumption from what I've read was that I have to reinstall the Guest Additions tools on the Virtualbox VM, adding the Guest Additions via the VBox GUI interface:

Devices -> Insert Guest Additions CD Image

virtualbox-resolutions-screenshot

But I got an error that the Guest Additions ISO is missing,
so I eventually resolved it by remounting and reinstalling the Guest Additions with the following set of commands:

[root@localhost test]# yum install perl gcc dkms kernel-devel kernel-headers make bzip2
[root@localhost test]# cd /mnt/cdrom/
[root@localhost cdrom]# ls
AUTORUN.INF  runasroot.sh                       VBoxSolarisAdditions.pkg
autorun.sh   TRANS.TBL                          VBoxWindowsAdditions-amd64.exe
cert         VBoxDarwinAdditions.pkg            VBoxWindowsAdditions.exe
NT3x         VBoxDarwinAdditionsUninstall.tool  VBoxWindowsAdditions-x86.exe
OS2          VBoxLinuxAdditions.run

 


[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Verifying archive integrity… All good.
Uncompressing VirtualBox 6.1.34 Guest Additions for Linux……..
VirtualBox Guest Additions installer
Removing installed version 6.1.34 of VirtualBox Guest Additions…
Copying additional installer modules …
Installing additional modules …
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel
3.10.0-1160.80.1.el7.x86_64.
ERROR: Can't map '//etc/selinux/targeted/policy/policy.31':  Invalid argument

ERROR: Unable to open policy //etc/selinux/targeted/policy/policy.31.
libsemanage.semanage_read_policydb: Error while reading kernel policy from /etc/selinux/targeted/active/policy.kern. (No such file or directory).
OSError: No such file or directory
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted

 

 

The solution to that was that a reinstall of the targeted SELinux security policy was necessary:

[root@localhost test]# yum install selinux-policy-targeted --reinstall


And of course rerun the reinstall of the Guest Additions once again:

[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Unfortunately that did not resolve it, and even shutting down the VM machine and reloading it again with raised Video Memory for the emulated hardware (from 16 MB to 128 MB in the VM settings) did not give the option in the Virtualbox interface to set the resolution from
 

View -> Virtual Screen 1 (Resize to 1920x1200)

to any higher than that.

After a bit of googling I found that some newer monitors don't seem to be seen properly by the xrandr command, and a few extra commands with xrandr need to be run to make the 2K resolution 2560x1440 @ 60 Hz work under the Linux virtual machine.

These are the extra xrandr commands that make it happen:

# xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
# xrandr --addmode Virtual1 2560x1440_60.00
# xrandr --output Virtual1 --mode "2560x1440_60.00"
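In case you wonder where the magic modeline numbers come from, they can be generated for any resolution / refresh rate with the cvt utility (the timings it prints may differ slightly from the ones above, which is fine):

# cvt 2560 1440 60

The Modeline it prints is exactly what xrandr --newmode expects as arguments.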

As this kind of settings needs to be rerun the next time the Virtual Machine starts, it is a good idea to place the commands in a tiny shell script:

[test@localhost ~]$ cat xrandr-set-resolution-to-2560x1440.sh 
#!/bin/bash
xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
xrandr --addmode Virtual1 2560x1440_60.00
xrandr --output Virtual1 --mode "2560x1440_60.00"


You can download the xrandr-set-resolution-to-2560x1440.sh script from here

Once the commands are run, to make them take effect in Virtualbox, you can simply put the VM in full-screen mode via


View -> Full-Screen Mode (can be triggered from the keyboard by pressing Right CTRL + F together)

[test@localhost ~]$ xrandr --addmode Virtual1 2560x1440_60.00
[test@localhost ~]$ xrandr --output Virtual1 --mode "2560x1440_60.00"
[test@localhost ~]$ xrandr 
Screen 0: minimum 1 x 1, current 2560 x 1440, maximum 8192 x 8192
Virtual1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1920x1200     60.00 +  59.88  
   2560x1600     59.99  
   1920x1440     60.00  
   1856x1392     60.00  
   1792x1344     60.00  
   1600x1200     60.00  
   1680x1050     59.95  
   1400x1050     59.98  
   1280x1024     60.02  
   1440x900      59.89  
   1280x960      60.00  
   1360x768      60.02  
   1280x800      59.81  
   1152x864      75.00  
   1280x768      59.87  
   1024x768      60.00  
   800x600       60.32  
   640x480       59.94  
   2560x1440_60.00  60.00* 
Virtual2 disconnected (normal left inverted right x axis y axis)
Virtual3 disconnected (normal left inverted right x axis y axis)
Virtual4 disconnected (normal left inverted right x axis y axis)
Virtual5 disconnected (normal left inverted right x axis y axis)
Virtual6 disconnected (normal left inverted right x axis y axis)
Virtual7 disconnected (normal left inverted right x axis y axis)
Virtual8 disconnected (normal left inverted right x axis y axis)

Tadadadam! That's all folks, enjoy having your 27-inch monitor running at 2560x1440 @ 60 Hz 🙂

How to set up email notifications for expiring local UNIX user accounts on Linux / BSD with a bash script


August 24th, 2023

password-expiry-linux-tux-logo-script-picture-how-to-notify-if-password-expires-on-unix

If you have already configured Linux Local User Accounts Password Security policies Hardening – Set Password expiry, password quality, limit repeated access attempts, add dictionary check, increase logged history command size – and you want your configured local user accounts on a Linux / UNIX / BSD system not to expire before the user is reminded that it is in his interest to change his password on time (so he does not completely lose access to his account), then you might use a small script that checks the upcoming expiry for a predefined array of users and emails, using the lslogins command, as you will learn in this article.

The script below was written by a colleague, Lachezar Pramatarov (credit for the script goes to him), to solve this annoying expiry problem: me and my colleagues often ended up with expired accounts and had to ask for a password reset, and sometimes even for the clearing of account locks. Hopefully this little script will help some other legacy UNIX sysadmins get rid of the account expiry problem.

For the script to work you will need a properly configured SMTP (Mail server), with or without a relay, so the script can send to the predefined email addresses that will get notified.

Here is an example of a user whose account is about to expire in a couple of days and who will benefit from getting the alert that he should hurry up and change his password before it is too late 🙂

[root@linux ~]# date
Thu Aug 24 17:28:18 CEST 2023

[root@server~]# chage -l lachezar
Last password change                                    : May 30, 2023
Password expires                                        : Aug 28, 2023
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 90
Number of days of warning before password expires       : 14

Here is the user_passwd_expire.sh script that will notify the user:

# vim  /usr/local/bin/user_passwd_expire.sh

#!/bin/bash

# This script will send warning emails for password expiration 
# on the participants in the following list:
# 20, 15, 10 and 0-7 days before expiration
# ! The alert for already-EXPIRED passwords is sent only if the day is Wednesday – if (( $(date +%u)==3 )); !

# email to send if expiring
alert_email='alerts@pc-freak.net';
# the users that are admins belong to this group
admin_group="admins";
notify_email_header_customer_name='Customer Name';

declare -A mails=(
# list below accounts which will receive account expiry emails

# syntax to define uid / email
# ["account_name_from_etc_passwd"]="real_email_addr@fqdn";

#    ["abc"]="abc@fqdn.com"
#    ["cba"]="bca@fqdn.com"
    ["lachezar"]="lachezar.user@gmail.com"
    ["georgi"]="georgi@fqdn-mail.com"
    ["acct3"]="acct3@fqdn-mail.com"
    ["acct4"]="acct4@fqdn-mail.com"
    ["acct5"]="acct5@fqdn-mail.com"
    ["acct6"]="acct6@fqdn-mail.com"
#    ["acct7"]="acct7@fqdn-mail.com"
#    ["acct8"]="acct8@fqdn-mail.com"
#    ["acct9"]="acct9@fqdn-mail.com"
)

declare -A days

while IFS="=" read -r person day ; do
  days["$person"]="$day"
done < <(lslogins --noheadings -o USER,GROUP,PWD-CHANGE,PWD-WARN,PWD-MIN,PWD-MAX,PWD-EXPIR,LAST-LOGIN,FAILED-LOGIN  --time-format=iso | awk '{print "echo "$1" "$2" "$3" $(((($(date +%s -d \""$3"+90 days\")-$(date +%s)))/86400)) "$5}' | /bin/bash | grep -E " $admin_group " | awk '{print $1 "=" $4}')

#echo ${days[laprext]}
for person in "${!mails[@]}"; do
     echo "$person ${days[$person]}";
     tmp=${days[$person]}

#     echo $tmp
# each person will receive a mail only when 20, 15 or 10 days remain till expiry; with 7 or fewer days left the alert mail is sent every day

     if  (( (${tmp}==20) || (${tmp}==15) || (${tmp}==10) || ((${tmp}>=0) && (${tmp}<=7)) )); 
     then
         echo "Hello, your password for $(hostname -s) will expire after ${days[$person]} days." | mail -s "$notify_email_header_customer_name $(hostname -s) server password expiration"  -r passwd_expire ${mails[$person]};
     elif ((${tmp}<0));
     then
#          echo "The password for $person on $(hostname -s) has EXPIRED before ${days[$person]} days. Please take an action ASAP." | mail -s "EXPIRED password of  $person on $(hostname -s)"  -r EXPIRED ${mails[$person]};

# ==3 means the day is Wednesday, the day on which the OnCall person changes

        if (( $(date +%u)==3 ));
        then
             echo "The password for $person on $(hostname -s) has EXPIRED. Please take an action." | mail -s "EXPIRED password of  $person on $(hostname -s)"  -r EXPIRED $alert_email;
        fi
     fi  
done

To make the script notify about expiring user accounts, place it under some directory, let's say /usr/local/bin/user_passwd_expire.sh, make it executable as shown below, and configure a cron job to schedule it.
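Making it executable is the usual chmod:

# chmod +x /usr/local/bin/user_passwd_expire.sh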

# cat /etc/cron.d/passwd_expire_cron

# /etc/cron.d/pwd_expire
#
# Check password expiration for users
#
# 2023-01-16 LPR
#
02 06 * * * root /usr/local/bin/user_passwd_expire.sh >/dev/null

The cron job will execute the script every morning at 06:02. Warning emails for password expiration are sent when 20, 15 or 10 days are left before a password expires; once only 7 days are left, the script sends the alarm every single day (7th, 6th … down to day 0). For passwords that have already expired, an alert goes to the alert email address only on Wednesday (3rd day of the week).

If you don't have any expiring accounts and you want to force a specific account to have an expiry date, you can do it with:

# chage -E 2023-08-30 someuser


Or set it for newly created system users with:

# useradd -e 2023-08-30 username


That's it, the script will notify you on user password expiry.

If you need, for example, to set a single account to expire 90 days from now (3 months), which is a kind of standard password expiry policy admins use, first compute the date with:

# date -d "90 days" +"%Y-%m-%d"
2023-11-22
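The two commands can of course be combined in one line (reusing the someuser account from above):

# chage -E $(date -d "90 days" +"%Y-%m-%d") someuser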


Ideas for user_passwd_expire.sh script improvement

The downside of the script is that, if you have too many local user accounts, you have to hardcode into it each username and the email address attached to it, which would be a tedious task for 100+ accounts.

However, if you already have a multitude of accounts in /etc/passwd within a given UID range, it is pretty easy to loop over them in a small shell loop and build the array from it. Of course, for a solution like this to work, the user's email has to be defined in the GECOS user data field, e.g. with a command like chfn.

[georgi@server ~]$ chfn
Changing finger information for test.
Name [test]: 
Office []: georgi@fqdn-mail.com
Office Phone []: 
Home Phone []: 

Password: 

[root@server test]# finger georgi
Login: georgi                       Name: georgi
Directory: /home/georgi                   Shell: /bin/bash
Office: georgi@fqdn-mail.com
On since чт авг 24 17:41 (EEST) on :0 from :0 (messages off)
On since чт авг 24 17:43 (EEST) on pts/0 from :0
   2 seconds idle
On since чт авг 24 17:44 (EEST) on pts/1 from :0
   49 minutes 30 seconds idle
On since чт авг 24 18:04 (EEST) on pts/2 from :0
   32 minutes 42 seconds idle
New mail received пт окт 30 17:24 2020 (EET)
     Unread since пт окт 30 17:13 2020 (EET)
No Plan.

Then it should be relatively easy to add the GECOS data for multiple accounts, if you have them predefined in a text file, for each existing local user account.
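For instance, a minimal sketch (assuming, as in the chfn example above, that the notification email sits in the second comma-separated GECOS field and that regular local accounts start at UID 1000) to build the mails array automatically could look like:

#!/bin/bash
# build the mails[] array from /etc/passwd instead of hardcoding it
declare -A mails
while IFS=: read -r user _ uid _ gecos _; do
    # GECOS format: Full Name,Office,Office Phone,Home Phone
    email=$(echo "$gecos" | cut -d',' -f2)
    if [ "$uid" -ge 1000 ] && [ -n "$email" ]; then
        mails["$user"]="$email"
    fi
done < /etc/passwd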

Hope this script will help some sysadmin out there, many thanks to Lachezar for allowing me to share the script here.
Enjoy ! 🙂

Fix ruby: /usr/lib/libcrypt.so.1: version `XCRYPT_2.0' not found in apt upgrade on Debian Linux 10


August 5th, 2023

I have an old legacy ThinkPad laptop that, for simplicity, runs Window Maker (wmaker); it had been lying on my home desk for almost a year, and since I am spending a few days in my parents' home in Dobrich, I figured it would be a good idea to update its software to the latest Debian packages to patch security issues. Thus, if you are like me and you tried to update your Debian 10 Linux to the latest stable release Debian packages and you ended up with a critical error that prevents apt from resolving conflicts even with fix-it commands like:

# apt-get update --fix-missing

# apt --fix-broken install

As usual I looked in Google for a solution and found a few articles claiming to have scripts that fix it, but in the end nothing worked.
And the shitty error occurred during the standard:

# apt-get update && apt-get upgrade

ruby: /usr/lib/libcrypt.so.1: version `XCRYPT_2.0' not found

The cause and workaround turned out to be quite unexpected.
For some reason Debian makes a link:

root@noah:/lib# ls -al /lib/libcrypt.so.1
lrwxrwxrwx 1 root root 19 Aug  3 16:53 /lib/libcrypt.so.1 ->
libcrypt.so.1.bak

root@noah:/lib# ls -al /lib/libcrypt.so.1.bak
lrwxrwxrwx 1 root root 16 Jun 15  2017 /lib/libcrypt.so.1.bak -> libcrypt-2.24.so

Thus, to resolve it and force the .deb package upgrade to continue, simply delete the strange symlink and re-run the following a few times:

# apt-get update && apt-get upgrade

Setting up libc6:i386 (2.31-13+deb11u6) …
/usr/bin/perl: /lib/libcrypt.so.1: version `XCRYPT_2.0' not found (required by /usr/bin/perl)
dpkg: error processing package libc6:i386 (--configure):
 installed libc6:i386 package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 libc6:i386

If you still get some critical apt failures like the above, each time make sure to re-run the command after first removing the strange symbolic link:

# rm -f /lib/libcrypt.so.1
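If the libcrypt.so.1 link is still missing once the upgrade completes, it can be restored by reinstalling the package that ships it; on newer Debian releases that is libcrypt1, but verify the owning package with dpkg -S first:

# dpkg -S libcrypt.so.1
# apt install --reinstall libcrypt1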

That's all folks, after a short while your Debian will be updated to the latest packages. Enjoy folks! 🙂

Saint Emilianos (Emilian) of Dorostorum (Silistra) ancient saint venerated in Bulgarian Orthodox Church


August 5th, 2023

saint_Emilian-Dorostolski-Dorostorum-780x470-orthodox-icon

Saint Emilian / Emilianos Dorostolski is a martyr revered with a feast day by the Bulgarian Orthodox Church.

According to his biography, he was born in Durostorum (now Silistra, Bulgaria), where he spent his life as a servant (or slave) of the mayor.
He lived in the time of Emperor Julian (the apostate).

Emperor Julian sent a new governor to Dorostol charged with the task of eradicating Christianity from the city.

Frightened by his fame as a very cruel ruler, the local inhabitants hid from him that there were Christians among them and declared that they all worshipped the pagan gods.
Satisfied, he gave a feast for the citizens, but for the zealous Christian Aemilianus (Emilianos) the boasting of the pagan governor was unbearable, and during the feast he smashed the statues of the pagan gods in the sanctuary with a hammer.

An innocent person was accused of the crime, but knowing this, Emilian appeared before the governor and confessed his guilt.

The city was fined for harboring Christians, and Aemilianus himself, after torture, was burned at the stake by the Danube[1] (river) on July 18, 362; this date is today the day of his veneration by the church[2].

It is assumed that the life of Saint Emilian was written immediately after the saint's death, at the end of the 4th or the beginning of the 5th century. There are, generally speaking, two earliest variants of it.

The first is based on the so-called Codex Vaticanus 866 (published by Boschius in 1868), and the second on the so-called Codex Parisiensis of the 9th century (published by François Halkin). Although the Codex Parisiensis largely repeats the Codex Vaticanus, the two lives differ in both the date of Aemilian's martyrdom and its location. According to the first, Emilian was burned at the stake on September 3 in Gedina (localized near the present-day village of Golesh), and according to the second, it happened on July 18 in Gezedina, right next to the Durostorum fortress.

Information about Saint Emilian can also be found in blessed Jerome, Saint Ambrose (Ambrosius) of Milan, Theophanes the Confessor and Nicephorus Callistus.
Into Church Slavonic hagiography the life entered mainly through its later copies in the Paschal Chronicle (Chronicon Paschalae), the Synaxarium (the Church book with the service texts dedicated to the saints) of the Constantinople Patriarchate (Synaxarium Constantinopolitanum) and the Menologion of Emperor Basil II (Menologium Basilii II).

A major difference between the early lives and their later editions is Aemilian's social status.

According to the late Church Slavonic redactions he was a slave / servant of the mayor of Durostorum (today the city of Silistra, Bulgaria), while according to the earlier ones he himself was of noble birth (his father Sevastian was the governor of the city) and was a soldier, presumably of the XI Claudia legion[3].

Sources

  1. Georgi Atanasov, 345 early Christian saints-martyrs from the Bulgarian lands I – IV centuries. Publisher: Unicart, ISBN 9789542953012, page 11.
  2. Lives of the Saints. Synodal publishing house. Sofia, 1991. pp. 337-338.
  3. St. Emilian Dorostolski: My name is Christian

Other Research sources

  • Constantinesco, R. Les martyrs de Durostorum. – Revue des Etudes Sud-Est Europeennes, 5, 1967, No. 1 – 2, 14 – 19.
  • G. Atanasov. St. Emilian Dorostolsky († 362) – the last early Christian martyr in Mysia. – In: Civitas divino-humana in honor of Professor Georgi Bakalov. S. 2004, 203 – 218.
  • Ivanova, R., G. Atanasov, P. Donevski. History of Silistra. T. 1. The ancient Durostorum. Silistra-S., 2006.
  • Atanasov, G. The Christian Durostorum-Druster. Varna, 2007.

Generate and add a UUID for every existing RedHat / CentOS / RHEL network interface configuration if missing, howto


August 5th, 2023

linux-fix-missing-uid-on-redhat-centos-fedora-networking-logo

If you manage old Linux machines, it may happen (either due to an update mess or because the old system administrators who manually maintained the configs forgot it) that the Universally Unique IDentifier (UUID, a 128-bit label) is missing from the present network configuration in /etc/sysconfig/network-scripts/ifcfg-*. After listing all the configured LAN interfaces reported by the iproute2 network stack with the ip command, I used a small one-liner that returns a number of commands which you can copy-paste to the console, once you have verified with grep that none of the UUIDs already exists in the present server configuration. As this might be useful to someone out there, here it is.

Overall, to correct the configs and reload the network with the proper UUIDs, here is what I had to do:


# grep -rli UUID /etc/sysconfig/network-scripts/ifcfg-*

No output from the recursive grep means UUIDs are not present on any existing interface config, so we can go a step further, check all the machine's network interfaces and generate the missing UUIDs with the uuidgen command:

# ip a s |grep -Ei ': <'|sed -e 's#:##g' |grep -v '\.' |awk '{ print $2 }'
venet0
eth0
eth1
eth2
eth3

I stumbled on this case on some legacy Linux machines inherited from other sysadmins, and in order to put the correct UUIDs in place (note the generated echo commands append to the ifcfg-* files by relative path, so run them from inside /etc/sysconfig/network-scripts/), I used:

# for i in $(ip a s |grep -Ei ': <'|sed -e 's#:##g' |grep -v '\.' |awk '{ print $2 }'); do echo "echo UUID=$(uuidgen)"" >> ifcfg-$i"; done|grep -v '\-lo' 
echo UUID=26819d24-9452-4431-a9ca-176d87492b75 >> ifcfg-venet0
echo UUID=3c7e8848-0232-436f-a52a-46db9a03eb33 >> ifcfg-eth0
echo UUID=1fc0454d-bf23-417d-b960-571fc04754d2 >> ifcfg-eth1
echo UUID=5793c1e5-4481-4f09-967e-2cceda85c35f >> ifcfg-eth2
echo UUID=65fdcaf6-d271-4845-a8f1-0ec478c375d1 >> ifcfg-eth3


As you can see, I exclude the loopback interface lo from the output, as it is not necessary to have a UUID for it.
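Once the generated echo commands are pasted and the UUID= lines appended, the result can be verified and the networking reloaded (assuming the legacy network service used on older RHEL / CentOS releases):

# grep -ri UUID /etc/sysconfig/network-scripts/ifcfg-*
# systemctl restart network   # or on pre-systemd systems: service network restart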
That's all folks, problem solved. Enjoy!