Posts Tagged ‘virtual machines’

Resize KVM .img QCOW Image file and Create new LVM partition and ext4 filesystem inside KVM Virtual Machine

Friday, November 10th, 2023

[Illustration: adding LVM space to RHEL Linux on a KVM Virtual Machine]

Part of a migration project for a customer I'm working on is the migration of a couple of KVM-based Guest virtual machine servers. The old machines have a backup strategy using IBM's TSM, and the new machines should use the cheaper solution adopted by the customer company, the CommVault backup solution (an enterprise software that is used for data backup and recovery, not only to a local Tape Library / data blobs on central backup server infra, but also to Cloud infrastructure).

To install the CommVault software on the RedHat Linuxes, the official install documentation (prepared by the team who prepared the CommVault infrastructure for the customer) recommends having a separate partition for the CommVault backups under the /opt directory (/opt/commvault), and the partition should be at least 10 Gigabytes in size.

Unfortunately, on our newly prepared KVM VM guest machines it was forgotten to have the separate 10GB /opt prepared in advance, and we ended up with Virtual Machines that have a / (root directory) of 68GB in size and separate /var and /home LVM partitions. Thus, to correct the issue, it was required to find a way to add another separate LVM partition inside the KVM VirtualMachine.img (QCOW image file).

This seemed to be an easy task at first, as it should be possible to attach the .img file as a loop device with losetup, map its partitions with kpartx, and then use the usual LVM commands, in some way such as:

# losetup /dev/loop0 /kvm/vm-host.img

# kpartx -a /dev/loop0
# kpartx -l /dev/loop0
# ls -al /dev/mapper/*

… 

# lvreduce 

etc. However, unfortunately, although kpartx returned no error, it did not provide the new /dev/mapper devices to be used with the LVM tools, and this approach seems not to be possible on RHEL 8.8, as kpartx couldn't list them.

 

A colleague of mine, Mr. Paskalev, suggested that we could perhaps try to mount the partitions with the default KVM tool for mounting .img files, guestmount, with a command like:
 

# guestmount -a /kvm/VM.img -i --rw /mnt/test/

Unfortunately this mounted the filesystem via FUSE, and the LVM /dev/mapper devices of the VM still can't be seen that way, so we decided to abandon this method.

After some pondering with Dimitar Paskalev and Dimitar Hristov, thanks to joint efforts we found the way to do it; below are the steps we followed to succeed in creating the new LVM ext4 partition required.
 

1. Check that enough space is available on the HV machine

 

The VMs are held under /kvm so in this case:

[root@hypervisor-host ~]# df -h|grep -i /kvm
/dev/mapper/vg00-vmprivate  206G   27G  169G  14% /kvm

 

2. Shut down the running VM and make sure it is stopped
 

[root@hypervisor-host ~]# virsh shutdown vm-host

 

[root@hypervisor-host ~]# virsh list --all
 Id   Name       State
--------------------------
 4    lpdkv01f   running
 5    vm-host    shut off
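
If you script this step, a small loop can wait until the domain is really down before touching the image (a sketch; the domain name vm-host is assumed from the listing above):

# wait until virsh reports the domain as shut off
[root@hypervisor-host ~]# while [ "$(virsh domstate vm-host)" != "shut off" ]; do sleep 2; done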

 

3. Check the current space status of the VM

 

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img       
image: /kvm/vm-host.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 8.62 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false

 

4. Resize (extend) the VM image with whatever size you need
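
Before resizing, it doesn't hurt to keep a safety copy of the image file, assuming the HV has enough free space for it (the .bak name below is just illustrative):

# keep a copy of the image to roll back to in case the resize goes wrong
[root@hypervisor-host ~]# cp -p /kvm/vm-host.img /kvm/vm-host.img.bak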
 

[root@hypervisor-host ~]# qemu-img resize /kvm/vm-host.img +10G

 

5. Start VM    
 

[root@hypervisor-host ~]# virsh start vm-host


6. Check the LVM and block devices on the HVs (not necessary, but good for an overview)
 

[root@hypervisor-host ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda2  vg00 lvm2 a--  277.87g 19.87g
  
[root@hypervisor-host ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg00   1  11   0 wz--n- 277.87g 19.87g

 

[root@hypervisor-host ~]# lsblk 
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk 
├─sda1               8:1    0     1G  0 part /boot
└─sda2               8:2    0 277.9G  0 part 
  ├─vg00-root      253:0    0    15G  0 lvm  /
  ├─vg00-swap      253:1    0     1G  0 lvm  [SWAP]
  ├─vg00-var       253:2    0     5G  0 lvm  /var
  ├─vg00-spool     253:3    0     2G  0 lvm  /var/spool
  ├─vg00-audit     253:4    0     3G  0 lvm  /var/log/audit
  ├─vg00-opt       253:5    0     2G  0 lvm  /opt
  ├─vg00-home      253:6    0     5G  0 lvm  /home
  ├─vg00-tmp       253:7    0     5G  0 lvm  /tmp
  ├─vg00-log       253:8    0     5G  0 lvm  /var/log
  ├─vg00-cache     253:9    0     5G  0 lvm  /var/cache
  └─vg00-vmprivate 253:10   0   210G  0 lvm  /vmprivate

  
7. Check logical volumes on the Hypervisor host
 

[root@hypervisor-host ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg00/swap
  LV Name                swap
  VG Name                vg00
  LV UUID                3tNa0n-HDVw-dLvl-EC06-c1Ex-9jlf-XAObKm
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                JBerim-fxVv-jU10-nDmd-figw-4jVA-8IYdxU
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg00/spool
  LV Name                spool
  VG Name                vg00
  LV UUID                nFlmp2-iXg1-tFxc-FKaI-o1dA-PO70-5Ve0M9
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/vg00/audit
  LV Name                audit
  VG Name                vg00
  LV UUID                e6H2OC-vjKS-mPlp-JOmY-VqDZ-ITte-0M3npX
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/vg00/opt
  LV Name                opt
  VG Name                vg00
  LV UUID                oqUR0e-MtT1-hwWd-MhhP-M2Y4-AbRo-Kx7yEG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
   
  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                ehdsH7-okS3-gPGk-H1Mb-AlI7-JOEt-DmuKnN
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/vg00/tmp
  LV Name                tmp
  VG Name                vg00
  LV UUID                brntSX-IZcm-RKz2-CP5C-Pp00-1fA6-WlA7lD
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/vg00/log
  LV Name                log
  VG Name                vg00
  LV UUID                ZerDyL-birP-Pwck-yvFj-yEpn-XKsn-sxpvWY
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8
   
  --- Logical volume ---
  LV Path                /dev/vg00/cache
  LV Name                cache
  VG Name                vg00
  LV UUID                bPPfzQ-s4fH-4kdT-LPyp-5N20-JQTB-Y2PrAG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9
   
  --- Logical volume ---
  LV Path                /dev/vg00/root
  LV Name                root
  VG Name                vg00
  LV UUID                mZr3p3-52R3-JSr5-HgGh-oQX1-B8f5-cRmaIL
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                LxNRWV-le3h-KIng-pUFD-hc7M-39Gm-jhF2Aj
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-09-18 11:54:19 +0200
  LV Status              available
  # open                 1
  LV Size                210.00 GiB
  Current LE             53760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

8. Check the Hypervisor's existing partitions and space
 

[root@hypervisor-host ~]# fdisk -l
Disk /dev/sda: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0581e6e2

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/mapper/vg00-root: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-spool: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-audit: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-opt: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-tmp: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-log: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-cache: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-vmprivate: 210 GiB, 225485783040 bytes, 440401920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

9. List block devices on the VM
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:2    0   10G  0 lvm  /home
│ └─vg00-var       253:3    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 

 

 

10. Create a new LVM partition with fdisk or cfdisk
 

If cfdisk is not available, the new space resized with qemu-img can be partitioned with fdisk, though I personally always prefer to use cfdisk:

[root@vm-host ~]# fdisk /dev/vda
# > p (print the partition table)
# > m (show the help menu)
# > n (create a new partition)
# … follow the on-screen instructions to select start and end blocks
# > t (change partition type)
# > select the new partition and set the type to 8e (Linux LVM)
# > w (write changes)

[root@vm-host ~]# cfdisk /dev/vda


Set up the new partition from the free space as a [ primary ] partition, and choose it to be of type LVM.
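
Alternatively, the same can be done non-interactively with sfdisk (a sketch, assuming the free space is at the end of the disk and util-linux's sfdisk is available):

# append a new partition of type 8e (Linux LVM) using the remaining free space
[root@vm-host ~]# echo ',,8e' | sfdisk --append /dev/vda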


11. List partitions to make sure the new LVM partition is present
 

[root@vm-host ~]# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7b2d9fd

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048   2099199   2097152   1G 83 Linux
/dev/vda2         2099200 186646527 184547328  88G 8e Linux LVM
/dev/vda3       186646528 188743679   2097152   1G 82 Linux swap / Solaris
/dev/vda4       188743680 209715199  20971520  10G 8e Linux LVM

The extra added 10 GB is seen as /dev/vda4.

 

12. List LVM Physical Volumes
 

[root@vm-host ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8

 


  
Notice that /dev/vda4 is not seen in pvdisplay (the Physical Volume display command) because it is not initialized as a PV yet, so let's create it.
 

13. Initialize the new Physical Volume to be available for use by LVM
 

[root@vm-host ~]# pvcreate /dev/vda4


14. Inform the OS of the partition table changes
 

If partprobe is not available as a command on the host, the below obscure command should do the trick for SCSI disks (note that it rescans the SCSI hosts, so it won't help for virtio disks such as /dev/vda):
 

[root@vm-host ~]# echo "- - -" | tee /sys/class/scsi_host/host*/scan

However, it is usually better to use partprobe to inform the Operating System of partition table changes:

[root@vm-host ~]# partprobe


15. Use lsblk again to see the new /dev/vda4 LVM partition is listed under the "vda" root block device
 

[root@vm-host ~]# 
[root@vm-host ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0  100G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
├─vda2        252:2    0   88G  0 part 
│ ├─vg00-root 253:0    0   68G  0 lvm  /
│ ├─vg00-home 253:1    0   10G  0 lvm  /home
│ └─vg00-var  253:2    0   10G  0 lvm  /var
├─vda3        252:3    0    1G  0 part [SWAP]
└─vda4        252:4    0   10G  0 part 
[root@vm-host ~]# 


16. Create a new Volume Group (VG) on the /dev/vda4 block device
 

Before creating a new VG, list the VGs already on the machine, to be sure the newly created one does not clash with an existing name.
 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60

Only vg00 exists, so we can use vg01 as the Volume Group name for the new volume group where the fresh 10GB LVM partition will lay:

[root@vm-host ~]# vgcreate vg01 /dev/vda4
  Volume group "vg01" successfully created

 

17. Create a new Logical Volume (LV) and extend it to occupy the full space available on Volume Group vg01

 

 

[root@vm-host ~]# lvcreate -n commvault -l 100%FREE vg01
  Logical volume "commvault" created.

  An alternative way to create the same LV is by running:

lvcreate -n commvault -L 10G vg01

(Note that -L 10G may fail if the VG has slightly less than 10 GiB of usable extents; -l 100%FREE avoids that.)


18. Relist block devices with lsblk to make sure the newly created Logical Volume commvault is really present; in case it is missing, re-run the partprobe cmd again
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:1    0   10G  0 lvm  /home
│ └─vg00-var       253:2    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 
  └─vg01-commvault 253:3    0   10G  0 lvm  

 

As it is not mounted yet, the new LV will not be seen in df free space, but it will be seen as part of a volume group with vgdisplay:
 

[root@vm-host ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                    2.8G     0  2.8G   0% /dev
tmpfs                       2.8G   33M  2.8G   2% /dev/shm
tmpfs                       2.8G   17M  2.8G   1% /run
tmpfs                       2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/vg00-root        67G  2.4G   61G   4% /
/dev/mapper/vg00-var        9.8G 1021M  8.3G  11% /var
/dev/mapper/vg00-home       9.8G   24K  9.3G   1% /home
/dev/vda1                   974M  242M  665M  27% /boot
tmpfs                       569M     0  569M   0% /run/user/0

 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       2559 / <10.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nYP0tv-IbFw-fBVT-slBB-H1hF-jD0h-pE3V0S
   
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Snwz-utO7-Aw8c60
  


19. Create a new ext4 filesystem on the just-created vg01-commvault
 

[root@vm-host ~]# mkfs.ext4 /dev/mapper/vg01-commvault 
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 1491d8b1-2497-40fe-bc40-5faa6a2b2644
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


20. Mount vg01-commvault under the /opt directory
 

[root@vm-host ~]# mkdir -p /opt/

[root@vm-host ~]# mount /dev/mapper/vg01-commvault /opt/


21. Check the mount is present on the VM guest OS
 

[root@vm-host ~]# mount|grep -i /opt
/dev/mapper/vg01-commvault on /opt type ext4 (rw,relatime)
[root@vm-host ~]# 

[root@vm-host ~]# df -h|grep -i opt
/dev/mapper/vg01-commvault  9.8G   24K  9.3G   1% /opt
[root@vm-host ~]# 
 

22. Add vg01-commvault to be auto-mounted via /etc/fstab on the next Virtual Machine reboot
 

[root@vm-host ~]# echo '/dev/mapper/vg01-commvault /opt         ext4            defaults        1        2' >> /etc/fstab
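
Before relying on the new line at the next reboot, it is worth letting mount parse fstab now (a quick sanity check with standard util-linux tools):

[root@vm-host ~]# umount /opt/
[root@vm-host ~]# mount -a
[root@vm-host ~]# findmnt /opt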

23. Install the CommVault backup client RPM into the new LVM partition mounted under /opt

[root@vm-host ~]# rpm -ivh commvault-fs.Instance001-11.0.0-80.240.0.3589820.240.4083067.el8.x86_64.rpm

[root@vm-host ~]# systemctl status commvault
● commvault.Instance001.service - commvault Service
   Loaded: loaded (/etc/systemd/system/commvault.Instance001.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 15:13:59 CET; 27s ago
  Process: 9972 ExecStart=/opt/commvault/Base/Galaxy start direct -focus Instance001 (code=exited, status=0/SUCCESS)
    Tasks: 54
   Memory: 155.5M
   CGroup: /system.slice/commvault.Instance001.service
           ├─10132 /opt/commvault/Base/cvlaunchd
           ├─10133 /opt/commvault/Base/cvd
           ├─10135 /opt/commvault/Base/cvfwd
           └─10137 /opt/commvault/Base/ClMgrS

Nov 10 15:13:57 vm-host.ffm.de.int.atosorigin.com systemd[1]: Starting commvault Service…
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Cleaning up /opt/commvault/Base/Temp …
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Starting Commvault services for Instance001 …
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: [22B blob data]
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com systemd[1]: Started commvault Service.
[root@vm-host ~]# 

 


How to RPM update Hypervisors and Virtual Machines running a Haproxy High Availability cluster on KVM / Virtuozzo without downtime on RHEL / CentOS Linux

Friday, May 20th, 2022



Here is the scenario: let's say you have on your daily task list two Hypervisor (HV) hosts running CentOS or RHEL Linux with KVM or Virtuozzo technology, and inside the HV hosts at least 2 pairs of virtual machines, one residing on HV Host 1 and one residing on HV Host 2, and you need to constantly keep the hosts at the latest distribution major release security patchset.

The Virtual Machines have been running another set of RedHat Linux or CentOS configured to work in a High Availability Cluster running Haproxy / Apache / Postfix or any other kind of HA solution, on top of corosync / keepalived or whatever Free or Open Source application cluster scripts that support a switch between clustered Application nodes.

The logical question is how to keep the CentOS / RHEL machines up to date without interfering with the operation of the applications running on the cluster.

Assuming that the 2 or more machines are configured to run in Active / Passive App member mode, e.g. one machine is Active at any time and the other is always Passive, a switch between the Active and Passive node is possible.

[Diagram: HAProxy Load Balancer cluster with 2 nodes]

In this article I'll give a simple step-by-step tested example of how I succeeded to update (for security reasons) to the latest available distribution major release patchset, one by one: first the clustered App on Virtual Machines VM1 and VM2 on Linux Hypervisor Host 1, then the App cluster VM1 / VM2 on Hypervisor Host 2.
And finally updating Hypervisor1 (after moving the active resources from it to Hypervisor2) and updating Hypervisor2 after moving the App running resources back to HV1.
I know the procedure is a bit monotonous, but it tries to go through everything step by step to try to mitigate any possible problems. In case of failure of some rpm dependencies during the yum / dnf tool updates you can always revert to backups, so in any case don't forget to have a fully functional backup of each of the HV hosts and the VMs somewhere on a separate machine before proceeding further; any possible failures due to following my article literally are your responsibility 🙂

 

0. Check the situation before the update on the HVs / get the VM IDs etc.

Check the version of each of the machines to be updated, both the Hypervisors and the hosted VMs; on each machine run:
 

# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)


The machine setup I'll be dealing with is as follows:
 

hypervisor-host1 -> hypervisor-host1.fqdn.com 
•    virt-mach-centos1
•    virt-machine-zabbix-proxy-centos (zabbix proxy)

hypervisor-host2 -> hypervisor-host2.fqdn.com
•    virt-mach-centos2
•    virt-machine-zabbix2-proxy-centos (zabbix proxy)

To check what yours are, use the virsh cmd if on KVM, or prlctl if using Virtuozzo; you should get something like:

[root@hypervisor-host2 ~]# virsh list
 Id Name State
----------------------------------------------------
 1 vm-host1 running
 2 virt-mach-centos2 running

 # virsh list --all

[root@hypervisor-host1 ~]# virsh list
 Id Name State
----------------------------------------------------
 1 vm-host2 running
 3 virt-mach-centos1 running

[root@hypervisor-host1 ~]# prlctl list
UUID                                    STATUS       IP_ADDR         T  NAME
{dc37c201-08c9-589d-aa20-9386d63ce3f3}  running      -               VM virt-mach-centos1
{76e8a5f8-caa8-5442-830e-aa4bfe8d42d9}  running      -               VM vm-host2
[root@hypervisor-host1 ~]#

If you have stopped VMs with Virtuozzo, list the stopped ones as well:
 

# prlctl list -a

[root@hypervisor-host2 74a7bbe8-9245-5385-ac0d-d10299100789]# vzlist -a
                                CTID      NPROC STATUS    IP_ADDR         HOSTNAME
[root@hypervisor-host2 74a7bbe8-9245-5385-ac0d-d10299100789]# prlctl list
UUID                                    STATUS       IP_ADDR         T  NAME
{92075803-a4ce-5ec0-a3d8-9ee83d85fc76}  running      -               VM virt-mach-centos2
{74a7bbe8-9245-5385-ac0d-d10299100789}  running      -               VM vm-host1



If due to the Virtuozzo version the above command does not return anything, you can manually check the folder where the VMs are located for the VM IDs etc.
 

[root@hypervisor-host2 vmprivate]# ls
74a7bbe8-9245-4385-ac0d-d10299100789  92075803-a4ce-4ec0-a3d8-9ee83d85fc76
[root@hypervisor-host2 vmprivate]# pwd
/vz/vmprivate
[root@hypervisor-host2 vmprivate]#


[root@hypervisor-host1 ~]# ls -al /vz/vmprivate/
total 20
drwxr-x---. 5 root root 4096 Feb 14  2019 .
drwxr-xr-x. 7 root root 4096 Feb 13  2019 ..
drwxr-x--x. 4 root root 4096 Feb 18  2019 1c863dfc-1deb-493c-820f-3005a0457627
drwxr-x--x. 4 root root 4096 Feb 14  2019 76e8a5f8-caa8-4442-830e-aa4bfe8d42d9
drwxr-x--x. 4 root root 4096 Feb 14  2019 dc37c201-08c9-489d-aa20-9386d63ce3f3
[root@hypervisor-host1 ~]#


Before doing anything with the VMs, also don't forget to check that the Hypervisor hosts have enough space, otherwise you'll get into big trouble!
 

[root@hypervisor-host2 vmprivate]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/centos_hypervisor-host2-root   20G  1.8G   17G  10% /
devtmpfs                          20G     0   20G   0% /dev
tmpfs                             20G     0   20G   0% /dev/shm
tmpfs                             20G  2.0G   18G  11% /run
tmpfs                             20G     0   20G   0% /sys/fs/cgroup
/dev/sda1                        992M  159M  766M  18% /boot
/dev/mapper/centos_hypervisor-host2-home  9.8G   37M  9.2G   1% /home
/dev/mapper/centos_hypervisor-host2-var   9.8G  355M  8.9G   4% /var
/dev/mapper/centos_hypervisor-host2-vz    755G   25G  692G   4% /vz

 

[root@hypervisor-host1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.8G   45G   4% /
devtmpfs                  20G     0   20G   0% /dev
tmpfs                     20G     0   20G   0% /dev/shm
tmpfs                     20G  2.1G   18G  11% /run
tmpfs                     20G     0   20G   0% /sys/fs/cgroup
/dev/sda2                992M  153M  772M  17% /boot
/dev/mapper/centos-home  9.8G   37M  9.2G   1% /home
/dev/mapper/centos-var   9.8G  406M  8.9G   5% /var
/dev/mapper/centos-vz    689G   12G  643G   2% /vz

Another thing to do before proceeding with the update is to check, and tune if needed, the set of CentOS repositories used, before doing anything with yum.
 

[root@hypervisor-host2 yum.repos.d]# ls -al
total 68
drwxr-xr-x.   2 root root  4096 Oct  6 13:13 .
drwxr-xr-x. 110 root root 12288 Oct  7 11:13 ..
-rw-r--r--.   1 root root  4382 Mar 14  2019 CentOS7.repo
-rw-r--r--.   1 root root  1664 Sep  5  2019 CentOS-Base.repo
-rw-r--r--.   1 root root  1309 Sep  5  2019 CentOS-CR.repo
-rw-r--r--.   1 root root   649 Sep  5  2019 CentOS-Debuginfo.repo
-rw-r--r--.   1 root root   314 Sep  5  2019 CentOS-fasttrack.repo
-rw-r--r--.   1 root root   630 Sep  5  2019 CentOS-Media.repo
-rw-r--r--.   1 root root  1331 Sep  5  2019 CentOS-Sources.repo
-rw-r--r--.   1 root root  6639 Sep  5  2019 CentOS-Vault.repo
-rw-r--r--.   1 root root  1303 Mar 14  2019 factory.repo
-rw-r--r--.   1 root root   666 Sep  8 10:13 openvz.repo
[root@hypervisor-host2 yum.repos.d]#

 

[root@hypervisor-host1 yum.repos.d]# ls -al
total 68
drwxr-xr-x.   2 root root  4096 Oct  6 13:13 .
drwxr-xr-x. 112 root root 12288 Oct  7 11:09 ..
-rw-r--r--.   1 root root  1664 Sep  5  2019 CentOS-Base.repo
-rw-r--r--.   1 root root  1309 Sep  5  2019 CentOS-CR.repo
-rw-r--r--.   1 root root   649 Sep  5  2019 CentOS-Debuginfo.repo
-rw-r--r--.   1 root root   314 Sep  5  2019 CentOS-fasttrack.repo
-rw-r--r--.   1 root root   630 Sep  5  2019 CentOS-Media.repo
-rw-r--r--.   1 root root  1331 Sep  5  2019 CentOS-Sources.repo
-rw-r--r--.   1 root root  6639 Sep  5  2019 CentOS-Vault.repo
-rw-r--r--.   1 root root  1303 Mar 14  2019 factory.repo
-rw-r--r--.   1 root root   300 Mar 14  2019 obsoleted_tmpls.repo
-rw-r--r--.   1 root root   666 Sep  8 10:13 openvz.repo


1. Dump the VM definition XMLs (to have them in case they get wiped during the update)

There is always a possibility that something will fail during the update and you might be unable to restore back to the old version of the Virtual Machine due to some config misconfigurations or whatever; thus a very good idea, before proceeding to modify the working VMs, is to use KVM's virsh and dump the exact set of XML configuration that makes the VM run properly.

To do so (check a little bit up in the article how we listed the IDs / names of the VMs):
 

[root@hypervisor-host1 ]# virsh dumpxml (Id of VM virt-mach-centos1 ) > /root/virt-mach-centos1_config_bak.xml
[root@hypervisor-host2 ]# virsh dumpxml (Id of VM virt-mach-centos2) > /root/virt-mach-centos2_config_bak.xml
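
If there are more VMs on a host, a small loop can dump the XML of every defined domain in one go (a sketch, assuming all VMs are libvirt-managed):

# dump the XML definition of each domain, running or not
[root@hypervisor-host1 ]# for vm in $(virsh list --all --name); do virsh dumpxml "$vm" > /root/${vm}_config_bak.xml; done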

 


2. Set virt-mach-centos1 on standby

As I'm upgrading two machines that are configured to run a haproxy corosync cluster, before proceeding to update the active host we have to switch the proxied traffic from node1 to node2, e.g. standby the active node, so the cluster can move the traffic up to the other available node.
 

[root@virt-mach-centos1 ~]# pcs cluster standby virt-mach-centos1
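
Before continuing, check that the cluster really moved the resources over to the second node; the node should show as standby and the resources as started elsewhere:

[root@virt-mach-centos1 ~]# pcs status | grep -i -E 'standby|started'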


3. Stop VM virt-mach-centos1 & back it up on the Hypervisor host (hypervisor-host1)

Another prevention step, to make sure you don't end up with a damaged VM or a broken haproxy cluster after the upgrade, is of course to back up:

 

[root@hypervisor-host1 ]# prlctl backup virt-mach-centos1

or
 

[root@hypervisor-host1 ]# prlctl stop virt-mach-centos1
[root@hypervisor-host1 ]# cp -rpf /vz/vmprivate/dc37c201-08c9-489d-aa20-9386d63ce3f3 /vz/vmprivate/dc37c201-08c9-489d-aa20-9386d63ce3f3-bak
[root@hypervisor-host1 ]# tar -czvf virt-mach-centos1_vm_virt-mach-centos1.tar.gz /vz/vmprivate/dc37c201-08c9-489d-aa20-9386d63ce3f3

[root@hypervisor-host1 ]# prlctl start virt-mach-centos1


4. Remove package version locks on all hosts

If you're using package locking to prevent some other colleague from accidentally upgrading the machine (when multiple sysadmins manage the host), you might be using the RPM package locking mechanism; if so, check which RPM packages are locked and release the locking.

+ List the actual list of locked packages:

[root@hypervisor-host1 ]# yum versionlock list  

…..
0:libtalloc-2.1.16-1.el7.*
0:libedit-3.0-12.20121213cvs.el7.*
0:p11-kit-trust-0.23.5-3.el7.*
1:quota-nls-4.01-19.el7.*
0:perl-Exporter-5.68-3.el7.*
0:sudo-1.8.23-9.el7.*
0:libxslt-1.1.28-5.el7.*
versionlock list done
                          

+ Clear the locking            

# yum versionlock clear                               


+ List the current locks / clear all entries:
 

[root@virt-mach-centos2 ]# yum versionlock list; yum versionlock clear
[root@virt-mach-centos1 ]# yum versionlock list; yum versionlock clear
[root@hypervisor-host1 ~]# yum versionlock list; yum versionlock clear
[root@hypervisor-host2 ~]# yum versionlock list; yum versionlock clear


5. Do yum update on virt-mach-centos1


For some clarity if something goes wrong, it is really a good idea to make a dump of the basic packages installed before the RPM package update is initiated, the exact version of RHEL or CentOS, as well as the list of locked packages, if locking is used.

Enter virt-mach-centos1 (ssh virt-mach-centos1) and run the following cmds:
 

# cat /etc/redhat-release  > /root/logs/redhat-release-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
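
Note that all the logging commands here write under /root/logs, so create the directory first if it does not exist yet:

# mkdir -p /root/logs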


+ Only if needed!!
 

# yum versionlock clear
# yum versionlock list


Clean any previously cached RPM packages. Careful with that, as you might want to keep the old RPMs; if unsure, skip the below line.
 

# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

Proceed with the update, monitor closely the output of the commands and log everything into files, using a small script that you should place under /root/status (the script is given at the end of the article):
 

yum check-update |tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
yum check-update | wc -l
yum update |tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

 

6. Check if everything is running fine after upgrade

Reboot VM
 

# shutdown -r now


7. Stop VM virt-mach-centos2 & back it up on the Hypervisor host (hypervisor-host2)

Same backup step as before:

# prlctl backup virt-mach-centos2


or
 

# prlctl stop virt-mach-centos2
# cp -rpf /vz/vmprivate/92075803-a4ce-4ec0-a3d8-9ee83d85fc76 /vz/vmprivate/92075803-a4ce-4ec0-a3d8-9ee83d85fc76-bak
## tar -czvf virt-mach-centos2_vm_virt-mach-centos2.tar.gz /vz/vmprivate/92075803-a4ce-4ec0-a3d8-9ee83d85fc76

# prlctl start virt-mach-centos2


8. Do yum update on virt-mach-centos2

Log the system state before the update:
 

# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum versionlock clear    # only if needed!!
# yum versionlock list

 

Clean old cached packages if required:
 

# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


Initiate the update

# yum check-update 2>&1 | tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update | wc -l 
# yum update 2>&1 | tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


9. Check if everything is running fine after upgrade
 

Reboot VM
 

# shutdown -r now

 

10. Stop VM vm-host2 & back it up
 

# prlctl backup vm-host2


or

# prlctl stop vm-host2

Or copy the actual directory containing the Virtuozzo VM (use the correct ID):
 

# cp -rpf /vz/vmprivate/76e8a5f8-caa8-5442-830e-aa4bfe8d42d9 /vz/vmprivate/76e8a5f8-caa8-5442-830e-aa4bfe8d42d9-bak
## tar -czvf vm-host2.tar.gz /vz/vmprivate/76e8a5f8-caa8-4442-830e-aa5bfe8d42d9

# prlctl start vm-host2


11. Do yum update on vm-host2
 

# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


Clear the lock only if needed:

# yum versionlock clear
# yum versionlock list
# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


Do the rpm upgrade

# yum check-update |tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update | wc -l
# yum update |tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


12. Check if everything is running fine after upgrade
 

Reboot VM
 

# shutdown -r now


13. Do yum update on hypervisor-host2

 

 

# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

Clear the lock if needed:

# yum versionlock clear
# yum versionlock list
# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


Update rpms
 

# yum check-update 2>&1 | tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update | wc -l
# yum update 2>&1 | tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


14. Stop VM vm-host1 & back it up


Same as earlier:
 

# prlctl backup vm-host1

or
 

# prlctl stop vm-host1

# cp -rpf /vz/vmprivate/74a7bbe8-9245-4385-ac0d-d10299100789 /vz/vmprivate/74a7bbe8-9245-4385-ac0d-d10299100789-bak
# tar -czvf vm-host1.tar.gz /vz/vmprivate/74a7bbe8-9245-4385-ac0d-d10299100789

# prlctl start vm-host1


15. Do yum update on vm-host1
 

# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum versionlock clear    # only if needed!!
# yum versionlock list
# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update |tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update | wc -l
# yum update |tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


16. Check if everything is running fine after upgrade

+ Reboot VM

# shutdown -r now


17. Do yum update on hypervisor-host1

Same procedure for HV host 1:

# cat /etc/redhat-release  > /root/logs/redhat-release-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# cat /etc/grub.d/30_os-prober > /root/logs/grub2-efi-vorher-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

Clear lock
 

# yum versionlock clear
# yum versionlock list
# yum clean all |tee /root/logs/yumcleanall-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out

# yum check-update |tee /root/logs/yumcheckupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# yum check-update | wc -l
# yum update |tee /root/logs/yumupdate-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out
# sh /root/status |tee /root/logs/status-before-$(hostname)-$(date '+%Y-%m-%d_%H-%M-%S').out


18. Check if everything is running fine after upgrade

Reboot VM
 

# shutdown -r now


Check that on hypervisor-host1 all VMs run as expected.


19. Check if everything is running fine after upgrade

Reboot VM
 

# shutdown -r now


Check that on hypervisor-host2 all VMs run as expected afterwards.


20. Check once more that the VMs and haproxy or any other contained services in the VMs run as expected

Login to hosts and check processes and logs for errors etc.
 

21. Unstandby virt-mach-centos1

Assuming that virt-mach-centos1 and virt-mach-centos2 are running a Haproxy / corosync cluster, you can now unstandby node1 and check the result; hopefully all should be fine and the node should rejoin the cluster and be able to take traffic again.

[root@virt-mach-centos1 ~]# pcs cluster unstandby virt-mach-centos1


Monitor logs and make sure HAproxy works fine on virt-mach-centos1


22. If necessary, redefine the VMs (in case they disappear from virsh) or if Virtuozzo is not working

[root@hypervisor-host1 ]# virsh define /root/virt-mach-centos1_config_bak.xml
[root@hypervisor-host2 ]# virsh define /root/virt-mach-centos2_config_bak.xml


23. Set versionlock on the RPMs to prevent accidental updates, and check the OS release version

[root@virt-mach-centos2 ]# yum versionlock \*
[root@virt-mach-centos1 ]# yum versionlock \*
[root@hypervisor-host1 ~]# yum versionlock \*
[root@hypervisor-host2 ~]# yum versionlock \*

[root@hypervisor-host2 ~]# cat /etc/redhat-release 
CentOS Linux release 7.8.2003 (Core)

Other useful hints

[root@hypervisor-host1 ~]# virsh console dc37c201-08c9-489d-aa20-9386d63ce3f3
Connected to domain virt-mach-centos1
..

! Compare the package count before the upgrade on each of the supposedly identical VMs and HVs; if there is a difference in package count, review which packages differ and try to make the machines look as identical as possible !

Packages to update on hypervisor-host1 Count: XXX
Packages to update on hypervisor-host2 Count: XXX
Packages to update on virt-mach-centos1 Count: 254
Packages to update on virt-mach-centos2 Count: 249

The /root/status script

+++

#!/bin/sh
echo  '=======================================================   '
echo  '= systemctl list-unit-files --type=service | grep enabled '
echo  '=======================================================   '
systemctl list-unit-files --type=service | grep enabled

echo  '=======================================================   '
echo  '= systemctl | grep ".service" | grep "running"            '
echo  '=======================================================   '
systemctl | grep ".service" | grep "running"

echo  '=======================================================   '
echo  '= chkconfig --list                                        '
echo  '=======================================================   '
chkconfig --list

echo  '=======================================================   '
echo  '= netstat -tulpn                                          '
echo  '=======================================================   '
netstat -tulpn

echo  '=======================================================   '
echo  '= netstat -r                                              '
echo  '=======================================================   '
netstat -r


+++

That's all folks; once going through the article, after some 2 hours of effort or so you should have up-to-date machines.
Any problems faced or feedback is most welcome, as this might help others who have the same setup.

Thanks for reading me 🙂

Create Linux High Availability Load Balancer Cluster with Keepalived and Haproxy on Linux

Tuesday, March 15th, 2022


Configuring Linux HA (High Availability) for an application with Haproxy is already in use across many websites on the Internet, and serious corporations with crucial infrastructure have long adopted keepalived to provide High Availability application-level clustering.
Usually companies choose to build HA clusters with Haproxy together with the Pacemaker and Corosync cluster tools.
However, one commonly used alternative solution exists if you don't have the opportunity to bring up a High Availability cluster with Pacemaker / Corosync / pcs (Pacemaker Configuration System), because the machines you need to configure the cluster on are not physical but VMware Virtual Machines, which cannot have the separate Admin LANs and Heartbeat LAN configured as we usually do on a Pacemaker cluster, due to the fact that the 5 Ethernet LAN card interfaces of the VMware hypervisor hosts are configured as a BOND (e.g. all the incoming traffic to the VMware vSphere HV is received on one virtual bond interface).

I assume you have 2 separate vSphere Hypervisor physical machines, in separate racks and on separate switches, hosting the two VMs.
For the article, I'll call the two newly provisioned Virtual Machines (brought up with some installation automation software such as Terraform or Ansible) vm-server1 and vm-server2, both running some recent version of Linux.

In that scenario, to have High Availability for the VMs on the application level and assure that at least one of the two is available at a time if one gets broken due to a malfunction of the HV, a network connectivity issue, or because the VM OS has crashed, one relatively easy solution is to use keepalived and configure a single High Availability Virtual IP (VIP) address, i.e. 10.10.10.1, which floats between the two VMs, so that at any time at least one of the two VMs is reachable on the network.

[Diagram: haproxy / keepalived Virtual IP (VIP) setup on Linux]

Having a VIP IP is quite a common solution in the corporate world, as it makes it pretty easy to add an F5 Load Balancer in front of the keepalived cluster setup to have 3 levels of security isolation, which usually consist of:

1. Physical (access to the hardware or Virtualization hosts)
2. System Access (the mechanism to access the system: login credentials, users / passes, proxies, entry servers leading to the DMZ-ed network)
3. Application Level (access to different programs behind L2 and to data, based on the specific identity of the individual user,
a special secondary UserID, factor authentication, biometrics etc.)

 

1. Install keepalived and haproxy on both machines

Depending on the type of Linux OS:

On both machines
 

[root@server1:~]# yum install -y keepalived haproxy

If you have to install keepalived / haproxy on Debian / Ubuntu and other deb-based Linux distros:

[root@server1:~]# apt install keepalived haproxy --yes

2. Configure haproxy (haproxy.cfg) on both server1 and server2

 

Create a /etc/haproxy/haproxy.cfg configuration like the below:

 

[root@server1:~]# vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log          127.0.0.1 local6 debug
    chroot       /var/lib/haproxy
    pidfile      /run/haproxy.pid
    stats socket /var/lib/haproxy/haproxy.sock mode 0600 level admin 
    maxconn      4000
    user         haproxy
    group        haproxy
    daemon
    #debug
    #quiet

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode        tcp
    log         global
#    option      dontlognull
#    option      httpclose
#    option      httplog
#    option      forwardfor
    option      redispatch
    option      log-health-checks
    timeout connect 10000 # default 10 second time out if a backend is not found
    timeout client 300000
    timeout server 300000
    maxconn     60000
    retries     3

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

listen FRONTEND_APPNAME1
        bind 10.10.10.1:15000
        mode tcp
        option tcplog
#        #log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server app-server1 10.10.10.55:30000 weight 1 check port 8888
        server app-server2 10.10.10.55:30000 weight 2 check port 8888

listen FRONTEND_APPNAME2
        bind 10.10.10.1:15001
        mode tcp
        option tcplog
        #log global
        log-format [%t]\ %ci:%cp\ %bi:%bp\ %b/%s:%sp\ %Tw/%Tc/%Tt\ %B\ %ts\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq
        balance roundrobin
        timeout client 350000
        timeout server 350000
        timeout connect 35000
        server app-server1 10.10.10.55:30000 weight 5
        server app-server2 10.10.10.55:30000 weight 5 

 

You can get a copy of above haproxy.cfg configuration here.
Once configured roll it on.
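
Before starting the service, you can let haproxy itself validate the configuration file (the -c flag only checks the config and exits):

[root@server1:~]# haproxy -c -f /etc/haproxy/haproxy.cfg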

[root@server1:~]#  systemctl start haproxy
 
[root@server1:~]# ps -ef|grep -i hapro
root      285047       1  0 Mar07 ?        00:00:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
haproxy   285050  285047  0 Mar07 ?        00:00:26 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

Bring up haproxy also on the server2 machine, by placing the same configuration there and starting up the proxy.
 

[root@server2:~]# vim /etc/haproxy/haproxy.cfg


 

3. Configure keepalived on both servers

We'll be configuring 2 nodes with keepalived, even though if necessary this can easily be extended to more nodes.
First we make a copy of the original or existing keepalived.conf server configuration (just in case we need it later on, or if you already had something else configured manually by someone; that could be so on servers inherited from another sysadmin):
 

[root@server1:~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
[root@server2:~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig

a. Configure keepalived to serve as a MASTER Node

 

[root@server1:~]# vim /etc/keepalived/keepalived.conf

Master Node
global_defs {
  router_id server1-fqdn # The hostname of this host.
  
  enable_script_security
  # Synchro of the state of the connections between the LBs on the eth0 interface
   lvs_sync_daemon eth0
 
notification_email {
        linuxadmin@notify-domain.com     # Email address for notifications 
    }
 notification_email_from keepalived@server1-fqdn        # The from address for the notifications
    smtp_server 127.0.0.1                       # SMTP server address
    smtp_connect_timeout 15
}

vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
  user root
}

vrrp_instance LB_VIP_QA {
  virtual_router_id 50
  advert_int 1
  priority 51

  state MASTER
  interface eth0
  smtp_alert          # Enable Notifications Via Email
  
  authentication {
              auth_type PASS
              auth_pass testp141

    }
### Commented because running on VM on VMWare
##    unicast_src_ip 10.44.192.134 # Private IP address of master
##    unicast_peer {
##        10.44.192.135           # Private IP address of the backup haproxy
##   }

#        }
# master node with higher priority preferred node for Virtual IP if both keepalived up
###  priority 51
###  state MASTER
###  interface eth0
  virtual_ipaddress {
     10.10.10.1 dev eth0 # The virtual IP address that will be shared between MASTER and BACKUP
  }
  track_script {
      haproxy
  }
}

 

To download a copy of the Master keepalived.conf configuration click here.

Below are a few interesting configuration variables worth mentioning a few words on; most of them are obvious from their names, but for more clarity here is a list with a short description of each:

 

  • vrrp_instance – defines an individual instance of the VRRP protocol running on an interface.
  • state – defines the initial state that the instance should start in (i.e. MASTER / BACKUP).
  • interface – defines the interface that VRRP runs on.
  • virtual_router_id – should be a unique value per Keepalived node (otherwise the master / slave pair won't function properly).
  • priority – the advertised priority; the higher the priority, the more important the respective configured keepalived node is.
  • advert_int – specifies the frequency that advertisements are sent at (1 second, in this case).
  • authentication – specifies the information necessary for servers participating in VRRP to authenticate with each other. In this case, a simple password is defined.
    An important thing to note, as described in man keepalived.conf (the keepalived.conf variables documentation), is that only the first eight (8) characters will be used.
    !!! Nota Bene !!! The password set on each node should match for the nodes to be able to authenticate !
  • virtual_ipaddress – defines the IP addresses (there can be multiple) that VRRP is responsible for.
  • notification_email – the notification email to which alerts will be sent in case keepalived on one node is stopped (e.g. the MASTER node switches from host 1 to 2).
  • notification_email_from – the sender address the notification emails will originate from.
    ! NB ! In order for notification_email to work you need to have an MTA or Mail Relay (set to local MTA) configured, e.g. something like Postfix or Qmail.

b. Configure keepalived to serve as a SLAVE Node

[root@server2:~]# vim /etc/keepalived/keepalived.conf
 

#Slave keepalived
global_defs {
  router_id server2-fqdn # The hostname of this host!

  enable_script_security
  # Synchro of the state of the connections between the LBs on the eth0 interface
  lvs_sync_daemon eth0
 
notification_email {
        linuxadmin@notify-host.com     # Email address for notifications
    }
 notification_email_from keepalived@server2-fqdn        # The from address for the notifications
    smtp_server 127.0.0.1                       # SMTP server address
    smtp_connect_timeout 15
}

vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
  user root
}

vrrp_instance LB_VIP_QA {
  virtual_router_id 50
  advert_int 1
  priority 50

  state BACKUP
  interface eth0
  smtp_alert          # Enable Notifications Via Email

authentication {
              auth_type PASS
              auth_pass testp141
}
### Commented because running on VM on VMWare    
##    unicast_src_ip 10.10.192.135 # Private IP address of master
##    unicast_peer {
##        10.10.192.134         # Private IP address of the backup haproxy
##   }

###  priority 50
###  state BACKUP
###  interface eth0
  virtual_ipaddress {
     10.10.10.1 dev eth0 # The virtual IP address that will be shared between MASTER and BACKUP.
  }
  track_script {
    haproxy
  }
}

 

Download the keepalived.conf slave config here

 

c. Set required sysctl parameters for haproxy to work as expected
 

[root@server1:~]vim /etc/sysctl.conf
#Haproxy config
# haproxy
net.core.somaxconn=65535
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 10240
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
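
Once the parameters are added, load them without a reboot (a standard step, shown here for completeness):

[root@server1:~]# sysctl -p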

4. Test that the keepalived.conf configuration syntax is OK

 

[root@server1:~]keepalived --config-test
(/etc/keepalived/keepalived.conf: Line 7) Unknown keyword 'lvs_sync_daemon_interface'
(/etc/keepalived/keepalived.conf: Line 21) Unable to set default user for vrrp script haproxy – removing
(/etc/keepalived/keepalived.conf: Line 31) (LB_VIP_QA) Specifying lvs_sync_daemon_interface against a vrrp is deprecated.
(/etc/keepalived/keepalived.conf: Line 31)              Please use global lvs_sync_daemon
(/etc/keepalived/keepalived.conf: Line 35) Truncating auth_pass to 8 characters
(/etc/keepalived/keepalived.conf: Line 50) (LB_VIP_QA) track script haproxy not found, ignoring…

I've experienced these errors because the first time I configured keepalived I did not mention the user the vrrp script haproxy should run with. In prior versions of keepalived, leaving the user field empty automatically assumed the vrrp script runs as root, but as of RHEL's keepalived-2.1.5-6.el8.x86_64 (which I've been using) this is no longer so; hence in the configuration above you can see I've explicitly set the user in the respective section to root.
The error Unknown keyword 'lvs_sync_daemon_interface'
is also easily fixable by just substituting lvs_sync_daemon_interface with lvs_sync_daemon and reloading
keepalived.

Once keepalived is started, you should be able to see the process running in the process list on both machines:

[root@server1:~]ps -ef |grep -i keepalived
root     1190884       1  0 18:50 ?        00:00:00 /usr/sbin/keepalived -D
root     1190885 1190884  0 18:50 ?        00:00:00 /usr/sbin/keepalived -D

Next step is to check the keepalived statuses as well as /var/log/keepalived.log

If everything is configured as expected on both keepalived nodes, you should see one node as master and the other as slave, either in the status output or in the log:

[root@server1:~]#systemctl restart keepalived

 

[root@server1:~]systemctl status keepalived|grep -i state
Mar 14 18:59:02 server1-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) Entering MASTER STATE

[root@server1:~]systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2022-03-14 18:15:51 CET; 32min ago
  Process: 1187587 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1187589 (code=exited, status=0/SUCCESS)

Mar 14 18:15:04 server1lb-fqdn Keepalived_vrrp[1187590]: Sending gratuitous ARP on eth0 for 10.44.192.142
Mar 14 18:15:50 server1lb-fqdn systemd[1]: Stopping LVS and VRRP High Availability Monitor…
Mar 14 18:15:50 server1lb-fqdn Keepalived[1187589]: Stopping
Mar 14 18:15:50 server1lb-fqdn Keepalived_vrrp[1187590]: (LB_VIP_QA) sent 0 priority
Mar 14 18:15:50 server1lb-fqdn Keepalived_vrrp[1187590]: (LB_VIP_QA) removing VIPs.
Mar 14 18:15:51 server1lb-fqdn Keepalived_vrrp[1187590]: Stopped – used 0.002007 user time, 0.016303 system time
Mar 14 18:15:51 server1lb-fqdn Keepalived[1187589]: CPU usage (self/children) user: 0.000000/0.038715 system: 0.001061/0.166434
Mar 14 18:15:51 server1lb-fqdn Keepalived[1187589]: Stopped Keepalived v2.1.5 (07/13,2020)
Mar 14 18:15:51 server1lb-fqdn systemd[1]: keepalived.service: Succeeded.
Mar 14 18:15:51 server1lb-fqdn systemd[1]: Stopped LVS and VRRP High Availability Monitor

[root@server2:~]systemctl status keepalived|grep -i state
Mar 14 18:59:02 server2-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Entering BACKUP STATE

[root@server1:~]# grep -i state /var/log/keepalived.log
Mar 14 18:59:02 server1lb-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Entering MASTER STATE
 

a. Fix Keepalived SECURITY VIOLATION – scripts are being executed but script_security not enabled.
 

When configuring keepalived for the first time, we faced the following strange error in the keepalived status output and in keepalived.log:
 

Feb 23 14:28:41 server1 Keepalived_vrrp[945478]: SECURITY VIOLATION – scripts are being executed but script_security not enabled.

 

To fix the keepalived SECURITY VIOLATION error:

Add enable_script_security to /etc/keepalived/keepalived.conf on both keepalived node hosts, inside the

global_defs {}

section, right next to the chunk:

# Synchro of the state of the connections between the LBs on the eth0 interface
  lvs_sync_daemon eth0
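
After the change, restart keepalived and re-run the configuration test to confirm the warning is gone:

[root@server1:~]# systemctl restart keepalived
[root@server1:~]# keepalived --config-test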

 

5. Prepare rsyslog configuration and Include additional keepalived options
to force keepalived log into /var/log/keepalived.log

To force keepalived log into /var/log/keepalived.log on RHEL 8 / CentOS and other Redhat Package Manager (RPM) Linux distributions

[root@server1:~]# vim /etc/rsyslog.d/48_keepalived.conf

#2022/02/02: Keepalived logs to local7, save the messages
local7.*                                                /var/log/keepalived.log
if ($programname == 'Keepalived') then -/var/log/keepalived.log
if ($programname == 'Keepalived_vrrp') then -/var/log/keepalived.log
& stop

[root@server:~]# touch /var/log/keepalived.log

Reload rsyslog to load new config
 

[root@server:~]# systemctl restart rsyslog
[root@server:~]# systemctl status rsyslog

 

rsyslog.service – System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/rsyslog.service.d
           └─rsyslog-service.conf
   Active: active (running) since Mon 2022-03-07 13:34:38 CET; 1 weeks 0 days ago
     Docs: man:rsyslogd(8)

           https://www.rsyslog.com/doc/
 Main PID: 269574 (rsyslogd)
    Tasks: 6 (limit: 100914)
   Memory: 5.1M
   CGroup: /system.slice/rsyslog.service
           └─269574 /usr/sbin/rsyslogd -n

Mar 15 08:15:16 server1lb-fqdn rsyslogd[269574]: — MARK —
Mar 15 08:35:16 server1lb-fqdn rsyslogd[269574]: — MARK —
Mar 15 08:55:16 server1lb-fqdn rsyslogd[269574]: — MARK —
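
Keepalived with debug logging can be chatty, so it is also a good idea to rotate the new log file. A minimal, hypothetical logrotate rule (adjust retention to your needs; copytruncate avoids the need to reload rsyslog on rotation):

[root@server1:~]# vim /etc/logrotate.d/keepalived

/var/log/keepalived.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}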

 

If keepalived is loaded but you still have no log written inside /var/log/keepalived.log, pass it extra options so it logs to syslog facility local7, matching the rsyslog rule above (-D enables detailed log messages, -S 7 selects the local7 facility):

[root@server1:~]# vim /etc/sysconfig/keepalived
 KEEPALIVED_OPTIONS="-D -S 7"

[root@server2:~]# vim /etc/sysconfig/keepalived
 KEEPALIVED_OPTIONS="-D -S 7"

[root@server1:~]# systemctl restart keepalived.service
[root@server1:~]#  systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-02-24 12:12:20 CET; 2 weeks 4 days ago
 Main PID: 1030501 (keepalived)
    Tasks: 2 (limit: 100914)
   Memory: 1.8M
   CGroup: /system.slice/keepalived.service
           ├─1030501 /usr/sbin/keepalived -D
           └─1030502 /usr/sbin/keepalived -D

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

[root@server2:~]# systemctl restart keepalived.service
[root@server2:~]# systemctl status keepalived

6. Monitoring VRRP traffic of the two keepaliveds with tcpdump
 

Once both keepalived daemons are up and running, a good thing is to check that the VRRP protocol traffic keeps flowing on both machines.
Keepalived VRRP communicates via IP protocol number 112 (VRRP is its own IP protocol, not a TCP port), thus you can simply snoop the traffic by protocol:
 

[root@server1:~]# tcpdump proto 112

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:08:07.356187 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:08.356297 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:09.356408 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:10.356511 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:11.356655 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20

[root@server2:~]# tcpdump proto 112

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
​listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:08:07.356187 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:08.356297 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:09.356408 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:10.356511 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20
11:08:11.356655 IP server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20

As you can see the VRRP traffic on the network is originating only from server1lb-fqdn, this is so because host server1lb-fqdn is the keepalived configured master node.

It is possible to sniff and spoof the password configured for authentication between the two nodes, thus if you're bringing up a keepalived service cluster make sure your security is tight; at best the machines should be in a dedicated local LAN / DMZ, never exposed to the internet !!! 🙂 Or if you eventually decide to run keepalived between remote hosts, make sure you tunnel the VRRP traffic through an encrypted VPN or SSH tunnels.

[root@server1:~]tcpdump proto 112 -vv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:36:25.530772 IP (tos 0xc0, ttl 255, id 59838, offset 0, flags [none], proto VRRP (112), length 40)
    server1lb-fqdn > vrrp.mcast.net: vrrp server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20, addrs: VIPIP_QA auth "testp431"
11:36:26.530874 IP (tos 0xc0, ttl 255, id 59839, offset 0, flags [none], proto VRRP (112), length 40)
    server1lb-fqdn > vrrp.mcast.net: vrrp server1lb-fqdn > vrrp.mcast.net: VRRPv2, Advertisement, vrid 50, prio 53, authtype simple, intvl 1s, length 20, addrs: VIPIP_QA auth "testp431"

Lets also check what floating IP is configured on the machines:

[root@server1:~]# ip -brief address show
lo               UNKNOWN        127.0.0.1/8 
eth0             UP             10.10.10.5/26 10.10.10.1/32 

The 10.10.10.5 IP is the main IP set on LAN interface eth0, 10.10.10.1 is the floating IP which as you can see is currently set by keepalived to listen on first node.

[root@server2:~]# ip -brief address show |grep -i 10.10.10.1

An empty output is returned as floating IP is currently configured on server1

To doubly assure ourselves the IP is assigned to the correct machine, let's ping it and check which machine the MAC behind the IP currently belongs to.
 

[root@server2:~]# ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.526 ms
^C
— 10.10.10.1 ping statistics —
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms

[root@server2:~]# arp -an |grep -i 10.10.10.1
? (10.10.10.1) at 00:48:54:91:83:7d [ether] on eth0
[root@server2:~]# ip a s|grep -i 00:48:54:91:83:7d
[root@server2:~]# 

As you can see from the empty output above, the MAC is not found among the configured IPs on server2.
 

[root@server1-fqdn:~]# /sbin/ip a s|grep -i 00:48:54:91:83:7d -B1 -A1
 eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:48:54:91:83:7d brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/26 brd 10.10.1.191 scope global noprefixroute eth0

Pretty much expected MAC is on keepalived node server1.

 

7. Testing that the keepalived VIP floating IP really works on the server1 and server2 machines
 

To test the overall configuration just created, you should stop keepalived on the Master node and in the meantime keep an eye on the Slave node (server2), checking whether it figures out the Master node is gone and switches its state from BACKUP to MASTER. Once the secondary (Slave) keepalived becomes master, the floating IP 10.10.10.1 will be brought up by the scripts on server2.

Let's assume that something went wrong with the server1 VM host, for example the machine crashed due to service overload, DDoS or simply a kernel bug or whatever reason.
To simulate that we simply have to stop keepalived; then the information broadcasted over the VRRP protocol (IP proto 112) will no longer be available, and keepalived on node server2, once
unable to communicate with server1, should change itself to state MASTER.

[root@server1:~]# systemctl stop keepalived
[root@server1:~]# systemctl status keepalived

● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Tue 2022-03-15 12:11:33 CET; 3s ago
  Process: 1192001 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1192002 (code=exited, status=0/SUCCESS)

Mar 14 18:59:07 server1lb-fqdn Keepalived_vrrp[1192003]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:32 server1lb-fqdn systemd[1]: Stopping LVS and VRRP High Availability Monitor…
Mar 15 12:11:32 server1lb-fqdn Keepalived[1192002]: Stopping
Mar 15 12:11:32 server1lb-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) sent 0 priority
Mar 15 12:11:32 server1lb-fqdn Keepalived_vrrp[1192003]: (LB_VIP_QA) removing VIPs.
Mar 15 12:11:33 server1lb-fqdn Keepalived_vrrp[1192003]: Stopped – used 2.145252 user time, 15.513454 system time
Mar 15 12:11:33 server1lb-fqdn Keepalived[1192002]: CPU usage (self/children) user: 0.000000/44.555362 system: 0.001151/170.118126
Mar 15 12:11:33 server1lb-fqdn Keepalived[1192002]: Stopped Keepalived v2.1.5 (07/13,2020)
Mar 15 12:11:33 server1lb-fqdn systemd[1]: keepalived.service: Succeeded.
Mar 15 12:11:33 server1lb-fqdn systemd[1]: Stopped LVS and VRRP High Availability Monitor.

 

Once keepalived is stopped, you will also get a notification email to the recipient address configured in keepalived.conf, sent from the still-working keepalived node, with a simple message like:

=> VRRP Instance is no longer owning VRRP VIPs <=

Once keepalived is back up you will get another notification like:

=> VRRP Instance is now owning VRRP VIPs <=

[root@server2:~]# systemctl status keepalived
● keepalived.service – LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-03-14 18:13:52 CET; 17h ago
  Process: 297366 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 297367 (keepalived)
    Tasks: 2 (limit: 100914)
   Memory: 2.1M
   CGroup: /system.slice/keepalived.service
           ├─297367 /usr/sbin/keepalived -D -S 7
           └─297368 /usr/sbin/keepalived -D -S 7

Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: Remote SMTP server [127.0.0.1]:25 connected.
Mar 15 12:11:33 server2lb-fqdn Keepalived_vrrp[297368]: SMTP alert successfully sent.
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: (LB_VIP_QA) Sending/queueing gratuitous ARPs on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1
Mar 15 12:11:38 server2lb-fqdn Keepalived_vrrp[297368]: Sending gratuitous ARP on eth0 for 10.10.10.1

[root@server2:~]#  ip addr show|grep -i 10.10.10.1
    inet 10.10.10.1/32 scope global eth0
    

As you see the VIP is now set on server2, just as expected – everything works. If the IP did not move, double check the keepalived.conf on both nodes for errors or misconfigurations.

To recover the initial order of things, so server1 is the MASTER and server2 the SLAVE host, we just have to start keepalived on the server1 machine again.

[root@server1:~]# systemctl start keepalived

The automatic change of server1 back to MASTER node, and the respective move of the VIP, happens because of the higher priority we previously configured on server1 in keepalived.conf.
 

What we learned?
 

So what did we learn in this article?
We have seen how to easily install and configure a High Availability load balancer with Keepalived, with a single floating VIP IP address, 1 MASTER and 1 SLAVE host, and a Haproxy example config with a few frontends / App backends. We have seen how the config can be tested for potential errors, how we can monitor whether the VRRPv2 network traffic flows between nodes, and how to potentially debug it further if necessary.
Further on, some of the keepalived configuration options were roughly explained, but as keepalived can do pretty much more, anyone seriously willing to deal with keepalived on a daily basis, or just fine tune some already existing setups, had better read closely its manual page "man keepalived.conf" as well as the official Redhat Linux documentation page on setting up a Linux cluster with Keepalived (be prepared for a small nightmare, as that documentation seems a bit chaotic, partly missing, and at places leaves open questions about what the developers actually meant).

Finally, once the keepalived hosts were prepared, it was shown how to test the keepalived application cluster and verify the floating IP does move between nodes in case one of the 2 keepalived nodes becomes inaccessible.

The same logic can be repeated multiple times, and if necessary you can set multiple VIPs to expand the HA reachable IPs solution, as sketched below.
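
A minimal hypothetical sketch of a second vrrp_instance serving another VIP (note the different virtual_router_id, and the swapped priorities so each node is master for one VIP):

vrrp_instance LB_VIP_QA2 {
  virtual_router_id 51
  advert_int 1
  priority 50          # set 51 on the other node, so masters alternate
  state BACKUP
  interface eth0
  virtual_ipaddress {
     10.10.10.2 dev eth0
  }
  track_script {
      haproxy
  }
}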

high-availability-with-two-vips-example-diagram

The presented idea uses haproxy as a forward proxy server to proxy requests towards the application backends (service machines); however, if you need another set of servers in the flow to process HTML / XHTML / PHP / Perl / Python code, with some common webserver setup (Nginx / Apache / Tomcat / JBOSS), and to enable an SSL secure certificate with, let's say, Letsencrypt, this can be done relatively easily. If you want to implement Letsencrypt and a webserver, check this redundant SSL Load Balancing with haproxy & keepalived article.

That's all folks, hope you enjoyed.
If you need to configure keepalived Cluster or a consultancy write your query here 🙂

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

If you haven't tried it yet: Redhat, CentOS and other RPM based Linux operating systems that use the anaconda installer generate a kickstart file under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg} immediately after the OS installation completes. Using this kickstart file template you can automate installation of Redhat with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without need for any intervention from the user. This is especially useful when deploying Redhat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once and in general pretty useful if you're into the field of so called "DevOps" system administration and you need to provision a certain set of OS to a multitude of physical servers or create or recreate easily virtual machines with a certain set of configuration.
 

1. Create /vmprivate storage directory where Virtual machines will reside

First step on the Hypervisor host which will hold the future created virtual machines is to create location where it will be created:

[root@redhat ~]#  lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]#  mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view what is the situation with Logical Volumes and  VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  – currently set to     8192
  Block device           253:0

 

  — Logical volume —
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time main.hostname.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have the size physically available on a SAS / SSD hard drive connected to the Hypervisor Host.

To make the Virtual Machines storage location directory permanently mounted, add to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab
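
To verify the /etc/fstab entry parses correctly before the next reboot (an extra precaution, not in the original steps):

[root@redhat ~]# mount -a
[root@redhat ~]# df -h /vmprivate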

 

2. Second we need to install the following set of RPM packages on the Hypervisor Hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
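
Enabling only makes libvirtd start on the next boot; to use it right away also start it and verify (not explicitly in the original steps):

[root@redhat ~]#  systemctl start libvirtd
[root@redhat ~]#  systemctl status libvirtd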

4. Configure network bridging br0 interface on Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 (adjust to your NIC name, e.g. eno3 as used below) you need to include:

NM_CONTROLLED=no

Next use the nmcli Redhat network configurator to create the bridge (you could use the ip command instead, but since nmcli is the Redhat way to do it, let's do it their way):

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
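
To confirm the bridge came up with the expected address (a quick sanity check):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# ip -brief address show br0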

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a working fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa) .

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda
# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS
# Use text mode install
text
#graphical
# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8
# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted
# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain
# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
# skipx
skipx
# Firewall configuration
firewall --disabled
# System timezone
timezone Europe/Berlin
# Clear the Master Boot Record
##zerombr
# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos    --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos      --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream   --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS
# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug
# reboot automatically
reboot
###
%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end
# Shell commands must live in a %post section, not inside %packages
%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improve overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf
# Disable kdump
systemctl disable kdump.service
%end
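
Before rolling a VM with the file, you can sanity check its syntax with ksvalidator from the pykickstart package (assuming it is installed on the Hypervisor host):

[root@redhat ~]# yum install -y pykickstart
[root@redhat ~]# ksvalidator /root/kickstart.cfg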

An important note to make here is the encrypted root password string in the (rootpw) line – the $6$ prefix indicates a SHA-512 hash; this string can be generated with the openssl or mkpasswd commands:

Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 password, -5 a SHA256 encryption and -6 SHA512 encrypted string (logically recommended for better security)

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512
There is also a kickstart file generator web interface on Redhat's site here, however I've never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


Roll the new preconfigured VM based on the above ks template file using a one-liner command like below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For some clarity and better automation in case you plan to repeat VM creation you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

# use the variables declared above consistently (the RAM / CPU / disk values
# were hardcoded and some variable names mismatched in the original draft)
virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram="$RAM" --vcpus="$CPUS" --location="$ISO_LOCATION" --disk path="$VM_IMG_FILE_LOC",bus=virtio,size="$VM_IMG_SIZE" --graphics none --initrd-inject=/root/"$KS_FILE" --extra-args "console=ttyS0 ks=file:/$KS_FILE"
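
Make the script executable and run it:

# chmod +x virt-install.sh
# ./virt-install.sh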


A copy of virt-install.sh script can be downloaded here

Wait for the installation to finish; its progress should be visualized, and if the installation goes smoothly you should get a login prompt. Use the password generated with the openssl tool to test the login, then disconnect from the machine by pressing CTRL + ] and try to log in again via the console:

[root@redhat ~]# virsh list --all
 Id   Name                     State
——————————————
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing I recommend you check the official documentation on Kickstart2 from CentOS official website

In case if you later need to destroy the VM and the respective created Image file you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine

Don't forget to celebrate the success and give this article credit by sharing the tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

OpenVZ enable or disable auto start on Linux Hypervisor host boot for Virtual Machine containers

Wednesday, July 7th, 2021

howto-add-virtual-machine-to-auto-start-with-vz-openvz-linux-containers-4-logo-slogan-vertical-big

If you run OpenVZ / Virtuozzo Hypervisor servers and are not sure whether your configured container virtual machines are set to automatically boot when the physical Linux OS host boots (in case of a restart after a patch update, or after an unexpected shutdown due to a kernel / OS bug, a hang, or an electricity power outage), here is how to check.

To see the current configuration of a Virtual Environment on CentOS Linux, you need to look inside /etc/sysconfig/vz-scripts/VEID.conf
and check the value of:

ONBOOT="" 

To get the exact VEID (and thus the VEID.conf name) of the current openvz guest VM containers, exec:

[root@openvz vz-scripts]# vzlist -a
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       300         23 running   10.10.10.1     VirtualMachine1
       301         25 running   10.10.10.2     VirtualMachine2

[root@openvz ~]# cd /etc/sysconfig/vz-scripts
[root@gbapp2 vz-scripts]# pwd
/etc/sysconfig/vz-scripts

[root@openvz vz-scripts]# grep -i ONBOOT 300.conf 301.conf
300.conf:ONBOOT="yes"
301.conf:ONBOOT="yes"

If you happen to have ONBOOT="no" configured, you will need to change it in the respective VEID.conf:

vi /etc/sysconfig/vz-scripts/VEID.conf

search for

ONBOOT="no"

and change to

ONBOOT="yes"
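
If you have many containers to flip, a hypothetical one-liner can do them all at once – review the config files first before running something like this:

# for f in /etc/sysconfig/vz-scripts/*.conf; do sed -i 's/^ONBOOT="no"/ONBOOT="yes"/' "$f"; done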

OpenVZ_virtuozzo-standard-process-tree-landscape

OpenVZ server process tree. The separate virtual servers are indicated by colors.

OpenVZ Quick cheat sheet commands

This change will auto-start the VPS container next time the host Hypervisor node is rebooted.
If you happen to have daily work with OpenVZ legacy systems like I do you might find also useful the following OpenVZ Cheatsheet pdf document.

A miniature quick cheatsheet for OpenVZ virtualization, in case you are like me, have to use various virtualization technologies and tend to forget, is as below:

vzlist                               # List running instances
vzlist -a                            # List all instances

 

vzctl stop <instance>
vzctl start <instance>
vzctl status <instance>

vzctl exec <instance> <command>      # Run a command

vzctl enter <instance>               # Get console

vzyum <instance> install <package>   # Install a package


# Change properties
vzctl set <instance> --hostname <hostname> --save
vzctl set <instance> --ipadd <IP> --save
vzctl set <instance> --userpasswd root:<password> --save

If you need more insight into how OpenVZ virtualization works on a low level and want to stretch out its possibilities, an old but useful document you might want to check is the OpenVZ-Users-Guide PDF.


If you need a copy of it, download openvz_cheat_sheet.txt.

Check if server is Physical Bare Metal or a Virtual Machine and its type

Tuesday, March 17th, 2020

check-if-linux-operating-system-is-running-on-physical-bare-metal-or-virtual-machine

In modern times the IT employee (a system administrator / system engineer / security engineer, or a developer who has to develop and test code remotely on UNIX hosts) has to log in to a multitude of different servers located in separate data centers around the world, situated in hybrid operating system environments running many different Linux OSes. Often, especially for us sysadmins, it is important to know whether the remote machine we have SSHed to is a physical server (Bare Metal) or a virtual machine running on top of some kind of hypervisor node: OpenXen / Virtualbox / Virtuozzo / VMWare etc.
 

Then the question comes: how to determine whether a remotely installed Linux is physical or virtual?
 

1. Using the dmesg kernel log utility


The good old dmesg, used to examine and control the kernel ring buffer, reveals plenty of useful information, including whether a server is Virtual or Bare Metal. It is present and accessible on every Linux server out there, thus using it is the simplest way to determine the OS system node type.

To grep whether a machine is Virtual and the Hypervisor type use:

 

nginx:~# dmesg | grep "Hypervisor detected"
[0.000000] Hypervisor detected: KVM


As you see above OS installed is using the KVM Virtualization technology.

An empty output of this command means the Remote OS is installed on a physical computer.

 

2. Detecting the OS platform the systemd way


Systemd, along with the multiple over-complications that nearly all sysadmins (including me) hate so much, introduced something useful in the shape of the hostnamectl command,
which can give you info about the OS chassis platform.

 

root@pcfreak:~# hostnamectl status
 
 Static hostname: pcfreak
         Icon name: computer-desktop
           Chassis: desktop
        Machine ID: 02425d67037b8e67cd98bd2800002671
           Boot ID: 34a83b9a79c346168082f7605c2f557c
  Operating System: Debian GNU/Linux 10 (buster)
            Kernel: Linux 4.19.0-5-amd64
      Architecture: x86-64

 

 

Below is output of a VM running on a Oracle Virtualbox HV.

 

linux:~# hostnamectl status
Static hostname: ubuntuserver
 Icon name: computer-vm
 Chassis: vm
 Machine ID: 2befe86cf8887ca098f509e457554beb
 Boot ID: 8021c02d65dc46b1885afb25fddcf18c
 Virtualization: oracle
 Operating System: Ubuntu 16.04.1 LTS
 Kernel: Linux 4.4.0-78-generic
 Architecture: x86-64

 

3. Detect concrete container virtualization with systemd-detect-virt 


Another Bare Metal or VM identification tool, introduced some time ago by the freedesktop project, is systemd-detect-virt (the command is usually part of the systemd package).
It is useful to detect the exact virtualization on a systemd-running OS; systemd-detect-virt is capable of detecting many virtualization types, including rare ones like: IBM z/VM (S390), bochs, bhyve (a FreeBSD hypervisor), Mac OS's parallels, lxc (linux containers), docker containers, podman etc.

The output from the command is either none (if no virtualization is present) or the VM hypervisor type:

 

server:~# systemd-detect-virt
none

 

quake:~# systemd-detect-virt
oracle

 

4. Install and use facter to report per node facts

 

debian:~# apt-cache show facter|grep -i desc -A2
Description-en: collect and display facts about the system
 Facter is Puppet’s cross-platform system profiling library. It discovers and
 reports per-node facts, which are collected by the Puppet agent and are made

Description-md5: 88cdf9a1db3df211de4539a0570abd0a
Homepage: https://github.com/puppetlabs/facter
Tag: devel::lang:ruby, devel::library, implemented-in::ruby,

 


– Install facter on Debian / Ubuntu / deb based Linux

 

# apt install facter --yes


– Install facter on RedHat / CentOS RPM based distros

# yum install epel-release

 

# yum install facter


– Install facter on OpenSuSE / SLES

# zypper install facter


Once installed on the system, to find out whether the remote Operating System is virtual run:

# facter 2> /dev/null | grep virtual
is_virtual => false
virtual => physical


If the machine is a virtual machine you will get some different output like:

# facter 2> /dev/null | grep virtual
is_virtual => true
virtual => kvm


If you're too lazy to grep, you can query the fact directly by passing its name as an argument:

# facter virtual
physical

 

5. Use lshw and dmidecode (list hardware configuration tools)


If you don't have the permissions to install facter on the system, check whether lshw (the list hardware command) is perhaps already present on the remote host.

# lshw -class system  
storage-host                  
    description: Computer
    width: 64 bits
    capabilities: smbios-2.7 vsyscall32

If the system is virtual you'll get an output similar to:

# lshw -class system  
debianserver 
 description: Computer
 product: VirtualBox
 vendor: innotek GmbH
 version: 1.2
 serial: 0
 width: 64 bits
 capabilities: smbios-2.5 dmi-2.5 vsyscall32
 configuration: family=Virtual Machine uuid=78B58916-4074-42E2-860F-7CAF39F5E6F5


Of course, as they provide verbose info on memory / CPU type / caches / cores / motherboard etc., whether virtualization is used or not can also be determined with dmidecode / hwinfo and other tools that detect the system hardware; this is described thoroughly in my previous article Get hardware system info on Linux.
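
For a quick check with dmidecode (run as root; the reported product name varies by vendor, e.g. a physical Dell reports its model, while a VMWare guest typically reports something like "VMware Virtual Platform"):

# dmidecode -s system-product-name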


6. Detect virtualization using virt-what or imvirt scripts


imvirt is a little script to determine several virtualization types; it is pretty similar to virt-what, RedHat's own script for platform identification. Even though virt-what is developed for RHEL, it is available on other distros (Fedora, Debian, Ubuntu, Arch Linux (AUR)), just like imvirt.

Installing either of them is done with the usual apt-get / yum, or on Arch Linux with the yay package manager (yay -S virt-what) …

Once run, the output virt-what produces for physical Dell / HPE / Fujitsu-Siemens Bare Metal servers would be just an empty string:

# virt-what
#

Or, if the system is a Virtual Machine, you'll get the type, for example KVM (Kernel-based Virtual Machine) / virtualbox / qemu etc. imvirt is similar; on a physical host it explicitly prints:

#imvirt
Physical

 

Conclusion


It was explained how to do a simple check on whether the server runs on physical hardware or on a virtual host hypervisor. The most basic and classic way is with dmesg. If there is no access to dmesg due to restrictions, you can try the other methods: for systemd enabled OSes hostnamectl / systemd-detect-virt; otherwise, if the tools are installed or you have the permissions to install them, facter / lshw or the virt-what / imvirt scripts.
There are definitely many more useful tools to grasp hardware and virtualization information, but these basics should be useful enough for shell scripting purposes.
If you know other tools, please share.
 

Why du and df report different usage on a filesystem / How to fix the inconsistency between used space on FS and a disk showing full

Wednesday, July 24th, 2019

linux-why-du-and-df-shows-different-result-inconsincy-explained-filesystem-full-oddity

If you're a sysadmin in a large server environment, such as a couple of hundred Virtual Machines running Linux OS on either physical hosts or OpenXen / VmWare hosted guest Virtual Machines, you might sometimes end up with an odd case where some mounted partition mount point reports its file usage differently when checked with the df command than when checked with the du command, like for example:
 

root@sqlserver:~# df -hT /var/lib/mysql
Filesystem   Type  Size Used Avail Use% Mounted On
/dev/sdb5      ext4    19G  3,4G    14G  20% /var/lib/mysql

Here the '-T' argument is used to show us the filesystem.

root@sqlserver:~# du -hsc /var/lib/mysql
0K    /var/lib/mysql/
0K    total

 

1. Simple debug on what might be the root cause for df / du inconsistency reporting

 

Of course the first thing to do in that weird situation, after being totally shocked that this is possible, is to investigate which first-level sub-directories eat up the most space on the mounted location, with du:

 

# du -hkx --max-depth=1 /var/lib/mysql/|uniq|sort -n
4       /var/lib/mysql/test
8       /var/lib/mysql/ezmlm
8       /var/lib/mysql/micropcfreak
8       /var/lib/mysql/performance_schema
12      /var/lib/mysql/mysqltmp
24      /var/lib/mysql/speedtest
64      /var/lib/mysql/yourls
144     /var/lib/mysql/narf
320     /var/lib/mysql/webchat_plus
424     /var/lib/mysql/goodfaithair
528     /var/lib/mysql/moonman
648     /var/lib/mysql/daniel
852     /var/lib/mysql/lessn
1292    /var/lib/mysql/gallery

The given output is in kilobytes, which is a bit hard to read; if you're used to megabytes instead, do:

 

 # du -hmx --max-depth=1 /var/lib/mysql/|uniq|sort -n|less

 

I've also investigated on the complete /var directory contents sorted by size with:

 

 # du -akx ./ | sort -n
5152564    ./cache/rsnapshot/hourly.2/localhost
5255788    ./cache/rsnapshot/hourly.2
5287912    ./cache/rsnapshot
7192152    ./cache


Even after finding out the bottleneck dirs and trying to clear up a bit, I continued facing the inconsistency shown by the two commands. If you're stunned like me, you might be tempted to move some files to a different filesystem to free up space or assigned inodes, with the hope that the inconsistent output will be fixed, as it might be caused by some kernel / FS caching that would eventually make the mounted FS refresh …

But unfortunately, if you try it you'll figure out that clearing up a couple of megas or gigas will make no difference in the command output.

In my exact case /var/lib/mysql is a separately mounted ext4 filesystem, however the same issue was also present on a Network Filesystem (NFS), and thus my first thought that this is caused by a network failure problem or an NFS bug turned out to be wrong.

After a further short investigation of the inodes on the filesystem, it was clear enough inodes were available:
 

# df -i /var/lib/mysql
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb5      1221600  2562 1219038   1% /var/lib/mysql

 

So the assumption that the inode count was exhausted was also rejected.
P.S. If you're not well familiar with inodes, read the manual, i.e. man 7 inode.
 

– Remounting the mounted filesystem

To make sure the filesystem inconsistency between du and df was not due to some hanging network mount or bug, the first logical thing I did was to remount the filesystem reporting the different sizes; in my case this was done with:
 

# mount -o remount,rw -t ext4 /var/lib/mysql

For machines with NFS remote mounted storage locations, used:

# mount -o remount,rw -t nfs /var/www


The FS remount did not solve it, so I continued to ponder the oddity and of course thought of a workaround: in case the issue was caused by a kernel bug or an OS library issue, a reboot might be the solution. However, unfortunately restarting the VMs was not a desirable easy solution, thus I continued investigating what was wrong …

The next check, of course, was to see what kind of network connections were opened to the affected hosts with:
 

# netstat -tupanl


This did not find anything that might point me to the reported differing megabytes issue, so the next step was to check the situation with files currently opened by running processes on the weirdly reporting df / du systems with lsof, and boom, there I observed oddities such as multiple files in '(deleted)' state:

 

# lsof -nP | grep '(deleted)'

COMMAND   PID   USER   FD   TYPE DEVICE    SIZE NLINK  NODE NAME
mysqld   2588  mysql    4u   REG 253,17      52     0  1495 /var/lib/mysql/tmp/ibY0cXCd (deleted)
mysqld   2588  mysql    5u   REG 253,17    1048     0  1496 /var/lib/mysql/tmp/ibOrELhG (deleted)
mysqld   2588  mysql    6u   REG 253,17       777884290     0  1497 /var/lib/mysql/tmp/ibmDFAW8 (deleted)
mysqld   2588  mysql    7u   REG 253,17       123667875     0 11387 /var/lib/mysql/tmp/ib2CSACB (deleted)
mysqld   2588  mysql   11u   REG 253,17       123852406     0 11388 /var/lib/mysql/tmp/ibQpoZ94 (deleted)

 

Notice that there were plenty of '(deleted)' state files kept open in memory, an overall of 438:

 

# lsof -nP | grep '(deleted)' |wc -l
438


Reading a bit online about the problem, I found it is also possible to find deleted, unlinked files without any greps (listing all deleted files still kept open, with lsof arguments only):

 

# lsof +L1|less


The SIZE field (seventh column) shows that a number of the files kept open on the filesystem and in memory are really large, totally messing up the filesystem accounting. In my case these were temp files created by the MYSQLD daemon, but depending on the service the server provides this might be apache's www-data, some custom perl / bash script executed via a cron job, stalled rsync jobs etc.
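
To estimate how much space the deleted-but-still-open files hold, you can sum the SIZE column; a rough sketch, assuming lsof's default column layout as in the output above (adjust the field number if your lsof prints extra columns):

# lsof -nP | grep '(deleted)' | awk '{ sum += $7 } END { print sum/1024/1024 " MB held by deleted open files" }'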
 

2. Check the list of open files held by the mysql / root users, as part of the server filesystem inconsistency debugging:

 

– Grep opened files on server by user

# lsof |grep mysql
mysqld    1312                       mysql  cwd       DIR               8,21       4096          2 /var/lib/mysql
mysqld    1312                       mysql  rtd       DIR                8,1       4096          2 /
mysqld    1312                       mysql  txt       REG                8,1   20336792   23805048 /usr/sbin/mysqld
mysqld    1312                       mysql  mem       REG               8,21      24576         20 /var/lib/mysql/tc.log
mysqld    1312                       mysql  DEL       REG               0,16                 29467 /[aio]
mysqld    1312                       mysql  mem       REG                8,1      55792   14886933 /lib/x86_64-linux-gnu/libnss_files-2.28.so

 

# lsof | grep root
COMMAND    PID   TID TASKCMD          USER   FD      TYPE             DEVICE   SIZE/OFF       NODE NAME
systemd      1                        root  cwd       DIR                8,1       4096          2 /
systemd      1                        root  rtd       DIR                8,1       4096          2 /
systemd      1                        root  txt       REG                8,1    1489208   14928891 /lib/systemd/systemd
systemd      1                        root  mem       REG                8,1    1579448   14886924 /lib/x86_64-linux-gnu/libm-2.28.so

Another command that helped track the discrepancy between the df and du reported file usage on the FS is:
 

# du -hxa  / | egrep '^[[:digit:]]{1,1}G[[:space:]]*'
 

 

3. Fixing large files kept in memory filesystem problem


What is the real reason for ending up with these file handlers kept open by backgrounded programs running on the Linux OS?
The reasons could be multiple, but most likely it is due to heavy server / client interactions, filling up RAM or the HDD by endlessly writing logs to the FS and keeping the space occupied, or bugs in programming libraries used by a hanged service, leaving the file handles open on storage.

What is the solution to file system files left in memory problem?

The best solution is to first fix the custom script or hanged service, and then, if possible, simply restart the server to make the kernel / services reload; if that is not possible, just restart the processes creating the problem.

Once the process is identified like in my case this was MySQL on systemd enabled newer OS distros, just do:

 

 

# systemctl restart mysqld.service


or on older init.d system V ones:

# /etc/init.d/service restart


For custom hanged scripts listed in ps axuwef you can grep the PID and do a kill -HUP (if the script is written well enough to recognize -HUP and restart its sub-processes properly – BE EXTRA CAREFUL IF YOU'RE RESTARTING BROKEN SCRIPTS as this might cause disruptions to your running services …).

# pgrep -l script.sh
7977 script.sh


# kill -HUP PID

 

Now finally this should either mitigate or, in the best case, completely solve the reported disagreement between df and du, after which the calculated / reported disk space should be back to normal and show up approximately the same (note that the size changes a bit between the two checks, as the mysql service keeps writing data).

 

# df -hk /var/lib/mysql; du -hskc /var/lib/mysql
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb5       19097172 3472744  14631296  20% /var/lib/mysql
3427772    /var/lib/mysql
3427772    total

 

What we learned?

What I've explained in this article is why and how it comes that 'zombie' files reside on a filesystem,
appearing to eat disk space on a mounted local or network partition, giving strange inconsistent
reports, leading to system service disruptions and making it impossible to get correct information
on the used disk space of a mounted drive.

I went through some standard logic for debugging service / filesystem / inode issues, as explained above, which led me to the finding about deleted files being kept open on the filesystem, producing the strange sizes: the filesystem showing incorrect usage / appearing filled even after it was extended with tune2fs and was supposed to have an extra 50GB.

Finally, it was shortly explained how to HUP / restart a hanging script / service to fix it.

Some few good readings that helped to fix the issue:

What to do when du and df report different usage is here
df in linux not showing correct free space after file removal is here
Why do “df” and “du” commands show different disk usage?
 

Install VMWare tools on Debian and Ubuntu Linux – Enable VMWare Fullscreen and copy paste between OS host and Virtual machine

Wednesday, May 28th, 2014

install-vmware-tools-on-debian-gnu-linux-and-ubuntu-howto

If you need to use a Virtual Machine to run some testing on heterogeneous Operating Systems and you have chosen VMWare as the Virtual Machine, you will soon notice some Virtual Machine functionality, like copy between host operating system and Virtual Machine, true fullscreen mode and, most importantly, copy / paste between your host operating system and VMWare, is not working. I'm not too much into virtualization these days, so for me it was truly shocking that proprietary software like VMWare, claimed to be the best and most efficient Virtual Machine nowadays, does not support copy / paste, fullscreen and copy between host and guest OS out of the box. For those arguing why I'm using VMWare at all, as it is proprietary and there are already free software Virtual Machines like QEMU and Oracle's VirtualBox: it's simply because now I have the chance to install and use VMWare 9 Enterprise at my work place at HP with a free corporate license. In other words, I'm using VMWare just for the sake of educating myself, and I would always recommend VirtualBox to those looking for a good free substitute for VMWare.

Before trying out VMWare, I tried Virtualbox to emulate Linux on my HP work PC running Windows. With VirtualBox I was having issues with the keyboard not working (because of lack of USB support), no full screen support and lack of copy / paste between the OS-es. I've just recently understood this is not because Virtualbox is a bad virtualization solution, but because I forgot to install the Oracle VM VirtualBox Extension Pack, which adds support for USB and enables copy / paste and full screen support. The equivalent of the VirtualBox Extension Pack in the VMWare world is called VMWare-Tools, and once the guest operating system is installed inside a VMWare VM, it's necessary to install vmware-tools to enable better screen resolution and copy / paste.
 

In Windows Virtual Machine installation of vmware-tools is pretty straight forward you go through VMWare's menus

 

VM -> Install Vmware-tools

install-vmware-tools-on-linux-guest-host-os-debian-redhat-screenshot

follow the instructions and you're done. However, as always, installing VMWare-tools on Linux is a little bit more complicated: you need to run a few commands from the Linux installed inside the Virtual Machine. Here is how vmware-tools is installed on Debian / Ubuntu / Linux Mint and the rest of the Debian based operating systems:

  1. Install build-essential and gcc

You need to have some developer tools as well as the GCC compiler installed, in order for vmware-tools to compile a special Linux kernel module which enables extra support (integration) between the VMWare VM and the Linux distro installed inside the VM:

apt-get install --yes build-essential gcc
...

2. Install appropriate Linux headers corresponding to current Linux OS installed kernel

apt-get install --yes linux-headers-$(uname -r)
....

3. Mount CD (Virtual) Content to obtain the vmware-tools version for your Linux

Be sure to have first selected from the VMWare menus: VM -> Install VMware Tools.
This step is a little bit strange but just do it without too much questioning …
Depending on your distribution and where the virtual CD appears, one of the following mount invocations should work (only one is needed):


mount /dev/cdrom /mnt/
umount /media/cdrom0/
mount /media/cdrom
mount /dev/sr0 /mnt/cdrom/
mount /dev/sr0 /mnt/

 

Note that /dev/sr0 might already be mounted, and sometimes it might be necessary to unmount it first (I don't remember exactly if I unmounted it or not).

4. Copy and Untar VMwareTools-9.2.0-799703.tar.gz

cp -rpf /media/cdrom/VMwareTools-9.2.0-799703.tar.gz /tmp/
cd /tmp/
tar -zxvvf VMwareTools-9.2.0-799703.tar.gz
...

5. Run vmware-tools installer

cd vmware-tools-distrib/
./vmware-install.pl

You will be asked multiple questions; you can safely press enter to accept the default answers for all of them. Hopefully, if all runs okay, this will get VMWare Tools installed:
 

Creating a new VMware Tools installer database using the tar4 format.
Installing VMware Tools.
In which directory do you want to install the binary files?
[/usr/bin]
What is the directory that contains the init directories (rc0.d/ to rc6.d/)?
[/etc]
What is the directory that contains the init scripts?
[/etc/init.d]
In which directory do you want to install the daemon files?
[/usr/sbin]
In which directory do you want to install the library files?
[/usr/lib/vmware-tools]
The path "/usr/lib/vmware-tools" does not exist currently. This program is
going to create it, including needed parent directories. Is this what you want?
[yes]
In which directory do you want to install the documentation files?
[/usr/share/doc/vmware-tools]
The path "/usr/share/doc/vmware-tools" does not exist currently. This program
is going to create it, including needed parent directories. Is this what you
want? [yes]
The installation of VMware Tools 9.2.0 build-799703 for Linux completed
successfully. You can decide to remove this software from your system at any
time by invoking the following command: "/usr/bin/vmware-uninstall-tools.pl".
Before running VMware Tools for the first time, you need to configure it by
invoking the following command: "/usr/bin/vmware-config-tools.pl". Do you want
this program to invoke the command for you now? [yes]
Initializing…
Making sure services for VMware Tools are stopped.
Stopping VMware Tools services in the virtual machine:
Guest operating system daemon: done
Unmounting HGFS shares: done
Guest filesystem driver: done
[EXPERIMENTAL] The VMware FileSystem Sync Driver (vmsync) is a new feature that creates backups of virtual machines. Please refer to the VMware Knowledge Base for more details on this capability. Do you wish to enable this feature?
[no]
Before you can compile modules, you need to have the following installed…
make
gcc
kernel headers of the running kernel
Searching for GCC…
Detected GCC binary at "/usr/bin/gcc-4.6".
The path "/usr/bin/gcc-4.6" appears to be a valid path to the gcc binary.
Would you like to change it? [no]
Searching for a valid kernel header path…
Detected the kernel headers at "/lib/modules/3.2.0-4-amd64/build/include".
The path "/lib/modules/3.2.0-4-amd64/build/include" appears to be a valid path
to the 3.2.0-4-amd64 kernel headers.
Would you like to change it? [no]
The vmblock enables dragging or copying files between host and guest in a
Fusion or Workstation virtual environment. Do you wish to enable this feature?
[no] yes
make: Leaving directory `/tmp/vmware-root/modules/vmblock-only'

No X install found.
Creating a new initrd boot image for the kernel.
update-initramfs: Generating /boot/initrd.img-3.2.0-4-amd64
Checking acpi hot plug done
Starting VMware Tools services in the virtual machine:
Switching to guest configuration: done
VM communication interface: done
VM communication interface socket family: done
File system sync driver: done
Guest operating system daemon: done
The configuration of VMware Tools 8.6.10 build-913593 for Linux for this
running kernel completed successfully.
You must restart your X session before any mouse or graphics changes take
effect.
You can now run VMware Tools by invoking "/usr/bin/vmware-toolbox-cmd" from the
command line or by invoking "/usr/bin/vmware-toolbox" from the command line
during an X server session.
To enable advanced X features (e.g., guest resolution fit, drag and drop, and
file and text copy/paste), you will need to do one (or more) of the following:
1. Manually start /usr/bin/vmware-user
2. Log out and log back into your desktop session; and,
3. Restart your X session.
Enjoy,
–the VMware team
Found VMware Tools CDROM mounted at /mnt. Ejecting device /dev/sr0 …

To make sure the vmware-tools compiled modules are loaded into the Linux kernel inside the VM, restart the Virtual Machine. Once Linux boots again, log in, open gnome-terminal and check the vmware-tools status (i.e. whether it is properly loaded) by running:

service vmware-tools status
vmtoolsd is running
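Another way to double-check (assuming the binaries landed under /usr/bin as per the installer defaults above) is to query the tools version and look for the vmtoolsd daemon process:

/usr/bin/vmware-toolbox-cmd -v
ps aux | grep -i [v]mtoolsd

The [v] trick in the grep pattern just prevents the grep process itself from matching.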

install-vmware-tools-on-debian-gnu-linux-and-ubuntu-virtual-machine-screenshot

This method of installing works on Debian 7 (Wheezy), but the same steps should work on any Ubuntu and the rest of the Debian derivatives. On Red Hat (RPM) based Linux distributions, after mounting the CD-ROM drive following the above instructions, you will find an .rpm package instead of a .tar.gz archive, so all you have to do is install the rpm, e.g. run something like:

rpm -Uhv /mnt/cdrom/VMwareTools-9.2.0-799703.i386.rpm
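To verify the package registered properly, query the RPM database (the exact package name may differ slightly depending on the VMwareTools version shipped on the virtual CD):

rpm -qa | grep -i vmwaretools

Note that on RPM-based installs you may still need to run /usr/bin/vmware-config-tools.pl once afterwards to compile and load the kernel modules.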
Cheers 😉

What is VT-x (Intel Virtualization) and AMD V (AMD Virtualization)

Wednesday, June 4th, 2014

what-is-vt-x-inel-amd-virtualization-amd-v
As I've lately been educating myself in the field of Virtualization and Virtual Machines, an interesting question popped up: what is virtualization on a hardware level, and what are the Intel and AMD technologies supporting it?

 

  • Intel Virtualization (VT-x)

Intel VT-x is Intel's hardware assistance for processors running virtualization platforms (Intel Virtualization is known as VT-x for short). The Intel VT-x extensions are probably the best recognized virtualization extensions, adding migration, priority and memory handling capabilities to a wide range of Intel processors.
Intel VT also includes a series of chipset extensions for hardware virtualization, so that Virtual Machines can be assigned specific I/O devices. Intel Virtualization is better described here.
 

  • AMD-V (AMD Virtualization)


AMD-V is a set of hardware extensions for the x86 processor architecture. Advanced Micro Devices (AMD) designed the extensions to perform repetitive tasks normally performed by software, and to improve resource use and virtual machine (VM) performance. Early virtualization efforts relied on software emulation to replace hardware functionality, but software emulation can be a slow and inefficient process. Because many virtualization tasks were handled through software, VM behavior and resource control were often poor, resulting in unacceptable VM performance on the server. AMD Virtualization (AMD-V) technology was first announced in 2004 and added to AMD's Pacifica 64-bit x86 processor designs. By 2006, AMD's Athlon 64 X2 and Athlon 64 FX processors appeared with AMD-V technology, and today the technology is available on Turion 64 X2, second- and third-generation Opteron, Phenom and Phenom II processors. Just like Intel Virtualization, AMD-V technology enables extra hardware support for assignment of specific I/O devices per virtualized OS. AMD-V Virtualization is described more thoroughly here.
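By the way, an easy way to check from a running Linux whether the CPU advertises these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo:

egrep -o '(vmx|svm)' /proc/cpuinfo | sort -u

If the command prints nothing, the CPU does not advertise hardware virtualization at all. Note that even when the flag is present, the feature can still be disabled in the BIOS / UEFI, which the hypervisor will complain about when it tries to load.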

 

Windows: VMWare Start / Stop from command line stop-vmware.bat / start-vmware.bat script

Wednesday, June 4th, 2014

vmware_start-stop-from-command-line-on-windows-os-bat-script
I'm experimenting with different Virtual Machines these days. Often, running VMWare together with other virtualization software (like VirtualBox) might cause crashes or VM instability, hence it is always best to have VMWare completely stopped while the other hypervisor runs. Unfortunately, VMWare keeps running a number of respawning processes (vmnat.exe, vmnetdhcp.exe, vmware-authd.exe, vmware-usbarbitrator64.exe) which cannot be killed from Task Manager with the End Process Tree option. Thus, to make these services stop, it is necessary to run the following from cmd.exe (started with Run as Administrator):
 

NET STOP "VMware Workstation Server"
NET STOP "VMware USB Arbitration service"
NET STOP "VMware NAT Service"
NET STOP "VMware DHCP Service"
NET STOP "VMware Authorization Service"

If you will be doing regular START / STOP of VMWare on Windows servers, it is handy to create a little batch script stop-vmware.bat containing:
 

@ECHO OFF
NET STOP "VMware Workstation Server"
NET STOP "VMware USB Arbitration service"
NET STOP "VMware NAT Service"
NET STOP "VMware DHCP Service"
NET STOP "VMware Authorization Service"


Later, when it is necessary to start VMWare from the Windows command line, start the above services in reverse order (to prevent warnings or errors from dependent VMWare services):

NET START "VMware Authorization Service"
NET START "VMware DHCP Service"
NET START "VMware NAT Service"
NET START "VMware USB Arbitration service"
NET START "VMware Workstation Server"

To script the start as well, create a file start-vmware.bat with:
 

NET START "VMware Authorization Service"
NET START "VMware DHCP Service"
NET START "VMware NAT Service"
NET START "VMware USB Arbitration service"
NET START "VMware Workstation Server"


Of course, it is also possible to stop / start VMWare from the Windows Services GUI (services.msc) by right-clicking on the services whose names begin with VMware and selecting "Start" / "Stop".
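For convenience, both directions can live in one script. Below is a minimal sketch (a hypothetical vmware-ctl.bat, not something VMWare ships) that takes start or stop as its first argument and keeps the dependency-safe ordering from above:

@ECHO OFF
REM vmware-ctl.bat - start or stop all VMware Workstation services in dependency-safe order
IF "%1"=="stop" (
NET STOP "VMware Workstation Server"
NET STOP "VMware USB Arbitration service"
NET STOP "VMware NAT Service"
NET STOP "VMware DHCP Service"
NET STOP "VMware Authorization Service"
) ELSE IF "%1"=="start" (
NET START "VMware Authorization Service"
NET START "VMware DHCP Service"
NET START "VMware NAT Service"
NET START "VMware USB Arbitration service"
NET START "VMware Workstation Server"
) ELSE (
ECHO Usage: vmware-ctl.bat start ^| stop
)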