Posts Tagged ‘mount dev’

Resize KVM .img QCOW Image file and Create new LVM partition and ext4 filesystem inside KVM Virtual Machine

Friday, November 10th, 2023

LVM-add-space-to-RHEL-Linux-on-KVM_Virtual_machine-howto

Part of a migration project for a customer I'm working on is the migration of a couple of KVM based Guest virtual machine servers. The old machines have a backup strategy using IBM's TSM, and the new machines should use the cheaper solution adopted by the customer company, the CommVault backup solution (an enterprise software that is used for data backup and recovery, not only to a local Tape Library / data blobs on central backup servers infra, but also in Cloud infrastructure).

To install the CommVault software on the Redhat Linux-es, the official install documentation (prepared by the team who prepared the CommVault infrastructure for the customer) recommends having a separate partition for the CommVault backups under the /opt directory (/opt/commvault), and the partition should be at minimum 10 Gigabytes in size. 

Unfortunately, on our newly prepared KVM VM guest machines it was forgotten to have the separate 10GB /opt prepared in advance, and we ended up with Virtual Machines that have a / (root directory) of 68GB size and separate /var and /home LVM partitions. Thus, to correct the issue, it was required to find a way to add another separate LVM partition inside the KVM VirtualMachine.img (QCOW image file). 

This seemed to be an easy task at first, as that might be possible with a simple .img loop mapping through the losetup and kpartx commands and a simple lvreduce command, in some way such as:

# losetup /dev/loop0 /kvm/VM.img

# kpartx -a /dev/loop0
# kpartx -l /dev/loop0
# ls -al /dev/mapper/*

# mount /dev/mapper/<mapped-partition> /mnt/test/

… 

# lvreduce 

etc. However, unfortunately kpartx, though not returning an error, did not provide the new /dev/mapper devices to be used with the LVM tools, so this approach seems not to be possible on RHEL 8.8, as kpartx couldn't list the mapped partitions.

 

A colleague of mine, Mr. Paskalev, suggested that we could perhaps try to mount the partition with the default KVM tool to mount .img partitions, which is guestmount, with a command like:
 

# guestmount -a /kvm/VM.img -i --rw /mnt/test/

Unfortunately this mounted the filesystem via FUSE, and the LVM /dev/mapper devices of the VM can't be seen, so we decided to abandon this method.

After some pondering with Dimitar Paskalev and Dimitar Hristov, thanks to joint efforts we found the way to do it; below are the steps we followed to succeed in creating the new LVM ext4 partition required.
 

1. Check that enough space is available on the HV machine

 

The VMs are held under /kvm so in this case:

[root@hypervisor-host ~]# df -h|grep -i /kvm
/dev/mapper/vg00-vmprivate  206G   27G  169G  14% /kvm

 

2. Shutdown the running VM and make sure it is stopped
 

[root@hypervisor-host ~]# virsh shutdown vm-host

 

[root@hypervisor-host ~]# virsh list --all
 Id   Name       State
--------------------------
 4    lpdkv01f   running
 5    vm-host    shut off

 

3. Check the current space status of the VM

 

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img       
image: /kvm/vm-host.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 8.62 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false

 

4. Resize (extend) the VM image with whatever size you want    
 

[root@hypervisor-host ~]# qemu-img resize /kvm/vm-host.img +10G
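
To confirm the image really grew, you can re-run the info command from step 3; the virtual size should now report 110 GiB:

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img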

 

5. Start VM    
 

[root@hypervisor-host ~]# virsh start vm-host


6. Check the LVM and block devices on the HV (not necessary but good for an overview)
 

[root@hypervisor-host ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda2  vg00 lvm2 a--  277.87g 19.87g
  
[root@hypervisor-host ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg00   1  11   0 wz--n- 277.87g 19.87g

 

[root@hypervisor-host ~]# lsblk 
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk 
├─sda1               8:1    0     1G  0 part /boot
└─sda2               8:2    0 277.9G  0 part 
  ├─vg00-root      253:0    0    15G  0 lvm  /
  ├─vg00-swap      253:1    0     1G  0 lvm  [SWAP]
  ├─vg00-var       253:2    0     5G  0 lvm  /var
  ├─vg00-spool     253:3    0     2G  0 lvm  /var/spool
  ├─vg00-audit     253:4    0     3G  0 lvm  /var/log/audit
  ├─vg00-opt       253:5    0     2G  0 lvm  /opt
  ├─vg00-home      253:6    0     5G  0 lvm  /home
  ├─vg00-tmp       253:7    0     5G  0 lvm  /tmp
  ├─vg00-log       253:8    0     5G  0 lvm  /var/log
  ├─vg00-cache     253:9    0     5G  0 lvm  /var/cache
  └─vg00-vmprivate 253:10   0   210G  0 lvm  /vmprivate

  
7. Check logical volumes on the Hypervisor host
 

[root@hypervisor-host ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg00/swap
  LV Name                swap
  VG Name                vg00
  LV UUID                3tNa0n-HDVw-dLvl-EC06-c1Ex-9jlf-XAObKm
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                JBerim-fxVv-jU10-nDmd-figw-4jVA-8IYdxU
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/vg00/spool
  LV Name                spool
  VG Name                vg00
  LV UUID                nFlmp2-iXg1-tFxc-FKaI-o1dA-PO70-5Ve0M9
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/vg00/audit
  LV Name                audit
  VG Name                vg00
  LV UUID                e6H2OC-vjKS-mPlp-JOmY-VqDZ-ITte-0M3npX
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/vg00/opt
  LV Name                opt
  VG Name                vg00
  LV UUID                oqUR0e-MtT1-hwWd-MhhP-M2Y4-AbRo-Kx7yEG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                ehdsH7-okS3-gPGk-H1Mb-AlI7-JOEt-DmuKnN
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/vg00/tmp
  LV Name                tmp
  VG Name                vg00
  LV UUID                brntSX-IZcm-RKz2-CP5C-Pp00-1fA6-WlA7lD
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/vg00/log
  LV Name                log
  VG Name                vg00
  LV UUID                ZerDyL-birP-Pwck-yvFj-yEpn-XKsn-sxpvWY
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8

  --- Logical volume ---
  LV Path                /dev/vg00/cache
  LV Name                cache
  VG Name                vg00
  LV UUID                bPPfzQ-s4fH-4kdT-LPyp-5N20-JQTB-Y2PrAG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9

  --- Logical volume ---
  LV Path                /dev/vg00/root
  LV Name                root
  VG Name                vg00
  LV UUID                mZr3p3-52R3-JSr5-HgGh-oQX1-B8f5-cRmaIL
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                LxNRWV-le3h-KIng-pUFD-hc7M-39Gm-jhF2Aj
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-09-18 11:54:19 +0200
  LV Status              available
  # open                 1
  LV Size                210.00 GiB
  Current LE             53760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

8. Check the Hypervisor's existing partitions and space
 

[root@hypervisor-host ~]# fdisk -l
Disk /dev/sda: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0581e6e2

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/mapper/vg00-root: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-spool: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-audit: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-opt: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-tmp: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-log: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-cache: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-vmprivate: 210 GiB, 225485783040 bytes, 440401920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

9. List block devices on the VM
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:2    0   10G  0 lvm  /home
│ └─vg00-var       253:3    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 

 

 

10. Create new LVM partition with fdisk or cfdisk
 

If there is no cfdisk, the new space resized with qemu-img can be set up with fdisk, though I personally always prefer to use cfdisk:

[root@vm-host ~]# fdisk /dev/vda
# > p (print the partition table)
# > m (help menu)
# > n (new partition)
# … follow on-screen instructions to select start and end blocks
# > t (change partition type)
# > select the new partition and set its type to 8e (Linux LVM)
# > w (write changes)

[root@vm-host ~]# cfdisk /dev/vda


Set up the new partition from the Free space as a [ primary ] partition and choose it to be of type LVM.
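
If you prefer a fully non-interactive approach, on recent util-linux versions the same partition can be appended with sfdisk; a minimal sketch, assuming all remaining free space should become the new LVM-type (8e) partition:

[root@vm-host ~]# echo ',,8e' | sfdisk --append /dev/vda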


11. List partitions to make sure the new LVM partition is present
 

[root@vm-host ~]# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7b2d9fd

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048   2099199   2097152   1G 83 Linux
/dev/vda2         2099200 186646527 184547328  88G 8e Linux LVM
/dev/vda3       186646528 188743679   2097152   1G 82 Linux swap / Solaris
/dev/vda4       188743680 209715199  20971520  10G 8e Linux LVM

The extra added 10 Gigabytes are seen under /dev/vda4.

 

12. List LVM Physical Volumes
 

[root@vm-host ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8

 


  
Notice that /dev/vda4 is not seen in pvdisplay (the Physical Volume display command), because it is not initialized as an LVM Physical Volume yet, so let's create it.
 

13. Initialize the new Physical Volume to be available for use by LVM
 

[root@vm-host ~]# pvcreate /dev/vda4
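
To verify the Physical Volume got created (vda4 should now show up, with no VG assigned yet):

[root@vm-host ~]# pvs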


14. Inform the OS of partition table changes
 

If partprobe is not available as a command on the host, the below obscure command should do the trick:
 

[root@vm-host ~]# echo "- - -" | tee /sys/class/scsi_host/host*/scan

However, usually it is better to use partprobe to inform the Operating System of partition table changes:

[root@vm-host ~]# partprobe


15. Use lsblk again to see that the new /dev/vda4 LVM partition is listed under the "vda" root block device
 

[root@vm-host ~]# 
[root@vm-host ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0  100G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
├─vda2        252:2    0   88G  0 part 
│ ├─vg00-root 253:0    0   68G  0 lvm  /
│ ├─vg00-home 253:1    0   10G  0 lvm  /home
│ └─vg00-var  253:2    0   10G  0 lvm  /var
├─vda3        252:3    0    1G  0 part [SWAP]
└─vda4        252:4    0   10G  0 part 
[root@vm-host ~]# 


16. Create new Volume Group (VG) on the /dev/vda4 block device
 

Before creating a new VG, list the VGs present on the machine, to be sure the name for the new one is not already taken.
 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60

Only vg00 exists, so we can use vg01 as the Volume Group name for the new volume group where the fresh 10GB LVM partition will live:

[root@vm-host ~]# vgcreate vg01 /dev/vda4
  Volume group "vg01" successfully created

 

17. Create new Logical Volume (LV) and extend it to occupy the full space available on Volume Group vg01

 

 

[root@vm-host ~]# lvcreate -n commvault -l 100%FREE vg01
  Logical volume "commvault" created.

An alternative way to create the same LV is by specifying the size explicitly (though with this VG being slightly under 10 GiB, -l 100%FREE is the safer choice):

lvcreate -n commvault -L 10G vg01
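
Either way, you can confirm the LV geometry before formatting it:

[root@vm-host ~]# lvs vg01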


18. Relist block devices with lsblk to make sure the newly created Logical Volume commvault is really present and seen; in case it is missing, re-run partprobe again
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:1    0   10G  0 lvm  /home
│ └─vg00-var       253:2    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 
  └─vg01-commvault 253:3    0   10G  0 lvm  

 

As it is not mounted yet, the new LV will not be seen in df's free space output, but it will be seen as part of a volume group with vgdisplay:
 

[root@vm-host ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                    2.8G     0  2.8G   0% /dev
tmpfs                       2.8G   33M  2.8G   2% /dev/shm
tmpfs                       2.8G   17M  2.8G   1% /run
tmpfs                       2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/vg00-root        67G  2.4G   61G   4% /
/dev/mapper/vg00-var        9.8G 1021M  8.3G  11% /var
/dev/mapper/vg00-home       9.8G   24K  9.3G   1% /home
/dev/vda1                   974M  242M  665M  27% /boot
tmpfs                       569M     0  569M   0% /run/user/0

 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       2559 / <10.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nYP0tv-IbFw-fBVT-slBB-H1hF-jD0h-pE3V0S
   
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Snwz-utO7-Aw8c60
  


19. Create new ext4 filesystem on the just created vg01-commvault   
 

[root@vm-host ~]# mkfs.ext4 /dev/mapper/vg01-commvault 
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 1491d8b1-2497-40fe-bc40-5faa6a2b2644
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


20. Mount vg01-commvault under the /opt directory
 

[root@vm-host ~]# mkdir -p /opt/

[root@vm-host ~]# mount /dev/mapper/vg01-commvault /opt/


21. Check the mount is present on the VM guest OS
 

[root@vm-host ~]# mount|grep -i /opt
/dev/mapper/vg01-commvault on /opt type ext4 (rw,relatime)
[root@vm-host ~]# 

[root@vm-host ~]# df -h|grep -i opt
/dev/mapper/vg01-commvault  9.8G   24K  9.3G   1% /opt
[root@vm-host ~]# 
 

22. Add vg01-commvault to be auto-mounted via /etc/fstab on next Virtual Machine reboot
 

[root@vm-host ~]# echo '/dev/mapper/vg01-commvault /opt         ext4            defaults        1        2' >> /etc/fstab
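
To make sure the new fstab line is correct before any reboot, you can unmount the volume and let mount -a (which processes /etc/fstab) remount it:

[root@vm-host ~]# umount /opt
[root@vm-host ~]# mount -a
[root@vm-host ~]# df -h|grep -i opt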

23. Install the CommVault backup client RPM into the new LVM volume mounted under /opt

[root@vm-host ~]# rpm -ivh commvault-fs.Instance001-11.0.0-80.240.0.3589820.240.4083067.el8.x86_64.rpm

[root@vm-host ~]# systemctl status commvault
● commvault.Instance001.service - commvault Service
   Loaded: loaded (/etc/systemd/system/commvault.Instance001.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 15:13:59 CET; 27s ago
  Process: 9972 ExecStart=/opt/commvault/Base/Galaxy start direct -focus Instance001 (code=exited, status=0/SUCCESS)
    Tasks: 54
   Memory: 155.5M
   CGroup: /system.slice/commvault.Instance001.service
           ├─10132 /opt/commvault/Base/cvlaunchd
           ├─10133 /opt/commvault/Base/cvd
           ├─10135 /opt/commvault/Base/cvfwd
           └─10137 /opt/commvault/Base/ClMgrS

Nov 10 15:13:57 vm-host.ffm.de.int.atosorigin.com systemd[1]: Starting commvault Service…
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Cleaning up /opt/commvault/Base/Temp …
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Starting Commvault services for Instance001 …
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: [22B blob data]
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com systemd[1]: Started commvault Service.
[root@vm-host ~]# 

 


KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

In case you haven't tried it yet: Redhat, CentOS and other RPM based Linux operating systems that use the anaconda installer generate a kickstart file under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg} immediately after the OS installation completes. Using this Kickstart file template you can automate installation of Redhat with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without need for any intervention from the user. This is especially useful when deploying a Redhat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general pretty useful if you're into the field of so called "DevOps" system administration and you need to provision a certain set of OS to a multitude of physical servers, or to create and recreate virtual machines with a certain set of configuration easily.
 

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the Hypervisor host, which will hold the future virtual machines, is to create the location where they will reside:

[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir -p /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view the current situation with the Logical Volumes and VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

 

  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time main.hostname.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have that size physically available on a SAS / SSD hard drive connected to the Hypervisor host.

To make the Virtual Machines storage location directory permanently mounted, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab

 

2. Install the following set of RPM packages on the Hypervisor hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
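
Note that enable only configures the service to start on boot; to use libvirt right away, also start it and check that it is running:

[root@redhat ~]#  systemctl start libvirtd
[root@redhat ~]#  systemctl status libvirtd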

4. Configure a network bridge (br0) interface on the Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 you need to include:

NM_CONTROLLED=no

Next use nmcli, the Redhat network configurator, to create the bridge (you could use the ip command instead), but since the tool is the Redhat way to do it, let's do it their way:

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
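
To double-check that the bridge came up with the expected addressing (device and connection names as above):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# ip addr show br0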

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a working fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa) .

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda
# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS
# Use text mode install
text
#graphical
# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8
# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted
# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain
# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
# skipx
skipx
# Firewall configuration
firewall --disabled
# System timezone
timezone Europe/Berlin
# Clear the Master Boot Record
##zerombr
# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos    --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos      --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream   --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS
# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug
# reboot automatically
reboot
###
%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improving overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf
# Disable kdump
systemctl disable kdump.service
%end
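
Before feeding the file to the installer, it is a good idea to syntax-check it with ksvalidator from the pykickstart package (assuming the package is available in your repositories):

[root@redhat ~]# yum install -y pykickstart
[root@redhat ~]# ksvalidator kickstart.cfg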

An important note to make here is the encrypted (SHA-512) root password string in the (rootpw) line; this string can be generated with the openssl or mkpasswd commands:

Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 password hash, -5 a SHA256 one and -6 a SHA512 encrypted string (logically recommended for better security)

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
Theoretically there is also a kickstart file generator web interface on Redhat's site, however I have never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


To roll out the new preconfigured VM based on the above ks template file, use a one-liner command like the one below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For some clarity, and for better automation in case you plan to repeat the VM creation, you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram="$RAM" --vcpus="$CPUS" --location="$ISO_LOCATION" --disk path="$VM_IMG_FILE_LOC",bus=virtio,size="$VM_IMG_SIZE" --graphics none --initrd-inject=/root/"$KS_FILE" --extra-args "console=ttyS0 ks=file:/$KS_FILE"


A copy of virt-install.sh script can be downloaded here
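
Assuming you saved the script as /root/virt-install.sh, make it executable and run it:

[root@redhat ~]# chmod +x /root/virt-install.sh
[root@redhat ~]# /root/virt-install.sh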

Wait for the installation to finish; it should be visualized on the serial console. If the installation went smoothly, you should get a login prompt: use the password generated with the openssl tool and test the login, then disconnect from the machine by pressing CTRL + ] and try to login via TTY with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
--------------------------------------
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing: I recommend you check the official documentation on Kickstart2 from the CentOS official website.

In case you later need to destroy the VM and the respective created image file, you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine

Don't forget to celebrate the success and give this nice article a credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

How to Increase virtualbox Linux install machine VM .VDI hard disk size to free space on root partition – Move /usr to a new partition

Tuesday, October 10th, 2017

extend-vdi-virtual-machine-partition-with-virtualbox-howto-virtualbox-logo
 

How to Increase the Hard Disk size of a VirtualBox Virtual Machine .VDI file to Free Space on the root Partition, or how to move a large /usr partition to a separate new partition?


I just wondered how to increase the hard disk size of a VirtualBox Virtual Machine .VDI image, because for some stupid reason I had created my initial hard disk size for the Linux partition to be the default 10 Gigabytes.

The problem is that the packages I need installed on the Virtual Machine, which will be a testbed for future tests of production website applications, take up too much space once installed, so I'm left with no space in /var/lib/mysql for the database import. So what I can do in that case is to simply free up disk space or merge the ROOT partition with another partition.

Since merging the partitions is not a trivial job and would require me to have an installable CD with the Linux distro (in my case that's Debian Linux) or a bootable USB flash drive, I preferred the second approach to the problem, i.e. to free up disk space on the ROOT partition by creating a second partition and moving the /usr folder to reside there.

Before that it is of course necessary to have extended the .VDI file using VirtualBox, so more space than the default preconfigured 10GB is available. This is easily done on Windows OS, as VBox is provided with a GUI clickable option to do it, but for who knows what reason that is not the case on Linux, so the Linux user's only option to increase a VDI file is to manually run a command that is part of the virtualbox package. That is not a hard task really, but it requires some typing and basic knowledge on how to run commands in a terminal.


To resize (extend) the .VDI, we first go to the default location where VirtualBox stores its image .VDI files (by default, as of the moment of writing this article, this is ~/"VirtualBox VMs", i.e. the directory VirtualBox VMs in the home directory of the logged in user); the command to use is VBoxManage:

 

root@jericho:/home/hipo# cd "VirtualBox VMs/"
root@jericho:/home/hipo/VirtualBox VMs# ls
Debian 6  Debian 9  Windows 10
root@jericho:/home/hipo/VirtualBox VMs# cd "Debian 6/"
root@jericho:/home/hipo/VirtualBox VMs/Debian 6# ls
Debian 6.vbox  Debian 6.vbox-prev  Debian 6.vdi  Logs  NewVirtualDisk1.vdi  Snapshots

root@jericho:/home/hipo/VirtualBox VMs/Debian 6# VBoxManage modifyhd "Debian 6.vdi" --resize 20000
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
root@jericho:/home/hipo/VirtualBox VMs/Debian 6#


 

The above command resizes the default created 10GB partition for Linux (where I have installed Linux, and which was 99% full of data because of the many packages I installed) to 20GB. To make it bigger just use the respective size, be it 30000 (for 30GB) or 100000 (for 100GB) etc.

Even though in this example the VBoxManage virtual disk resize command was done for a GNU / Linux Operating System, it can be done for any other Operating System as well, to resize the size of the Virtual .VDI file (Virtual Machine) partition, be it Windows 7 / 8 / 10 or the rest of the free Operating systems FreeBSD / OpenBSD / BSD that are installed in a VM etc.
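
A small note: on newer VirtualBox releases modifyhd is kept only as a deprecated alias, and the equivalent invocation there would be:

root@jericho:/home/hipo/VirtualBox VMs/Debian 6# VBoxManage modifymedium disk "Debian 6.vdi" --resize 20000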

Next, launch the Virtual Machine with the VirtualBox client program and install GParted there, as we'll need it to create a new hard disk partition:

 

$ VirtualBox

 


Inside the virtual machine, in gnome-terminal / xterm etc., depending on the graphical environment used, install it with apt-get:

 

debian:# apt-get install --yes gparted

 

debian~:# gparted


Notice that gparted has to be run as the root superuser.

 

Run GParted and create a new EXT3 filesystem of 10GB (the size of the newly created partition).

If you have installed Debian with all partitions under / (root directory /dev/sda1), then the fresh new partition to create should be /dev/sda3; anyway, just look closely in GParted, and if the partition is named differently in your case, create it according to the proper /dev/ partition naming.

I'll not go into details on how to create the partition with GParted, as the program interface is very self-explanatory; the only thing is to apply the update to create the partition and the ext3 filesystem, which is done with the green tick:

gparted-create-ext3-partition-howto-linux

The next step is to check with fdisk whether the ext3 filesystem we've just created with GPARTED is properly in place:

Once we have the partition created with an EXT3 filesystem, we're ready to move /usr temporarily to another folder; I usually use /root for the move, but you can create a new folder anywhere for that and move it there.

To move it to the /root directory, run again in a terminal:

 

debian:~# mv /usr /root
debian:~# mkdir /usr

 

howto-extend-root-filesystem-disk-space-linux-move-usr-folder-to-root-temporary-debian-gnu-linux

 

Note that during the move operations your Desktop icons will appear with broken pictures and the default Debian background picture will disappear; that's because the GUI environment will soon realize the /usr libraries that are periodically reloaded in memory are missing, and will be unable to reload them as it does in a cycle.

That should take a few minutes, so grab a coffee, or if you're a smoker (hope not, as smoking kills 🙂 ) have a smoke; in 5 / 10 minutes time, depending on your computer / server configuration, it will be over, and we're ready to mount the new partition under the freshly created empty /usr:

 

debian:~# mount /dev/sda3 /usr

 

howto-extend-root-filesystem-disk-space-linux-move-usr-folder-to-root-temporary-debian-gnu-linux1

 

Now we check with the mount command whether the mount is fine:

 

check-if-filesystem-is-properly-mounted-linux-debian-screenshot


Now /dev/sda3 is mounted under /usr, and we have to move the /root/usr directory content back to the newly mounted /usr, so we run:


debian:~# mv /root/usr/* /usr/


Finally we need to create a proper record for the new partition inside /etc/fstab (fstab – the FileSystem Table file, the file which instructs the Linux OS what partition to mount where at boot).

HOW TO CHECK THE LINUX UUID OF A PARTITION?

Before adding anything to /etc/fstab, you need to check the UUID of /dev/sda3 (or whatever the partition is called); without the proper UUID, the system might fail to boot.
So here is how to check the UUID we'll need for the config:

 

hipo@debian:~$ /sbin/blkid /dev/sda3
/dev/sda3: UUID="2273db4b-3069-4f78-90fc-e7483c0305bd" SEC_TYPE="ext2" TYPE="ext3"

hipo@debian:~$ ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 100 Oct  6 05:16 .
drwxr-xr-x 6 root root 120 Oct  6 05:16 ..
lrwxrwxrwx 1 root root  10 Oct  6 05:16 2273db4b-3069-4f78-90fc-e7483c0305bd -> ../../sda3
lrwxrwxrwx 1 root root  10 Oct  6 05:16 b98d92cd-41aa-4e18-a474-9b8df445dbe5 -> ../../sda1
lrwxrwxrwx 1 root root  10 Oct  6 05:16 f27f7448-f200-4983-b54f-b9e5206f77ac -> ../../sda5

As you can see our /dev/sda3 UUID is 2273db4b-3069-4f78-90fc-e7483c0305bd

Further on, let's view and edit /etc/fstab; you can also download a copy of my Virtual Machine's fstab here

 

debian:~# cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/sda1 during installation
UUID=b98d92cd-41aa-4e18-a474-9b8df445dbe5 /               ext3    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=f27f7448-f200-4983-b54f-b9e5206f77ac none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/scd1       /media/cdrom1   udf,iso9660 user,noauto     0       0


We need to add the following line to /etc/fstab:
 

UUID=2273db4b-3069-4f78-90fc-e7483c0305bd    /usr        ext3    errors=remount-ro    0    2

 


Open the file with your favourite text editor (gedit / nano / pico / vim / joe) etc.

debian:~# vim /etc/fstab

 

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/sda1 during installation
UUID=b98d92cd-41aa-4e18-a474-9b8df445dbe5 /               ext3    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
UUID=f27f7448-f200-4983-b54f-b9e5206f77ac none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/scd1       /media/cdrom1   udf,iso9660 user,noauto     0       0

UUID=2273db4b-3069-4f78-90fc-e7483c0305bd     /usr        ext3    errors=remount-ro    0    2    

Basically it should also be possible (for historic reasons) to use /dev/sda3 instead of UUID=2273db4b-3069-4f78-90fc-e7483c0305bd, but the better practice is to use the UUID line given above.
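
If you prefer not to open an editor at all, the same line can be appended in the echo style used earlier in this post (with the UUID found via blkid above):

debian:~# echo 'UUID=2273db4b-3069-4f78-90fc-e7483c0305bd    /usr    ext3    errors=remount-ro    0    2' >> /etc/fstab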

Well, that's all folks; now the /usr directory will contain all your heavy, root-partition-filling libraries and files, so you can happily use /var/lib/mysql or /var/lib/postgresql or /var/www to store your web application files and import your databases.

Big thanks to Ubuntu Forums article – How do I increase the hard disk size of the Virtual Machine article for inspiring this post.

Hope that helps someone; thanks, and comments are always welcome! 🙂

Migrate Webserver and SQL data from old SATA Hard drive to SSD to boost websites performance / Installing new SSD KINGSTON 120GB hard disk on Linux

Monday, March 28th, 2016

ssd-linux-migrate-webserver-and-mysql-from-old-SATA-to-SSD-Kingston-Hard-drive-to-boost-performance-installing-new-SSD-on-Debian-linux
The blog and websites hosted on a server were giving bad performance lately; the old SATA Hard Disk on the Lenovo Edge server seemed to be overloaded with In/Out operations, thus slowing down the websites' opening times as well as SQL queries (especially the ones from the Related Posts WordPress plugin were quite slow). Sometimes my blog site opening times were up to 8-10 seconds.

To deal with the issue, I obviously needed better hard drive I/O speed, thus, as I had never used SSD hard drives so far, I decided to buy a new SSD (Solid State Drive) KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133 hard disk.
Mounting the hard disk physically in the computer tower case wasn't a big deal: as there are no rotating elements in an SSD, it doesn't really matter how it is mounted, the main thing is that it is hooked up somewhere to the case.

I was not sure whether the SSD HDD would be supported by my Debian GNU / Linux, so I had to see whether the Linux Operating System had properly detected the hard disk, using dmesg.

1. Check if SSD Hard drive is supported in Linux

 

linux:~# dmesg|grep -i kingston
[    1.182734] ata5.00: ATA-8: KINGSTON SV300S37A120G, 605ABBF2, max UDMA/133
[    1.203825] scsi 4:0:0:0: Direct-Access     ATA      KINGSTON SV300S3 605A PQ: 0 ANSI: 5

 

linux:~# dmesg|grep -i sdb
[    1.207819] sd 4:0:0:0: [sdb] 234441648 512-byte logical blocks: (120 GB/111 GiB)
[    1.207847] sd 4:0:0:0: [sdb] Write Protect is off
[    1.207848] sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    1.207860] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.207928]  sdb: unknown partition table
[    1.208319] sd 4:0:0:0: [sdb] Attached SCSI disk

 

Well, great news: as you can see from the above output, the Kingston SSD HDD was detected by the kernel.
I also inspected whether the proper dimensions of the hard drive (all 120 Gigabytes) are detected by the OS:

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Even better, the proper HDD sizing was detected by the Linux kernel.
The next thing to do was of course to create ext4 filesystems on the SSD HDD.
I wanted to have 2 separate partitions: one for my Webserver's website DocumentRoot directories, which all lay under the standard Apache location inside /var/www, and one for the MySQL data folder, which under standard Debian based Linuces is /var/lib/mysql. As the SQL data directory was just 3.3 GB in size, I decided to reserve 20 gigabytes for MySQL and another 100 GB for my PHP / CSS / JS / HTML and other data files in /var/www.
 

2. Create SSD partitions with cfdisk

Hence I needed to create:

1. SSD partition of 100GB
2. SSD partition of 20GB

I have cfdisk installed, and I believe the easiest way to create the partitions is using an interactive partitioner such as CFDISK instead of fdisk, so in order to make the proper partitions I ran:

 

linux:~# cfdisk /dev/sdb


I will skip explaining the details on how to use CFDISK, as it is a pretty standard display-or-manipulate-disk-partition-table tool. Just press the NEW button (moving with the arrow keys) and choose the 2 partition sizes, 100000 and 20000 MB (one thing to note here is that you have to choose between Primary and Logical creation of partitions), and then press the WRITE button to save all the partition changes.

!!! Be very careful here, as you might destroy the data on your other disks; make sure you're really modifying the SSD Hard Drive and not your other /dev/sda or another attached external hard drive or ATA / SATA disk.
Press the WRITE button only once you're absolutely sure; you do it at your own risk (always create a backup of your other data and don't blame me if something goes wrong) …

Once created the two partitions will look like in the screenshot below:
creating-linux-partitions-with-cfdisk-linux-partitioning-tool.png

 


3. Create ext4 filesystem 100 and 20 GB partitions

The next thing to do, before the two partitions are ready to mount under the Webserver's files documentroot /var/www and /var/lib/mysql, is to create the ext4 filesystems. Though some might prefer to stick to ext3 or a reiserfs partition, I would recommend you use ext4, for the reason that ext4, according to my quick research, is said to perform much better with SSD hard drives.

The tool to create the ext4 filesystems is mkfs.ext4; it is provided by the Debian package e2fsprogs. I have it already installed on my server; if you don't have it, just go on and install it with:
 

linux:~# apt-get install --yes e2fsprogs

 

To create the two ext4 partitions run:
 

linux:~# mkfs.ext4 /dev/sdb5

 

linux:~# mkfs.ext4 /dev/sdb6


Here the EXT4 filesystem creation on the partition that is supposed to be 100 Gigabytes will take 2-3 minutes, as the dimensions of the partition are a bit bigger, so if you don't want to get bored go grab a coffee. Once the partitions are ready, you can evaluate whether everything is properly created with fdisk; you should get output like the one below:

 

linux:~# fdisk -l /dev/sdb
Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   234441647   117220792+   5  Extended
/dev/sdb5             126    39070079    19534977   83  Linux
/dev/sdb6        39070143   234441647    97685752+  83  Linux

 

4. Mount newly created SSD partitions under /var/www and /var/lib/mysql

Before mounting /var/www and /var/lib/mysql, in order to be able to mount under the already existing directories, I had to:

1. Stop the Apache and MySQL servers
2. Move the MySQL and Apache DocumentRoot and data directories to -bak
3. Create new empty /var/www and /var/lib/mysql directories
4. Copy the backups ( /var/www-bak and /var/lib/mysql-bak ) to the newly mounted ext4 SSD partitions

To achieve that I had to issue following commands:
 

linux:~# /etc/init.d/apache2 stop
linux:~# /etc/init.d/mysql stop

linux:~# mv /var/www /var/www-bak
linux:~# mv /var/lib/mysql /var/lib/mysql-bak

linux:~# mkdir /var/www
linux:~# mkdir /var/lib/mysql
linux:~# chown -R mysql:mysql /var/lib/mysql


Then to manually mount the SSD partitions:
 

linux:~# mount  /dev/sdb5 /var/lib/mysql
linux:~# mount /dev/sdb6 /var/www


To check that the folders are mounted on the SSD drive, run the mount command:

 

linux:~# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/sdc1 on /backups type ext4 (rw)
/dev/sdb5 on /var/lib/mysql type ext4 (rw,relatime,discard,data=ordered)
/dev/sdb6 on /var/www type ext4 (rw,relatime,discard,data=ordered)

 

That's great, now the filesystems mount fine. However, as it is an SSD drive, and SSD drives are famous for having a limited number of disk writes before the drive's lifetime is over, it is a good idea to increase the lifetime of the SSD a bit by mounting the SSD partitions with noatime and errors=remount-ro (in order not to log file access times to the filesystem table, and to remount the FS read-only in case of some physical errors on the drive).

5. Configure SSD partitions to boot every time Linux reboots

Now great, the filesystems get mounted fine, so the next thing to do is to make them automatically mount every time the Linux OS boots up; on GNU / Linux this is done through /etc/fstab. For my 2 ext4 partitions this is the content to add at the end of /etc/fstab:

 

/dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro       0       1
/dev/sdb6               /var/www        ext4    noatime,errors=remount-ro       0       1

 

The quickest way to add it without a text editor is to echo it to the end of the file:
 

linux:~# cp -rpf /etc/fstab /etc/fstab.bak_25_03_2016
linux:~# echo ' /dev/sdb5               /var/lib/mysql      ext4        noatime,errors=remount-ro,discard       0       1' >> /etc/fstab
linux:~# echo ' /dev/sdb6               /var/www        ext4    noatime,errors=remount-ro,discard       0       1 ' >> /etc/fstab


Then mount again all the filesystems including the 2 new created SSD (100 and 20 GB) partitions:
 

linux:~# umount /var/www
linux:~# umount /var/lib/mysql
linux:~# mount -a


To make sure they are properly mounted with the noatime and errors=remount-ro options:


linux:~# mount | grep -i sdb
/dev/sdb5 on /var/lib/mysql type ext4 (rw,noatime,errors=remount-ro)
/dev/sdb6 on /var/www type ext4 (rw,noatime,errors=remount-ro)

 

It is also a good idea to check the disk free statistics with df:
 

linux:~# df -h|grep -i sdb
/dev/sdb5         19G  0G    19G  0% /var/lib/mysql
/dev/sdb6         92G   0G    92G  0% /var/www


6. Copy all Webserver and SQL data from the backup directories to the newly mounted SSD partitions

Last but not least is to copy all the original content files from /var/www-bak and /var/lib/mysql-bak to the newly created SSD partitions. Though the copying can be done with the normal Linux copy command (cp), I personally prefer rsync, because rsync is much quicker and more efficient at copying a large amount of files; in my case this was 48 Gigabytes.

To copy the files with rsync (note the trailing slashes, which make rsync copy the directory contents rather than the directories themselves):

 

linux:~# rsync -av --log-file=/var/log/backup.log /var/www-bak/ /var/www/
linux:~# rsync -av --log-file=/var/log/backup.log /var/lib/mysql-bak/ /var/lib/mysql/


Then of course, to finally restore my websites' normal operation, I had to bring up the Apache Webserver and MySQL service:

 

linux:~# /etc/init.d/apache2 start
linux:~# /etc/init.d/mysql start


7. Optimizing SSD performance with periodic trim (discard of unused blocks on a mounted filesystem)

As I dug deeper into how to further optimize SSD drive performance, I learned about the TRIM cleaning action on the partitions, which is needed for proper long-term performance; to understand it better, think of trimming as something like the Windows defragmentation operation.
 

NAME
fstrim – discard unused blocks on a mounted filesystem

SYNOPSIS
fstrim [-o offset] [-l length] [-m minimum-free-extent] [-v] mountpoint

DESCRIPTION
fstrim is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem. This is useful for
solid-state drives (SSDs) and thinly-provisioned storage.

By default, fstrim will discard all unused blocks in the filesystem. Options may be used to modify this behavior based on range or
size, as explained below.


Trimming is really necessary, otherwise an SSD becomes very slow after some time. All modern SSDs support TRIM, but older SSDs from before 2010 usually don't.
Thus for an older SSD you'll want to check this on the website of the manufacturer.

As I mentioned earlier, TRIM is not supported by all SSD drives; to check whether TRIM is supported by the SSD:

linux:~# hdparm -I /dev/sdb|grep -i -E 'trim|discard'
                  *          Data set Management TRIM supported (limit 1 block)
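On distributions with a recent enough util-linux, the discard capabilities can also be checked with lsblk (assuming its version supports the -D flag); non-zero DISC-GRAN and DISC-MAX values mean the device supports TRIM:

linux:~# lsblk -D /dev/sdb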

It's easiest to let the system perform an automatic TRIM. That can be done in several ways.

The quickest way to set up trimming is to place the fstrim commands into /etc/rc.local; in my case these were the following commands:

 

fstrim -v /var/lib/mysql
fstrim -v /var/www

To add them I've used my favourite vim text editor.
Adding the commands to rc.local will make the SSD trim run at boot time, which will add a bit of downtime while the trim executes, so for those who, like me, run crucially important websites, a scheduled job may be the better option.

An alternative way is to schedule a daily cron job; just place a new job in /etc/cron.daily/trim, e.g.:
 

linux:~# vim /etc/cron.daily/trim

 

#!/bin/sh
fstrim -v /var/lib/mysql
fstrim -v /var/www

linux:~# chmod +x /etc/cron.daily/trim

However, the best way to enable automatic trimming on the SSD is to just add the discard parameter to /etc/fstab, as I've already done earlier in this article.
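As a side note, newer systemd based distributions ship a ready-made fstrim.timer unit (part of util-linux) which runs fstrim on all mounted filesystems once a week; assuming your distro provides it, periodic trimming can be enabled with just:

linux:~# systemctl enable --now fstrim.timer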

Not really surprisingly, page load times decreased dramatically; web page waiting times fell 2 to 2.5 times, so the moral of the story for me is, from now on, to always use SSDs whenever possible in order to have superb website opening times.

To sum up what was achieved by moving my data to the SSD drive: before moving the websites and SQL data to the SSD, the websites were opening in 6 to 10 seconds; now the sites open in 2 to 4.5 seconds, which is below 5 seconds (the normal time a user is willing to wait to see your website).
By the way, it should not be news for people who are into Search Engine Optimization, but it might be for some inexperienced new Admins and Webmasters, that any page opening time exceeding 5 secs is considered a slow website (and therefore perhaps not worth reading).
High page load times (>5 secs) make the website less interesting not only for end users but also for search engines (Google / Yahoo / Bing / Baidu etc.), which are said to crawl slow websites less frequently.
Search Engines are said to index better and crawl more often websites that are more responsive.
Hence implementing an SSD on the server and decreasing the page load time should bring up my visitor stats a bit too.

Well that's all for today, hope you enjoyed 🙂

How to rescue an unbootable Windows PC by copying Windows files over the Network to a remote server shared Folder using Hirens Boot CD

Saturday, November 12th, 2011

hirens-boot-cd-logo-how-to-rescue-unbootable-pc-with-hirens-bootcd
I'm rescuing some files from an unbootable Windows XP machine using a live CD with Hirens Boot CD 13.

In order to rescue the files from the three NTFS Windows partitions, I mounted them after booting the Mini Linux from Hirens Boot CD.

Mounting the NTFS partitions with Hirens BootCD went quite smoothly; to mount the 3 partitions I used the commands:

# mount /dev/sda1 /mnt/sda1
# mount /dev/sda2 /mnt/sda2
# mount /dev/sdb1 /mnt/sdb1
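The mount points must exist beforehand, and if the plain mount command does not recognize the NTFS filesystems, the type may need to be passed explicitly (assuming the ntfs-3g driver is present on the live environment), e.g.:

# mkdir -p /mnt/sda1 /mnt/sda2 /mnt/sdb1
# mount -t ntfs-3g /dev/sda1 /mnt/sda1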

After the three NTFS partitions were mounted, I used smbclient to list all the available Network Shares on the remote Samba Shares Server, which by the way possessed the NETBIOS name SERVER 😉

# smbclient -L //SERVER/
Enter root's password:
Domain=[SERVER] OS=[Windows 7 Ultimate 7600] Server=[Windows 7 Ultimate 6.1]

Sharename Type Comment
——— —- ——-
!!!MUSIC Disk
ADMIN$ Disk Remote Admin
C$ Disk Default share
Canon Inkjet S9000 (Copy 2) Printer Canon Inkjet S9000 (Copy 2)
D$ Disk Default share
Domain=[SERVER] OS=[Windows 7 Ultimate 7600] Server=[Windows 7 Ultimate 6.1]
Server Comment
——— ——-
Workgroup Master
——— ——-

Further on, to mount the //SERVER/D samba network drive (the location where I wanted to transfer the files from the above 3 mounted partitions):

# mkdir /mnt/D
# mount //192.168.0.100/D /mnt/D
#

Where the IP 192.168.0.100 is actually the local network IP address of the //SERVER win smb machine.
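In my case the share was accessible without extra options, but for shares that require authentication the CIFS filesystem type and credentials usually have to be given explicitly (the username and workgroup below are just placeholders):

# mount -t cifs -o username=Administrator,workgroup=WORKGROUP //192.168.0.100/D /mnt/D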

Afterwards I used mc (Midnight Commander) to copy all the files I needed to rescue from the 3 above mentioned Windows partitions to the mounted //SERVER/D share.
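For those without mc on the live environment, plain cp works just as well; a small sketch with hypothetical directory names:

# mkdir /mnt/D/rescued-sda1
# cp -r /mnt/sda1/Documents /mnt/D/rescued-sda1/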
 

Save data from failing hard disk on Linux – Rescuing data from failing disk with bad blocks

Wednesday, April 16th, 2014

save-data-from-failing-hard-drive-data-recovery-badblocks-linux_1.jpg
Sooner or later your Linux Desktop or Linux server hard drive will start breaking up. If you have a hardware or software RAID 1, 6 or 10 and good hard disk health monitoring software, you can react on time, but sometimes as admins we have to take care of old servers which either have RAID 0 or no RAID configuration at all, or whose disk firmware is unable to recognize failing blocks on time and remap them. Thus it is quite useful to know techniques to save data from failing hard disk drives with physical badblocks.

With the ddrescue tool there is still hope for your Linux data, even though the disk is full of unrecoverable I/O errors.

apt-cache show ddrescue|grep -i description -A 12

Description: copy data from one file or block device to another
 dd_rescue is a tool to help you to save data from crashed
 partition. Like dd, dd_rescue does copy data from one file or
 block device to another. But dd_rescue does not abort on errors
 on the input file (unless you specify a maximum error number).
 It uses two block sizes, a large (soft) block size and a small
 (hard) block size. In case of errors, the size falls back to the
 small one and is promoted again after a while without errors.
 If the copying process is interrupted by the user it is possible
 to continue at any position later. It also does not truncate
 the output file (unless asked to). It allows you to start from
 the end of a file and move backwards as well. dd_rescue does
 not provide character conversions.

 

To use ddrescue for saving data, the first thing to do is to shut down the Linux host and boot the system with a Rescue LiveCD like SystemRescueCD (Linux system rescue disk), Knoppix (the most famous bootable LiveCD / LiveDVD), Ubuntu Rescue Remix or BackTrack LiveCD (a security centered "hackers" distro which can be used also for forensics and data recovery), then mount the failing disk (I assume the disk is still mountable :). Note that it is very important to mount the disk read-only, because any write operation on the hard drive increases the chance that it completely becomes unusable before you save your data!

To back up your whole hard disk's data to a secondary disk mounted on /mnt/second_disk:

# mkdir /mnt/second_disk/rescue
# mount /dev/sda2 /mnt/second_disk/rescue
# dd_rescue -d -r 10 /dev/sda1 /mnt/second_disk/rescue/backup.img
# mkdir /mnt/second_disk/rescue_mnt
# mount -o loop /mnt/second_disk/rescue/backup.img /mnt/second_disk/rescue_mnt

In the above example, change /dev/sda1 and /dev/sda2 to whatever your failing and destination hard drive devices are named.

If you already have an identical secondary drive attached to the Linux host and you would like to copy the whole failing Linux drive (/dev/sda) to the identical drive (/dev/sdb), issue:

ddrescue -d -f -r3 /dev/sda /dev/sdb /media/PNY_usb/rescue.logfile

If you have just a few unreadable files and you would like to recover only them, then run ddrescue just on the damaged files:

ddrescue -d -R -r 100 /damaged/disk/some_dir/damaged_file /mnt/secondary_disk/some_dir/recoveredfile

-d instructs ddrescue to use direct I/O
-R retrims the error area on each retry
-r 100 sets the retry limit to 100 (tries to read the data 100 times before giving up)

Of course this does not always work, as on some HDDs recovery is impossible due to severe physical damage; if the above command can't recover a file in 100 attempts, it is very likely that it never will …
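A handy property of GNU ddrescue is that it records its progress in a mapfile (the third, positional argument) and can resume an interrupted rescue later. A common two-pass approach, sketched here with an example mapfile name, is to first grab all the easily readable data without retrying and only then go back to the bad areas:

# first pass: copy everything readable, skip the bad areas quickly (-n)
ddrescue -d -n /dev/sda1 /mnt/second_disk/rescue/backup.img /mnt/second_disk/rescue/backup.mapfile
# second pass: retry only the remaining bad areas 3 times, reusing the same mapfile
ddrescue -d -r3 /dev/sda1 /mnt/second_disk/rescue/backup.img /mnt/second_disk/rescue/backup.mapfile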

A small note to make here is that there is another tool called dd_rescue (make sure you don't confuse the two); it is also for recovery, but GNU ddrescue performs better at recovery.
The way ddrescue works is that it keeps track of the bad sectors and goes back to attempt a slow read of that data in order to recover it.
By the way, BSD users will be happy to know there is already a ddrescue port, so data recovery is possible on BSD *NIX filesystems too, and if you're a Windows user you can use ddrescue to recover data as well via Cygwin.
Of course final data recovery is also very much into God's hands so before launching ddrescue, don't forget to say a prayer 🙂

Install VMWare tools on Debian and Ubuntu Linux – Enable VMWare Fullscreen and copy paste between OS host and Virtual machine

Wednesday, May 28th, 2014

install-vmware-tools-on-debian-gnu-linux-and-ubuntu-howto

If you need to use a Virtual Machine to run some testing on heterogeneous Operating Systems and you have chosen VMWare as the Virtual Machine, you will soon notice that some Virtual Machine functionality such as copying between host operating system and Virtual Machine, true fullscreen mode and, most importantly, copy / paste between your host operating system and VMWare is not working. I'm not too much into Virtualization these days, so for me it was truly shocking that proprietary software like VMWare, claimed to be the best and most efficient Virtual Machine nowadays, does not support copy / paste, fullscreen and copying between host and guest OS out of the box. For those arguing why I'm using VMWare at all, as it is proprietary and there are already free software Virtual Machines like QEMU and Oracle's VirtualBox: it's simply because now I have the chance to install and use VMWare 9 Enterprise at my workplace at HP with a free Corporate license. In other words, I'm using VMWare just for the sake of educating myself and would always recommend VirtualBox to those looking for a good free alternative to VMWare.

Before trying out VMWare, I tried VirtualBox to emulate Linux on my HP work PC running Windows. With VirtualBox I was having issues with the keyboard not working (because of lack of USB support), no full screen support and lack of copy / paste between the OS-es. I've just recently understood this was not because VirtualBox is a bad Virtualization solution, but because I forgot to install the Oracle VM VirtualBox Extension Pack, which adds support for USB and enables copy / paste and full screen support. The equivalent of the VirtualBox Extension Pack in the VMWare world is called VMWare-Tools, and once the guest operating system is installed inside a VMWare VM, it is necessary to install vmware-tools to enable better screen resolution and copy / paste.
 

In a Windows Virtual Machine, installation of vmware-tools is pretty straightforward, you just go through VMWare's menus:

 

VM -> Install Vmware-tools

install-vmware-tools-on-linux-guest-host-os-debian-redhat-screenshot

follow the instructions and you're done. However, as always, installing VMWare-tools on Linux is a little bit more complicated: you need to run a few commands from the Linux installed inside the Virtual Machine. Here is how vmware-tools is installed on Debian / Ubuntu / Linux Mint and the rest of the Debian based operating systems:

1. Install build-essential and gcc

You need to have some developer tools as well as the GCC compiler installed in order for vmware-tools to compile a special Linux kernel module which enables the extra support (integration) between the VMWare VM and the Linux distro installed inside the VM:

apt-get install --yes build-essential gcc
...

2. Install the appropriate Linux headers corresponding to the currently installed kernel

apt-get install --yes linux-headers-$(uname -r)
....

3. Mount the (Virtual) CD content to obtain the vmware-tools version for your Linux

Be sure to have first selected from the VMWare menus: VM -> Install VMware Tools.
This step is a little bit strange but just do it without too much questioning … Depending on the distribution, one of the following mount commands should be the one that succeeds:


mount /dev/cdrom /mnt/
umount /media/cdrom0/
mount /media/cdrom
mount /dev/sr0 /mnt/cdrom/
mount /dev/sr0 /mnt/

 

Note that /dev/sr0 might already be mounted, and sometimes it might be necessary to unmount it first (I don't remember exactly if I unmounted it or not).
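To quickly check whether the virtual CD is already mounted somewhere before re-mounting it, something like this can be used:

mount | grep -i sr0
umount /dev/sr0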

4. Copy and Untar VMwareTools-9.2.0-799703.tar.gz

cp -rpf /media/cdrom/VMwareTools-9.2.0-799703.tar.gz /tmp/
cd /tmp/
tar -zxvvf VMwareTools-9.2.0-799703.tar.gz
...

5. Run vmware-tools installer

cd vmware-tools-distrib/
./vmware-install.pl

You will be asked multiple questions; you can safely press Enter to accept the default answer to each of them. Hopefully, if everything runs okay, this will get VMWare Tools installed:
 

Creating a new VMware Tools installer database using the tar4 format.
Installing VMware Tools.
In which directory do you want to install the binary files?
[/usr/bin]
What is the directory that contains the init directories (rc0.d/ to rc6.d/)?
[/etc]
What is the directory that contains the init scripts?
[/etc/init.d]
In which directory do you want to install the daemon files?
[/usr/sbin]
In which directory do you want to install the library files?
[/usr/lib/vmware-tools]
The path "/usr/lib/vmware-tools" does not exist currently. This program is
going to create it, including needed parent directories. Is this what you want?
[yes]
In which directory do you want to install the documentation files?
[/usr/share/doc/vmware-tools]
The path "/usr/share/doc/vmware-tools" does not exist currently. This program
is going to create it, including needed parent directories. Is this what you
want? [yes]
The installation of VMware Tools 9.2.0 build-799703 for Linux completed
successfully. You can decide to remove this software from your system at any
time by invoking the following command: "/usr/bin/vmware-uninstall-tools.pl".
Before running VMware Tools for the first time, you need to configure it by
invoking the following command: "/usr/bin/vmware-config-tools.pl". Do you want
this program to invoke the command for you now? [yes]
Initializing…
Making sure services for VMware Tools are stopped.
Stopping VMware Tools services in the virtual machine:
Guest operating system daemon: done
Unmounting HGFS shares: done
Guest filesystem driver: done
[EXPERIMENTAL] The VMware FileSystem Sync Driver (vmsync) is a new feature that creates backups of virtual machines. Please refer to the VMware Knowledge Base for more details on this capability. Do you wish to enable this feature?
[no]
Before you can compile modules, you need to have the following installed…
make
gcc
kernel headers of the running kernel
Searching for GCC…
Detected GCC binary at "/usr/bin/gcc-4.6".
The path "/usr/bin/gcc-4.6" appears to be a valid path to the gcc binary.
Would you like to change it? [no]
Searching for a valid kernel header path…
Detected the kernel headers at "/lib/modules/3.2.0-4-amd64/build/include".
The path "/lib/modules/3.2.0-4-amd64/build/include" appears to be a valid path
to the 3.2.0-4-amd64 kernel headers.
Would you like to change it? [no]
The vmblock enables dragging or copying files between host and guest in a
Fusion or Workstation virtual environment. Do you wish to enable this feature?
[no] yes
make: Leaving directory `/tmp/vmware-root/modules/vmblock-only'

No X install found.
Creating a new initrd boot image for the kernel.
update-initramfs: Generating /boot/initrd.img-3.2.0-4-amd64
Checking acpi hot plug done
Starting VMware Tools services in the virtual machine:
Switching to guest configuration: done
VM communication interface: done
VM communication interface socket family: done
File system sync driver: done
Guest operating system daemon: done
The configuration of VMware Tools 8.6.10 build-913593 for Linux for this
running kernel completed successfully.
You must restart your X session before any mouse or graphics changes take
effect.
You can now run VMware Tools by invoking "/usr/bin/vmware-toolbox-cmd" from the
command line or by invoking "/usr/bin/vmware-toolbox" from the command line
during an X server session.
To enable advanced X features (e.g., guest resolution fit, drag and drop, and
file and text copy/paste), you will need to do one (or more) of the following:
1. Manually start /usr/bin/vmware-user
2. Log out and log back into your desktop session; and,
3. Restart your X session.
Enjoy,
–the VMware team
Found VMware Tools CDROM mounted at /mnt. Ejecting device /dev/sr0 …

To make sure the vmware-tools compiled modules are loaded into the Linux kernel inside the VM, restart the Virtual Machine. Once Linux boots again and you log in, open gnome-terminal and check the vmware-tools status (e.g. whether it is properly loaded) by running:

service vmware-tools status
vmtoolsd is running

install-vmware-tools-on-debian-gnu-linux-and-ubuntu-virtual-machine-screenshot

This method of installing works on Debian 7 (Wheezy), but the same steps should work on any Ubuntu and the rest of the Debian derivatives. For Redhat (RPM) based Linux distributions, after mounting the cdrom drive following the above instructions, you will have an rpm package instead of a .tar.gz archive, so all you have to do is install the rpm, e.g. launch something like:

rpm -Uhv /mnt/cdrom/VMwareTools-9.2.0-799703.i386.rpm
Cheers 😉

How to mount /proc and /dev in a chroot on Linux

Wednesday, April 20th, 2011

I’m using a BackTrack Linux to recover a broken Ubuntu Linux; to fix this disastrous situation I’m using the Ubuntu system through chroot, after mounting /dev/sda1 (where my Linux resides) with:

linux-recovery:~# mkdir /mnt/test1
linux-recovery:~# mount /dev/sda1 /mnt/test1
linux-recovery:~# chroot /mnt/test1
ubuntu:~#

I consequently needed to mount the /proc and /dev filesystems inside the chroot environment.

Here is how I did it:

ubuntu:~# mount /proc
ubuntu:~# mount -a
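Note that mount /proc done this way relies on the chroot's /etc/fstab containing a proc entry; an fstab-independent alternative (a sketch of the commonly used explicit commands) is:

ubuntu:~# mount -t proc proc /proc
ubuntu:~# mount -t sysfs sysfs /sys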

Next I switched to a different virtual console in the BackTrack and issued the command to mount /dev:

linux-recovery:~# mount --bind /dev /mnt/test1/dev

Now once again, I can use apt-get inside the chroot to fix up the whole mess …