Posts Tagged ‘backup files’

How to create PCS / Corosync High Availability Cluster config backup and migrate to new Virtual Machines

Thursday, October 26th, 2023

pcs-pcmk-internals-explained-picture

The aim of this article is to illustrate how to migrate an HAProxy PCS Pacemaker / Corosync Cluster configuration from old Virtual Machines, whose Operating System has reached its End Of Life (EOL) and become unsupported, to new ones. 
This is quite a complex task, especially as you usually need to set up the Hypervisor hosts with VMWare / Xen / KVM / OpenVZ or whatever kind of virtualization is to be used. Then you have to set up the correct network interface IPs, the failover heartbeat lines over which the cluster will communicate to prevent Split Brain scenarios, and the Network Bonding interfaces to guarantee a higher level of availability, as well as physically install and update all the cluster software on the newly built Linux hosts that will be members of the new cluster. 

Configuring a PCS Corosync cluster from scratch is a very lengthy topic which I'll try to cover in some of my next articles. In short, to migrate the cluster from the old machines to the new ones once all the prerequisites described above are in place,
you will need to:


1. Create backup of old cluster configuration
2. Migrate the backup to the newly built VM hosts
3. Import the cluster configuration into the PCS Cluster.


Bear in mind that this article discusses a migration on CentOS Linux release 7.9.2009 with its shipped versions of corosync / pacemaker and pcs.
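Before dumping anything, it is a good idea to note down the exact cluster software versions running on the old nodes, so you can make sure the packages installed on the new VMs are the same or a compatible newer release. A quick sanity check could look like this:

[root@old-cluster-machine ~]# rpm -qa | grep -Ei 'pcs|pacemaker|corosync'
[root@old-cluster-machine ~]# pcs --version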

How to create cluster config backup and migrate to new VM

1. Dump the cluster config, assuming this is a Quality Assurance or Pre-Production host, to create a full cluster config backup

[root@old-cluster-machine ~]# pcs config backup old-cluster-machine.pcs.config.bak

2. Dump the full Production cluster configuration

[root@old-cluster-machine1 ~]# pcs config backup old-cluster-machine1.pcs.config.bak

This command will produce a backup file named:

old-cluster-machine1.pcs.config.bak.tar.bz2

3. Migrate the identical cluster config to the new Virtual Machines

Usually the backup files produced with the pcs config backup command can be copied over with something like FTP / SFTP or an SSL-ed / TLS-ed protocol. However, you might have to move the configuration files out of a paranoid Citrix environment that doesn't allow any SFTP / SSH or FTP kind of transfer protocol between the location where the old config lives and the new one. 
In that case, the binary dump of the configuration can simply be encoded to plain text files, which can then be moved via a simple copy / paste operation (a bit of a hack) 🙂

Encode the cluster config so the configuration can be migrated in plain text via a simple Copy / Paste operation.

 

[root@old-cluster-machine ~]# base64 old-cluster-machine.pcs.config.bak.tar.bz2 > old-cluster-machine.pcs.config.bak.tgz.b64

[root@old-cluster-machine1 ~]# base64 old-cluster-machine1.pcs.config.bak.tar.bz2 > old-cluster-machine1.pcs.config.bak.tgz.b64
[root@old-cluster-machine ~]# cat  old-cluster-machine.pcs.config.bak.tgz.b64

(Copy the output and paste it on the new host VM into /root/haproxy-cluster-backup)

[root@old-cluster-machine1 ~]# cat old-cluster-machine1.pcs.config.bak.tgz.b64 


(Copy the output and paste it on the new host VM into /root/haproxy-cluster-backup)

Log in to the new hosts where the configs have to be migrated and restore the files with base64.

For QA / Preprod to restore backup config

[root@dkv-newqa-vm ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newqa-vm ~]# base64 -d old-cluster-machine.pcs.config.bak.tgz.b64 > old-cluster-machine.pcs.config.bak.tar.bz2
[root@dkv-newqa-vm ~]#  tar -jxvf old-cluster-machine.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/

 

For Prod to restore backup config

[root@dkv-newprod-vm  ~]# mkdir /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# cd /root/haproxy-cluster-backup
[root@dkv-newprod-vm ~]# base64 -d old-cluster-machine1.pcs.config.bak.tgz.b64 > old-cluster-machine1.pcs.config.bak.tar.bz2
[root@dkv-newprod-vm ~]# tar -jxvf old-cluster-machine1.pcs.config.bak.tar.bz2
version.txt
pcs_settings.conf
corosync.conf
cib.xml
pacemaker_authkey
uidgid.d/
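Note that instead of unpacking the tarball by hand, pcs itself can restore the whole archive on the new node with its config restore sub-command. A minimal sketch, assuming the decoded .tar.bz2 already sits in the current directory and the cluster software is installed but not yet configured:

[root@dkv-newprod-vm ~]# pcs config restore old-cluster-machine1.pcs.config.bak.tar.bz2
[root@dkv-newprod-vm ~]# pcs config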


N.B.! A useful hint: the pcs version shipped with RHEL 8 Linux also has a very handy command with which you can dump the complete config of the cluster as plain commands, which you can run directly on the new VM machines you have migrated to.

The command to print out commands that would add existing cluster resources on Redhat 8:

# pcs resource config --output-format=cmd
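For example, assuming a new enough pcs (RHEL 8) on the node you dump from, you could save the resource creation commands to a file, review it, and replay it on the new cluster; the file name below is just an illustration:

# pcs resource config --output-format=cmd > /root/haproxy-cluster-backup/pcs-resources.cmd
# less /root/haproxy-cluster-backup/pcs-resources.cmd
# bash /root/haproxy-cluster-backup/pcs-resources.cmd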

Another useful command for cluster migration is cibadmin

i.e. to dump cluster xml config

# cibadmin -q > cluster.xml

Later you can import the prior xml dump with it.

# cibadmin --replace --xml-file cib.xml
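After importing the CIB, a quick sanity check never hurts; crm_verify validates the configuration the running cluster is using and pcs status shows whether the resources came up:

# crm_verify --live-check
# pcs status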

 

Howto Upgrade IBM Spectrum Protect Backup Client TSM 7.X to 8.1.8, Update Tivoli 8.1.8 to 8.1.11 on CentOS and Redhat Linux

Thursday, December 3rd, 2020

 

IBM-spectrum-protect-backup-logo-tivoli-tsm-logo

Having another day of system administrator boredom, we had the task to upgrade some Tivoli TSM Backup clients running on 20+ machines powered by CentOS and RHEL Linux, so the systems would run the latest patched IBM Spectrum Backup client version available from IBM. For the task of patching I've used a central server where I initially downloaded the provided TSM client binary archives. From this machine we copied TivSM*.tar to each and every system that needed to be patched, and then patched it. The task is not too complex, as the TSM clients running on the machines are all at the same version and all machines run a recently patched version of Linux. To make sure everything works as expected, we first tested a TSM upgrade from 7.X.X to 8.X.X on one machine and then an 8.1.8 to 8.1.11 upgrade on another one. Once we confirmed that backups work as expected after the upgrade, we proceeded to do it massively on each of the remaining 20+ hosts.
This article's goal is to help some lazy sysadmin prepare a TSM Backup upgrade procedure to standardize the TSM Upgrade, which, like much of IBM's software, is very specific: its upgrade requires a bit of manual work and extra caution, as there seems to be no easy way (or at least I don't know one) to do the upgrade by simply adding an RPM repository and doing something like yum install tivsm*.


0. Check if there is at least 2G free of space

According to the documentation, the minimum you need for a functional install, without ending up with it half installed or filling up your filesystem, is 2 Gigabytes of free space on the filesystem where the .tar and RPMs will be living.

Thus check the situation on the filesystem where you will store the .tar archive, extract the .RPM files and install them.

# df -h
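If you prefer to check it non-interactively, for example from a loop over many hosts, GNU df can print just the available space of the filesystem holding your working directory (a small sketch, assuming the archive will live under /home/user):

# df -BG --output=avail /home/user | tail -1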

1. Download the correct tarball with 8.1 Client

On one central machine you need to download the Tivoli archive; you can do that via wget / curl / lynx or whatever is at hand on the Linux server.

As of the time of writing this article, TSM 8.1.11 is located at the URL:

http://public.dhe.ibm.com/storage/tivoli-storage-management/maintenance/client/v8r1/Linux/LinuxX86/BA/v8111/

I've made a local download mirror of Tivoli TSM 8.1.11 here.
In case you need to install the IBM Spectrum Backup Client in a PCI secured environment on a DMZ-ed LAN network, you can first download it from your work PC and then, via a Citrix client upload program or WinSCP, upload it to a central replication host, from where you will later copy it to each of the other server nodes that need to be upgraded.

Let's copy the archive to all Server hosts where you want it installed later, using a small hack.

Assuming you already have an Excel document or a plain text document with all the IPs of the affected hosts where TSM needs to be upgraded, extract this data and from it create a plain text file /home/user/hosts.txt containing all the machine IPs, one per line (newline-separated), so you can loop over each one and use scp to send the files.
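For illustration, a minimal /home/user/hosts.txt would look like this (the IPs below are made-up placeholders):

10.10.10.11
10.10.10.12
10.10.10.13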

– Replicate Tivoli tar to all machine hosts where you want to get IBM Spectrum installed or upgraded.
Do it with a loop like this:

# for i in $(cat hosts.txt); do scp 8.1.11.0-TIV-TSMBAC-LinuxX86.tar user@$i:/home/user/; done

 Temporarily copy your server password to the copy buffer (assuming the password is identical on each machine) and paste the login user's password for each host to initiate the transfer. If typing it over and over gets annoying, see the optional sshpass sketch below.
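If the sshpass utility happens to be available on the central server (it usually has to be installed separately, so treat this as an optional sketch), the same loop can read the password once and reuse it:

# read -s -p 'Password: ' PASS; echo
# for i in $(cat hosts.txt); do sshpass -p "$PASS" scp 8.1.11.0-TIV-TSMBAC-LinuxX86.tar user@$i:/home/user/; done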
 

2. SSH to each of the Machine hosts IPs

Once you login to the host you want to upgrade
Go to your user's $HOME /home/user and create a directory where we'll temporarily store the Tivoli archive files and extract the RPMs.

[root@linux-server user]# mkdir -p ~/tsm/TSM_BCK/
[root@linux-server user]# mv 8.1.11.0-TIV-TSMBAC-LinuxX86.tar ~/tsm
[root@linux-server user]# cd tsm
[root@linux-server user]# tar -xvvf 8.1.11.0-TIV-TSMBAC-LinuxX86.tar
gskcrypt64-8.0.55.17.linux.x86_64.rpm
GSKit.pub.pgp
gskssl64-8.0.55.17.linux.x86_64.rpm
README_api.htm
README.htm
RPM-GPG-KEY-ibmpkg
TIVsm-API64.x86_64.rpm
TIVsm-APIcit.x86_64.rpm
TIVsm-BAcit.x86_64.rpm
TIVsm-BAhdw.x86_64.rpm
TIVsm-BA.x86_64.rpm
TIVsm-filepath-source.tar.gz
TIVsm-JBB.x86_64.rpm
TIVsm-WEBGUI.x86_64.rpm
update.txt

3. Create a backup of the old TSM client config files

It is always a good idea to keep the old config files around.

[root@linux-server tsm]# cp -av /opt/tivoli/tsm/client/ba/bin/dsm.opt ~/tsm/TSM_BCK/dsm.opt_bak_$(date +'%Y_%m_%d')
[root@linux-server tsm]# cp -av /opt/tivoli/tsm/client/ba/bin/dsm.sys ~/tsm/TSM_BCK/dsm.sys_bak_$(date +'%Y_%m_%d')

[root@linux-server tsm]# [[ -f /etc/adsm/TSM.PWD ]] && cp -av /etc/adsm/TSM.PWD ~/tsm/TSM_BCK/ || echo 'file doesnt exist'

/etc/adsm/TSM.PWD is only there as a legacy from TSM version 7, where it contained the encrypted password. In TSM v.8 the encrypted password file is no longer there, as a new mechanism for storing sensitive data was introduced.
Be aware that on Tivoli 8.X the command above will therefore just print 'file doesnt exist'.

!! Note – if dsm.opt , dsm.sys files are on different locations – please use correct full path locations !!

4. Stop dsmcad – the TSM Service daemon

[root@linux-server tsm]# systemctl stop dsmcad

5. Locate and deinstall all old Clients

Depending on the version you're upgrading from: if you're upgrading from TSM version 7 to 8, you will get output like:

[root@linux-server tsm]# rpm -qa | grep 'TIVsm-'
TIVsm-BA-7.1.6-2.x86_64
TIVsm-API64-7.1.6-2.x86_64

If you're one of those paranoid admins, you can remove the TIVsm packages one by one.

[root@linux-server tsm]# rpm -e TIVsm-BA-7.1.6-2.x86_64
[root@linux-server tsm]# rpm -e TIVsm-API64-7.1.6-2.x86_64

If instead you are upgrading from version 8.1.8 to 8.1.11, due to the Security CVE advisories recently published by IBM, e.g. (IBM Runtime Vulnerability affects IBM Spectrum Backup Archive Client) and the vulnerability in Apache Commons Log4J affecting IBM Spectrum Protect Backup Archive Client, the output will look like:

[root@linux-server tsm]# rpm -qa | grep 'TIVsm-'
TIVsm-API64-8.1.8-0.x86_64
TIVsm-BA-8.1.8-0.x86_64

Assuming you're not scared of a bit of automation, you can do it straight away with the one-liner below too 🙂

# rpm -e $(rpm -qa | grep TIVsm)

[root@linux-server tsm]# rpm -qa | grep gsk
[root@linux-server tsm]# rpm -e gskcrypt64 gskssl64

6. Check uninstallation success:

[root@linux-server tsm]# rpm -qa | grep TIVsm
[root@linux-server tsm]# rpm -qa | grep gsk

Here you should see empty output if the packages are no longer on the system, i.e. empty output is good output! 🙂

7. Install new client IBM Spectrum Client (Tivoli Storage Manager) and lib dependencies

[root@linux-server tsm]# rpm -ivh gskcrypt64-8.0.55.17.linux.x86_64.rpm
[root@linux-server tsm]# rpm -ivh gskssl64-8.0.55.17.linux.x86_64.rpm

 If you're too lazy to type, you can also do:

[root@linux-server tsm]# rpm -Uvh gsk*

The next step is to install the main Tivoli SM components, the API files and BA (The Backup Archive Client).

[root@linux-server tsm]# rpm -ivh TIVsm-API64.x86_64.rpm
[root@linux-server tsm]# rpm -ivh TIVsm-BA.x86_64.rpm

If you have to do it on multiple servers and you do it manually following a guide like this, you might instead want to install them with a one-liner.

[root@linux-server tsm]# rpm -ivh TIVsm-API64.x86_64.rpm TIVsm-BA.x86_64.rpm

There are some non-mandatory "Common Inventory Technology" components (in some cases needed if you're using the API; we did not need them). If you need them on your servers due to your backup architecture, also install the RPM files shown commented out below.

## rpm -ivh TIVsm-APIcit.x86_64.rpm

## rpm -ivh TIVsm-BAcit.x86_64.rpm

The following packages are only needed if you use the WebGUI TSM GUI management, Journal Based Backup (JBB) or BAhdw (the ONTAP library); see the install one-liner after the list if you do:


— TIVsm-WEBGUI.x86_64.rpm
— TIVsm-JBB.x86_64.rpm
— TIVsm-BAhdw.x86_64.rpm
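If your backup architecture does require them, they install the same way as the rest of the packages:

## rpm -ivh TIVsm-WEBGUI.x86_64.rpm TIVsm-JBB.x86_64.rpm TIVsm-BAhdw.x86_64.rpm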

8. Start and enable dsmcad service

[root@linux-server tsm]# systemctl stop dsmcad

You will get

##Warning: dsmcad.service changed on disk. Run 'systemctl daemon-reload' to reload units.

[root@linux-server tsm]# systemctl daemon-reload

[root@linux-server tsm]# systemctl start dsmcad


## enable dsmcad – it is disabled by default after install

[root@linux-server ~]# systemctl enable dsmcad

[root@linux-server tsm]# systemctl status dsmcad

9. Check the dsmcad service is really running

Once enabled, IBM TSM will spawn the dsmcad process in the background; if it started properly you should see the process running.

[root@linux-server tsm]# ps -ef|grep -i dsm|grep -v grep
root      2881     1  0 18:05 ?        00:00:01 /usr/bin/dsmcad

If the process is not there, there might be some library or something else out of place preventing the process from starting …

10. Check DSMCAD /var/tsm logs for errors

After the dsmcad process is enabled and running in the background:

[root@linux-server tsm]# grep -i Version /var/tsm/sched.log|tail -1
12/03/2020 18:06:29   Server Version 8, Release 1, Level 10.000

 

[root@linux-server tsm]# cat /var/tsm/dsmerror.log

To see the current TSM configuration, we can grep out the comment lines (those starting with *):

[root@linux-server tsm]# grep -v '*' /opt/tivoli/tsm/client/ba/bin/dsm.sys

Example Configuration of the agent:
----------------------------------------------------
   *TSM SERVER NODE Location
   Servername           tsm_server
   COMMmethod           TCPip
   TCPPort              1400
   TCPServeraddress     tsmserver2.backuphost.com
   NodeName             NODE.SERVER-TO-BACKUP-HOSTNAME.COM
   Passwordaccess       generate
   SCHEDLOGNAME         /var/tsm/sched.log
   SCHEDLOGRETENTION    21 D
   SCHEDMODE            POLLING
   MANAGEDServices      schedule
   ERRORLOGNAME         /var/tsm/dsmerror.log
   ERRORLOGRETENTION    30 D
   INCLEXCL             /opt/tivoli/tsm/client/ba/bin/inclexcl.tsm

11. Remove tsm install directory tar ball and rpms to save space on system

The current version of the Tivoli Storage Manager client archive is 586 Megabytes.

[root@linux-server tsm]# du -hsc 8.1.11.0-TIV-TSMBAC-LinuxX86.tar
586M    8.1.11.0-TIV-TSMBAC-LinuxX86.tar

Some systems are on purpose configured to have little space under their /home directory,
hence it is a good idea to clean up unnecessary files after completion.

Let's get rid of all the IBM Spectrum archive source files and the rest of the RPMs used for installation.

[root@linux-server tsm]# rm -rf ~/tsm/{*.tar,*.rpm,*.gpg,*.htm,*.txt}

12. Check backups are really created on the configured remote Central backup server

To make sure that after the upgrade the backups are continuously created and properly stored on the remote central IBM Tivoli backup server, either manually initiate a backup or wait for, let's say, a day and run the dsmc client to show all backups created since the previous day. To make sure you won't get empty output, you can modify some file on purpose by simply opening it and writing it back without changing anything, e.g. your ~/.bashrc or ~/.bash_profile.
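If you don't want to wait for the next scheduled backup, you can trigger a small ad-hoc incremental backup yourself and then query it, assuming the node is allowed to start manual backups against the TSM server (the path below is just an example):

[root@linux-server tsm]# dsmc incremental /etc/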

## List all backups for '/' root directory from -fromdate='DD/MM/YY'

[root@linux-server tsm]# dsmc
Protect>
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 11.0
  Client date/time: 12/03/2020 18:14:03
(c) Copyright by IBM Corporation and other(s) 1990, 2020. All Rights Reserved.

Node Name: NODE.SERVER-TO-BACKUP-HOSTNAME.COM
Session established with server TSM2_SERVER: AIX
  Server Version 8, Release 1, Level 10.000
  Server date/time: 12/03/2020 18:14:04  Last access: 12/03/2020 18:06:29
 
Protect> query backup -subdir=yes "/" -fromdate=12/3/2020
           Size        Backup Date                Mgmt Class           A/I File
           ----        -----------                ----------           --- ----
         6,776  B  12/03/2020 01:26:53             DEFAULT              A  /etc/freshclam.conf
         6,685  B  12/03/2020 01:26:53             DEFAULT              A  /etc/freshclam.conf-2020-12-02
         5,602  B  12/03/2020 01:26:53             DEFAULT              A  /etc/hosts
         5,506  B  12/03/2020 01:26:53             DEFAULT              A  /etc/hosts-2020-12-02
           398  B  12/03/2020 01:26:53             DEFAULT              A  /opt/tivoli/tsm/client/ba/bin/tsmstats.ini
       114,328  B  12/03/2020 01:26:53             DEFAULT              A  /root/.bash_history
           403  B  12/03/2020 01:26:53             DEFAULT              A  /root/.lesshst

How to solve “Incorrect key file for table ‘/tmp/#sql_9315.MYI’; try to repair it” mysql start up error

Saturday, April 28th, 2012

When a server's hard disk space gets filled, it is common for Apache to return empty (no content) pages…
This just happened on one server I administer. To restore normal server operation I freed some space by deleting old obsolete backups.
Actually, the whole reason for this mess was an enormous backup file which, during the last monthly backup, filled up the remaining disk space.

Though I freed about 400GB of space on the root filesystem and at first glimpse the system had plenty of free hard drive space, the MySQL server still refused to start up properly and spat out the error:

Incorrect key file for table '/tmp/#sql_9315.MYI'; try to repair it" mysql start up error

Besides that, there were corrupted (crashed) tables, which were reported next to the above error.
Checking for /tmp/#sql_9315.MYI, I couldn't see any MYI (MyISAM) format file there. A quick Google lookup revealed that this error is caused by not enough disk space. This was puzzling, as I could see both the /var and / partitions had plenty of space, so this shouldn't have been a problem. Also, manually creating the file /tmp/#sql_9315.MYI with:

server:~# touch /tmp/#sql_9315.MYI

didn't help either, though the file was created fine. Anyway, on a bit of a closer examination I noticed a separate /tmp filesystem mounted besides the other file system mounts ????
You can guess my great amazement to find this 1 Megabyte only /tmp filesystem mounted on the server.
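A quick way to spot such an overflow mount is to list the mounts and check the actual size of /tmp; on an affected Debian box the output will look roughly like this (sizes and options may differ):

server:~# mount | grep -w /tmp
overflow on /tmp type tmpfs (rw,size=1048576,mode=1777)
server:~# df -h /tmp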

I didn't mount this 1 Megabyte filesystem, so it was either an intruder or some kind of "weird" bug…
I dug into Google to see if I could find more on the error and found that the whole mess with this 1 MB mounted /tmp partition is caused by the recently introduced Debian init script /etc/init.d/mountoverflowtmp.
It seems this script was introduced in newer Debian releases. mountoverflowtmp is some kind of emergency script which is triggered in case the root filesystem / space gets filled.
The script has only two options:

# /etc/init.d/mountoverflowtmp
Usage: mountoverflowtmp [start|stop]

Once started, what it does is remount /tmp to be 1 megabyte in size and then stop its execution as if it never ran. Well, maybe the developers had something in mind when introducing this script, I will not argue. What I should complain about though is that the script design is completely broken: once the script gets "activated" and does its job, this 1MB mount stays like this even after hard disk space is freed on the root partition – / ….

Hence, to cope with this unhandy situation: once I had freed disk space on the root partition, for some reason the mountoverflowtmp stop option was not working,
so I had to initiate a "hard" (lazy) unmount:

server:~# umount -l /tmp

Also, as I had a bunch of crashed tables, to fix them I issued the following on each of the broken tables reported during the /etc/init.d/mysql start start-up:

server:~# mysql -u root -p
mysql> use Database_Name;
mysql> repair table Table_Name extended;
....
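If there are many crashed tables, instead of repairing them one by one from the mysql prompt, mysqlcheck can walk through all databases and repair whatever it finds broken (a sketch; it needs the server running and can take a while on big datasets):

server:~# mysqlcheck -u root -p --auto-repair --check --all-databases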

Then, to finally solve the stupid Incorrect key file for table '/tmp/#sql_XXYYZZ33444.MYI'; try to repair it error, I had to restart the SQL server once again:

server:~# /etc/init.d/mysql restart
Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld.
Checking for corrupt, not cleanly closed and upgrade needing tables..
root@server:/etc/init.d#

Tadadadadam! SQL now loads and works as before!