Posts Tagged ‘How to’

How to do a fresh upgrade of a mistakenly installed 32-bit Windows 10 Professional to 64-bit Windows / A failed disk clone of an old 120GB SSD to a new 512GB SSD due to a failing Solid State Drive

Wednesday, November 17th, 2021

upgrade-windows-10-32-bit-to-64-bit-howto-picture

I've been setting up a new PC with a Windows OS on a somewhat old machine – an 11-year-old Lenovo ThinkCentre model M90p with 8 GB of memory, an Intel(R) Core(TM) i5 650 CPU @ 3.20GHz and the Intel Q57 Express Chipset. The machine came to me with Windows 7 preinstalled, and the initial goal was to migrate Windows as it is, together with its data, from the old 120GB SSD to a new 512GB SSD, and then, to keep the machine at least a bit more up to date, to upgrade the old Windows 7 to Windows 10.

This, as usual, seemed like a very trivial task for a System Administrator, and even if you haven't touched much of Windows, as is my case, it looks like a piece of cake. However, as always with computers, once you think you'll be done in 2 hours it usually takes 20+. Some call it Murphy's law: "If something can go wrong, it will go wrong". But thinking "that's all easy, let's just do it" is a kind of proud thought, and to save us from this passion of pride, which according to the Church fathers is the worst passion one can have, and to humiliate us a bit,

God allows some unforeseen stuff to happen 🙂 The case with this machine, where my original idea was "OK, I'll simply duplicate the old hard drive to the new one and place the new one in the ThinkCentre, not a big deal", turned into a small adventure 🙂

For this machine's hardware I have to say the old English saying "Old but Gold" is pretty true, especially after I've attached the Samsung 512GB NVMe SSD drive, which my dear friend and brother in Christ "Uncle Emilian" had received as a gift from another friend called Angel. To add even more to the rant, the name Emilian stems from the Greek Emilianos, which translated to English means Adversary.. But anyways, the old Intel 120 GB SSD drive, besides being already completely full of data, turned out to have failing memory DATA chips (perhaps burned out / worn), so parts of the drive were unreadable.
I've realized the SSD was faulty after first trying to clone the drives with my hardware disk clone device, an Orico Dual Bay 2.5" 6629US3-C, and then using a simple bit-to-bit copy with the dd command.

orico-6629us3-c2-bay-usb3-type-b2.5-type3-5.inch-sata


At first, for some weird reason, the cloning of the 120GB SSD towards -> the newer 512 GB one was unsuccessful – one of the 2 lamp indicators on the Source and Destination drives was continuously blinking orange, as it seemed data could not be read, even though I tried a few times and waited for about 1 hour for the cloning to complete. So I first suspected it might be an issue with the Disk Clone hardware device I bought last year. So I've attached the 2 hard drives to my Debian GNU / Linux 10 as USB attached drives using the "Toaster" device and tried a classical bit-to-bit copy from the terminal with dd, e.g.


# dd if=/dev/sdb2 of=/dev/sdc2 bs=180M status=progress conv=noerror,sync

 
dd: error reading '/dev/sdb2': Input/output error
1074889+17746 records in
1092635+0 records out
559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/s
dd: writing to '/dev/dc2': Input/output error
1074889+17747 records in
1092635+0 records out
559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/s
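
As a side note – when a source drive has unreadable sectors like this, GNU ddrescue (the Debian package is called gddrescue, the binary ddrescue) is usually a better fit than plain dd, since it keeps a map file of the bad areas and can retry just those on later passes. A minimal sketch, assuming the same device names as in the dd example above; the map file location is arbitrary:

# apt-get install --yes gddrescue

# ddrescue -d -r3 /dev/sdb2 /dev/sdc2 /root/rescue.map

The -d flag reads the source with direct disc access and -r3 retries the unreadable areas up to 3 times; everything readable is copied first and only then the bad spots are revisited.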

Finally I did a manual copy of the files from /dev/sdb2 to /dev/sdc2 with rsync, and part of the files managed to be successfully copied – about 55 Gigabytes out of 110. Luckily the data on the broken Intel 320 Series 120GB was not top secret stuff, so losing some bits wasn't the end of the world 🙂
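
For reference, a rough sketch of how such a partial rsync salvage can look – this assumes both partitions can still be mounted, and the mount point names here are purely illustrative:

# mkdir -p /mnt/old_ssd /mnt/new_disk
# mount -o ro /dev/sdb2 /mnt/old_ssd
# mount /dev/sdc2 /mnt/new_disk
# rsync -av --progress /mnt/old_ssd/ /mnt/new_disk/

The failing source is mounted read-only to avoid stressing the dying SSD any further; rsync reports a read error for each file it cannot fetch and simply continues with the rest.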

Next, I've removed the broken 120GB SSD, which was probably at least 9+ years old, and attached the new drive to the Lenovo ThinkCentre. My dear friend wanted to have Windows again, as his computer has a Microsoft "Certificate of Authenticity", e.g. the OEM Registration Serial Key for Windows 7.

Lenovo-ThinkCentre-M90p-certificate-of-authenticity

I've jumped in and used some old USB flash stick to install Windows 7 again (in order to reuse the same active license), and from there on I've used another old Windows 10 installation bootable stick of mine to upgrade the Windows 7 to Windows 10 (by using this Win 7 to Win 10 upgrade trick it is possible to continue using your old Windows 7 license key on Windows 10). So far so good – now I had Windows 10 Professional Edition installed on the machine, but I faced another issue: the 8GB of memory of the machine did not get fully detected, for some weird reason the machine detected only 3.22 GB of memory.

only-2-80-gb-usable-windows-10-problem-32-bit-cpu-cause-screenshot

After a few minutes of investigation online, I've realized I had installed by mistake a 32-bit version of Windows 10 Pro… So the next step was of course to upgrade to 64-bit to work around the memory that the 32-bit OS could not address… To make sure my Windows 10 installation is up-to-date, I've downloaded the latest image with the Media Creation Tool from Microsoft's website, used the tool to burn the downloaded image to an empty USB stick (mine is 16GB, but the minimum required would be 4GB) and proceeded to reboot the Lenovo desktop machine and boot from the Windows 10 install flash drive. From there on I had to select that I need to install a 64-bit version of Windows and skip the license key prompt twice (act as if I have no license), as Windows can recognize the key from the older 32-bit OEM install and automatically fetches it from there.

Before proceeding to install the 64-bit Windows, of course double check that the machine at hand has its license key recognized by Microsoft and that the hardware is 64-bit capable:

To check that the 32-bit version of Windows is properly licensed before the attempted upgrade:

Settings > Update & security > Activation

check-if-windows-is-already-activated-settings-update-and-security-Activation-menus

 

To check whether the hardware is 64-bit capable:

Settings -> System -> About

 

is-hardware-processor-64-bit-capable-windows-screenshot

32-bit Windows on an x64-based processor (the machine supports a 64-bit OS)
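
If you prefer to do the same check from a command prompt instead of clicking through the Settings menus, something like the below should work on Windows 10 (wmic is still shipped there); on a 64-bit capable processor DataWidth should report 64 even when a 32-bit OS is installed – still worth double checking against the About screen:

C:\> wmic os get OSArchitecture

C:\> wmic cpu get DataWidth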

 

windows10-OS-Installation-media-install-tool

Media Creation Tool – the Windows 10 MS installer tool (make sure you select the 64-bit (x64) version instead of the default)

From the installer, I've installed Windows just like I would install a brand new fresh Win OS, and after answering the few trivial installation program questions I landed in the new working OS and proceeded to install the usual software which is a must have on a freshly installed Windows – for some of them check my previous article Essential Must have software to install on Fresh new Windows installation host.

How to move transfer binary files encoded with base64 on Linux with Copy Paste of text ASCII encoded string

Monday, October 25th, 2021

base64-encode-decode-binary-files-to-transfer-between-servers-base64-artistic-logo

If you have to work on servers in protected environments that are accessed via multiple VPNs, jump hosts or Web Citrix, and you have no means to copy binary files to or from your computer because all kinds of FTP / SFTP or other data copy clients are disabled on the remote jump host or CITRIX server side, you might still be looking for a way to copy files between your PC and the remote server side.
Or, for example, if you have 2 or more servers that are in special Demilitarized Network Zones ( DMZ ) and the machines have no SFTP / FTP / web server or other kind of copy protocol service that could be used to copy files between the hosts, and you still need to copy some files between the 2 or more machines in a slow but still functional way, then you might not know of one old school hackers' trick you can employ to complete the copy of files between DMZ-ed Server Host A, let's say with IP address (192.168.50.5) -> Server Host B (192.168.30.7). The way to complete the binary file copy is to encode the binary on Server Host A, then use the cat command to display the encoded string and copy the whole encoded cat command output to your local PC buffer (from where you access the remote side via SSH through CITRIX or the jump host), paste it into a file on Server Host B and decode it there with an encoding tool such as base64 or uuencode. In this article, I'll show how this is done with base64 and uuencode. The base64 binary is pretty standard on most Linux / Unix OS-es today; on most Linux distributions it is part of the coreutils package.
The main use of base64 encoding is to encode non-text attachment files for electronic mail, but for our case it fits perfectly.
Keep in mind that this hack to copy the binary from Machine A to Machine B of course depends on the Copy / Paste buffer being enabled both on the remote jump host or Citrix from where you reach the servers, as well as on your own PC / laptop from where you access the remote side.

base64-character-encoding-string-table

Base64 Encoding and Decoding text strings legend

The file copy process to the highly secured PCI host goes like this:
 

1. On Server Host A, take a checksum of the binary with the md5sum command

[root@serverA ~]:# md5sum -b /tmp/inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

 

As you can see, one good location to keep the file while working on it would be /tmp, as this is a temporary home, or you can alternatively use your HOME dir, but you have to be quite careful not to run out of space wherever you produce the encoded copy 🙂

 

2. Encode the binary file with base64 encoding

 [root@serverA ~]:# base64 -w0 /tmp/inputbinfile-to-encode > /tmp/outputbin-file.base64

The -w0 option is given to disable line wrapping, which you don't really want here, since you will copy-paste the encoded data as one single string.

base64-encoded-binary-file-text-string-linux-screenshot

Base64 Encoded string chunk with line wrapping

For a complete list of possible accepted arguments check here.

3. Cat the just generated outputbin-file.base64 to display the text encoded file in your SecureCRT / Putty / SuperPutty etc. remote SSH access client

[root@serverA ~]:# cat /tmp/outputbin-file.base64
f0VMRgIBAQAAAAAAAAAAAAMAPgABAAAAMGEAAAAAAABAAAAAAAAAACgXAgAAAAAAAAAAA
EAAOAALAEAAHQAcAAYAAAAEAAA ……………………………………………………………… cTD6lC+ViQfUCPn9bs

 

4. Select the cat-ted string and copy it to your PC Copy / Paste buffer


If the binary file is not a few kilobytes but a few megabytes, copying the file might be tricky, as the string produced by the cat command will be really long, so make sure the SSH client you're using is configured with a large enough scrollback buffer to be able to select the whole encoded string up to the end of the cat output and copy it to the Copy / Paste buffer.
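
One way to make the copy / paste of a very large encoded file more manageable is to split the base64 output into smaller chunks on Host A and reassemble them on Host B – a small sketch with the standard split tool, where the chunk size and the .part_ suffix are arbitrary:

[root@serverA ~]:# split -b 100K /tmp/outputbin-file.base64 /tmp/outputbin-file.base64.part_

# paste each outputbin-file.base64.part_* chunk into its own file on Host B, then glue them back together:

[root@serverB ~]:# cat outputbin-file.base64.part_* > outputbin-file.base64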

 

5. On Server Host B, paste the base64 encoded binary inside a newly created file

Open it with a text editor – vim / mc or whatever is available:

[root@serverB ~]:# vi outputbin-file.base64

Some very paranoid Linux / UNIX systems might not even have a normal text editor like 'vi'; if you happen to need to copy files on such a system, a useful thing is to use a simple cat on the remote side to open a new file descriptor buffer, like this:

[root@serverB ~]:# cat >> outputbin-file.base64 <<'EOF'
Paste the string here
EOF

 

6. Decode the encoded binary with base64 cmd again

[root@serverB ~]:# base64 --decode outputbin-file.base64 > inputbinfile-to-encode

 

7. Set proper file permissions (the same as on Host A)

[root@serverB ~]:#  chmod +x inputbinfile-to-encode

 

8. Check again that the binary file checksum on Host B is identical to the one on Host A

[root@serverB ~]:# md5sum -b inputbinfile-to-encode
66c4d7b03ed6df9df5305ae535e40b7d *inputbinfile-to-encode

As you can see, the md5sums match on both sides, so the file should be OK.

 

9. Encoding and decoding files with uuencode


If you are lucky and you have uuencode installed (the sharutils package is present on the remote machine), to encode let's say an archived set of binary files in .tar.gz format do:

Prepare the archive of all the files you want to copy with tar on Host A:

[root@Machine1 ~]:#  tar -czvf archived-binaries-and-configs.tar.gz /bin/whatever /usr/local/bin/htop /usr/local/bin/samhain /etc/hosts

[root@Machine1 ~]:# uuencode archived-binaries-and-configs.tar.gz archived-binaries-and-configs.tar.gz > archived-binaries-and-configs.uu

Cat / Copy / paste the encoded content as usual to a file on Host B:

Then on Machine 2 decode:

[root@Machine2 ~]:# uudecode archived-binaries-and-configs.uu

 

Conclusion


In this short article I've shown you a hack that is often used by script kiddies to copy files between pwn3d machines, a method which however is very precious and useful for sysadmins like me, who have to administer paranoidly secured servers placed in very hard to reach environments.

With the same method you can encode or decode not only binary files but also any standard input/output file content. base64 encoding is also quite a useful thing to use in bash or perl scripts where you want the script to carry a file in plain text format. Data is encoded and decoded to make the transmission and storing process easier. You always have to keep in mind that encoding and decoding are not the same as encryption and decryption – encryption / decryption adds a real security layer on top of the data, while encoded data can be easily revealed just by decoding it. So if you need to copy very sensitive data between the servers, such as SSL certificates or private RSA / DSA keys, this command line utility had better not be used for the sensitive data copying, at least not without an extra encryption step.
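
If you still have to move such sensitive material this way, a sensible precaution is to encrypt the file before encoding it and decrypt it on the other side – a minimal sketch with gpg symmetric encryption (it will interactively ask for a passphrase; the file names are the ones from the example above):

[root@serverA ~]:# gpg -c /tmp/inputbinfile-to-encode
[root@serverA ~]:# base64 -w0 /tmp/inputbinfile-to-encode.gpg > /tmp/outputbin-file.base64

# copy / paste and decode as usual on Host B, then decrypt:

[root@serverB ~]:# base64 --decode outputbin-file.base64 > inputbinfile-to-encode.gpg
[root@serverB ~]:# gpg -o inputbinfile-to-encode -d inputbinfile-to-encode.gpg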

 

 

Install and enable Sysstat IO / Disk / CPU / Network monitoring console suite on Redhat 8.3, Few useful sar command examples

Tuesday, September 28th, 2021

linux-sysstat-monitoring-logo

 

Why monitor CPU, Memory, Hard Disk, Network usage etc. with the sysstat tools?
 

Using system monitoring tools such as Zabbix, Nagios or Monit is a good approach, however sometimes due to Zabbix server interruptions you might not be able to track certain aspects of system performance on time. Thus it is always a good idea to be able to
gain more insight on system performance from the command line. Of course there are cmd tools such as iostat, top, free and vnstat that provide plenty of useful info on system performance issues or bottlenecks. However, from my experience, to have better historical data that is systematized and accessible from the console at all times, it is a great thing to have the sysstat package in place. For many years, on mostly every server I administer, I've been using sysstat to monitor what is going on over short time frames, and I'm quite happy with it. In my current company we're using Redhat and CentOS-es and I had to install sysstat on Redhat 8.3. I've earlier done it multiple times on Debian / Ubuntu Linux, and since on some .deb distributions I've faced complications with making sysstat collect statistics, I've come up with an article on Howto fix sysstat "Cannot open /var/log/sysstat/sa no such file or directory" on Debian / Ubuntu Linux
 

Sysstat contains the following tools related to collecting I/O and CPU statistics:

  • iostat – displays an overview of CPU utilization, along with I/O statistics for one or more disk drives.
  • mpstat – displays more in-depth CPU statistics.

Sysstat also contains tools that collect system resource utilization data and create daily reports based on that data. These tools are:

  • sadc – known as the system activity data collector, sadc collects system resource utilization information and writes it to a file.
  • sar – producing reports from the files created by sadc, sar reports can be generated interactively or written to a file for more intensive analysis.

My experience with installing sysstat on CentOS 7 and Fedora was pretty straightforward, I just had to install it via yum install sysstat, wait for some time and use the sar (System Activity Reporter) tool to report the collected system activity info stats over time.
Unfortunately it seems that on RedHat 8.3, as well as on CentOS 8.XX, installing sysstat does not work out of the box.

To complete a successful installation of it on RHEL 8.3, I had to:

[root@server ~]# yum install -y sysstat


To make sysstat enabled on the system and make it run, I've enabled its service:

[root@server ~]# systemctl enable sysstat


Running the sar command immediately after that, I've faced the shitty error:


"Cannot open /var/log/sysstat/sa18:
No such file or directory. Please check if data collecting is enabled"

 

Once installed, I've waited for about 5 minutes hoping that somehow sysstat would manage it automatically, but it didn't.

To solve it, I've had to additionally create the file /etc/cron.d/sysstat (weirdly, the RPM's post-install scripts do not create it automatically):

[root@server ~]# vim /etc/cron.d/sysstat

# run system activity accounting tool (collect a sample every 60 seconds, 59 times each hour)
0 * * * * root /usr/lib64/sa/sa1 60 59 &
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A &

 

  • /usr/lib64/sa/sa1 is a shell script that we schedule via cron to create the daily binary log file.
  • /usr/lib64/sa/sa2 is a shell script that converts the binary log file to human-readable form.
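
Worth mentioning as a side note – the sysstat RPM on RHEL 8 also ships systemd timer units (sysstat-collect.timer / sysstat-summary.timer) that can do the same job as the cron entries above; whether you prefer the timers or the classic cron file is a matter of taste, just verify the unit names are really present on your system first:

[root@server ~]# systemctl list-unit-files | grep -i sysstat

[root@server ~]# systemctl enable --now sysstat-collect.timer sysstat-summary.timer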

 

[root@server ~]# chmod 600 /etc/cron.d/sysstat

[root@server ~]# systemctl restart sysstat


In a while, if sysstat is working correctly, you should see its data history logs produced inside /var/log/sa:

[root@server ~]# ls -al /var/log/sa 


Note that the standard sysstat history files on Debian and other modern .deb based distros such as Debian 10 (in y. 2021) are stored under /var/log/sysstat

Here are a few useful uses of the sysstat cmds


1. Check the machine's SWAP and RAM memory use history with sysstat


To display, let's say, the first 10 lines of today's SWAP memory use history:

[hipo@server yum.repos.d] $ sar -W | head -n 10
 

Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM  pswpin/s pswpout/s
12:00:01 AM      0.00      0.00
12:01:01 AM      0.00      0.00
12:02:01 AM      0.00      0.00
12:03:01 AM      0.00      0.00
12:04:01 AM      0.00      0.00
12:05:01 AM      0.00      0.00
12:06:01 AM      0.00      0.00

[root@ccnrlb01 ~]# sar -r | tail -n 10
14:00:01        93008   1788832     95.06         0   1357700    725740      9.02    795168    683484        32
14:10:01        78756   1803084     95.81         0   1358780    725740      9.02    827660    652248        16
14:20:01        92844   1788996     95.07         0   1344332    725740      9.02    813912    651620        28
14:30:01        92408   1789432     95.09         0   1344612    725740      9.02    816392    649544        24
14:40:01        91740   1790100     95.12         0   1344876    725740      9.02    816948    649436        36
14:50:01        91688   1790152     95.13         0   1345144    725740      9.02    817136    649448        36
15:00:02        91544   1790296     95.14         0   1345448    725740      9.02    817472    649448        36
15:10:01        91108   1790732     95.16         0   1345724    725740      9.02    817732    649340        36
15:20:01        90844   1790996     95.17         0   1346000    725740      9.02    818016    649332        28
Average:        93473   1788367     95.03         0   1369583    725074      9.02    800965    671266        29

 

2. Check system load – are my processes waiting too long to run on the CPU?

[root@server ~ ]# sar -q |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
12:00:01 AM         0       272      0.00      0.02      0.00         0
12:01:01 AM         1       271      0.00      0.02      0.00         0
12:02:01 AM         0       268      0.00      0.01      0.00         0
12:03:01 AM         0       268      0.00      0.00      0.00         0
12:04:01 AM         1       271      0.00      0.00      0.00         0
12:05:01 AM         1       271      0.00      0.00      0.00         0
12:06:01 AM         1       265      0.00      0.00      0.00         0


3. Show various CPU statistics per CPU use
 

On a multiprocessor, multi-core server, sometimes (e.g. for scripting) it is useful to fetch per-processor usage history data;
this can be attained with:

 

[hipo@server ~ ] $ mpstat -P ALL
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

06:08:38 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
06:08:38 PM  all    0.17    0.02    0.25    0.00    0.05    0.02    0.00    0.00    0.00   99.49
06:08:38 PM    0    0.22    0.02    0.28    0.00    0.06    0.03    0.00    0.00    0.00   99.39
06:08:38 PM    1    0.28    0.02    0.36    0.00    0.08    0.02    0.00    0.00    0.00   99.23
06:08:38 PM    2    0.27    0.02    0.31    0.00    0.06    0.01    0.00    0.00    0.00   99.33
06:08:38 PM    3    0.15    0.02    0.22    0.00    0.03    0.01    0.00    0.00    0.00   99.57
06:08:38 PM    4    0.13    0.02    0.20    0.01    0.03    0.01    0.00    0.00    0.00   99.60
06:08:38 PM    5    0.14    0.02    0.27    0.00    0.04    0.06    0.01    0.00    0.00   99.47
06:08:38 PM    6    0.10    0.02    0.17    0.00    0.04    0.02    0.00    0.00    0.00   99.65
06:08:38 PM    7    0.09    0.02    0.15    0.00    0.02    0.01    0.00    0.00    0.00   99.70


 

sar-sysstat-cpu-statistics-screenshot

Monitor processes and threads currently being managed by the Linux kernel.

[hipo@server ~ ] $ pidstat

pidstat-various-random-process-statistics

[hipo@server ~ ] $ pidstat -d 2


pidstat-show-processes-with-most-io-activities-linux-screenshot

This report tells us that there are a few processes with heavy I/O use – the filesystem journalling daemon jbd2, apache, mysqld and supervise; in the 3rd column you see their respective PIDs.

To show the threads used inside a process (like when you press SHIFT + H inside the Linux top command):

[hipo@server ~ ] $ pidstat -t -p 10765 1 3

Linux 4.19.0-14-amd64 (server)     28.09.2021     _x86_64_    (10 CPU)

21:41:22      UID      TGID       TID    %usr %system  %guest   %wait    %CPU   CPU  Command
21:41:23      108     10765         –    1,98    0,99    0,00    0,00    2,97     1  mysqld
21:41:23      108         –     10765    0,00    0,00    0,00    0,00    0,00     1  |__mysqld
21:41:23      108         –     10768    0,00    0,00    0,00    0,00    0,00     0  |__mysqld
21:41:23      108         –     10771    0,00    0,00    0,00    0,00    0,00     5  |__mysqld
21:41:23      108         –     10784    0,00    0,00    0,00    0,00    0,00     7  |__mysqld
21:41:23      108         –     10785    0,00    0,00    0,00    0,00    0,00     6  |__mysqld
21:41:23      108         –     10786    0,00    0,00    0,00    0,00    0,00     2  |__mysqld

10765 – is the Process ID whose threads you would like to list

With pidstat, you can further monitor processes for memory leaks with:

[hipo@server ~ ] $ pidstat -r 2

 

4. Report paging statistics for some old period

 

[root@server ~ ]# sar -B -f /var/log/sa/sa27 |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/27/2021      _x86_64_        (8 CPU)

15:42:26     LINUX RESTART      (8 CPU)

15:55:30     LINUX RESTART      (8 CPU)

04:00:01 PM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
04:01:01 PM      0.00     14.47    629.17      0.00    502.53      0.00      0.00      0.00      0.00
04:02:01 PM      0.00     13.07    553.75      0.00    419.98      0.00      0.00      0.00      0.00
04:03:01 PM      0.00     11.67    548.13      0.00    411.80      0.00      0.00      0.00      0.00

 

5.  Monitor received (RX) and transmitted (TX) network traffic per network interface in real time
 

To print out received and sent traffic per network interface 4 times in a row:

sar-sysstats-network-traffic-statistics-screenshot
 

[hipo@server ~ ] $ sar -n DEV 1 4


To continuously monitor the I/O traffic of all network interfaces:

[hipo@server ~ ] $ sar -n DEV 1


To only monitor a certain network interface, let's say the loopback interface (127.0.0.1), received / transmitted bytes:

[hipo@server yum.repos.d] $  sar -n DEV 1 2|grep -i lo
06:29:53 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:29:54 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00


6. Monitor block devices use
 

To check block devices use 3 times in a row:
 

[hipo@server yum.repos.d] $ sar -d 1 3


sar-sysstats-blockdevice-statistics-screenshot
 

7. Output server monitoring data in CSV database structured format


To prepare nice graphs with Excel from a CSV structured file, you can dump the collected data like so:

 [root@server yum.repos.d]# sadf -d /var/log/sa/sa27 -- -n DEV | grep -v lo|head -n 10
server-name-fqdn;-1;2021-09-27 13:42:26 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;-1;2021-09-27 13:55:30 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth1;19.42;16.12;1.94;1.68;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth0;7.18;9.65;0.55;0.78;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth2;5.65;5.13;0.42;0.39;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth1;18.90;15.55;1.89;1.60;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth0;7.15;9.63;0.55;0.74;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth2;5.67;5.15;0.42;0.39;0.00;0.00;0.00;0.00

To graph the output data you can use Excel / LibreOffice's Excel equivalent Calc, or, if you need to dump the CSV sar output and generate a graph on the fly from a script, use gnuplot.
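
A minimal sketch of how such an on-the-fly graph can be produced with gnuplot – the file names, the eth0 interface and the column number (7 = rxkB/s in the sadf -d output shown above) are just an illustration:

[root@server ~]# sadf -d /var/log/sa/sa27 -- -n DEV | grep ';eth0;' > /tmp/eth0.csv
[root@server ~]# gnuplot <<'EOF'
set datafile separator ";"
set terminal png size 1024,480
set output "/tmp/eth0-rx.png"
plot "/tmp/eth0.csv" using 0:7 with lines title "eth0 rxkB/s"
EOF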


What have we learned?


How to install and enable sysstat with cron on Redhat and CentOS 8 Linux.
How to continuously monitor CPU / Disk and Network, block devices, paging use, and the processes and threads used by the kernel per process.
As well as how to export previously collected data to CSV to import into a database, or for later use in order to generate a graphical presentation of the data.
Cheers ! 🙂

 

How to redirect TCP port traffic from Internet Public IP host to remote local LAN server, Redirect traffic for Apache Webserver, MySQL, or other TCP service to remote host

Thursday, September 23rd, 2021

 

 

Linux-redirect-forward-tcp-ip-port-traffic-from-internet-to-remote-internet-LAN-IP-server-rinetd-iptables-redir

 

 

1. Use the good old rinetd – internet “redirection server” service


Perhaps many younger people wouldn't remember it, but rinetd's use was pretty common on old Linuxes in the age when iptables was not yet on the scene and its predecessor ipchains was so common.
With the rise of the mass internet rinetd started losing its popularity, because the service was exposed to the outer world, and due to security holes and many exploits circulating in the script kiddie communities
many servers got hacked – "pwned" in the jargon of the script kiddies.

rinetd is still available even in modern Linuxes, and over the last years I have not heard of any severe security concerns regarding it, but the old paranoia and the fact it was set to oblivion perhaps still make it an unpopular solution for port redirects today, in year 2021.
However, for local secured DMZ LANs, I can tell you that its use is mostly fine, and I choose to use it myself every now and then due to its simplicity to configure and use.
rinetd is pretty standard among Unixes and is also available in old SunOS / Solaris and the BSDs and pretty much everything on the Unix scene.

Below is excerpt from 'man rinetd':

 

DESCRIPTION
     rinetd redirects TCP connections from one IP address and port to another. rinetd is a single-process server which handles any number of connections to the address/port pairs
     specified in the file /etc/rinetd.conf. Since rinetd runs as a single process using nonblocking I/O, it is able to redirect a large number of connections without a severe impact
     on the machine. This makes it practical to run TCP services on machines inside an IP masquerading firewall. rinetd does not redirect FTP, because FTP requires more than one socket.
     rinetd is typically launched at boot time, using the following syntax: /usr/sbin/rinetd. The configuration file is found in the file /etc/rinetd.conf, unless another file is specified using the -c command line option.

To use rinetd on any Linux distro you have to install and enable it with apt or yum as usual. For example, on my Debian GNU / Linux home machine, to use it I had to install the .deb package, then enable and start it via systemd:

 

server:~# apt install --yes rinetd

server:~#  systemctl enable rinetd


server:~#  systemctl start rinetd


server:~#  systemctl status rinetd
● rinetd.service
   Loaded: loaded (/etc/init.d/rinetd; generated)
   Active: active (running) since Tue 2021-09-21 10:48:20 EEST; 2 days ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 1 (limit: 4915)
   Memory: 892.0K
   CGroup: /system.slice/rinetd.service
           └─1364 /usr/sbin/rinetd


rinetd does the traffic redirection via a separate daemon process, so in order for it to function, once you have the service up, check that the daemon is up as well:

root@server:/home/hipo# ps -ef|grep -i rinet
root       359     1  0 16:10 ?        00:00:00 /usr/sbin/rinetd
root       824 26430  0 16:10 pts/0    00:00:00 grep -i rinet

+ Configuring a new port redirect with rinetd

 

It is pretty straightforward – everything is handled via one single configuration file, /etc/rinetd.conf.

The format (syntax) of a forwarding rule is as follows:

     [bindaddress] [bindport] [connectaddress] [connectport]


Besides that, rinetd could be used as a primitive firewall substitute for iptables; the general syntax for allowing or denying an IP address uses the (allow, deny) keywords:
 

allow 192.168.2.*
deny 192.168.2.1?


To enable logging to an external file, you'll have to include in the configuration:

# logging information
logfile /var/log/rinetd.log

Here is an example rinetd.conf configuration, redirecting TCP MySQL 3306, nginx on ports 80 / 443, a second web service frontend for ILO reachable via port 8888, and a redirect from the external IP to a local IP SMTP server.

 

#
# this is the configuration file for rinetd, the internet redirection server
#
# you may specify global allow and deny rules here
# only ip addresses are matched, hostnames cannot be specified here
# the wildcards you may use are * and ?
#
# allow 192.168.2.*
# deny 192.168.2.1?


#
# forwarding rules come here
#
# you may specify allow and deny rules after a specific forwarding rule
# to apply to only that forwarding rule
#
# bindadress    bindport  connectaddress  connectport


# logging information
logfile /var/log/rinetd.log
83.228.93.76        80            192.168.0.20       80
192.168.0.2        3306            192.168.0.19        3306
83.228.93.76        443            192.168.0.20       443
# enable for access to ILO
83.228.93.76        8888            192.168.1.25 443

127.0.0.1    25    192.168.0.19    25


83.228.93.76 is my external ( Public ) internet IP address, while 192.168.0.20, 192.168.0.19 and 192.168.1.25 (the DMZ-ed LAN internal IPs) host the various services.

To identify the services for which rinetd is properly configured to redirect / forward traffic, you can check with netstat or the newer ss command:
 

root@server:/home/hipo# netstat -tap|grep -i rinet
tcp        0      0 www.pc-freak.net:8888   0.0.0.0:*               LISTEN      13511/rinetd      
tcp        0      0 www.pc-freak.n:http-alt 0.0.0.0:*               LISTEN      21176/rinetd        
tcp        0      0 www.pc-freak.net:443   0.0.0.0:*               LISTEN      21176/rinetd      
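
The equivalent check with the newer ss tool (part of the iproute2 package) would be something like:

root@server:/home/hipo# ss -tlnp | grep -i rinetd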

 

+ Using rinetd to redirect External interface IP to loopback's port (127.0.0.1)

 

If you have the need to make an externally connectable service out of something that listens only on the loopback – be it apache, mysql / privoxy / squid or whatever – rinetd is perhaps the tool of choice (especially since doing that with iptables alone is not straightforward).

If you want all traffic reaching the machine's external IP 11.5.8.1 on TCP ports 1083 and 1888 to be redirected to the services listening on the loopback interface (localhost) on the same ports, use the below config:

# bindadress    bindport  connectaddress  connectport
11.5.8.1        1083            127.0.0.1       1083
11.5.8.1        1888            127.0.0.1       1888

 

For a quick and dirty solution to redirect traffic rinetd is very useful, however you'll have to keep in mind that if you redirect traffic for tens of thousands of connections constantly originating from the internet, you might end up with some disconnects, as well as notice an increased rinetd CPU use as the number of forwarded connections grows.

 

2. Redirect TCP / IP port using DNAT iptables firewall rules

 

Let's say you have some proxy, webservice or whatever service running on port 5900 to be redirected with iptables.
The easiest legacy way is to simply add the redirection rules to /etc/rc.local. In newer Linuxes rc.local is not enabled by default, so if you decide to use it,
you'll have to enable rc.local first – I've written earlier a short article on how to enable rc.local on newer Debian, Fedora, CentOS

 

# redirect 5900 TCP service 
sysctl -w net.ipv4.conf.all.route_localnet=1
iptables -t nat -I PREROUTING -p tcp --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -I OUTPUT -p tcp -o lo --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 5900 -j DNAT --to-destination 192.168.1.8:5900
iptables -t nat -I OUTPUT --source 0/0 --destination 0/0 -p tcp --dport 5900 -j REDIRECT --to-ports 5900

 

Here is another example (two rules) which redirects port 2208 (where a bind listener for SSH is configured on the internal host 192.168.0.209:2208) from the external internet IP address (XXX.YYY.ZZZ.XYZ):
 

# Port redirect for SSH to VM on openxen internal Local lan server 192.168.0.209 
-A PREROUTING  -p tcp --dport 2208 -j DNAT --to-destination 192.168.0.209:2208
-A POSTROUTING -p tcp --dst 192.168.0.209 --dport 2208 -j SNAT --to-source 83.228.93.76
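
Keep in mind that for DNAT rules forwarding towards another host (like the 192.168.0.209 example above) to actually pass traffic, IP forwarding has to be enabled in the kernel – a small sketch; the sysctl key is standard, while the exact file you persist it in may differ per distro:

# enable IPv4 forwarding at runtime
sysctl -w net.ipv4.ip_forward=1
# make the setting persistent across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-ip-forward.conf
sysctl --system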

 

3. Redirect TCP traffic connections with redir tool

 

If you look for an easy, straightforward way to redirect TCP traffic, installing and using redir (a ready compiled program) might be a good idea.


root@server:~# apt-cache show redir|grep -i desc -A5 -B5
Version: 3.2-1
Installed-Size: 60
Maintainer: Lucas Kanashiro <kanashiro@debian.org>
Architecture: amd64
Depends: libc6 (>= 2.15)
Description-en: Redirect TCP connections
 It can run under inetd or stand alone (in which case it handles multiple
 connections).  It is 8 bit clean, not limited to line mode, is small and
 light. Supports transparency, FTP redirects, http proxying, NAT and bandwidth
 limiting.
 .
 redir is all you need to redirect traffic across firewalls that authenticate
 based on an IP address etc. No need for the firewall toolkit. The
 functionality of inetd/tcpd and "redir" will allow you to do everything you
 need without screwy telnet/ftp etc gateways. (I assume you are running IP
 Masquerading of course.)

Description-md5: 2089a3403d126a5a0bcf29b22b68406d
Homepage: https://github.com/troglobit/redir
Tag: interface::daemon, network::server, network::service, role::program,
 use::proxying
Section: net
Priority: optional

 

 

server:~# apt-get install --yes redir

Here is a short description taken from its man page 'man redir'

 

DESCRIPTION
     redir redirects TCP connections coming in on a local port, [SRC]:PORT, to a specified address/port combination, [DST]:PORT.  Both the SRC and DST arguments can be left out,
     redir will then use 0.0.0.0.

     redir can be run either from inetd or as a standalone daemon.  In --inetd mode the listening SRC:PORT combo is handled by another process, usually inetd, and a connected
     socket is handed over to redir via stdin.  Hence only [DST]:PORT is required in --inetd mode.  In standalone mode redir can run either in the foreground, -n, or in the
     background, detached like a proper UNIX daemon.  This is the default.  When running in the foreground log messages are also printed to stderr, unless the -s flag is given.

     Depending on how redir was compiled, not all options may be available.

 

+ Use redir to redirect TCP traffic one time

 

Let's say you have a MySQL server running on a remote machine with some internal or external IP address, let's say 192.168.0.200, and on the machine 192.168.0.50, where you run your Apache webserver, you want Apache to be configured to use
MySQL as if it were on localhost, TCP port 3306.

Assuming there are no firewall restrictions between Host A (192.168.0.50) and Host B (192.168.0.200), and connectivity on TCP/IP port 3306 between the two machines is already permitted:

To open redirection from localhost on 192.168.0.50 -> 192.168.0.200:

 

server:~# redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306

 

If you need other third party hosts to additionally reach 192.168.0.200 via 192.168.0.50 on TCP 3306:

root@server:~# redir --laddr=192.168.0.50 --lport=3306 --caddr=192.168.0.200 --cport=3306


Of course, once you close the console ( the /dev/tty or /dev/pts pseudo-terminal ) from which redir was started, the connection redirect will be cancelled.

 

+ Making TCP port forwarding from Host A to Host B permanent


One solution to make the redir setup rules permanent is to use the --rinetd option or simply background the process; nevertheless, I prefer to use GNU Screen instead.
If you don't know it, screen is a Virtual Console Emulation manager with VT100/ANSI terminal emulation; if you don't have screen present on the host, install it with whatever Linux OS package manager is available and run:

 

root@server:~# screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306'

 

That would run it inside a screen session and detach it, so you can later connect to it; if you want, you can make redir also log connections via syslog with the ( -s ) option.

I also found it useful to be able to track in real time what's going on with the opened redirect socket by changing the redir log level.

The accepted log levels are:

 

  -l, --loglevel=LEVEL
             Set log level: none, err, notice, info, debug.  Default is notice.

 

root@server:/ # screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3308 --caddr=192.168.0.200 --cport=3306 -l debug'

 

To test that connectivity works as expected, use telnet:
 

root@server:/ # telnet localhost 3308
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
g
5.5.5-10.3.29-MariaDB-0+deb10u1-log�+c2nWG>B���o+#ly=bT^]79mysql_native_password

6#HY000Proxy header is not accepted from 192.168.0.19 Connection closed by foreign host.

Once you attach to the screen session with

 

root@server:/home #  screen -r

 

you will get the connectivity attempt from localhost logged:
 

redir[10640]: listening on 127.0.0.1:3306
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10793]: peer IP is 127.0.0.1
redir[10793]: peer socket is 25592
redir[10793]: target IP address is 192.168.0.200
redir[10793]: target port is 3306
redir[10793]: Connecting 127.0.0.1:25592 to 127.0.0.1:3306
redir[10793]: Entering copyloop() – timeout is 0
redir[10793]: Disconnect after 1 sec, 165 bytes in, 4 bytes out

The downsides of using redir are that the redirection is handled by a separate process which hangs in the process list all the time, and that the redirection speed for incoming connections might be at least about 30% slower compared to a software (firewall) redirect done with plain iptables, possibly combined with kernel IP sets ( ipset ). If you hear of ipset for the first time and wonder what it is, below is the short package description.

 

root@server:/root# apt-cache show ipset|grep -i description -A13 -B5
Maintainer: Debian Netfilter Packaging Team <pkg-netfilter-team@lists.alioth.debian.org>
Architecture: amd64
Provides: ipset-6.38
Depends: iptables, libc6 (>= 2.4), libipset11 (>= 6.38-1~)
Breaks: xtables-addons-common (<< 1.41~)
Description-en: administration tool for kernel IP sets
 IP sets are a framework inside the Linux 2.4.x and 2.6.x kernel which can be
 administered by the ipset(8) utility. Depending on the type, currently an
 IP set may store IP addresses, (TCP/UDP) port numbers or IP addresses with
 MAC addresses in a  way which ensures lightning speed when matching an
 entry against a set.
 .
 If you want to
 .
  * store multiple IP addresses or port numbers and match against the
    entire collection using a single iptables rule.
  * dynamically update iptables rules against IP addresses or ports without
    performance penalty.
  * express complex IP address and ports based rulesets with a single
    iptables rule and benefit from the speed of IP sets.

 .
 then IP sets may be the proper tool for you.
Description-md5: d87e199641d9d6fbb0e52a65cf412bde
Homepage: http://ipset.netfilter.org/
Tag: implemented-in::c, role::program
Section: net
Priority: optional
Filename: pool/main/i/ipset/ipset_6.38-1.2_amd64.deb
Size: 50684
MD5sum: 095760c5db23552a9ae180bd58bc8efb
SHA256: 2e2d1c3d494fe32755324bf040ffcb614cf180327736c22168b4ddf51d462522

How to create SD Card DATA dump image to .ISO with dd and mount it with imdisk from command line on Windows CygWin with MobaXterm

Saturday, September 18th, 2021

dd-command-logo
I'm forced to use Windows every now and then to do some ordinary things which I usually do on Linux, such as dumping the content of my Android phone's SD Card (SanDisk, Kingston etc.) to an .ISO image etc.

On Linux, creating and mounting a data copy of a whole SD Card is a relatively simple thing, and there are plenty of ways to do it, such as using dd ( a command-line utility for Unix and Unix-like operating systems whose primary purpose is to convert and copy files, as said in the command manual – e.g. 'man dd' ). On a Microsoft Windows environment perhaps one of the easiest ways is to use WinCDEmu (which is free under the LGPL License).
WinCDEmu is capable of doing plenty of things such as:
 

  • One-click mounting of ISO, CUE, NRG, MDS/MDF, CCD, IMG images.

  • Supports unlimited amount of virtual drives.

  • Runs on 32-bit and 64-bit Windows versions from XP to Windows 10.

  • Allows creating ISO images through a context menu in Explorer.

  • Small installer size – less than 2MB!

  • Has a portable version

WinCDEmu is a nice piece of software that perhaps every Windows power user can enjoy, plus it has a nice graphical frontend:

wincdemu-graphical-create-iso-and-mount-so-ms-windows-software

But what if you're a console geek like me and you end up forced to use Windows on your work PC, and you still need to create an .iso dump of your mobile SD Card or an externally attached hard drive, without the graphical mambo jumbo, in the old fashioned way with dd?

Luckily, Windows advanced command line users can massively benefit from Cygwin + MobaXterm (if you don't know or haven't used MobaXterm and you still use things like Putty / SuperPutty or SecureCRT – perhaps you can reconsider and make your sysadmin life easier with MobaXterm, a gnome-terminal like tabbed SSH client alternative for Windows).

Once you have MobaXterm + Cygwin, you have dd installed on the Windows host as part of the busybox minimal environment, and you can use it in the same manner as you're used to in a Linux environment.

sdcard-sandisk-drive-my-computer-windows-screenshot
 

1. Using dd to copy files on Linux / UNIX OS with a dialog status bar

To use dd the usual syntax on Linux / BSD / Unix is:
 

dd if=/dev/dev-name_ID of=/path/to/directory/dump/location.iso bs=2048

 

As a 2048 byte block size (bs) is quite a low value for modern hardware, this byte size is usually increased to some MBs ( Megabytes ).

For example, if the reading carrier is a Solid State Drive (SSD) supporting 100 MB/s and the output is a 32 GB Kingston Plus+ SD Card whose write speed is up to 50 ~ 100 MB/s, you can use a cmd like:

dd if=/dev/dev-name_ID of=/path/to/directory/dump/location.iso bs=100M


If you need to have progress output for the dd copy (in case you copy some large 128 GB or 256 GB SD Card, or a full copy of a really big hard drive partition of, let's say, 8 Terabytes of data), dialog and pv come in quite handy.

To use them install them first:

# apt-get install --yes pv dialog


Next to have a beautiful ncurses dialog box with the status (very useful if you're shell scripting), use:
 

(pv -n /dev/sda | dd of=/dev/sdb bs=128M conv=notrunc,noerror) 2>&1 | dialog --gauge "Running dd command (cloning), please wait…" 10 70 0

pv-dialog-dd-command-ncurses-status-screenshot-gnu-linux
 

2. Listing the available source drives ( /dev/sda, /dev/sdb1 … etc. disk locations ) on Windows 7 / 10 / 11 OS

[User.T420-89] ➤ for F in /dev/s* ; do echo "$F    $(cygpath -w $F)" ; done

check-drives-loop-on-cygwin-to-be-used-later-with-dd-copy-iso-creating-image

Check drives device naming on Windows PC – Screenshot extract from MobaXterm

As you can see, the drive we've seen in Windows Explorer as drive E: is, according to the above bash for loop, located and readable from CygWin / MobaXterm at /dev/sdb1.


3. Create an .iso image file on Windows OS with the dd command
 

To create a full data copy dump to .iso (image file) with dd on Windows, I had to run:

[User.T420-89] ➤ dd if=/dev/sdb1 of=sdcard-blu-r1-hd-sdcard-backup_10092021a.img bs=100M

75+1 records in
75+1 records out
7944011776 bytes (7.4GB) copied, 391.794316 seconds, 19.3MB/s


dd-copy-drive-data-screenshot-100mb-bitesize-windows-mobaxterm
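
As an optional sanity check, you can compare the checksum of the source device and the freshly written image – they should match as long as nothing has written to the card in the meantime (this re-reads the whole card, so it takes a while):

[User.T420-89] ➤ md5sum /dev/sdb1 sdcard-blu-r1-hd-sdcard-backup_10092021a.img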


4. Mount the newly created dd image with imdisk

In order to test that the image is properly created, you can attempt to mount it. From the command line on Linux mounting it is quite easy and comes down to mounting the just created .img file as a loopback (loop) device, like so:

# mount -o loop file.iso /mnt/dir

Unfortunately Cygwin's and MobaXterm's embedded mount command on Win OS does not support the loopback device, so to have it you either have to install and use some additional program such as the above-mentioned WinCDEmu, or, if you prefer to do it fully from the command line and further on automate the process of creating image dumps of attached drives from multiple computers (let's say belonging to a Windows Active Directory domain), you might install and use something like:


imdisk 

imdisk-gui-interface-ms-windows-screenshot

The imdisk handy tool is created by Olof Lagerkvist. It is free and open-source software which will let you mount image files of hard drives, CD-ROMs or floppies, and create one or several ramdisks with various parameters, either from a command line or via its graphical interface.

To use imdisk, download it from its home page on sourceforge, extract and install it pretty much as any other software; it has both a 32-bit version, as a legacy for old computers, as well as a 64-bit exe installer.
The general command line use of it follows a cmd syntax like:

  • Mounting .iso image files from the command line on a Windows host with imdisk


[User.T420-89] ➤ ImDisk.exe -a -f "sdcard-blu-r1-hd-sdcard-backup_10092021.img" -m #:

Where:
 

  • #: – is the actual drive you would like to mount to.
     
  • -a option stands for attach to, it will configure and attach a virtual disk with the parameters specified and attach it to the system.
     
  • -f – is self explanatory, provides the iso image file naming 

If you want to attach the newly created image to, let's say, a newly mapped Windows drive L:\

ImDisk.exe -a -f "sdcard-blu-r1-hd-sdcard-backup_10092021.img" -m l:

  • Unmount mounted .img image with imdisk from cmd line

[User.T420-89] ➤ imdisk.exe -l
\Device\ImDisk0

[User.T420-89] ➤ imdisk.exe -D -m l:
Notifying applications…
Flushing file buffers…
Locking volume…
Failed, forcing dismount…
Removing device…
Removing mountpoint…
Done.

imdisk-detach-attached-drive-mobaxterm-windows-screenshot

 

What have we learned?

What we have learned in this article is how to use MobaXterm's embedded dd (Data Convert and Copy) command to prepare full image backups of an SD card or external drives on Windows OS. A few alternative ways were also mentioned, such as using WinCDEmu – a free open source alternative to the DaemonTools program – to create / mount or convert the image for the GUI lovers. Also, for hard core sysadmins like me, it was shown how to list drive devices attached to the Win PC ( /dev/sda, /dev/sdb etc. ) and how to copy partition data with dd just like one would do on Linux OS. Finally, to test the created image, I've shown you how to use the imdisk free software tool to attach and detach an image to a mapped local Windows drive.

Hope this article taught you something new.

Fix Out of inodes on Postfix Linux Mail Cluster. How to clean up filesystem running out of Inodes, Filesystem inodes on partition is 100% full

Wednesday, August 25th, 2021

Inode_Entry_inode-table-content

Recently we have faced a strange issue with one of our clustered Postfix mail servers (the cluster has 2 nodes, each with a configured Postfix daemon mail server, running in an OpenVZ virtualized environment, plus
a heartbeat that checks the liveability of the cluster and switches nodes in case one of the two gets broken for some reason) – pretty much a standard SMTP cluster.

So far so good, but since the cluster is kind of abandoned, is pretty much legacy nowadays and is used just for some monitoring emails from different scripts and systems on servers, it was not really checked thoroughly for years, and logically, all of a sudden the alarming email content sent via the cluster stopped working.

The normal sysadmin job here was to analyze what is going on with the cluster and fix it ASAP. After some very basic analysis we caught the problem: it was caused by an "inodes full" condition (100% of the available inodes were occupied), e.g. the file system ran out of inodes on both machines, perhaps due to a pengine heartbeat process bug leading to the production of a huge number of .bz2 pengine recovery archive files stored in /var/lib/pengine

Below are the few steps taken to analyze and fix the problem.
 

1. Finding out about the the system run out of inodes problem


After logging on to the system and not immediately finding anything wrong with the inodes, all I could see from crm_mon was that the cluster was broken.
Plenty of emails were left inside the postfix mail queue, visible with the standard command:

[root@smtp1: ~ ]# postqueue -p

It took me a while to find out that the problem is with inodes, because a simple df -h was showing the systems have enough space, but the cluster quorum was still not complete.
A bit of further investigation led me to a simple df -i, which reported that the inodes on the local filesystems of both our SMTP1 and SMTP2 were all occupied.

[root@smtp1: ~ ]# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/simfs            500000   500000  0   100% /
none                   65536      61   65475    1% /dev

As you can see, the number of inodes on the Virtual Machine is unfortunately depleted.

The next step was to check which directories occupy most of the inodes, as this is the place from which files could be temporarily moved to a remote server filesystem, or moved to another partition with free space on locally attached server drives.
The below command gives an ordered list of the directories under the root filesystem / together with their respective number of files / occupied inodes –
the more files under a directory, the more inodes are occupied by the files on the filesystem.

 

run-out-if-inodes-what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram.gif
1.1 Getting which directory consumes most of the inodes on the systems

 

[root@smtp1: ~ ]# { find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n; } 2>/dev/null
….
…..

…….
    586 /usr/lib64/python2.4
    664 /usr/lib64
    671 /usr/share/man/man8
    860 /usr/bin
   1006 /usr/share/man/man1
   1124 /usr/share/man/man3p
   1246 /var/lib/Pegasus/prev_repository_2009-03-10-1236698426.308128000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2009-05-18-1242636104.524113000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2009-11-06-1257494054.380244000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2010-08-04-1280907760.750543000.rpmsave/root#cimv2/classes
   1381 /var/lib/Pegasus/prev_repository_2010-11-15-1289811714.398469000.rpmsave/root#cimv2/classes
   1381 /var/lib/Pegasus/prev_repository_2012-03-19-1332151633.572875000.rpmsave/root#cimv2/classes
   1398 /var/lib/Pegasus/repository/root#cimv2/classes
   1696 /usr/share/man/man3
   400816 /var/lib/pengine

Note that the above command orders the directories from the least to the most files, and obviously the bottleneck directory that is over-eating filesystem inodes with an exceeding amount of files is
/var/lib/pengine
 

2. Back up the old multitude of files, just in case something goes wrong with the cluster after some files are wiped out


The next logical step of course is to check what is going on inside /var/lib/pengine, just to find that a very, very large amount of pe-input-*NUMBER*.bz2 files was suddenly produced.

 

[root@smtp1: ~ ]# ls -1 pe-input*.bz2 | wc -l
 400816


The files are produced by the pengine process, which is one of the processes controlling the heartbeat cluster state; presumably it is done by the running process:

[root@smtp1: ~ ]# ps -ef|grep -i pengine
24        5649  5521  0 Aug10 ?        00:00:26 /usr/lib64/heartbeat/pengine


Hence, in order to fix the issue and to prevent inconsistencies in the cluster due to the file deletion, I copied the whole directory to another mounted partition (you can mount one remotely with sshfs for example, or use a locally attached one if you have it):

[root@smtp1: ~ ]# cp -rpf /var/lib/pengine /mnt/attached_storage


and proceeded to clean up the multitude of old files that are older than 2 years (720 days):


3. Clean  up /var/lib/pengine files that are older than two years with short loop and find command

 


First I made a list of all the files to be removed in an external text file and quickly reviewed it by less-ing it, like so:

[root@smtp1: ~ ]#  cd /var/lib/pengine
[root@smtp1: ~ ]# find . -type f -mtime +720 | grep -v pe-error.last | grep -v pe-input.last | grep -v pe-warn.last > /home/myuser/pengine_older_than_720days.txt
[root@smtp1: ~ ]# less /home/myuser/pengine_older_than_720days.txt


Once the list was reviewed, I've used the below loop – first with echo only, to double check the selection, and then with the echo replaced by rm -f – to delete all files older than 2 years except pe-error.last / pe-input.last / pe-warn.last, which might be needed for proper cluster operation.

[root@smtp1: ~ ]#  for i in $(find . -type f -mtime +720 -exec echo '{}' \;|grep -v pe-error.last | grep -v pe-input.last |grep -v pe-warn.last); do echo $i; done


Another approach to the situation is to simply review all the files inside /var/lib/pengine and delete files based on their year of creation; for example, to delete all files in /var/lib/pengine from 2010, you can run something like:
 

[root@smtp1: ~ ]# for i in $(ls -al|grep -i ' 2010 ' | awk '{ print $9 }' |grep -v 'pe-warn.last'); do rm -f $i; done
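
As a side note, when a directory holds hundreds of thousands of files, letting find itself do the removal avoids spawning a shell loop over a huge argument list – a minimal sketch with the same 720-day criteria and the same excluded .last files as above:

[root@smtp1: ~ ]# cd /var/lib/pengine
# delete everything older than 720 days except the .last state files, using find's built-in -delete
[root@smtp1: ~ ]# find . -type f -mtime +720 ! -name 'pe-error.last' ! -name 'pe-input.last' ! -name 'pe-warn.last' -delete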


4. Monitor the inodes being freed in real time

While doing the clearance of the old unnecessary pengine heartbeat archives, you can open another ssh console to the server and watch how the inodes get freed up with a command like:

 

# check if inodes is not being rapidly decreased

[root@csmtp1: ~ ]# watch 'df -i'


5. Restart basic Linux services producing pid files, logs etc. to make them workable again (some services might not notice that the inodes on the hard drive have been freed up)

Because the filesystem on the system was full, some services started misbehaving and /var/log logging was impacted, so I had to also restart them – in our case this is the heartbeat itself,
which checks the cluster nodes' availability, as well as the logging daemon service rsyslog.

 

# restart rsyslog and heartbeat services
[root@csmtp1: ~ ]# /etc/init.d/heartbeat restart
[root@csmtp1: ~ ]# /etc/init.d/rsyslog restart

The systems also ran a legacy data integrity service, samhain, so I had to restart this service as well to force the /var/log/samhain log file to continuously start writing data to the HDD again.

# Restart samhain service init script 
[root@csmtp1: ~ ]# /etc/init.d/samhain restart


6. Check that enough inodes are freed up with df

[root@smtp1 log]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/simfs 500000 410531 19469 91% /
none 65536 61 65475 1% /dev


I had to repeat the same process on the second Postfix cluster node smtp2: after all the steps below, check the status of the smtp2 node and the postfix queue. Following the same procedure brought the second smtp2 cluster member back as expected 🙂

 

7. Check the cluster node quorum is complete, i.e. the postfix cluster is operating normally

 

# Test if email cluster is ok with pacemaker resource cluster manager – lt-crm_mon
 

[root@csmtp1: ~ ]# crm_mon -1
============
Last updated: Tue Aug 10 18:10:48 2021
Stack: Heartbeat
Current DC: smtp2.fqdn.com (bfb3d029-89a8-41f6-a9f0-52d377cacd83) – partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============

Online: [ smtp2.fqdn.com smtp1.fqdn.com ]

failover-ip (ocf::heartbeat:IPaddr2): Started csmtp1.ikossvan.de
Clone Set: postfix_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]
Clone Set: pingd_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]
Clone Set: mailto_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]

 

8. Force resending of a few hundred thousand emails left in the email queue


After some inodes got freed up due to the file deletion, I forced the queued mails a couple of times to be immediately resent to their remote mail destinations with the command:

 

# force emails in queue to be resend with postfix

[root@smtp1: ~ ]# sendmail -q
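The Postfix-native equivalent, which also flushes the deferred queue (sendmail here is just the Postfix compatibility wrapper), is:

[root@smtp1: ~ ]# postqueue -f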


It was useful to watch in real time how the email queue quickly decreases (queued mails being successfully sent to their destination addresses) with:

 

# Monitor the decreasing size of the email queue
[root@smtp1: ~ ]# watch "postqueue -p | grep -i '@' | wc -l"

How to yum Install Gnome GUI, Latest Guest Addition Tools, Google Chrome latest version and rdesktop / xfreerdp / remmina remote RDP VNC clients On CentOS 7 / 8

Thursday, July 29th, 2021

centos7-logo

I've just reinstalled my CentOS 7 Virtual Machine, since my attempt to migrate the .vdi VirtualBox image to the new company laptop via a copy of the VirtualBox VM through Microsoft OneDrive was a failure.
Thus I had to rebuild from scratch all the CentOS Linux programs preinstalled on the old Virtual Machine. I use this virtual machine for very simple tasks, so basically the most important tools I use are plain SSH, VNC and Remote Desktop clients, just to be able to remotely connect to my remote home based server.


1. Install GNOME Graphical Environment from command line on CentOS 7 with yum and configure it to start the GUI on next OS boot


I've used a minimal CentOS installation ISO (CentOS-7-x86_64-DVD-1908.iso), which brings up the OS in text mode only, as CentOS is usually rolled out on servers where admins often do not want a GUI at all. My case is different, since I do like to use a Graphical Environment, as I use my CentOS for all kinds of testing that can later be applied to production machines that don't have a GUI. Hence, to install GNOME on CentOS run the below commands:
 

[root@centos ~ ]# yum group list
Loaded plugins: fastestmirror
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Loading mirror speeds from cached hostfile
Available Environment Groups:
 Minimal Install
 Compute Node
 Infrastructure Server
 File and Print Server
 Basic Web Server
 Virtualization Host
 Server with GUI
 GNOME Desktop
 KDE Plasma Workspaces
 Development and Creative Workstation
Available Groups:
 Compatibility Libraries
 Console Internet Tools
 Development Tools
 Graphical Administration Tools
 Legacy UNIX Compatibility
 Scientific Support
 Security Tools
 Smart Card Support
 System Administration Tools
 System Management
Done


[root@centos ~ ]# yum groupinstall "GNOME Desktop" "Graphical Administration Tools" -y


Enable the GUI to automatically start on CentOS VM boot. In systemd this is configured with "targets"; the well known classical runlevels (/etc/inittab) are obsolete in newer Linux distros:

[root@centos ~ ]# ln -sf /lib/systemd/system/runlevel5.target /etc/systemd/system/default.target
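On systemd based distributions the same can also be achieved with systemctl, which avoids creating the symlink by hand (graphical.target is the systemd equivalent of the old runlevel 5):

[root@centos ~ ]# systemctl set-default graphical.target
[root@centos ~ ]# systemctl get-default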


2. Install Guest Additions Tools on CentOS


The most basic thing to do once I had CentOS Linux release 7.7.1908 (Core) rolled out on VirtualBox is of course to enable the Guest Additions Tools.

First I had to install the Guest Additions Tools to allow clipboard copy-paste between the Host Machine (Windows 10) and the Guest Machine.
To do so I had to:

[root@centos ~ ]# yum install kernel-headers.x86_64 -y

[root@centos ~ ]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

[root@centos ~ ]#  yum install perl gcc dkms kernel-devel kernel-headers make bzip2

To check that the kernel headers required by the VBoxLinuxAdditions.run script are in place:

[root@centos ~ ]# ls -l /usr/src/kernels/$(uname -r)


You should get a list of kernel header files

Then, once I've done Insert Guest Additions CD Image from the VirtualBox VM upper menu, I had to mount the CD and run the guest additions installer script:
 

[root@centos ~ ]# mkdir /mnt/cdrom
[root@centos ~ ]# mount /dev/cdrom /mnt/cdrom
[root@centos ~ ]# sh /mnt/cdrom/VBoxLinuxAdditions.run

After rebooting the Virtual Machine, I tested the full screen functionality and immediately configured Shared Clipboard and Drag'n'Drop both to Bidirectional, as well as configured a Shared Folder exposing my Windows Desktop under /mnt/shared_folder (read-write), as I usually do, to be able to easily copy files between the VM and Windows.
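A quick sanity check that the Guest Additions kernel modules actually got built and loaded after the reboot (module names as shipped by VirtualBox):

[root@centos ~ ]# lsmod | grep -i vbox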

3. Install Google Chrome on the CentOS Virtual Machine with yum
 

Next I've installed the Chrome browser, which was pretty trivial: it comes down to fetching the latest Chrome binary RPM for your architecture, which is usually at the URL below:

[root@centos ~ ]# wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm

and installing Google Chrome as superuser with the command:

[root@centos ~ ]# yum install ./google-chrome-stable_current_*.rpm -y

 

Loaded plugins: fastestmirror, langpacks
Examining ./google-chrome-stable_current_x86_64.rpm: google-chrome-stable-92.0.4515.107-1.x86_64
Marking ./google-chrome-stable_current_x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package google-chrome-stable.x86_64 0:92.0.4515.107-1 will be installed
--> Processing Dependency: liberation-fonts for package: google-chrome-stable-92.0.4515.107-1.x86_64
Loading mirror speeds from cached hostfile
 * base: mirror.digitalnova.at
 * epel: fedora.ipacct.com
 * extras: mirror.digitalnova.at
 * updates: mirror.digitalnova.at
--> Processing Dependency: libvulkan.so.1()(64bit) for package: google-chrome-stable-92.0.4515.107-1.x86_64
--> Running transaction check
---> Package liberation-fonts.noarch 1:1.07.2-16.el7 will be installed
--> Processing Dependency: liberation-narrow-fonts = 1:1.07.2-16.el7 for package: 1:liberation-fonts-1.07.2-16.el7.noarch
---> Package vulkan.x86_64 0:1.1.97.0-1.el7 will be installed
--> Processing Dependency: vulkan-filesystem = 1.1.97.0-1.el7 for package: vulkan-1.1.97.0-1.el7.x86_64
--> Running transaction check
---> Package liberation-narrow-fonts.noarch 1:1.07.2-16.el7 will be installed
---> Package vulkan-filesystem.noarch 0:1.1.97.0-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package                 Arch   Version         Repository                 Size
================================================================================
Installing:
 google-chrome-stable    x86_64 92.0.4515.107-1 /google-chrome-stable_current_x86_64
                                                                          259 M
Installing for dependencies:
 liberation-fonts        noarch 1:1.07.2-16.el7 base                       13 k
 liberation-narrow-fonts noarch 1:1.07.2-16.el7 base                      202 k
 vulkan                  x86_64 1.1.97.0-1.el7  base                      3.6 M
 vulkan-filesystem       noarch 1.1.97.0-1.el7  base                      6.3 k

Transaction Summary
================================================================================
Install  1 Package (+4 Dependent packages)

Total size: 263 M
Total download size: 3.8 M
Installed size: 281 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): liberation-fonts-1.07.2-16.el7.noarch.rpm           |  13 kB   00:00     
(2/4): liberation-narrow-fonts-1.07.2-16.el7.noarch.rpm    | 202 kB   00:00     
(3/4): vulkan-filesystem-1.1.97.0-1.el7.noarch.rpm         | 6.3 kB   00:00     
(4/4): vulkan-1.1.97.0-1.el7.x86_64.rpm                    | 3.6 MB   00:00     
——————————————————————————–
Total                                              3.0 MB/s | 3.8 MB  00:01     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : vulkan-filesystem-1.1.97.0-1.el7.noarch                      1/5 
  Installing : vulkan-1.1.97.0-1.el7.x86_64                                 2/5 
  Installing : 1:liberation-narrow-fonts-1.07.2-16.el7.noarch               3/5 
  Installing : 1:liberation-fonts-1.07.2-16.el7.noarch                      4/5 
  Installing : google-chrome-stable-92.0.4515.107-1.x86_64                  5/5 
  Verifying  : vulkan-1.1.97.0-1.el7.x86_64                                 1/5 
  Verifying  : 1:liberation-narrow-fonts-1.07.2-16.el7.noarch               2/5 
  Verifying  : 1:liberation-fonts-1.07.2-16.el7.noarch                      3/5 
  Verifying  : google-chrome-stable-92.0.4515.107-1.x86_64                  4/5 
  Verifying  : vulkan-filesystem-1.1.97.0-1.el7.noarch                      5/5 

Installed:
  google-chrome-stable.x86_64 0:92.0.4515.107-1                                 

Dependency Installed:
  liberation-fonts.noarch 1:1.07.2-16.el7                                       
  liberation-narrow-fonts.noarch 1:1.07.2-16.el7                                
  vulkan.x86_64 0:1.1.97.0-1.el7                                                
  vulkan-filesystem.noarch 0:1.1.97.0-1.el7             
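To quickly confirm the browser got installed correctly, you can query the RPM database and the binary version:

[root@centos ~ ]# rpm -q google-chrome-stable
[root@centos ~ ]# google-chrome --version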


4. Install usable Windows VNC and remote desktop (RDP Client) for CentOS Linux


There are plenty of clients to choose from if you need an RDP client for Linux, but perhaps the most useful ones I usually use are remmina, rdesktop and freerdp. Usually I use remmina on Debian Linux, but under the VM somehow I was not able to make remmina work in full screen mode while connected to the remote Windows 7 VPS server, thus I've first tried xfreerdp (which comes from the default CentOS repositories and is an open source RDP implementation originally forked from the rdesktop codebase).
 

[root@centos ~ ]$ sudo yum -y install freerdp


The basic use is:

[hipo@centos ~ ]$ xfreerdp --toggle-fullscreen <remote-server-address>
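Note that the option style above is the old FreeRDP 1.x syntax; if the xfreerdp shipped with your CentOS release is version 2.x, the newer slash-style options are used instead (the host address is a placeholder):

[hipo@centos ~ ]$ xfreerdp /v:<remote-server-address> /f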


Unfortunately I did not succeed in making xfreerdp show the remote desktop in full screen mode, so I had to use the additional nux-dextop repository to get rdesktop at my disposal.

To install it I had to run:

[root@centos ~ ]# rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
[root@centos ~ ]# rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
[root@centos ~ ]# yum install rdesktop

To connect to the remote RDP host in Fullscreen with rdesktop :
 

rdesktop -f <remote-server-address>
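If full screen is not needed, rdesktop can also be started windowed with an explicit resolution and user name (the Administrator user and geometry below are just example values):

rdesktop -u Administrator -g 1600x900 <remote-server-address>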

windows-7-remote-desktop-screenshot-connected-with-rdesktop

Note: telnet is not installed by default, even though it is very handy for quick remote port checks; it is included in the admin tools installed in step 6 below.

5. Install GNU Image Manipulation Program for better screnshotting and Graphic edits


I usually do install GIMP (GNU Image Manipulation Program), since this is my favourite tool to take screenshots on Linux as well as to do minor graphic edits whenever necessary. I warmly recommend GIMP to anyone. If you plan to work a lot with Linux daily, sooner or later some skills with the program will be of major use, even for the most advanced sysadmin 🙂

[root@centos ~ ]# yum install -y gimp

 

6. Install useful administration tools for daily sysadmin work – telnet, nmap, iftop, htop, iotop, iptraf-ng, tcpdump

 

Having basic analysis tools for remote port testing, DNS resolving, connection tracking and CPU / memory statistics is what I find most useful.

[root@centos .ssh]# yum install telnet nmap iftop htop vnstat sysstat iptraf-ng bind-utils -y
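A few quick usage examples of the above tools for port and DNS checks (the IP, hostname and ports below are placeholders, substitute your own):

[root@centos .ssh]# telnet 192.168.0.1 25
[root@centos .ssh]# nmap -p 22,80,443 192.168.0.1
[root@centos .ssh]# dig +short example.com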

 

 

7. Set Open Explorer and SHOW Desktop key binding shortcuts for GNOME (to make daily work easier)

 


Another useful thing I do in my newly installed Virtual Machines is to set the key combination Windows (key) + E to easily open the GNOME equivalent of Windows Explorer (Nautilus), and Windows (key) + D to hide the active window and show the Desktop. This is configured pretty easily in GNOME through the control center (or from the command line, as sketched a bit below):
 

gnome-control-center

Keyboard (Section)
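The same bindings can also be set from the command line with gsettings. This is only a rough sketch: the exact schema key names and value types vary a bit between GNOME releases, so verify them first with gsettings list-recursively org.gnome.desktop.wm.keybindings and gsettings list-recursively org.gnome.settings-daemon.plugins.media-keys before applying:

gsettings set org.gnome.settings-daemon.plugins.media-keys home '<Super>e'
gsettings set org.gnome.desktop.wm.keybindings show-desktop "['<Super>d']"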

Perhaps there is other stuff to add on the freshly installed Operating System, if I remember something else interesting.

configure-home-folder-and-hide-all-normal-windows-gnome-key-binding-howto-screenshot

 

8. Install gnome-tweaks to tweak a bit the desktop icon positioning and additional gnome-shell extras

[root@centos hipo]# yum install -y gnome-shell-extension-workspace-indicator.noarch gnome-shell-extension-suspend-button.noarch gnome-shell-extension-refresh-wifi.noarch gnome-shell-extension-updates-dialog.noarch gnome-shell-extension-windowoverlay-icons.noarch gnome-shell-extension-places-menu.noarch gnome-shell-extension-drive-menu.noarch gnome-shell-extension-apps-menu.noarch gnome-shell-extension-auto-move-windows.noarch gnome-tweaks gnome-shell-extension-systemMonitor.noarch gnome-shell-extension-openweather.noarch gnome-shell-extension-user-theme.noarch gnome-shell-extension-topicons-plus.noarch


The next step is to use gnome-tweaks to set the various custom preferences you like on the GNOME 3.28 GUI.

 

gnome-tweak-tool1

gnome-tweak-tool2

gnome-tweak-tool3

9. Change (fix) the timezone to the correct time on the Virtual Machine

[root@localhost ~]# timedatectl 
      Local time: Fri 2021-07-30 12:20:51 CEST
  Universal time: Fri 2021-07-30 10:20:51 UTC
        RTC time: Fri 2021-07-30 10:20:48
       Time zone: Europe/Berlin (CEST, +0200)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2021-03-28 01:59:59 CET
                  Sun 2021-03-28 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2021-10-31 02:59:59 CEST
                  Sun 2021-10-31 02:00:00 CET

[root@localhost ~]# ls -l /etc/localtime
lrwxrwxrwx. 1 root root 35 Jul 29 14:03 /etc/localtime -> ../usr/share/zoneinfo/Europe/Berlin


To change to the correct timezone, you need to find out the long name of the timezone you want to use. The timezone naming convention usually uses the "Region/City" format.

To list all available time zones, you can either list the files in the /usr/share/zoneinfo directory or use the timedatectl command.

[root@centos ~]# timedatectl list-timezones|tail -n 10
Pacific/Pohnpei
Pacific/Port_Moresby
Pacific/Rarotonga
Pacific/Saipan
Pacific/Tahiti
Pacific/Tarawa
Pacific/Tongatapu
Pacific/Wake
Pacific/Wallis
UTC


As I'm situated in Sofia, Bulgaria, to set the correct timezone (UTC, Coordinated Universal Time, + 2 hours) I've checked for the correct Region/City like so:

[root@centos ~]# timedatectl list-timezones|grep -i Sofia
Europe/Sofia

Once I've identified my Region/City time zone, to set it:

[root@centos ~]# timedatectl set-timezone your_time_zone
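In my case, with the Europe/Sofia zone found above, the concrete command is:

[root@centos ~]# timedatectl set-timezone Europe/Sofia

and the change can be verified by running timedatectl again.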

 

10. Configure remote connection hostname SSH aliases via ssh config ( ~/.ssh/config)

 


I have separate Virtual Machines running on my OpenXen virtualization Hypervisor server, reachable on different SSH ports which I remember by heart under different hostnames. This saves me the time of always typing long commands on the command line such as:
 

 

 

#  ssh long-hostname -p Port_number

To make the remote machines accessible via simple hostname aliases that forward to the right remote port (which gets forwarded via Network Address Translation configured on the local network), I use the handy Host / Hostname / User / Port directives of .ssh/config like the samples below:

[hipo@centos .ssh]$ cat config 
Host pcfreak
User root
Port 2248
HostName 83.228.93.76

Host freak
User root
Port 2249
HostName 213.91.190.233


Host pcfrxenweb
User root
Port 2251
Hostname 83.228.93.76

Host pcfrxen
User root
Port 2250
Hostname 213.91.190.233

Now to connect to pcfrxen for example I simply type:

ssh pcfrxen

type in the password to remote VM and I'm in 🙂
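To double check which settings ssh will actually apply for a given alias, the -G option of the OpenSSH client prints the fully resolved configuration:

[hipo@centos .ssh]$ ssh -G pcfrxen | egrep '^(hostname|port|user) '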

The same could also be achieved by adding custom hostname / IP aliases via ~/.bashrc or an iteration script, as I've explained earlier, which fakes a custom /etc/hosts, but I usually prefer to use .ssh/config instead, as explained above.

Note that the above steps should also work on RHEL / Fedora Linux with minor modifications, as these distros share the RPM package manager. If you try to follow the guide and have success on any of these distros, please drop a comment with feedback.

How to automate open xen Hypervisor Virtual Machines backups shell script

Tuesday, June 22nd, 2021

openxen-backup-logo

As a sysadmin who runs his own Open Xen Debian Hypervisor on a Lenovo ThinkServer, a few months ago due to a human error I managed to mess up one of my virtual machines and had to rebuild its Operating System from scratch and restore the file services and MySQL data from backup. That really pissed me off and brought the need for a decent OpenXen Virtual Machine backup solution I can implement on the Debian (Buster) 10.10 Hypervisor running the free community Open Xen version 4.11.4+107-gef32c7afa2-1. The Hypervisor is a relatively small one holding just 7 VMs:

HypervisorHost:~#  xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 11102    24     r-----  214176.4
pcfrxenweb                                  11 12288     4     -b----  247425.5
pcfrxen                                     12 16384    10     -b----  1371621.4
windows7                                    20  4096     2     -b----   97887.2
haproxy2                                    21  4096     2     -b----   11806.9
jitsi-meet                                  22  2048     2     -b----   12843.9
zabbix                                      23  2048     2     -b----   20275.1
centos7                                     24  2040     2     -b----   10898.2

HypervisorHost:~# xl list|grep -v 'Name ' |grep  -v 'Domain-0'  |wc -l
7


The backup strategy of the script is very simple: shut down the running VM, copy the image of each Virtual Machine with rsync to a backup location (in a bash shell loop over each virtual machine shown in the output of the xl command), and store it under a preset local directory, in my case /backups/. The backup of each virtual machine is produced in a separate backup directory with a respective timestamp. The backup VM .img files are produced, in my case, on 2 externally attached hard drives, each of which is a 4 Terabyte Seagate Backup Plus (Storage). The original version of the script by Zhiqiang Ma was slightly different; I used it as a template to come up with my Xen VM backup solution. To keep the Hypervisor's load low, the script does the archiving with nice -n 10; this might not be required, or you might want to modify it to better suit your needs. Below is the script itself, which lives on the Hypervisor as /usr/sbin/xen_vm_backups.sh :

#!/bin/bash

# Author: Zhiqiang Ma (http://www.ericzma.com/)
# Modified to work with xl and OpenXen by Georgi Georgiev – https://pc-freak.net
# Original creation date: Dec. 27, 2010
# Script takes all defined vms under xen_name_list and prepares backup of each
# after shutting down the machine prepares archive and copies archive in externally attached mounted /backup/disk1 HDD
# Latest update: 08.06.2021 G. Georgiev – hipo@pc-freak.net

mark_file=/backups/disk1/tmp/xen-bak-marker
log_file=/var/log/xen/backups/bak-$(date +%Y_%m_%d).log
err_log_file=/var/log/xen/backups/bak_err-$(date +%H_%M_%Y_%m_%d).log
xen_dir=/xen/domains
xen_vmconfig_dir=/etc/xen/
local_bak_dir=/backups/disk1/tmp
#bak_dir=xenbak@backup_host1:/lhome/xenbak
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains
#xen_name_list="haproxy2 pcfrxenweb jitsi-meet zabbix windows7 centos7 pcfrxenweb pcfrxen"
xen_name_list="windows7 haproxy2 jitsi-meet zabbix centos7"

if [ ! -d /var/log/xen/backups ]; then
echo mkdir -p /var/log/xen/backups
 mkdir -p /var/log/xen/backups
fi

if [ ! -d $bak_dir ]; then
echo mkdir -p $bak_dir
 mkdir -p $bak_dir

fi


# check whether bak runned last week
if [ -e $mark_file ] ; then
        echo  rm -f $mark_file
 rm -f $mark_file
else
        echo  touch $mark_file
 touch $mark_file
  # exit 0
fi

# set std and stderr to log file
        echo mv $log_file $log_file.old
       mv $log_file $log_file.old
        echo mv $err_log_file $err_log_file.old
       mv $err_log_file $err_log_file.old
        echo "exec 2> $err_log_file"
       exec 2> $err_log_file
        echo "exec > $log_file"
       exec > $log_file


# check whether the VM is running
# We only backup running VMs

echo "*** Check alive VMs"

xen_name_list_tmp=""

for i in $xen_name_list
do
        echo "/usr/sbin/xl list > /tmp/tmp-xen-list"
        /usr/sbin/xl list > /tmp/tmp-xen-list
  grepinlist=`grep $i" " /tmp/tmp-xen-list`;
  if [[ "$grepinlist" == "" ]]
  then
    echo $i is not alive.
  else
    echo $i is alive.
    xen_name_list_tmp=$xen_name_list_tmp" "$i
  fi
done

xen_name_list=$xen_name_list_tmp

echo "Alive VM list:"

for i in $xen_name_list
do
   echo $i
done

echo "End alive VM list."

###############################
date
echo "*** Backup starts"

###############################
date
echo "*** Copy VMs to local disk"

for i in $xen_name_list
do
  date
  echo "Shutdown $i"
        echo  /usr/sbin/xl shutdown $i
        /usr/sbin/xl shutdown $i
        if [ $? != '0' ]; then
                echo 'Not Xen Disk image destroying …';
                /usr/sbin/xl destroy $i
        fi
  sleep 30

  echo "Copy $i"
  echo "Copy to local_bak_dir: $local_bak_dir"
      echo /usr/bin/rsync -avhW --no-compress --progress $xen_dir/$i/{disk.img,swap.img} $local_bak_dir/$i/
     time /usr/bin/rsync -avhW --no-compress --progress $xen_dir/$i/{disk.img,swap.img} $local_bak_dir/$i/
      echo /usr/bin/rsync -avhW --no-compress --progress $xen_vmconfig_dir/$i.cfg $local_bak_dir/$i.cfg
     time /usr/bin/rsync -avhW --no-compress --progress $xen_vmconfig_dir/$i.cfg $local_bak_dir/$i.cfg
  date
  echo "Create $i"
  # with vmmem=1024"
  # /usr/sbin/xm create $xen_dir/vm.run vmid=$i vmmem=1024
          echo /usr/sbin/xl create $xen_vmconfig_dir/$i.cfg
          /usr/sbin/xl create $xen_vmconfig_dir/$i.cfg
## Uncomment if you need to copy with scp somewhere
###       echo scp $log_file $bak_dir/xen-bak-111.log
###      echo  /usr/bin/rsync -avhW --no-compress --progress $log_file $local_bak_dir/xen-bak-111.log
done

####################
date
echo "*** Compress local bak vmdisks"

for i in $xen_name_list
do
  date
  echo "Compress $i"
      echo tar -z -cfv $bak_dir/$i-$(date +%Y_%m_%d).tar.gz $local_bak_dir/$i-$(date +%Y_%m_%d) $local_bak_dir/$i.cfg
     time nice -n 10 tar -z -cvf $bak_dir/$i-$(date +%Y_%m_%d).tar.gz $local_bak_dir/$i/ $local_bak_dir/$i.cfg
    echo rm -vf $local_bak_dir/$i/ $local_bak_dir/$i.cfg
    rm -vrf $local_bak_dir/$i/{disk.img,swap.img}  $local_bak_dir/$i.cfg
done

####################
date
echo "*** Copy local bak vmdisks to remote machines"

copy_remote () {
for i in $xen_name_list
do
  date
  echo "Copy to remote: vm$i"
        echo  scp $local_bak_dir/vmdisk0-$i.tar.gz $bak_dir/vmdisk0-$i.tar.gz
done

#####################
date
echo "Backup finishes"
        echo scp $log_file $bak_dir/bak-111.log

}

date
echo "Backup finished"

 

Things to configure before you start using the script to prepare backups for you are the xen_name_list variable and the directory variables:

# directory skeleton where to store already prepared backups
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains

# The configurations of the running Xen Virtual Machines
xen_vmconfig_dir=/etc/xen/
# a local directory that will be used for backup creation ( I prefer this directory to be on the backup storage location )
local_bak_dir=/backups/disk1/tmp
#bak_dir=xenbak@backup_host1:/lhome/xenbak
# the structure on the backup location where daily .img backups will be produced with rsync and tar archived with gzip
bak_dir=/backups/disk1/xen-backups/xen_images/$(date +%Y_%m_%d)/xen/domains

# list here all the Virtual Machines you want the script to create backups of
xen_name_list="windows7 haproxy2 jitsi-meet zabbix centos7"

If you need the script to copy the Virtual Machine image backups to an external backup server over the local LAN, or over an encrypted connection across the Internet using passwordless SSH authentication (once you have prepared the machines to automatically log in over SSH without a password as a specific user), you can uncomment and adapt the commented-out section of the script to copy to the remote host.
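A minimal sketch of preparing such a passwordless SSH login from the Hypervisor towards the backup host (the xenbak@backup_host1 user and host come from the commented-out bak_dir line in the script and are only placeholders, adjust them and the SSH port to your environment):

# ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
# ssh-copy-id -i ~/.ssh/id_ed25519.pub xenbak@backup_host1
# ssh xenbak@backup_host1 'echo passwordless login works'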

Once you have the script in place as /usr/sbin/xen_vm_backups.sh, use a cronjob to prepare the backups on a regular basis; for example, I use the following cron entry to produce a working copy of the Virtual Machine backups:
 

# crontab -u root -l 

# create windows7 haproxy2 jitsi-meet centos7 zabbix VMs backup once a month
00 06 1,2,3,4,5,6,7,8,9,10,11,12 * * /usr/sbin/xen_vm_backups.sh 2>&1 >/dev/null


I clean up virtual machine images that are older than 95 days with another cron job:

# crontab -u root -l

# Delete xen image files older than 95 days to clear up space from backup HDD
45 06 17 * * find /backups/disk1/xen-backups/xen_images* -type f -mtime +95 -exec rm {} \; 2>&1 >/dev/null

#### Delete xen config backups older than 1 year+3 days (368 days)
45 06 17 * * find /backups/disk1/xen-backups/xen_config* -type f -mtime +368 -exec rm {} \; 2>&1 >/dev/null

 


How to Install and Play old Arcade Multiple Arcade Machine Emulator Games on Linux in 2021 with xmame and GXMame GUI Frontend

Friday, May 14th, 2021

mame-multiple-arcade-machine-emulator
I've earlier blogged on how to install and play old arcade games with xmame compiled from source on the now old Debian 7 Linux in the article Install xmame from source on Debian Linux 7.0 (Wheezy) to play for better MAME (Arcade Games Emulation),
as well as on using the newer MAME emulator instead of xmame in the article Playing Arcade old school games on Debian Linux, and on how to make the MAME emulator work with a joystick in my previous How to configure Joystick ( Gamepad ) on Debian, Ubuntu, Mint GNU / Linux easily.

Since I have preinstalled my notebook with a fresh Debian 10 Buster, for a long time I did not have the time or the desire to play the favourite games of my youth, to name a few: Xain'd Sleena, Cadillacs and Dinosaurs (1993), The Punisher (1993), Captain Commando (1991), Super Mario, Contra, Final Fight and the rest of the SEGA Mega Drive / GameBoy / Nintendo / Terminator (a fake Nintendo clone) games and other killer Arcade Classics of the late 90s and early 2000s, which we played in public houses on game cabinets with joysticks.
Hence I tried to reproduce some of my articles as of 2021 to see whether we can still get a nice playable MAME emulator on Linux with a graphical GUI for MATE or GNOME. It turned out that a straight mame out of the Debian standard repositories did not work with some of the more sophisticated ROM .zip files such as Punisher or XSLEENA.zip, and this is how this article got born: an attempt to give a way for old school game freaks like me to play the favourite games of their youth.

Below are the few steps, adapted mostly from the above articles, with some head banging and multiple hours lost wondering, until I finally got a working XMAME ROM emulator with the simple but working GXMAME graphical frontend for M.A.M.E.

To compile xmame with joystick support (be it analog or whatever joystick you have), use a Makefile like the one below:

https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick

Copy it to your PC and rename it to Makefile in the xmame-0.103 source dir:

# cd /usr/local/src/xmame-0.103


I had to install all the necessary package dependencies (development header files), some of which are mentioned in the beginning of the linked articles and some of which I had to install manually, such as the Debian meta-package build-essential (see Things to install on newly installed GNU / Linux (My favourite must have Linux text and GUI programs missing in fresh Linux installs)). Just to mention a few packages I remember having to install based on compilation errors:

# apt-get install build-essential
# apt-get install --yes zlib1g-dev

# apt-get install --yes libexpat1-dev
# apt-get install --yes libghc-x11-dev
# apt-get install --yes x11proto-video-dev
# apt-get install --yes libxv-dev
# apt-get install libxext6
# apt-get install libxext6-dev
# apt-get install libxext-dev
# apt-get install libjpeg62-turbo-dev
# apt-get install libxinerama-dev
# apt-get install libgtk-3-dev
# apt-get install syslog-ng-dev
# apt-get install libgtk2.0-dev


If I'm missing some necessary package here, you will have to find it yourself based on the *.h file reported as missing during compilation; you can look it up with a command like:

# apt-file search glib2.h


And install the package that provides it.
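Note that apt-file itself is not present on a minimal Debian install; to make the search above work, install it and populate its cache first:

# apt-get install --yes apt-file
# apt-file update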

You will need to edit the Makefile, or straight away use (or if necessary adapt) the already prepared Makefile for this purpose:
 

# wget https://www.pc-freak.net/files/xmame-0.103-Makefile-for-joystick
# mv xmame-0.103-Makefile-for-joystick Makefile
# make && make install


Some .zip ROMs do not work properly with the newer mame; you need to use xmame instead.

Below is the version I use on Debian 10 as of May 2021:

hipo@jeremiah:/usr/local/src$ xmame --version
GLINFO: loaded OpenGL library libGL.so!
GLINFO: loaded GLU    library libGLU.so!
xmame (x11) version 0.103 (May 13 2021)


To make the joystick work in xmame you will need to have a set of kernel modules loaded on the Linux machine; for my old Genius joystick this is what works:

hipo@jeremiah:~/Games$ cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
snd-seq
3c59x
snd-emu10k1
snd-pcm-oss
snd-mixer-oss
snd-seq-oss
joydev
ns558
sidewinder
gameport
analog
adi
pcigame
iforce
evdev
usbhid


Fill in your joystick module there and make sure you manually load each of the modules with the modprobe command, for example as shown below.
Next, calibrate the joystick with a tool like jstest-gtk.
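For example, to load a few of the basic modules from the list above by hand and confirm the joystick device node shows up (the module names are the ones from my /etc/modules, adjust them for your hardware):

# modprobe joydev
# modprobe evdev
# modprobe usbhid
# ls -l /dev/input/js*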

jktest-gtk-linux-screenshot
If you need a good frontend for MATE / GNOME for xmame, try gxmame. It is a real pain in the ass to configure, so note that what I share below is the only combination that worked for me: xmame with a good configuration plus gxmame with a prebuilt game list.
If you need the exact xmame version I'm using (xmame-0.103.tar.bz2), the archive is at:

https://www.pc-freak.net/files/xmame-0.103.tar.bz2
An old bundle of both gxmame and xmamerc configs is here

https://www.pc-freak.net/files/xmame.tar.gz

https://www.pc-freak.net/files/xmame-config-for-joystick-hipo.tar.gz
For example my old working version of xmame ~/.xmame is here https://www.pc-freak.net/files/xmame-config-for-joystick-hipo.tar.gz

My configuration of gxmame (even though it has a GUI for configuring) is below:

https://www.pc-freak.net/files/gxmame.tar.gz

xmamerc sample working file is here

https://www.pc-freak.net/files/xmamerc

My newest current version of the xmame and gxmame configs as of Debian 10 Buster is below:

https://www.pc-freak.net/files/gxmame-config-newest_may_2021.tar.gz

https://www.pc-freak.net/files/xmame-config-newest_may_2021.tar.gz

Note that my ROM files are configured in the newest configs to live under /home/hipo/Games/roms/; if your location is different, grep for it in the .xmame/* and .gxmame/* files and set the correct path locations.

A VERY IMPORTANT NOTE: do not use the stable version of GXMame. Even though it worked fine back in 2003, the project is abandoned and unsupported as of 2021, and the latest downloadable stable gxmame-0.34b file on the SourceForge website does not work correctly (even though it compiles fine).
You have to compile and use the newer version gxmame-0.35beta1 instead.

If you want to use gxmame with a joystick you need to compile it with the respective option:
 

root@jeremiah:/usr/local/src/gxmame-0.35beta1# ./configure --enable-joystick

 

gxmame 0.35beta1

Print debugging messages…… : no
Joystick support………….. : yes

GXMame will be installed in /usr/local/bin.
Warning: You have an old copy of gxmame at /usr/local/bin/gxmame.

 

configure complete, now type 'make'

To install compiled binaries do the usual:
 

# make && make install

The gxmame binary should be installed under /usr/local/bin/gxmame; once launched, you should get the shiny gxmame GUI.

gxmame-screenshot-debian-10-gnu-linux-mate-desktop

Perhaps there was other stuff I did in the process that I forgot to document here, so if you try to follow my guide and something does not work, please tell me what I'm missing, and if you can't handle it either, contact me.

The guide is for Debian Linux but should work on other .Deb based Linux distros such as Ubuntu / Linux Mint etc.

To enjoy my 4GB present of ROM files containing many of the best well known M.A.M.E. ARCADE GAMES, check the archive here. Note that this collection was downloaded from the Internet and I do not hold any responsibility for the archive. If it contains files with any copyright infringement, that is on your own responsibility.
 

How to calculate connections from IP address with shell script and log to Zabbix graphic

Thursday, March 11th, 2021

We had to monitor the number of incoming connections from a certain IP address, sorted by their TCP / IP connection state.

For example:

TIME_WAIT, ESTABLISHED, LISTEN etc.


The reason behind it: sometimes the IP address '192.168.0.1' creates more than 200 connections, a Cisco firewall rule gets triggered and the connections from that IP are filtered out. To be able to know in advance that this problem is coming up, a small UserParameter script is set on the Linux servers that prints out all connections from the IP, sorted by state.

 

The script calc_total_ip_match_zabbix.sh is below:

#!/bin/bash
#  check ESTABLISHED / FIN_WAIT etc. netstat output for IPs and calculate total
# UserParameter=count.connections,(/usr/local/bin/calc_total_ip_match_zabbix.sh)
CHECK_IP='192.168.0.1';
f=0; 

 

for i in $(netstat -nat | grep "$CHECK_IP" | awk '{print $6}' | sort | uniq -c | sort -n); do

echo -n "$i ";
f=$((f+i));
done;
echo
echo "Total: $f"

 

root@pcfreak:/bashscripts# ./calc_total_ip_match_zabbix.sh 
1 TIME_WAIT 2 ESTABLISHED 3 LISTEN 

Total: 6

 

root@pcfreak:/bashscripts# ./calc_total_ip_match_zabbix.sh 
2 ESTABLISHED 3 LISTEN 
Total: 5


images/zabbix-webgui-connection-check1

To process this in Zabbix it is necessary to have an Item created and a Dependent Item.

 

webguiconnection-check1

 

webgui-connection-check2-item

images/webguiconnection-check1

Finally, create a trigger to raise an alarm if there are more than or equal to 100 total overall connections.


images/zabbix-webgui-connection-check-trigger

The Zabbix UserParameter configuration should be like this:

[root@host: ~]# cat /etc/zabbix/zabbix_agentd.d/userparameter_webgui_conn.conf
UserParameter=count.connections,(/usr/local/bin/webgui_conn_track.sh)
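Once the UserParameter file is in place, the zabbix-agent has to be restarted, and the new key can be tested from the Zabbix server with zabbix_get (the agent address below is just a placeholder):

[root@host: ~]# systemctl restart zabbix-agent
[root@zabbix-server: ~]# zabbix_get -s 192.168.0.10 -k count.connections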

 

Some colleagues suggested a more efficient shell script solution for summing the overall number of connections; below is a less time consuming version of the script that can be used for the calculation.
 

#!/bin/bash -x
# show FIN_WAIT2 / ESTABLISHED etc. and calculate total
count=$(netstat -n | grep "192.168.0.1" | awk ' { print $6 } ' | sort -n | uniq -c | sort -nr)
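# the substitution below replaces every blank with '+', turning the uniq -c output into one
# arithmetic expression; the TCP state names evaluate as empty (zero) variables, so only the counts get summed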
total=$((${count// /+}))
echo "$count"
echo "Total:" "$total"

      2 ESTABLISHED
      1 TIME_WAIT
Total: 3

 


Below is the graph built with Zabbix showing all the fluctuations of connections from the monitored IP.

ebgui-check_ip_graph