Archive for the ‘System Administration’ Category

How to install BASH and use shell scripting on Windows?

Thursday, June 26th, 2025

install-bash-on-windows-run-and-use-shellscripting-on-windows-howto

Bash (Bourne Again SHell) is definitely a technology that will stay for years to come; its simplicity and multi-platform nature are factors that will definitely persist for many years. Thus, even though it is mostly used on Linux / BSD / Unix, its application on Windows OS-es is nowadays arguably increasing. Hence, since so many people use Windows nowadays (for work), it is really useful to have Bash set up on a Windows host machine.
In this article, I'll shortly explain how this is done. The article will not contain anything too interesting for the advanced admin or DevOps guy, but I hope people who are entering the business of system administration and high-level computing, and are still orienting themselves, might benefit from it.

To install and use Bash shell terminal in Windows there are at least 3 ways:

  • Use Git Bash (download and install it directly, precompiled for Windows)
  • Use Windows WSL emulation (install some Linux distro)
  • Use VirtualBox / Vagrant / VMware / Hyper-V virtualization and install a VM from a public ISO image.

As a Free Software lover, I would recommend and always prefer the Free Software alternative where possible; thankfully, I usually install Git Bash, or a complete Cygwin (a full set of Linux tools running like native on Windows), together with MobaXterm.

 

1. Installing Git Bash on Windows (uses MinGW Minimalist GNU for Windows)

Some might prefer not to use Microsoft tooling for managing their Bash, especially the more freedom-minded people who like GNU and Free Software.

MinGW is well known among free and open source enthusiasts.
It includes a port of the GNU Compiler Collection (GCC), GNU Binutils for Windows (assembler, linker, archive manager), a set of freely distributable Windows specific header files and static import libraries which enable the use of the Windows API, a Windows native build of the GNU Project's GNU Debugger, and miscellaneous utilities.

MinGW does not rely on third-party C runtime dynamic-link library (DLL) files, and because the runtime libraries are not distributed using the GNU General Public License (GPL), it is not necessary to distribute the source code with the programs produced, unless a GPL library is used elsewhere in the program.

 

MinGW can be run either on the native Microsoft Windows platform, cross-hosted on Linux (or other Unix), or "cross-native" on Cygwin.


To install Bash via Git, you can use Git for Windows, which includes Git Bash — a lightweight Bash emulator.


Steps to Install Git Bash on Windows
 

a. Download Git for Windows

Go to the official Git website:

https://git-scm.com/download/win

The download should start automatically.

b. Run the Installer

  • Open the downloaded .exe file
  • Follow the installation prompts

Recommended Settings:

  • Select components: Keep default
  • Editor: Choose your preferred text editor (e.g., Notepad++ or Vim)
  • Adjust PATH environment: Choose “Git from the command line and also from 3rd-party software”
  • Choose SSH executable: Use Built-in OpenSSH
  • Choose HTTPS transport backend: Use the default (OpenSSL)
  • Configure line endings: Select “Checkout Windows-style, commit Unix-style line endings”
  • Terminal emulator: Choose “Use MinTTY (the default terminal)”

Click Next through the remaining steps and then Install.

c. Launch Git Bash

After installation:

  • Press Windows key, type "Git Bash"
  • Click to launch the terminal

Now you're using a Bash shell on Windows.
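To confirm shell scripting really works, you can save and run a trivial script right away. A minimal sketch (the hello.sh file name is just an example; in Git Bash the Windows drives show up as /c, /d and so on):

#!/bin/bash
# hello.sh - trivial test script for Git Bash
echo "Hello from Bash $BASH_VERSION on $(uname -o)"
for d in /c /d; do
    [ -d "$d" ] && echo "Windows drive mounted at: $d"
done

Run it from the Git Bash prompt with: bash hello.sh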

Perhaps the most common way people follow is to use the Windows Subsystem for Linux (WSL). WSL is a native Windows technology that gives MS Windows the ability to act in a way similar to Docker containers: it lets you run a full Linux environment (including Bash) directly on Windows without a virtual machine, and it is really fast and easy on machine system resources.


 2. Installing WSL Bash easily on Windows 10 / 11 using the Windows GUI menus


Steps to install WSL on Windows 10 / 11

Microsoft has continued to improve its Windows Subsystem for Linux ever since, and an update in a Windows 10 preview build back in mid-2020 made it easier to install Bash.

The method works the same on Win 10 as well as on Win 11.
To install the Bash shell emulation, open Windows Terminal as an admin user. You can do this by right-clicking the Windows icon and selecting "Windows Terminal (Admin)" from the power user menu.

(If you’re on Windows 10, you should see it listed as “Windows Powershell (Admin)” in the menu.)

 

windows-run-powershell-from-start-menu-screenshot

 


To complete WSL install with Virtualized Ubuntu OS

In Windows Terminal, run this command:

PS C:\Users\MyUser> wsl --install

Once everything needed to run the WSL emulation and the Ubuntu Linux distribution is downloaded, restart the PC.

Once your PC has rebooted, the installation will continue automatically.

After Ubuntu has installed successfully, you'll next be prompted to create a username and password; Ubuntu will fire up, and you will have your Bash on Windows.

 

Install-WSL-linux-subsystem-for-windows-from-powershell-prompt-screenshot


a. Enabling and installing BASH via the command line (if the WSL Linux subsystem for Windows is not enabled on Windows)


It might be that your Windows has no Windows Subsystem for Linux configured; if that is the case, you will need to enable it following the few steps below.

b. Enable WSL via dism.exe cmd

Open PowerShell as Administrator and run:

Powershell

PS C:\Users\MyUser> wsl --install

This installs WSL 2 and a default Linux distribution (like Ubuntu).

If you're on Windows 10, or on a PC where whoever installed the OS did not enable the Windows Subsystem for Linux, you may need to manually enable WSL:

Launch Powershell

PS C:\Users\MyUser> dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

PS C:\Users\MyUser> dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Then restart your computer, launch PowerShell again (e.g., from the Windows search magnifying glass) and type at the PS prompt:

PS C:\Users\MyUser> wsl --set-default-version 2
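You can verify the installation and the WSL version assigned to each distro with (the same command is often abbreviated as wsl -l -v):

PS C:\Users\MyUser> wsl --list --verbose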

c. Installing other Linux Distribution (different from Ubuntu)

If not already installed during wsl --install, open the Microsoft Store and search for:

  • Ubuntu
  • Debian
  • Kali Linux
  • etc.

Click Install on the one you want.

d. Launching WSL / Bash terminal

Once installed:

  • Open Start Menu
  • Search for the Linux distro you just installed (e.g., "Ubuntu")
  • Launch it

This opens a Bash shell where you can run Linux commands, just like on regular Linux but on your Microsoft Windows OS.
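For example, inside the WSL shell the Windows drives are auto-mounted under /mnt, so Linux tools can work directly on your Windows files (the MyUser folder below is just an example):

$ cat /etc/os-release        # confirm which distro you are in
$ ls /mnt/c/Users            # the Windows C: drive is visible under /mnt/c
$ grep -ri "TODO" /mnt/c/Users/MyUser/Documents 2>/dev/null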
 

Sum it up

What we learned is how to install Bash and start using it to have a more hybrid Windows / Linux environment. The article explained the two main methods: using Git Bash and using the embedded Windows WSL emulation layer with an emulated Linux distro.

Enjoy ! 🙂

 

 


 

Deploying a Puppet Server and Patching multiple Debian Linux Servers

Tuesday, June 17th, 2025

how-to-install-puppet-server-clients-for-multiple-servers-and-services-infrastructure-logo

 

Puppet Overview

Puppet is a powerful automation tool used to manage configurations and automate administrative tasks across multiple systems. This guide walks you through installing a Puppet server (master) and configuring 10 Debian-based client servers (agents) to automatically install system updates (patches) using Puppet.

Table of Contents

  1. Prerequisites

  2. Step 1: Install Puppet Server on Debian

  3. Step 2: Configure the Puppet Server

  4. Step 3: Install Puppet Agent on 10 Debian Clients

  5. Step 4: Sign Agent Certificates

  6. Step 5: Create a Puppet Module for Patching

  7. Step 6: Assign the Module and Trigger Updates

  8. Conclusion


1. Prerequisites

  • Debian server to act as the Puppet master (e.g., Debian 11)
     
  • Debian servers as Puppet agents (clients)
     
  • Root or sudo access on all systems
     
  • Static IPs or properly configured hostnames
     
  • Network connectivity between master and agents
     

2. Install Puppet Server on Debian

a. Add the Puppet APT repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb
# apt update

b. Install Puppet Server

# apt install puppetserver -y

c. Configure JVM memory (optional but recommended)

Edit /etc/default/puppetserver:  

JAVA_ARGS="-Xms512m -Xmx1g"


d. Enable and start the Puppet Server

# systemctl enable puppetserver 
# systemctl start puppetserver


3. Configure the Puppet Server

a. Set the hostname
 

# hostnamectl set-hostname puppet.example.com


Update /etc/hosts with your server’s IP and FQDN if DNS is not configured:  

192.168.1.10 puppet.pc-freak.net puppet

 

b. Configure Puppet

Edit /etc/puppetlabs/puppet/puppet.conf:

[main]
certname = puppet.pc-freak.net
server = puppet.pc-freak.net
environment = production
runinterval = 1h

 

Restart Puppet server:

# systemctl restart puppetserver

4. Install Puppet Agent on 10 Debian Clients

Repeat this section on each client server (Debian 10/11).

a. Add the Puppet repository

# wget https://apt.puppet.com/puppet7-release-bullseye.deb
# dpkg -i puppet7-release-bullseye.deb 
# apt update


b. Install the Puppet agent

# apt install puppet-agent -y


c. Configure the agent to point to the master

# /opt/puppetlabs/bin/puppet config set server puppet.example.com --section main


d. Start the agent to request a certificate

# /opt/puppetlabs/bin/puppet agent --test

 

5. Sign Agent Certificates on the Puppet Server

Run the two commands below on the Puppet master. First list the certificate requests:

# /usr/bin/puppetserver ca list --all


Sign all pending requests:

# /usr/bin/puppetserver ca sign --all

Verify the connection to the Puppet server is fine:

# /opt/puppetlabs/bin/puppet node find haproxy2.pc-freak.net


6.  Create a Puppet Module for Patching

a. Create the patching module

# mkdir -p /etc/puppetlabs/code/environments/production/modules/patching/manifests


b. Add a manifest file:

/etc/puppetlabs/code/environments/production/modules/patching/manifests/init.pp

class patching {

  exec { 'apt_update':
    command => '/usr/bin/apt update',
    path    => ['/usr/bin', '/usr/sbin'],
    unless  => '/usr/bin/test $(find /var/lib/apt/lists/ -type f -mmin -60 | wc -l) -gt 0',
  }

  exec { 'apt_upgrade':
    command => '/usr/bin/apt upgrade -y',
    path    => ['/usr/bin', '/usr/sbin'],
    require => Exec['apt_update'],
    unless  => '/usr/bin/test $(/usr/bin/apt list --upgradable 2>/dev/null | wc -l) -le 1',
  }

}

This class updates the package list and applies all available security and feature updates.
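Before the agents pick the class up, it is worth validating the manifest syntax on the master; puppet parser validate ships with the puppet-agent / puppetserver packages and exits silently when the file parses cleanly:

# /opt/puppetlabs/bin/puppet parser validate /etc/puppetlabs/code/environments/production/modules/patching/manifests/init.pp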

7.  Assign the Module and Trigger Updates

a. Edit site.pp on the Puppet master:

 

# vim /etc/puppetlabs/code/environments/production/manifests/site.pp

 

node default {
  include patching
}

node 'agent1.example.com' {
  include patching
}

b. Run Puppet manually on each agent to test:

# /opt/puppetlabs/bin/puppet agent --test

Once confirmed working, Puppet agents will run this patching class automatically every hour (default runinterval).
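If you do not want to wait for the run interval and the clients are reachable over SSH from a management host, a small shell loop can trigger the first run everywhere (the agentN hostnames below are placeholders, substitute your own):

# for h in agent1 agent2 agent3 agent4; do ssh root@$h '/opt/puppetlabs/bin/puppet agent --test'; done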

8. Check that the status of puppetserver and the puppet agent on the hosts is fine
 

root@puppetserver:/etc/puppet# systemctl status puppetserver
● puppetserver.service - Puppet Server
     Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 23:44:42 EEST; 37min ago
       Docs: https://puppet.com/docs/puppet/latest/server/about_server.html
    Process: 2166 ExecStartPre=sh -c echo -n 0 > ${RUNTIME_DIRECTORY}/restart (code=exited, status=0/SUCCESS)
    Process: 2168 ExecStartPost=sh -c while ! head -c1 ${RUNTIME_DIRECTORY}/restart | grep -q '^1'; do kill -0 $MAINPID && sleep 1 || exit 1; done (code=exited, status=0/SUCCESS)
   Main PID: 2167 (java)
      Tasks: 64 (limit: 6999)
     Memory: 847.0M
        CPU: 1min 28.704s
     CGroup: /system.slice/puppetserver.service
             └─2167 /usr/bin/java -Xms512m -Xmx1g -Djruby.lib=/usr/share/jruby/lib -XX:+CrashOnOutOfMemoryError -XX:ErrorFile=/var/log/puppetserver/puppetserver_err_pid%p.log -jar /usr/share/pup>

Jun 16 23:44:06 haproxy2 systemd[1]: Starting puppetserver.service - Puppet Server…
Jun 16 23:44:30 haproxy2 java[2167]: 2025-06-16T23:44:30.516+03:00 [clojure-agent-send-pool-0] WARN FilenoUtil : Native subprocess control requires open access to the JDK IO subsystem
Jun 16 23:44:30 haproxy2 java[2167]: Pass '--add-opens java.base/sun.nio.ch=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED' to enable.
Jun 16 23:44:42 haproxy2 systemd[1]: Started puppetserver.service - Puppet Server.

root@grafana:/etc/puppet# systemctl status puppet
* puppet.service - Puppet agent
     Loaded: loaded (/lib/systemd/system/puppet.service; enabled; preset: enabled)
     Active: active (running) since Mon 2025-06-16 21:22:17 UTC; 18s ago
       Docs: man:puppet-agent(8)
   Main PID: 1660157 (puppet)
      Tasks: 6 (limit: 2307)
     Memory: 135.6M
        CPU: 5.303s
     CGroup: /system.slice/puppet.service
             |-1660157 /opt/puppetlabs/puppet/bin/ruby /opt/puppetlabs/puppet/bin/puppet agent --no-daemonize
             `-1660164 "puppet agent: applying configuration"

Jun 16 21:22:17 grafana systemd[1]: Started puppet.service - Puppet agent.
Jun 16 21:22:28 grafana puppet-agent[1660157]: Starting Puppet client version 7.34.0
Jun 16 21:22:33 grafana puppet-agent[1660164]: Requesting catalog from puppet.pc-freak.net:8140 (192.168.1.58)

9. Use Puppet's facter to extract interesting information from the OS Puppet runs on
 

facter is a powerful command-line tool Puppet agents use to gather system information (called facts). You can also run it manually on any machine to quickly inspect system details.

Here are some interesting examples to get useful info from a machine using facter:


a) Get all facts about Linux OS

$ facter

 

b) get OS name / version

$ facter os.name os.release.full
os.name => Debian
os.release.full => 12.10


c) check the machine  hostname and IP address

$ facter hostname ipaddress

hostname => puppet-client1
ipaddress => 192.168.0.220

d) Get amount of RAM on the machine

$ facter memorysize
16384 MB


e) Get CPU (Processor information)

$ facter processors

{
  count => 4,
  models => ["Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz"],
  physicalcount => 1,
  speed => "1.60 GHz"
}
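
f) Get facts in machine-readable JSON (handy for scripting)

facter can also print the queried facts as a JSON object via its --json flag, which is easier to parse from scripts than the default key => value output:

$ facter --json os.name memorysize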

10. Conclusion

You've successfully set up a Puppet server and configured a sample Debian client system to automatically install security patches using a custom module.
To apply this to the rest of the systems where the Puppet agent is installed, repeat the process on each of the remaining 9 nodes.
This approach provides centralized control, consistent configuration, and peace of mind for you as a system administrator if you have the task of managing multiple Debian servers with ease.
Of course, the given configuration is very simple; to guarantee proper functioning of your infrastructure you'll have to read and experiment with Puppet further. However, I hope the article is a good starting point to get a Puppet server / agent running relatively easily.

Debugging Jitsi Meet Server Problems: A Practical Guide

Saturday, April 26th, 2025

Jitsi Meet is a powerful open-source video conferencing platform. But like any real-time communication system, it can run into issues—from video/audio glitches to full-blown connection failures. Debugging Jitsi Meet can be tricky due to its multi-component architecture. This guide walks you through a systematic approach to identify and resolve common server-side issues.

1. Understand the Architecture

Before diving into logs, it's important to understand Jitsi Meet's core components:
 

  • Jitsi Meet (Web UI) – The front-end interface.
  • Jicofo (Focus component) – Manages conference sessions.
  • Prosody (XMPP Server) – Handles user authentication and signaling.
  • JVB (Jitsi Videobridge) – Routes video/audio streams.
  • Nginx or Apache – Web server proxy (often with HTTPS and WebSocket forwarding).


Knowing how these interact helps pinpoint the failing layer.

2. Check Logs in the Right Places

Each component has its own logs. Check them in the following order:

Prosody Logs

Location: /var/log/prosody/prosody.log and prosody.err
Look for: Authentication issues, connection denials, or component registration problems.
 

Jicofo Logs

Location:  /var/log/jitsi/jicofo.log
Look for: Room creation errors, XMPP connection failures, conference creation attempts.
 

JVB Logs

Location: /var/log/jitsi/jvb.log
Look for: ICE failures, STUN/TURN issues, packet loss, and bridge reachability.
 

Web Server Logs (Nginx/Apache)

Location (Nginx): /var/log/nginx/error.log and access.log
Look for: HTTP errors (404, 502), WebSocket connection problems.
 

Browser Console Logs
 

Tools: Press F12 in browser → Console/Network tabs.
Look for: WebSocket failures, CORS issues, or media permission problems.
 

3. Common Problems & Fixes

"Failed to join conference"

  • Cause: Prosody may not be running or not configured correctly.

Fix: Restart Prosody and check domain configuration in /etc/prosody/conf.avail/

 

 

No Audio or Video
 

Usual Cause: Media not reaching the bridge or blocked by firewall

Fix:

  • Verify JVB is listening on the correct port (UDP 10000).
  • Check firewall/NAT settings (especially on cloud VMs).
  • Use tcpdump or ss to check traffic flow (see the example below).
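For example, a quick capture on the JVB host shows whether media packets arrive at all (eth0 below is an assumption, substitute your actual interface):

# tcpdump -n -i eth0 udp port 10000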

WebSocket Connection Fails

 

Usual Cause: Web server (Proxy) misconfiguration.

Fix:

Ensure Nginx is forwarding WebSocket requests to /xmpp-websocket/ and add the proper proxy settings in nginx.conf (see the sketch below).
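A minimal sketch of what such a location block typically looks like; port 5280 is Prosody's default HTTP/WebSocket port, but verify the port and paths against the nginx vhost your Jitsi install generated:

location = /xmpp-websocket {
    proxy_pass http://127.0.0.1:5280/xmpp-websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}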
 

Authentication Not Working


Cause: Misconfigured JWT or internal authentication.

Fix:

  • Check Prosody's config for authentication method.
  • If using JWT, verify token structure and shared secret.
     

4. Use Debugging Tools

  • Jitsi Meet in debug mode:

    Add #config.debug=true to your meeting URL.

  • ICE Debugging:

    Check about:webrtc (Firefox) or WebRTC Internals (Chrome).
    Look at ICE candidate gathering and connectivity checks.

  • Test TURN/STUN:

    Use tools like trickle-ice to validate your server's ICE configuration.

5. Networking and Firewall Checks

Make sure these ports are open:
 

  • TCP 443 – HTTPS
  • UDP 10000 – Media (JVB)
  • TCP 4443 – (Optional, fallback media)
  • TCP 5222 – XMPP (if not using BOSH/WebSocket)
     

# ss -tuln
# ufw status


6. Component Health Checks

Run systemctl status for each main Jitsi component service:

# systemctl status prosody
# systemctl status jicofo
# systemctl status jitsi-videobridge2

Check uptime, errors, or failure restarts.
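A quick one-liner to check all three services at once:

# for s in prosody jicofo jitsi-videobridge2; do echo -n "$s: "; systemctl is-active $s; done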

7. Enable More Verbose Logs

Increase logging levels for deeper debugging:
 

  • Prosody: Edit /etc/prosody/prosody.cfg.lua → set log = { ... debug = "*" }.
  • Jicofo/JVB: Edit /etc/jitsi/jicofo/logging.properties and /etc/jitsi/videobridge/logging.properties
    → change the log level to FINE or ALL.

 

8. Update & Restart Services

Sometimes updates or configs don’t apply until services are restarted:
 

# apt update && apt upgrade
# systemctl restart prosody jicofo jitsi-videobridge2 nginx

 

Final Thoughts

Debugging Jitsi Meet requires a structured approach: start from the user-facing symptoms, trace through each service, and verify the network and authentication configurations.
Check the status of prosody, jicofo and jitsi-videobridge2, and make sure the firewall openings towards the Jitsi server are okay.
With some log analysis, a bit of patience and experimentation, and the help of forums or an Artificial Intelligence tool like ChatGPT, the Jitsi server errors will get solved.

How to Install and Set Up an NFS Server with network shares on Linux to ease data transfer across multiple hosts

Monday, April 7th, 2025

How to Configure NFS Server in Redhat,CentOS,RHEL,Debian,Ubuntu and Oracle Linux

Network File System (NFS) is a protocol that allows one system to share directories and files with others over a network. It's commonly used in Linux environments for file sharing between systems. In this guide, we'll walk you through the steps to install and set up an NFS server on a Linux system.

Prerequisites

Before you start, make sure you have:

  • A Linux system running any distro (e.g., Ubuntu, CentOS, Debian, etc.)
  • Root or sudo privileges on the system.
  • A network connection between the server (NFS server) and clients (machines that will access the shared directories).
     

1. Install NFS Server Package

 

On Ubuntu / Debian based Linux systems:

a. First, update the package list 

# apt update

b. Install the NFS server package
 

# apt install nfs-kernel-server

On CentOS/RHEL-based systems:

 2. Install the NFS server package
 

      # yum install nfs-utils 

Once the package is installed, ensure that the necessary services are enabled.

 3. Create Shared Directory for file sharing

Decide which directory you want to share over NFS. If the directory doesn't exist, you can create one. For example:

# mkdir -p /nfs_srv_dir/nfs_share

Make sure the directory has the appropriate permissions so that the NFS clients can access it.

# chown nobody:nogroup /nfs_srv_dir/nfs_share 
# chmod 755 /nfs_srv_dir/nfs_share

4. Configure NFS Exports ( /etc/exports file)

The NFS exports file (/etc/exports) is perhaps the most important file you will have to create and deal with regularly to define the exported shares; this file contains the configuration settings for the directories you want to share with other systems.

       a. Open the /etc/exports file for editing:

vi /etc/exports

Add an entry for the directory you want to share. For example, if you're sharing /nfs_srv_dir/nfs_share and allowing access to all systems on the network (192.168.1.0/24), add the following line:
 

/nfs_srv_dir/nfs_share 192.168.1.0/24(rw,sync,no_subtree_check)


Here’s what each option means:

  • rw: Read and write access.
  • sync: Ensures that changes are written to disk before responding to the client.
  • no_subtree_check: Disables subtree checking, which improves reliability (at a minor theoretical security cost).

 

Here are a few lines from my working /etc/exports on my home NFS server:

/var/www 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/jordan 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/mobfiles 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/mnt/sda1/icons-frescoes/ 192.168.0.200/32(rw,no_root_squash,async,subtree_check)
/home/hipo/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/alex/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/home/necroleak/public_html 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/bashscripts 192.168.0.209/32(rw,no_root_squash,async,subtree_check)
/backups/Family-Videos 192.168.0.200/32(ro,no_root_squash,async,subtree_check)

 

5. Export the NFS Shares with exportfs command

Once the export file is configured, you need to inform the NFS server to start sharing the directory:
 

# exportfs -a


The -a flag makes it export all the shares.
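
You can immediately verify what the server exports; showmount ships with the NFS utilities, and with the example config above it should print something along these lines:

# showmount -e localhost
Export list for localhost:
/nfs_srv_dir/nfs_share 192.168.1.0/24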

6. Start and Enable NFS Services

You need to start and enable the NFS server so it will run on system boot.

On Ubuntu / Debian Linux run the following commands:
 

# systemctl start nfs-kernel-server 
# systemctl enable nfs-kernel-server


On CentOS / RHEL Linux:
 

# systemctl start nfs-server
# systemctl enable nfs-server


7. Allow NFS Traffic Through the Firewall

If your server has a firewall configured / enabled, you will need to allow the NFS-related ports through it.
These ports include 2049 (NFS) over TCP and 111 (rpcbind) over both UDP and TCP, plus some additional ports.

On Ubuntu/Debian (assuming you are using ufw [Uncomplicated Firewall]):

# ufw allow from 192.168.1.0/24 to any port nfs
# ufw reload

On CentOS / RHEL Linux:

# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=mountd
# firewall-cmd --permanent --add-service=rpc-bind
# firewall-cmd --reload

8. Verify NFS Server is Running

To ensure the NFS server is running properly, use the following command:
 

# systemctl status nfs-kernel-server

or

# systemctl status nfs-server

You should see output indicating that the service is active and running.

 

9. Test the NFS Share (Client-Side)

To test the NFS share, you will need to mount it on a client machine. Here's how to mount it:

On the client machine, install the NFS client utilities:

Ubuntu / Debian Linux

# apt install nfs-common

For CentOS / RHEL Linux

# yum install nfs-utils


Create a mount point (no matter the distro):
 

# mkdir -p /mnt/nfs_share


Mount the NFS share:

# mount -t nfs <nfs_server_ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share

Replace <nfs_server_ip> with the IP address of the NFS server, or a DNS host alias if you have one defined in the /etc/hosts file.

Verify that the share is mounted:

# df -h

You should see the NFS share listed under the mounted file systems.

10. Configure Auto-Mount at Boot (Optional)

To have the NFS share automatically mounted at boot, you can add an entry to the /etc/fstab file on the client machine.

Open /etc/fstab for editing:

# vi /etc/fstab

Add the following line: 

<server-ip>:/nfs_srv_dir/nfs_share /mnt/nfs_share nfs defaults 0 0

Save and close the file.

The NFS share will now be automatically mounted whenever the system reboots.
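To test the new fstab entry without rebooting, ask the system to mount everything listed in fstab; a clean exit plus the share appearing in df -h means the entry is correct:

# mount -a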

Debug NFS configuration issues (basics)

 

You can continue to modify the /etc/exports file to share more directories or set specific access restrictions depending on your needs.

If you encounter any issues, check the server logs or use:
 

# exportfs -v
/var/www          192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data      192.168.0.205/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda1/
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/mnt/sda2/info
        192.168.0.200/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/mobfiles    192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/home/var_data/public_html
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/var/public
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/neon/data
        192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/scripts      192.168.0.209/32(async,wdelay,hide,sec=sys,rw,secure,no_root_squash,no_all_squash)
/backups/data-limited
        192.168.0.200/32(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/disk/filetransfer
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)
/public_shared/data
        192.168.0.200/23(async,wdelay,hide,sec=sys,ro,secure,no_root_squash,no_all_squash)


The exportfs -v command can help severely in troubleshooting the NFS configuration.

Of course there is much more to be said on that; you can, for example, check /var/log/messages, /var/log/syslog and other logs that can give you hints about issues, as well as manually try to mount / unmount a stuck NFS share to learn more about what is going on, but for a starter that should be enough.

Sum it up: what did we learn?

We learned how to set up a basic NFS server and mount its shared directory on a client machine.
This is a great solution for centralized file sharing and collaboration on Linux systems. Even though many companies are trying to avoid it due to its lack of connection encryption, for historical reasons NFS has been widely used over the years and has helped dramatically for the Internet as we know it to become the World Wide Web of today. Thus, for a well-secured network and perhaps a non-critical files infrastructure, NFS is still a key player in file sharing among heterogeneous networks for the multitudes of Gigabytes or even Petabytes of data you would like to share among your Personal Computers / Servers / Phones / Tablets and generally all kinds of digital computer equipment.

Protect Application servers against SQL injection, unsafe redirects and clickjacking with the HAProxy load balancer

Tuesday, April 1st, 2025

 

Let's say you are a system administrator who has to manage HAProxy Load Balancers for High Availability, throwing traffic to a set of 4 Application servers, and you only do round-robin traffic load balancing, seamlessly, without modifying the sent traffic. The haproxies are used only to send the frontend traffic towards the application machines, and the traffic is then returned back via another set of haproxies.


As it is crucial for incoming requests to the application frontend to be secure, in this article I'll give a few options that can be turned on in haproxy to strengthen the security of the backend application (against "hackers" / script kiddies).

Here is a sample chunk of haproxy frontend / backend configuration you can use in the haproxy.cfg config file for the purpose.


  frontend Incoming_Frontend
           bind 10.10.150.8:80 ssl crt /etc/haproxy/certs/your-domain-cert.net_haproxy.pem ca-file /etc/haproxy/certs/CustomCompanyCA.crt verify optional
           mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog
              timeout client 600s
              log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r

             default_backend bk_Incoming_Frontend

    backend bk_Incoming_Frontend
           mode http
           balance roundrobin
           timeout server 330s
           timeout connect 4s
           server bk_AppServer_01 10.10.250.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer02 10.40.251.30:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer03 10.50.252.40:8088 weight 1 check port 8088 on-marked-down shutdown-sessions
           server bk_AppServer04 10.80.253.50:8088 weight 1 check port 8088 on-marked-down shutdown-sessions

 

The configuration variables that improve backend security are as follows:

  mode http
                http-request del-header max-forwards
                http-response set-header X-Frame-Options sameorigin
                http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1
              option httplog

Explained, the above haproxy config means the following:

This HAProxy configuration is set up for handling HTTP traffic with some specific request and response modifications.

Let's go through each directive:

Breakdown of the Configuration:

  1. mode http

    • This tells HAProxy to operate in HTTP mode, meaning it understands and processes HTTP-specific directives (e.g., modifying headers, logging, etc.).

  2. http-request del-header max-forwards

    • This removes the Max-Forwards header from incoming HTTP requests.

    • The Max-Forwards header is used in TRACE or OPTIONS requests to limit the number of hops a request can take.

    • Removing it may help prevent some types of request-loop abuse or simplify routing.

  3. http-response set-header X-Frame-Options sameorigin

    • This sets the X-Frame-Options header in HTTP responses to sameorigin .

    • Purpose: Prevents clickjacking attacks by ensuring that the page can only be embedded in a frame if it’s from the same origin (not by third-party sites).

      For those who don't know, clickjacking is the malicious practice of manipulating a website user's activity by concealing hyperlinks beneath legitimate clickable content, thereby causing the user to perform actions of which they are unaware. For example, you click a payment button on a decoy website, but instead of paying the real target site, your money is sent to a malicious user's bank account.

  4. http-response replace-header Location http[s]*://[^/:]*[:]*[0-9]*(/.*) \1

    • This modifies the Location header in HTTP responses.

    • It strips out the scheme ( http:// or https:// ), domain, and port, leaving only the path.

    • Example:

      • Before: Location: https://example.com:8080/path/to/resource

      • After: Location: /path/to/resource

    • This ensures that redirects remain relative instead of absolute, which can help in reverse proxy setups.

  5. option httplog

    • Enables detailed logging for HTTP traffic.

    • Logs will include request method, URL, response status, and other useful details for debugging and monitoring.


Purpose of This Configuration:

  • Security:

    • Removing Max-Forwards helps mitigate abuse.

    • X-Frame-Options: sameorigin prevents clickjacking.

  • Redirection Handling:

    • Ensures the backend does not expose internal hostnames or ports in redirects.

  • Logging:

    • Enables HTTP-specific logging for better monitoring and debugging.

This setup is typical for a reverse proxy scenario where HAProxy is fronting backend services while enforcing security measures and keeping responses clean.
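Whenever you touch haproxy.cfg it is a good habit to validate the file before reloading; the -c flag only checks the configuration and does not affect the running service:

# haproxy -c -f /etc/haproxy/haproxy.cfg
# systemctl reload haproxy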

What have we learned?

In this short article, we've learned how to improve application security with a simple haproxy load balancer: by removing Max-Forwards (a limit on the maximum hops traffic can take until reaching its destination), by setting X-Frame-Options to prevent clickjacking, and by using redirection handling to make sure the backend does not expose the internal hostnames or ports used in redirects.

Any other meaningful protection options and hints for proxying traffic with haproxy are most welcome in the comments section. If you know such, help others learn by sharing.

How to prevent /etc/resolv.conf from being overwritten on every Linux boot. Make /etc/resolv.conf DNS records permanent forever

Tuesday, February 4th, 2025

how-to-make-prevent-etc-resolv.conf-to-ovewrite-on-every-linux-boot-make-etc-resolv-conf-permanent-forever

Have you recently been seriously bothered, after one of the updates from an older to a newer Debian / Ubuntu / CentOS or other Linux distribution, by the fact that /etc/resolv.conf has become a dynamic file that, pretty much in the spirit of cloud technologies, is regenerated and overwritten on each and every system (server) OS update / reboot, and because of that you start getting wrong, inappropriate DNS records in /etc/resolv.conf that harm your server infrastructure?

Within my own server infra I have faced that oddity for some years now, and I guess every system administrator out there has suffered at some point, having to migrate an older Linux release to a newer one where something gets messed up with DNS resolving due to that new Linux OS feature of /etc/resolv.conf not really being static any more.

The dynamic resolv.conf file for the glibc resolver used to be regenerated by the resolvconf command, and consequently it can also be tampered with by dhclient and the systemd-resolved service, as well as perhaps other mechanisms, depending on how the architects of the different Linux distributions made it behave…

There is more than one way to stop the annoying /etc/resolv.conf overwrite behavior.

1. Using dhclient hooks to stop /etc/resolv.conf being overwritten

With dhclient, either a small null stub script can be used or a separate hook script.

The null script would look like this

root@pcfreak:/root# vim /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate

#!/bin/sh
make_resolv_conf() {
    :
}

root@pcfreak:/root# chmod +x /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate

 

This script overrides an internal function called make_resolv_conf() that would normally overwrite resolv.conf, so that it instead does nothing.

On older Ubuntu and Debian versions this should work.


An alternative method is to use a small dhcp hook script like this:

root@pcfreak:/root# echo 'make_resolv_conf() { :; }' > /etc/dhcp/dhclient-enter-hooks.d/leave_my_resolv_conf_alone
chmod 755 /etc/dhcp/dhclient-enter-hooks.d/leave_my_resolv_conf_alone


On the next boot, when dhclient runs on reboot, or when you manually run sudo ifdown -a ; sudo ifup -a,
it loads this nodnsupdate script or the hook script, and hopefully your manually configured values in /etc/resolv.conf will not be messed up anymore.

2. Use chattr to set the immutable flag attribute on /etc/resolv.conf to prevent a reboot from overwriting it

Anyway, the universal and simple "hack" to prevent /etc/resolv.conf from being overwritten, which many prefer instead of the dhcp approach (especially as not everyone runs dhcp on a server), is to recreate the file and make it immutable with chattr (assuming chattr is supported by the filesystem you use on the Linux host, i.e. EXT3 / EXT4 / XFS).

You might need to check the filesystem type, before using chattr.

root@pcfreak:/root# blkid  | awk '{print $1 ,$3, $4}'
/dev/xvda1: TYPE="xfs"
/dev/xvda2: TYPE="LVM2_member"
/dev/mapper/centos-root: TYPE="xfs"
/dev/mapper/centos-swap: TYPE="swap"
/dev/loop0:
/dev/loop1:
/dev/loop2:

 

Normally EXT filesystems and XFS support it; note that this is not going to be the case with a network filesystem like NFS.

If you have some weird filesystem type and you try to chattr, you will get an error like:

chattr: Inappropriate ioctl for device while reading flags on /etc/resolv.conf

To make the /etc/resolv.conf file unchangeable on next boot by dhclient or by systemd-resolved (a systemd service that provides network name resolution to local applications via a D-Bus interface and the resolve NSS service (nss-resolve)):

root@pcfreak:/root# rm -f /etc/resolv.conf
{ echo "nameserver 1.1.1.1";
echo "nameserver 1.0.0.1";
echo "search mydomain.com"; } > /etc/resolv.conf
chattr +i /etc/resolv.conf
reboot


Also, if you don't want unexpected results after some update caused by systemd-resolved doing something strange, it is a good thing to rename the file to /etc/systemd/resolved.conf.dpkg-bak, or to completely remove the file:

/etc/systemd/resolved.conf

To prevent dhclient from overwriting the server's /etc/resolv.conf with something automatically taken from a preconfigured central DNS inside the network configuration made in /etc/network/interfaces, such as:

        dns-nameservers 127.0.0.1 8.8.8.8 8.8.4.4 207.67.222.222 208.67.220.220


You need to change the DHCP client configuration file dhclient.conf and use the supersede option.
To do so, edit /etc/dhcp/dhclient.conf.

Look for lines like these:

#supersede domain-name "fugue.com home.vix.com";
#prepend domain-name-servers 127.0.0.1;

Remove the preceding "#" comment and set the domain-name and/or domain-name-servers which you want (your DNS FQDN). Save, and hopefully the DNS-related overwriting of /etc/resolv.conf will be stopped, i.e. changes done manually inside /etc/resolv.conf should stay permanent.

Also it is a good practice to disable the ddns-update-style directive inside /etc/dhcp/dhcpd.conf:

root@pcfreak:/root# vim /etc/dhcp/dhcpd.conf
##ddns-update-style none;

However, on many newer Debian Linux releases as of 2025 and their .deb-based derivative distros, you have to consider that /etc/resolv.conf is a symlink to another file, /etc/resolvconf/run/resolv.conf.

If that is the case for you, then you'll have to set the immutable chattr attribute flag like so:

root@pcfreak:~# chattr -V +i /etc/resolvconf/run/resolv.conf
chattr 1.47.0 (5-Feb-2023)
Flags of /etc/resolvconf/run/resolv.conf set as ----i---------------------

root@pcfreak:/root# lsattr /etc/resolvconf/run/resolv.conf
----i--------------------- /etc/resolvconf/run/resolv.conf

3. Make /etc/resolv.conf permanent with a simple custom rc.local boot-triggered overwrite from a resolv.conf_actual template file

Consider that, due to the increasing complexity of how Linux-based OS-es behave and the fact that Linux is more and more written to fit integration into the Cloud and to be as easy as possible to containerize or orchestrate (with, let's say, docker or some cloud PODs) and a multitude of other OS virtualization modernities, /etc/resolv.conf might still continue to get overwritten! 🙂

Thus I've come up with my very own unique and KISS (Keep It Simple Stupid) method to make sure /etc/resolv.conf stays as intended: overwrite it on every boot from a known-good template. For that "hack" trick you only need the good old /etc/rc.local enabled; I have written a short article on how it can be enabled on newer Debian / Ubuntu / Fedora / CentOS Linux here.

Prepare your permanent, static /etc/resolv.conf content, containing your preferred DNS servers, in a file /etc/resolv.conf_actual.

Here is an example of one of my /etc/resolv.conf template files used to overwrite the generated one on each boot:

root@pcfreak:/root# cat /etc/resolv.conf_actual
domain pc-freak.net
search pc-freak.net
#nameserver 192.168.0.1

nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 212.39.90.42
nameserver 212.39.90.43
nameserver 208.67.222.222
nameserver 208.67.220.220
options timeout:2 rotate


Then, in /etc/rc.local, before the exit directive inside the file, simply copy the template over the real location of /etc/resolv.conf.

Before proceeding to add it, assure yourself the /etc/rc.local file is honored and executed by the OS:
 

root@pcfreak:/etc/dhcp# systemctl status rc-local
● rc-local.service - /etc/rc.local Compatibility
     Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/rc-local.service.d
             └─debian.conf
     Active: active (exited) since Sun 2024-12-08 21:59:01 EET; 1 month 27 days ago
       Docs: man:systemd-rc-local-generator(8)
    Process: 1417 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
        CPU: 302ms

Notice: journal has been rotated since unit was started, output may be incomplete.

root@pcfreak:/root# vim /etc/rc.local

 

cp -rpf /etc/resolv.conf_actual /etc/resolvconf/run/resolv.conf


NB ! Make sure this line is placed before any exit 0 command in /etc/rc.local, otherwise it won't work.
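
For clarity, here is a minimal sketch of a complete /etc/rc.local with the copy line correctly placed (the shebang and the final exit 0 are the usual Debian skeleton):

#!/bin/sh -e
# restore the static DNS configuration on every boot
cp -rpf /etc/resolv.conf_actual /etc/resolvconf/run/resolv.conf
exit 0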

That's it folks 🙂 
Using this simple trick you should no longer be bothered by a mysteriously overwritten /etc/resolv.conf on the next server reboot or system update (via puppet / ansible or some other centralized update automation) causing you a service or infrastructure outage.

Enjoy !

Zabbix PowerShell PS1 script to write zero or one if a string is matched inside a log file

Monday, December 2nd, 2024

How to Install and Configure Zabbix Server and Client on Rocky Linux 9 - Cộng Đồng Linux

At work we had set up Zabbix log file processing for a few servers, for a service doing Monitoring Health Checks for a special application via a strongly encrypted tunnel. Based on the check, the app reports whether the remote side has processed data or not.
As me and my team are not the maintainers of the zabbix-server the zabbix-agents send their data to, a lot of the content sent via the configured Zabbix Item consists of simply "" empty strings. Those empty strings, however, get stored in the zabbix-server database, and since the check runs frequently, about 500 records of empty string lines were being written to the zabbix server; we got a complaint from the zabbix administrators that we have to correct our monitoring setup to not flood the zabbix-server.

Since Zabbix cannot catch the "" empty string and we cannot suppress the string from being written by the Item, we needed a way to change the monitoring so that the configured application check returns 1 (on error) and 0 (on success).

Zabbix, even though advanced, has a strange behavior in its log[] function, e.g.

log[/path/to/log,,,,skip]

The log function is used to analyze a log file and cut out the last or first lines of a file, similar to UNIX's head and tail over log files; this is described in the zabbix log file monitoring documentation here. If a string is matched it can return string 1, but if nothing gets matched the result is an empty string "", and this empty string cannot be used to analyze the data within the Item.

There are plenty of discussions online about this weird behavior, and many people offer different approaches to solve the strange situation, but as we tried them together with our colleague sysadmins, none of those really worked out.

Thus we decided on the classical workaround: simply use a PowerShell script that checks a number of lines inside a provided log file, analyzes whether a string is found, and prints the value "1" if the string is matched or "0" if not; this PS1 script is then set to run via a standard zabbix userparameter script.

This worked well. As all of us mainly manage Linux systems and don't have enough knowledge of PowerShell, we used our internal Artificial Intelligence (AI) tool, LibreChat – a free and open source ChatGPT clone.

LibreChat includes OpenAI's models, but also others — both open-source and closed-source — and its website promises "seamless integration" with AI services from OpenAI, Azure, Anthropic, and Google — as well as GPT-4, Gemini Vision, and many others. ("Every AI in one place," explains LibreChat's home page.) Plugins even let you make requests to DALL-E or Stable Diffusion for image generations. (LibreChat also offers a database that tracks "conversation state" — making it possible to switch to a different AI model in mid-conversation…)

$logfile = "C:\path\to\your\logfile.log"
$searchString = "-1"
 
# Get the last 140 lines
$lines = Get-Content $logfile -Tail 140
 
# Filter lines containing the search string
$found = $lines | Where-Object { $_ -match [regex]::Escape($searchString) }
 
# Output 1 if the search string was found, 0 otherwise
if ($found) {
    Write-Host 1
} else {
    Write-Host 0
}

You can download the return_zero_or_one-if-string-matches-in-log-powershell.ps1 script here
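
On the Windows host running the Zabbix agent, a UserParameter along these lines wires the script into Zabbix; the item key name and the script path are placeholders, adjust them to your setup:

UserParameter=app.health.logcheck,powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\zabbix\scripts\return_zero_or_one-if-string-matches-in-log-powershell.ps1"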

How to split large files in Windows via the split command line tool or a File Archive GUI tool easily

Tuesday, October 22nd, 2024

Moving around very large files, especially VirtualBox Virtual Machines or other VM formats, between a Windows host and OneDrive might be a problem, due either to Azure Cloud configured limitations or to other restrictions that your company Domain Administrator has configured. Thus, if you have to migrate your old hardware Laptop PC with Windows 10 to a newer, faster, better-performing Notebook Computer with Windows 11 and you still want to keep and move your old large files, this short and trivial article will explain how.

The topic is an easy one, and most novice sysadmins should already have bumped into something like this, but anyway I found it useful to mention Git for Windows here, as it is really useful too; thus I wrote this small article.

The huge files to move, in my case experimental Virtual Machine images which I needed to somehow migrate to the new freshly installed Windows laptop, were 40 / 80 etc. Gigabytes each, or whatever large amount of files you need to move from your PC to the OneDrive cloud. Of course, the most straightforward thing I tried was to simply add the file for inclusion into the OneDrive storage (via the OneDrive tool setup interface); however, this failed, due to OneDrive Cloud file format security limitations, Antivirus solutions configured to filter out large file copying, or even a prohibition on including any kind of Virtual Machine ISOs straight into the cloud.

With this big files comes the question:

How to copy the Virtual Machines from your old hardware laptop to the Cloud (without being able to use an external SSD hard drive or a USB SSD flash drive, due to a Domain policy configured for your Windows that makes it unable to copy to an externally connected drive, but only to read from such)?
 

Here are a few sample approaches to do it, both from the command line (useful if you have to repeat the process or script it and deploy to multiple hosts) and, for single hosts, via an Archiver tool:

 

1. Using the split command from Git for Windows (Bash) MINGW64 shell

Download Git for Windows – https://git-scm.com/download install it and you will get the MINGW64 bash for Windows executable.

Run it: either invoke the bash command from the command line, or trigger the Windows Run prompt (Windows key + R) and type the full path to the executable:
 

C:\Program Files\Git\git-bash.exe


Git-for-Windows-bash-for-windows-MINGW64-windows-11-screenshot

Use the integrated split program; to cut the file into pieces use:

 

# split MyVeryLargeFileVM.vdi -b 800m


To split the .VDI VirtualBox file into, let's say, 5 Gigabyte pieces:

# split MyVeryLargeFile.vdi -b 5g

The output pieces will be named as with a normal UNIX / GNU split command, and each 5GB piece will be named like:

xaa
xab
xac
xad

If you want more meaningful names for the split files, you can set a split file prefix and tell it to generate numeric suffixes 5 digits long:

# split MyVeryLargeFile.vdi MyVeryLargeFileVM-parts_ -b 5g -d -a 5

  • the -d flag is for using numerical suffixes (instead of the default aa, ab, ac, etc…),
  • and the option -a 5 tells it the suffixes should be 5 digits long.
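
To reassemble the pieces later, concatenate them back in order from the same MINGW64 shell (on a plain Windows cmd prompt, copy /b xaa+xab+xac+xad MyVeryLargeFileVM.vdi does the same job):

# cat xaa xab xac xad > MyVeryLargeFileVM.vdi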

2. Split large files by Archiving them with Winrar (ShareWare) tool

If you already have WinRAR installed and you don't want to bother with too much typing on the command line, you can use good old WinRAR as a file splitter/joiner as well.

To split a file into smaller files, select "Store" as the compression method and enter the desired value (bytes) into the "Split to volumes" box.
This way you can have split files named filename.part1.rar, filename.part2.rar, etc.

WinRAR_cut-split-large-files-into-pieces-screenshot-Windows

3. Split files with 7-Zip (FreeWare)

Assuming you have 7-Zip installed on the PC, you can archive the big file into smaller pieces: you can create the split file from the 7-Zip interface menus:

7zip-file-split-files-into-multiple-pieces-windows-screenshot

Or directly cut the single file into multiple volumes from Windows Explorer, by selecting the file and using the drop-down menus:
7zip-creation-of-multiple-parts-file-from-single-one-screenshot-Windows

7-zip-split-huge-files-to-lower-parts-set-volume-size

Sum it up: what did we learn?

What we learned is how to cut large files into multiple sequential ones for easy copying between network sides, via both Git for Windows and manual copy-paste of the parted multiple files to OneDrive / DropBox / pCloud or Google Drive.
There are plenty of other approaches to take, as there are also GUI file tools: besides using GnuWin (GNU tools for Windows), Cygwin, or the GSplit GUI tool, or some of the many Archiver tools available for Windows, another option to split the large files is to use a bunch of PowerShell and Batch scripts written to help you do the file split for both binary and text files. But I'll stop here, as I believe that is pretty much enough for most basic needs.

 

How to Copy / Backup Windows USB drive from one USB to a second

Friday, October 18th, 2024

Did you know that when you copy all the files from a USB Drive you don’t copy all the data?

Did you know that there may be files that are not even visible?

In this tutorial you will discover how to copy all of your USB Drive sector by sector, that is to say, that you will see how to create a copy identical to your USB drive without missing anything!

This can be useful if you have formatted your USB stick in error and want to recover it: you can create an image of the USB Drive on your computer and then recover the formatted data from the image afterward!

The software used in this tutorial is called ImageUSB, it is free, portable, and easy to use.

Don’t use this method if you want only to copy some files, use this to clone/backup your USB Drive with all its master boot record, partition tables, and data.

Let’s go!

Clone Your USB Drive with ImageUSB on Windows 10

Start by downloading and extracting ImageUSB from this official URL: https://www.osforensics.com/tools/write-usb-images.html

Double-click on  imageUSB.exe .

Select your USB Drive from the list, select “Create image from USB drive“. Choose the location for the binary image file (.bin) that will be created from the USB drive.

Click on "Create". Click "Yes" to confirm your choices.

imageusb clone usb flash drive backup restore 3 create image

Click “Yes” to overwrite the bin file in case it’s already there.

Wait for a couple of minutes…

After the image is created you should see this message. Click “OK“.

Now if you want to restore an image to your USB Drive, just select your USB Drive and choose “Write image to USB drive“. Choose your bin image and click on “Write“.

imageusb clone usb flash drive backup restore 7 write

This program is not recommended for USB Drives of different sizes…
Use it mostly for backup/restore onto the same USB Drive for your bootable software.

There you have it, the copy of one USB to a second USB is completed!

Enjoy ! 

 

 

Recover lost / forgotten root password for CentOS 7 Linux / Boot CentOS 6 into Single User mode to reset admin pass

Friday, September 27th, 2024

centos-community-enterprise-operating-system-logo.

If you have some old CentOS 7 Virtual Machine hanging around for a long time and you don't remember the root password, or you don't remember where you stored it, but something important is left over as data, you might need to recover the root password for your CentOS 7 Virtual Machine.

I recently had to resolve that issue, and here are the few easy steps to take to recover the lost root password.

Assuming you have tried to boot the VM, the VM boots fine, and your few attempts to manually input some of your usual default passwords have failed, next:

1. Reboot the Virtual Machine to the GRUB boot menu

 

grub.png

The GRUB boot screen should appear and stay there for a few secs.

2. Edit the boot loader kernel options (add rd.break enforcing=0)

 

How to reset root password on CentOS Linux - Clouvider

Press 'e' to edit the boot loader and modify the boot command options passed to the Linux kernel.

In GRUB edit mode, add:

rd.break enforcing=0

to the end of the line starting with linux, at the end of the passed parameters list, as shown in the picture.

When done editing, press Ctrl-x (Control button x key simultaneously) to boot with changed parameters.

ALTERNATIVE WAY TO BOOT THE SYSTEM INTO ROOT WITHOUT PASSWORD PROMPT:

An alternative option to use instead of rd.break enforcing=0 is to substitute the rhgb quiet kernel options with init=/bin/bash

Edit CentOS Grub Boot Menu Entries rhgb quiet options shot

Modify kernel parameters pass init=/bin/bash to kernel to boot emergency mode centos linux

 

As you might wonder about the meaning of the 2 passed parameters:

rd.break breaks the boot process at the initramfs, while
enforcing=0 disables SELinux (which is often enabled by default on CentOS).

3. Boot in CentOS emergency mode and Reset the root password
 


Once the system boots up with the modified kernel options, the switch_root prompt will appear.
As the emergency mode mounts the filesystem read-only under the default /sysroot directory, in order to be able to
modify the root password hash stored inside the read-only mounted /sysroot/etc/shadow, you need to remount the filesystem
in read-write mode.

To Remount the read-only file system /sysroot in write mode:

# mount -o remount,rw /sysroot

As /sysroot is not the root directory, to be able to use the standard passwd command you need to make /sysroot
the default root folder of the booted Linux by chrooting into it.
 

  • Generate the password hash manually (for hardcore masochistic admins 🙂 )

If you're a hardcore Linux sysadmin, of course, generate your own new password hash and directly modify /etc/shadow, copy-pasting the hash string.

If you want to manually generate the hash string, you can do it depending on the required encryption algorithm with:

For a SHA-512 encrypted pass (-1 gives MD5, -5 SHA-256, -6 SHA-512):

# openssl passwd -6 -salt xyz  yourpass

For a SHA-512 encrypted pwd via the whois package's mkpasswd:

# mkpasswd --method=SHA-512 --stdin

For a (des, md5, sha256, sha512) encrypted pw, depending on the salt prefix ($1$ is MD5, $5$ SHA-256, $6$ SHA-512):

# perl -e 'print crypt("YourPasswd", "\$6\$salt\$"),"\n"'


Once the string is generated:

# vim  /etc/shadow


and exchange the old hash string with the new one for root.

  • Change password with chroot (the easy, common way)

remount read write the filesystem in emergency single user mode CentOS LINUX

# chroot /sysroot

That should drop you into another shell, bash-4.x.

 

Reset root user password in CentOS 7

# passwd
Changing password for user root.
New password:
Retype new password:
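
Note that this walkthrough booted with enforcing=0, so SELinux stays out of the way. If you instead used plain rd.break with SELinux left enforcing, the commonly advised extra step is to request a filesystem relabel before rebooting, so the modified /etc/shadow gets a correct security context:

# touch /.autorelabel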

We have to sync the entire filesystem with the sync command; for novice sysadmins who have never heard about this command, a
short description follows:

The Linux sync command synchronizes cached data to permanent storage.
This data includes modified superblocks, modified inodes, delayed reads and writes, and others. sync uses several system calls:

sync()
syncfs()
fsync()
fdatasync()


For example, the sync command utilizes the sync() system call to write all buffered modifications to file data and metadata to an underlying storage device.

As a Linux systems administrator or developer, understanding the sync command can be crucial for efficient file synchronization. Additionally, sync can be helpful after crashes or when the file system becomes corrupted.


# sync

# exec /sbin/init

Try out the root password after booting normally into CentOS; the newly set administrator pass should work.


Resetting forgotten (lost) root password on CentOS 6

The process is absolutely the same except in Step 2 (the modification of the GRUB boot menu by pressing the e key): there, after

rhgb quiet

add one 'S' at the end.

This S character means 'boot CentOS into Single user mode':

rhgb quiet S

 

Go to single user mode on CentOS 6 Linux in boot loader S kernel setting

Then press the ENTER key and press the b key to boot CentOS 6 into single user mode.