Posts Tagged ‘copy’

iSH, the best free SSH / Telnet client for iOS (iPhone, iPad), an equivalent of MobaXterm and a fully functional Alpine Linux emulator

Wednesday, February 8th, 2023

ish-linux-terminal-emulator-for-iphone-ipad-ios-logo-screenshoticon

A few months ago I switched from my old BLU R1 HD phone (a great old low-budget phone for its price) to an iPhone 10 (X) that a friend gifted to me. Coming from the Android world, the move is a pain in the ass for anyone with Android experience, as some of the apps available in Google's Play Store have no equivalent in Apple's AppStore. One of the crucial things I was interested in as a freshly migrated Android-to-iPhone user was a decent SSH / Telnet client and terminal with which I can easily connect to my Linux servers both at home and at work.

As an Android phone user, to connect to and manage my SSH sessions I most often used some of the most popular clients: ConnectBot / SSHDroid / JuiceSSH.
On Android I usually installed all of these tools but most frequently used ConnectBot, which over time quickly became my favourite SSH client for Android.

In short, the reasons why I really loved ConnectBot and used it on Android:

  • It is Completely free
  • Ad-free
  • Open source (too bad it is not Free Software, but still a step better)
  • Copy and paste text between Applications
  • Customizable interface (i.e. font size, keyboard layout, SSH auth agent, etc.)


connectbot-android-ssh-remote-connect-client-screenshot

I've seen some people use and prefer Termius, but I never really liked this client myself, as it included some advertisements or for some other reason I don't remember.
Switching to the iOS mobile operating system was of course quite a shock, especially the moment I found out that the standard, well-loved SSH remote client programs are either not available or only have a paid version. Thus it took me quite a while of research and googling until I found some decent stuff.

termius-ssh-telnet-client-ios-screenshot

I tried Termius for a while as well, but again its ads and lack of some functionality pissed me off, so I moved on to Shelly.

shelly-iphone-ssh-telnet-client-ios-screenshot

Shelly is really not a bad tool, but it has a limit on the number of SSH sessions you can add, plus other limitations that can only be unlocked with an "Upgrade" to its paid version. Thus, after a few weeks of attempts to make it my remote server management mobile tool for iPhone, I dropped it as well.

Then I found the Blink Shell App – Blink Shell is a professional, desktop-grade terminal for iOS. Overall the tool is really great and easy to use, but again, to use it at its full power you need the paid version, and until you pay for it your shell gets interrupted every now and then by some really annoying ads.
Thus, even though I used these few tools for a time and you can do basic remote ssh / telnet session operations with them, I eventually started looking for a better free SSH client alternative for iPhone users.

Then my dear friend Milen (Static) came over for dinner and showed me iSH.
The moment I saw this tool I totally loved it, for its simplicity, its resemblance to the classical physical TTY Linux consoles I used back in the days, and the ease with which it gains improved functionality (multiple session management) through a simple screen or tmux.

Wait, what's iSH? And why is it the best SSH / Telnet client to manage your servers remotely on iOS mobiles (iPhones and iPads)?

iSH is a project to get a Linux shell environment running locally on your iOS device, using a usermode x86 emulator.


In other words, iSH is a Linux emulator with busybox and package ports for many of the standard Linux tools you would get with a simple apt-get / yum, or, if I have to compare, what you get via MobaXterm's advanced apt-cyg (Cygwin packages) tool.

Once installed, iSH comes with the apk command line package management tool pre-installed, with which you can install stuff like openssh-client / screen / tmux / mc (midnight commander) etc. apk is an apt-like command line tool which installs its packages from the Alpine Linux repositories.
Alpine Linux is perhaps little known, as it is not one of the mainstream distributions such as Fedora or Ubuntu, but to those more concerned about security Alpine Linux is well known as a security-oriented, lightweight Linux distribution based on musl libc and busybox. What makes it even more attractive, and perhaps the reason the iSH developers decided to use it as the basis for their emulator, is that it is actively developed, and its tightened security makes it a good complement to the quite closed and security-focused mobile platform iOS.
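For those new to Alpine, the basic apk workflow looks like this (a minimal sketch; the package names are just examples):

# apk update                 (refresh the package index from the Alpine repositories)
# apk search openssh         (search for a package by name)
# apk info screen            (show the description of a package)
# apk add tmux               (install a package)
# apk del tmux               (remove a package)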

iSH is available straight from the AppStore, so to use it just install it and run it (the really great news is that it does not require the iPhone to be jailbroken – it is ordinary installable software straight from the AppStore).
iSH already comes with some of the standard programs you would expect in a Linux environment, such as vi, wget, zip / unzip and tar.
However, to fit it better for my use over ssh and improve its capabilities, as well as to support multiple virtual ssh windows just like you would do in a Linux xterm,
run from the iSH shell:

# apk add openssh-client
# apk add screen
# apk add vim
# apk add mc
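With openssh-client installed, a minimal ~/.ssh/config saves a lot of typing on a phone keyboard (just a sketch – the host names, IPs, users and ports below are placeholders):

Host homeserver
    HostName 192.168.0.10
    User admin
    Port 22

Host workserver
    HostName server.example.com
    User hipo
    Port 2222

After that, a plain ssh homeserver is enough to connect.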


ish-screenshot-terminal3-linux-emulator-iphone-alpine

ish-screenshot-terminal2-linux-emulator-iphone-alpine

ish-screenshot-terminal1-linux-emulator-iphone-alpine-linux

I also like to have Midnight Commander and the VIM text editor installed out of the box, to be able to move around in an ncurses interface on my iPhone.

ish-iphone-keyboard-key-shortcuts

Note that, just like on most GNU / Linux distributions, iSH gives you a normal Unix shell (busybox ash by default).
From there on, to use iSH as my default SSH client and give my freshly installed GNU screen some windowing beauty for readability (when I use screen with multiple ssh logins to different servers), as well as to make scroll back / scroll up of console text work in the screen virtual consoles, I set up the following .screenrc in my home directory.

The .screenrc to set up in iSH to ease your work with screen is as follows:
 

# An alternative hardstatus to display a bar at the bottom listing the
# windownames and highlighting the current windowname in blue. (This is only
# enabled if there is no hardstatus setting for your terminal)
hardstatus on
hardstatus alwayslastline
hardstatus string "%{.bW}%-w%{.rW}%n %t%{-}%+w %=%{..G} %H %{..Y} %m/%d %C%a "
# Enable scrolling fix the annoying screen scrolling problem
termcapinfo xterm* ti@:te@
# Scroll up
bindkey -d "^[[5S" eval copy "stuff 5\025"
bindkey -m "^[[5S" stuff 5\025

# Scroll down
bindkey -d "^[[5T" eval copy "stuff 5\004"
bindkey -m "^[[5T" stuff 5\004

# Scroll up more
bindkey -d "^[[25S" eval copy "stuff \025"
bindkey -m "^[[25S" stuff \025

# Scroll down more
bindkey -d "^[[25T" eval copy "stuff \004"
bindkey -m "^[[25T" stuff \004

You can download the same .screenrc file from here straight with wget from the console:

# wget https://www.pc-freak.net/files/.screenrc


Run GNU screen manager

 

 # screen

You will end up inside a screen session. To open a new virtual terminal window, use the virtual keyboard from iSH and press

CTRL + A + C

To open more virtual windows inside screen, just press CTRL + A + C as many times as you need; each session will appear in a small window in the bottom corner, as you can see in the screenshot.

ish-terminal-with-screen-multiple-virtual-terminals-screenshot-iphone-ios

To move across the three unnamed screen virtual windows (0 ash, 1 ash and 2 ash), use the virtual keyboard.

For the next window use the key combination:
 

CTRL + A + N (where + just indicates that you press the keys one after another, you don't actually press the + 🙂 )


For Previous Window use:

CTRL + A + P

Or press CTRL + A followed directly by the window digit (0, 1, 2 …), or press CTRL + A and type

:select 3 (where 3 is the number of the window)
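Two more screen key combinations worth memorizing on a phone, where connections drop often, are detach and reattach:

CTRL + A + D          (detach from the running screen session, it keeps running in the background)

# screen -ls          (list the detached sessions)
# screen -r           (reattach to the last detached session)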

The iSH commands available without adding any further packages (the ones that are part of the busybox install) are as follows:

Available /bin/ directory commands:

arch  ash  base64  bbconfig  busybox  cat  chgrp  chmod  chown  conspy  cp  date  dd  df  dmesg  dnsdomainname  dumpkmap  echo  ed  egrep  false  fatattr  fdflush  fgrep  fsync  getopt  grep  gunzip  gzip  hostname  ionice  iostat  ipcalc  kbd_mode  kill  link  linux32  linux64  ln  login  ls  lzop  makemime  mkdir  mknod  mktemp  more  mount  mountpoint  mpstat  mv  netstat  nice  pidof  ping  ping6  pipe_progress  printenv  ps  pwd  reformime  rev  rm  rmdir  run-parts  sed  setpriv  setserial  sh  sleep  stty  su  sync  tar  touch  true  umount  uname  usleep  watch  zcat  


Available /usr/bin/ commands:    

awk  basename  beep  blkdiscard  bunzip2  bzcat  bzip2  cal  chvt  cksum  clear  cmp  comm  cpio  crontab  cryptpw  cut  dc  deallocvt  diff  dirname  dos2unix  du  dumpleases  eject  env  expand  expr  factor  fallocate  find  flock  fold  free  fuser  getconf  getent  groups  hd  head  hexdump  hostid  iconv  id  install  ipcrm  ipcs  killall  ldd  less  logger  lsof  lsusb  lzcat  lzma  lzopcat  md5sum  mesg  microcom  mkfifo  mkpasswd  nc  nl  nmeter  nohup  nproc  nsenter  nslookup  od  passwd  paste  patch  pgrep  pkill  pmap  printf  pscan  pstree  pwdx  readlink  realpath  renice  reset  resize  scanelf  seq  setkeycodes  setsid  sha1sum  sha256sum  sha3sum  sha512sum  showkey  shred  shuf  smemcap  sort  split  ssl_client  strings  sum  tac  tail  tee  test  time  timeout  top  tr  traceroute  traceroute6  truncate  tty  ttysize  udhcpc6  unexpand  uniq  unix2dos  unlink  unlzma  unlzop  unshare  unxz  unzip  uptime  uudecode  uuencode  vi  vlock  volname  wc  wget  which  whoami  whois  xargs  xxd  xzcat  yes  


If you're a maniac developer you can even use iSH to do some program development in vim with Python / Perl or PHP, as these are available from the Alpine repositories and installable via a simple apk add packagename. For security experts, nmap and some other security tools are also available, but unfortunately not everything works yet, as the project is in active development and iOS has some security limitations if the OS is not rooted 🙂

Hence some of the packages you can install via the apk manager will actually fail.
There is a list of what works and what doesn't yet on iSH on the project's GitHub wiki, check it out here.

There is much more fun stuff you can do with it, and actually my quick research on how people use iSH on their phones led me to some videos talking about iOS and ethical hacking etc., but I'll stop here as I don't have the time to dig deeper into it.
If you know some good use of iSH, or some other goody you are using as a hack, please share it in the comments.

Enjoy ! 🙂

How to freshly upgrade a mistakenly installed 32-bit Windows 10 Professional to 64-bit Windows / A failure to disk-clone an old 120GB SSD to a 512GB drive due to a failed Solid State Drive

Wednesday, November 17th, 2021

upgrade-windows-10-32-bit-to-64-bit-howto-picture

I've been setting up a somewhat old PC with a Windows OS – an 11-year-old Lenovo ThinkCentre model M90p with 8 GB of memory, an Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz and an Intel Q57 Express chipset. The machine came to me with Windows 7 preinstalled, and the initial goal was to migrate Windows as-is, with its data, from the old 120GB SSD to a new 512GB SSD, and then, to keep the machine at least a bit more up to date, to upgrade the old Windows 7 to Windows 10.

As usual, this seemed like a very trivial task for a System Administrator, and even for someone like me who hasn't touched Windows much it looked like a piece of cake. However, as always with computers, once you think you'll be done in 2 hours it usually takes 20+. Some call it Murphy's law: "If something can go wrong, it will go wrong". But thinking "all is well, that's easy, let's do it" is a kind of proud thought, and to save us from this passion of pride – which according to the Church fathers is the worst passion one can have – and to humble us a bit,

God allows some unforeseen stuff to happen 🙂 So the case with this machine, where my original idea was "OK, I'll simply duplicate the old hard drive to the new one and place the new one in the ThinkCentre, not a big deal", turned into a small adventure 🙂

About this machine's hardware I have to say the old English saying "old but gold" is pretty true, especially after I attached the Samsung 512GB NVMe SSD drive which my dear friend and brother in Christ "Uncle Emilian" had received as a gift from another friend called Angel. To add even more to the rant, the name Emilian stems from the Greek Emilianos, which translated to English means Adversary… But anyways, the old 120 GB Intel SSD drive, besides being already completely full of data, turned out to have memory DATA chips that had perhaps burned out / worn out, so parts of the drive were unreadable.
I realized the SSD was faulty after first trying to clone the drives with my hardware disk-clone device (an Orico Dual Bay 2.5" 6629US3-C) and then attempting a simple bit-to-bit copy with the dd command.

orico-6629us3-c2-bay-usb3-type-b2.5-type3-5.inch-sata


At first, for some weird reason, the cloning of the 120GB SSD to the newer 512 GB one was unsuccessful – one of the two lamp indicators on the Source and Destination drives was continuously blinking orange, as it seemed data could not be read, even though I tried a few times and waited for about 1 hour for the cloning to complete. So at first I suspected it might be an issue with the disk-clone hardware device I bought last year. Therefore I attached the two drives to my Debian GNU / Linux 10 box as USB drives using the "toaster" device and tried a classical bit-to-bit copy from the terminal with dd, e.g.:


# dd if=/dev/sdb2 of=/dev/sdc2 bs=180M status=progress conv=noerror,sync

 
dd: error reading '/dev/sdb2': Input/output error
1074889+17746 records in
1092635+0 records out
559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/s
dd: writing to '/dev/dc2': Input/output error
1074889+17747 records in
1092635+0 records out
559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/s

Finally I did a manual copy of files from /dev/sdb2 to /dev/sdc2 with rsync, and part of the files were successfully copied – about 55 Gigabytes out of 110. Luckily the data on the broken Intel 320 Series 120GB was not top secret stuff, so losing some bits wasn't the end of the world 🙂
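For reference, the manual copy boiled down to something like the commands below (a sketch – the /mnt/old and /mnt/new mount points are just examples; the dying source drive is mounted read-only):

# mkdir -p /mnt/old /mnt/new
# mount -o ro /dev/sdb2 /mnt/old
# mount /dev/sdc2 /mnt/new
# rsync -av --progress /mnt/old/ /mnt/new/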

Next, I removed the broken 120GB SSD (which was perhaps at least 9+ years old) and attached the new drive to the Lenovo ThinkCentre. My dear friend wanted to have Windows again – his computer has a Microsoft "Certificate of Authenticity", i.e. the OEM registration serial key for Windows 7.

Lenovo-ThinkCentre-M90p-certificate-of-authenticity

So I jumped in and used some old USB flash drive to install Windows 7 again (in order to use the same active license), and from there on I used another old Windows 10 installation bootable stick of mine to upgrade the Windows 7 to Windows 10 (by using this Win 7 to Win 10 upgrade trick it is possible to keep using your old Windows 7 license key on Windows 10). So far so good – now I had Windows 10 Professional Edition installed on the machine, but I faced another issue: the machine's 8GB of memory did not get fully detected, for some weird reason only 3.22 GB of memory was detected.

only-2-80-gb-usable-windows-10-problem-32-bit-cpu-cause-screenshot

After a few minutes of investigation online, I realized I had installed a 32-bit version of Windows 10 Pro by mistake… So the next step was of course to upgrade to 64-bit to work around the unrecognized ~5GB of memory… To make sure my Windows 10 installation was up to date, I downloaded the latest Media Creation Tool from Microsoft's website, used the tool to burn the downloaded image to an empty USB stick (mine is 16GB, but the minimum required would be 4GB) and proceeded to reboot the Lenovo desktop machine and boot from the Windows 10 install flash drive. From there on I had to select that I need to install a 64-bit version of Windows and skip the license key prompt twice (acting as if I have no license), as Windows can recognize the key from the older OEM-activated 32-bit install and automatically fetches it from there.

Before proceeding to install the 64-bit Windows, of course double-check that the machine at hand already has its license key recognized by Microsoft and that it is 64-bit capable:

To check that the 32-bit version of Windows is properly licensed before the attempted upgrade, go to:

Settings > Update & security > Activation

check-if-windows-is-already-activated-settings-update-and-security-Activation-menus

 

To check whether the hardware is 64-bit capable, go to:

Settings -> System -> About

 

is-hardware-processor-64-bit-capable-windows-screenshot

32-bit Windows on an x64-based processor (the machine supports a 64-bit OS)

 

windows10-OS-Installation-media-install-tool

Media Creation Tool – the Windows 10 MS installer tool (make sure you select 64-bit (x64) instead of the default)

From the installer, I installed Windows just like a brand-new fresh Windows OS, and after answering the few trivial installation program questions I landed in the new working OS and proceeded to install the usual software that is a must-have on a freshly installed Windows; for some of them check my previous article Essential Must have software to install on Fresh new Windows installation host.

CentOS: disable SELinux permanently or one time at GRUB Linux kernel boot time

Saturday, July 24th, 2021

selinux-artistic-penguin-logo-protect-data

 

1. An Office 365 cloud-connected computer and a VirtualBox-hosted machine with SELinux preventing it from booting

At my job we're in the process of migrating my old Lenovo ThinkPad L560 laptop to a Dell Latitude 5510 with an Intel Core i5 vPro CPU and a 256 GB SSD drive. The new laptops are generally fine, though they're not even middle-class computers, and generally I prefer ThinkPads. The sad thing out of this is that our employer decided to migrate to Office 365 (again, perhaps another stupid managerial decision made out of an Excel sheet with a balance, to save some money) …

As you can imagine, Office 365 is not really PCI-standards compliant and not secure, since our data is stored in the Microsoft cloud and theoretically Microsoft holds and owns our data, or could wipe or lose it if they want to. The other obvious security downside I've noticed with the new "secure PCI-compliant laptop" is the initial PC login screen, which by default offers fingerprint authentication or the even worse and less secure face recognition. But obviously everything becomes more and more crazy, and people become less and less cautious about security if that saves money or centralizes the data … In the name of security we completely waste security – a very dubious paradox I don't really understand. But anyways, enough rant, back to the main topic of this article: how and why I had to disable SELinux.

As part of the migration I used Microsoft OneDrive to copy old files from the ThinkPad to the Latitude (on the old machine USBs are forbidden and I cannot copy over with a simple USB drive, and I also have no right to open the laptop and copy data from the hard drive directly; even if we had that right without breaking some crazy company policy, it would not be possible, as the hard drive data on the old laptop is encrypted. The funny thing is that the new laptop's data comes encrypted, yet there is nothing out of the box such as BitDefender or McAfee encryption – once again, obviously, our data security is a victim of some managerial decisions) …
 

2. OneDrive copy problems – unable to sync some of the copied files to OneDrive


Anyways, as the old laptop's security is quite paranoid and we're like Fort Knox – only port 80 and port 443 connections to the internet can be initiated – to get around these harsh restrictions it was simplest to use a VirtualBox virtual machine. So on the old laptop I had installed a CentOS 7 image, which I had been using so far, and I used OneDrive to copy my VirtualBox .vdi image to the new work laptop.

The first head bump was the .vdi file, which it seems is prohibited from being copied to OneDrive. To work around this I had to rename the original CentOS7.vdi to CentOS7.vdi-renamed on the old laptop and, once the data was in OneDrive, copy my VirtualBox VMs/ directory from OneDrive to the Dell Latitude machine, rename the .vdi-renamed back to .vdi, and import it into the freshly installed VirtualBox on the new machine.
 

3. Disable SELINUX from initial grub boot


So far so good, but as usually happens with migrations, I hit another blocker: the VM image, once booted from VirtualBox, badly crashed with some complaints that SELinux cannot be loaded.
Realizing CentOS 7 ships with the (for my use case) more or less meaningless SELinux, I took the opportunity to disable it.

To do so, I booted the kernel with SELinux disabled from the GRUB2 loader prompt, before the kernel and OS userland boot.

 

 

What I did is very simple: on the Linux GRUB boot screen I pressed the

'e' keyboard letter

that brought the grub boot loader into edit mode.

Then I had to add selinux=0 at the end of the kernel line of the selected boot entry, as shown in the screenshot below:

selinux-disable-from-grub.png

Next, to boot the Linux VM one time without SELinux enabled, I just had to press

Ctrl+X

The selinux=0 parameter should be added, as shown in the screenshot, on the kernel line somewhere after
root=/dev/mapper/….

4. Permanently Disable Selinux on CentOS 7


Once I managed to boot the virtual machine properly with Oracle VirtualBox, to permanently disable SELinux I had to do the following.

 

Once booted into CentOS, to check the status of selinux run:

 

# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

 

5. Disable SELinux one time with setenforce command


You can temporarily change the SELinux mode from enforcing to permissive with the following command:

 

# setenforce 0


Next, to permanently disable SELinux on your CentOS 7 for the next time the system boots, open the /etc/selinux/config file and set the SELINUX mode parameter to disabled.

Alternatively, on CentOS 7 you can edit the kernel parameters in /etc/default/grub (in the GRUB_CMDLINE_LINUX= key) and add selinux=0, so that on the next VM / PC boot we boot with SELinux disabled – for example GRUB_CMDLINE_LINUX="… selinux=0" – and then you have to regenerate your GRUB config like this:
 

# grub2-mkconfig -o /etc/grub2.cfg
# grub2-mkconfig -o /etc/grub2-efi.cfg


Further on, to disable SELinux at the OS level, edit /etc/selinux/config.
 

The default /etc/selinux/config with SELinux enabled looks like so:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing – SELinux security policy is enforced.
#       permissive – SELinux prints warnings instead of enforcing.
#       disabled – No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#       targeted – Targeted processes are protected,
#       mls – Multi Level Security protection.
SELINUXTYPE=targeted


To disable SELinux, modify the file to be something like:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing – SELinux security policy is enforced.
#       permissive – SELinux prints warnings instead of enforcing.
#       disabled – No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted – Targeted processes are protected,
#       mls – Multi Level Security protection.
SELINUXTYPE=targeted
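The same change can be applied non-interactively with a quick sed one-liner (a sketch, assuming the stock config shown above):

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# grep '^SELINUX=' /etc/selinux/config
SELINUX=disabled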

6. Check SELINUX status is disabled

# sestatus

SELinux status:                 disabled

So, in this article I briefly explained the fake security adopted by using the Microsoft Cloud environment Office 365, the OneDrive copy issues I faced (which even prevented my old laptop's virtual machine from booting properly), the handy trick of renaming a file that is unwilling to be copied from the old PC to M$ OneDrive, as well as the GRUB trick to disable SELinux both one time from grub2 and permanently.

Fix Oracle VirtualBox Virtual Machine inaccessible error on a Linux / Windows Host OS

Monday, November 2nd, 2020

Fix Oracle VirtualBox Virtual Machine inaccessible error on a Linux / Windows Host OS

Say you have installed a virtual machine on a host OS, be it Windows 7 / 10 or some GNU / Linux / FreeBSD OS, and suddenly, after a PC shutdown / restart, the virtual machine shows up as one that can't be run by Oracle VirtualBox anymore, with the VM marked "inaccessible". What is causing this and how to fix it?

Virtualbox-virtual-machine-in-inaccessible-state-screenshot
This error is usually caused by a forceful termination of the guest operating system or a host OS crash, such as a sudden hang due to a kernel error or a sudden power drop. Whatever the reason, if the termination signals sent to Oracle VirtualBox are not handled properly, VirtualBox fails to rewrite its own files and to close the VirtualBox application gracefully.

On a Windows virtualization host OS, the files touched by VirtualBox are commonly found under VirtualBox VMs\Name-of-VM:

virtualbox-vm-inacessible-_error-screenshot.png

For example, if you have Ubuntu installed, it will be under

C:\Users\\VirtualBox VMs\Ubuntu

If you have CentOS7 installed instead it will be under

C:\Users\\VirtualBox VMs\CentOS7

Usually under this dir you will find files which are with .vbox extension.

The VirtualBox files with the .vbox extension contain metadata the VirtualBox hypervisor requires to resolve the guest virtual OS's configuration.
If the main .vbox file is corrupted (e.g. it turns out to be empty), then use the backup .vbox-prev file to recover the contents of the original file.

How to solve the weird VirtualBox "inaccessible" error:

Usually under VirtualBox VMs\VM_Name you'll find a directory / file structure like:
 

Logs/
Snapshot/
VM_Name.vbox
VM_Name.vbox-prev
VM_Name.vbox-tmp

 

 

1. Assuming the .vbox file is empty, rename the empty .vbox file to a temporary name (e.g. rename VM_Name.vbox to VM_Name-empty.vbox). If you're unsure whether it is empty, open the file in a text editor and check.

2. Then make a copy of the backup file VM_Name.vbox-prev, where the copy has the same name as the original but with the word "copy" appended to it (i.e. VM_Name.vbox-prev is copied to VM_Name_copy.vbox-prev); likewise copy VM_Name.vbox-tmp to VM_Name_copy.vbox-tmp.

! Note that it is important to retain the original backup .vbox-prev file – it should not be altered or renamed itself.

3. Now rename the newly created copy of the .vbox-prev file (VM_Name_copy.vbox-prev) to the name of the empty .vbox file (VM_Name.vbox).

Now that this is done, you may add the .vbox file (guest OS) back into the VirtualBox hypervisor.
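On a Linux host the same recovery and re-registration can also be done from the shell, roughly like this (a sketch only – VM_Name and the path are placeholders, and the unregistervm step is only needed if the inaccessible VM is still listed):

$ cd ~/"VirtualBox VMs/VM_Name"
$ mv VM_Name.vbox VM_Name-empty.vbox                 # put the empty config aside
$ cp -p VM_Name.vbox-prev VM_Name_copy.vbox-prev     # keep the original backup untouched
$ mv VM_Name_copy.vbox-prev VM_Name.vbox             # promote the copy to the live config
$ VBoxManage unregistervm "VM_Name"                  # only if the VM still shows up as inaccessible
$ VBoxManage registervm ~/"VirtualBox VMs/VM_Name/VM_Name.vbox"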

If for some reason this did not work and you have a proper working copy of the virtual machine config under VM_Name.vbox-tmp, another approach to try is:
 

1.  Go to your Virtualbox folder i.e. C:\Users\hipo\VirtualBox VMs\Ubuntu

2. Check that the needed Ubuntu.vbox-tmp or Ubuntu.vbox-prev files are there.

3. Overwrite the Ubuntu.vbox file with the -tmp one – just rename Ubuntu.vbox-tmp to Ubuntu.vbox.

4. Exit VirtualBox and power the virtual machine on again.

Hopefully this will have recovered the state and snapshots of the "inaccessible" guest VM;
you should now see the error gone and the VM working as before.

 

How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Postfix_mailserver-logo-howto-forward-email-with-regular-expression-or-maildrop

Let's say you're a sysadmin doing an email migration of a clustered SMTP setup, and because of that you want to capture all incoming email traffic for a while and redirect (forward) it towards a single mailbox, where you can review the mail traffic flowing for a few hours and analyze it more deeply. This approach is useful if you have small or middle-sized mail servers and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways. If you don't want to just intercept the traffic and create a copy of the email flow using the always_bcc built-in postfix option (as pointed out in my previous article postfix copy every email to a central mailbox), you can copy the email flow via some custom-written dispatcher script set to be run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple example with maildrop, in case you want to filter out and deliver to an external email address only email targeted at a specific domain.
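For completeness, the always_bcc approach mentioned above is a single line in /etc/postfix/main.cf (a sketch – the collector address is a placeholder):

always_bcc = mail-archive@collector.example.com

followed by a postfix reload (systemctl reload postfix).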

If you use maildrop as the local delivery agent, to copy email targeted at a specific domain to another defined email address, use a rule like:

if ( /^From:.*domain\.com/:h ) {
  cc "!someothermail@domain2.com"
}


To use maildrop to just forward email incoming from a specific sender to a locally existing email address on the postfix server, onwards to an external email address, use something like:

if ( /^From: .*linus@mail.example.com.*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail linuxbox@collector.example.com"
        }
}

Then, to make the filter active (assuming the user has a physical Unix mailbox), paste the above into the local user's $HOME/.mailfilter.

What to do if the mails delivered via your Email-Server.com are sent by monitoring and alarming scripts towards many mailboxes that no longer exist after the migration?

To capture all traffic attempted to be sent via the mail server and forward all served mails towards a single external mail address, you can use the nice capability of postfix to understand PCRE (Perl-compatible regular expressions). Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation here, as well as check out the Postfix Regex Tester / Debugger online tool – useful for validating a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relay host towards an external mail server?

 

In /etc/postfix/main.cf include this line near the bottom, or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

The hash:/etc/postfix/virtual part defines the virtual file, which can be used to define any virtual domains you want to appear as present on the local postfix; the regexp:/etc/postfix/virtual-regexp part loads the file read by postfix in which you can type the regular expression applied to every email coming in via SMTP port 25 or the encrypted submission ports 465 / 587.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp

/.+@.+/ external-forward-email@gmail.com

Next, build the map file (this will generate /etc/postfix/virtual-regexp.db; strictly speaking, regexp: tables are read directly by postfix and don't need postmap, but running it does no harm):
 

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run the postmap .db generator again:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual


Note that in /etc/postfix/virtual you can add the mail domains for which you want the MTA to accept mail as local mail.

In case you need to view all virtual domains defined in postfix and configured to accept mail locally on the mail server:
 

$ postconf -n | grep virtual
virtual_alias_domains = mydomain.com myanotherdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual


The applied regexp /.+@.+/ external-forward-email@gmail.com will start forwarding mails immediately after you restart the MTA with:

# systemctl restart postfix


If you want to exclude target mail domains from being captured by the above catch-all regexp, place lines like these above it in /etc/postfix/virtual-regexp (regexp tables are evaluated top to bottom and the first match wins):

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com

Time for a test.

The next step is to send a test email and check that mail forwarding works as expected:
 

# echo -e "Tseting body" | mail -s "testing subject" -r "testing@test.com" whatevertest-user@mail-recipient-domain.com

Reinstall all Debian packages with a copy of apt deb package list from another working Debian Linux installation

Wednesday, July 29th, 2020

Reinstall-all-Debian-packages-with-copy-of-apt-packages-list-from-another-working-Debian-Linux-installation

A few days ago, in a hurry in the small hours of the night, I did something extremely stupid. Wanting to move a .tar.gz binary copy of a qmail installation to /var/lib/qmail with all the dependent qmail items, instead of extracting it to the admin user's home directory (/root), I extracted it to the main operating system root / directory.
Not noticing this, I quickly executed rm -rf var with the idea of deleting the whole directory tree under /root/var. Just 3 seconds later I realized I was issuing the rm -rf var in the wrong location, WITH the root user !!!! Scared of what I had done, I quickly pressed CTRL+C to immediately cancel the deletion of my /var.

wrong-system-var-rm-linux-dont-do-that-ever-or-your-system-will-end-up-irreversably-damaged

But as you can guess, since the machine has a Solid State Drive, and SSDs are much faster in I/O operations than classical ATA / SATA disks, I was not quick enough to cancel the operation and noticed that part of my /var had already been R.I.P-ped into the heaven of directories.

This was of course upsetting, so for a while I rethought the situation to get some ideas on what I could do to recover my system ASAP!!! The idea of course was to try to reinstall all my installed .deb Debian packages, to restore the system as close as possible to normal, i.e. to its state before my stupid mistake.

Guess my unpleasant surprise when I realized that dpkg, and respectively the apt-get, apt and aptitude package management tools, could not handle packages anymore, as Debian Linux's package dependency database had been damaged due to the missing dpkg directory

 

/var/lib/dpkg 

 

Oh man, that was unpleasant, especially since I've installed plenty of custom stuff on my MATE-based desktop and, generally, reinstalling it and updating the system to the latest Debian security updates etc. would be a time-consuming and painful process I wanted to avoid.

So of course the logical thing to do here was to try to somehow recover a copy of the /var/lib/dpkg database, if that was possible. That led me to the idea of looking for a way to recover my /var/lib/dpkg from a backup, but since I did not maintain any backup copy of my OS anywhere, that was not really possible. So anyway, I wondered whether dpkg keeps some kind of database backup somewhere, in case something goes wrong with its database.
This led me to a nice Ubuntu thread which pointed me to part of the solution for my root rm -rf dpkg DB disaster recovery.
Luckily the .deb package management creators have thought about situations similar to mine, and to give the user a restore point for a damaged /var/lib/dpkg database,

/var/lib/dpkg is periodically backed up in /var/backups
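You can quickly check which dpkg database backups are available on your system (the exact rotation numbers and which of the older copies are gzipped will differ per machine):

# ls -l /var/backups/ | grep -i dpkg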

A typical /var/lib/dpkg on Ubuntu and Debian Linux looks like so:
 

hipo@jeremiah:/var/backups$ ls -l /var/lib/dpkg
total 12572
drwxr-xr-x 2 root root    4096 Jul 26 03:22 alternatives
-rw-r--r-- 1 root root      11 Oct 14  2017 arch
-rw-r--r-- 1 root root 2199402 Jul 25 20:04 available
-rw-r--r-- 1 root root 2199402 Oct 19  2017 available-old
-rw-r--r-- 1 root root       8 Sep  6  2012 cmethopt
-rw-r--r-- 1 root root    1337 Jul 26 01:39 diversions
-rw-r--r-- 1 root root    1223 Jul 26 01:39 diversions-old
drwxr-xr-x 2 root root  679936 Jul 28 14:17 info
-rw-r----- 1 root root       0 Jul 28 14:17 lock
-rw-r----- 1 root root       0 Jul 26 03:00 lock-frontend
drwxr-xr-x 2 root root    4096 Sep 17  2012 parts
-rw-r--r-- 1 root root    1011 Jul 25 23:59 statoverride
-rw-r--r-- 1 root root     965 Jul 25 23:59 statoverride-old
-rw-r--r-- 1 root root 3873710 Jul 28 14:17 status
-rw-r--r-- 1 root root 3873712 Jul 28 14:17 status-old
drwxr-xr-x 2 root root    4096 Jul 26 03:22 triggers
drwxr-xr-x 2 root root    4096 Jul 28 14:17 updates

Before proceeding with the radical step of moving /var/lib/dpkg/info from another machine into my mistakenly wiped /var, I tried to recover with the well-known:

  • extundelete
  • foremost
  • recover
  • ext4magic
  • ext3grep
  • gddrescue
  • ddrescue
  • myrescue
  • testdisk
  • photorec

Linux file deletion recovery tools, booted from a USB stick loaded with a number of LiveCD distributions – i.e. I tested recovery with:

  • Debian LiveCD
  • Ubuntu LiveCD
  • KNOPPIX
  • SystemRescueCD
  • Trinity Rescue Kit
  • Ultimate Boot CD


but unfortunately none of them could recover the deleted files …
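For reference, a typical attempt with extundelete from the live environment looked roughly like this (a sketch – the device name /dev/sda5 is just an example, and the filesystem must not be mounted read-write during recovery):

# umount /dev/sda5                                        (never run recovery tools on a mounted filesystem)
# extundelete /dev/sda5 --restore-directory var/lib/dpkg
# ls RECOVERED_FILES/                                     (extundelete places anything it salvages here)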

Why could the standard file recovery tools not recover anything?

My assumption is that after I did the rm -rf var; from the system root, issued the sync command (if you haven't used it, check out man sync – it synchronizes cached writes to persistent storage) and restarted from the power-off PC button, the recovery should have worked, as I have recovered like that in the past on a normal SysV system with a plain old block filesystem such as EXT2, or any other filesystem without a journal. However, as the machine runs an EXT4 filesystem with journaling, this did not work, perhaps because something was not replayed properly from the journal (I initially blamed /lib/systemd/systemd-journald, though that is the logging daemon rather than the filesystem journal), which led to the situation that all recently deleted files were totally unrecoverable.

1. The first step was to restore the directory skeleton of /var/lib/dpkg

# mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}

 

2. Recover missing /var/lib/dpkg/status  file

The main file that gives dpkg information about the existing packages and their statuses on Debian-based systems is /var/lib/dpkg/status

# cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
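Note that /var/backups keeps several rotated copies and the older ones are gzip-compressed, so if dpkg.status.0 is itself damaged you can fall back to an older copy (a sketch, assuming such a rotated copy exists):

# zcat /var/backups/dpkg.status.1.gz > /var/lib/dpkg/status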

 

3. Reinstall the dpkg package manager to make package management work again

Say a warm prayer to the Merciful God ! and do:

# apt-get download dpkg
# dpkg -i dpkg*.deb

 

4. Reinstall base-files .deb package which provides basis of a Debian system

Hopefully everything will be okay and your dpkg / apt pair will be in normal working state, next step is to:

# apt-get download base-files
# dpkg -i base-files*.deb

 

5. Do a package sanity and consistency check and try to update OS package list

Check whether packages have been installed only partially on your system, or have missing, wrong or obsolete control data or files; dpkg should suggest what to do with them to get them fixed.

# dpkg --audit

Then resynchronize (fetch) the package index files from their sources described in /etc/apt/sources.list

# apt-get update


Do an apt DB consistency check:

#  apt-get check


check is a diagnostic tool; it updates the package cache and checks for broken dependencies.
 

Take a deep breath ! …

Do:

# ls -l /var/lib/dpkg

and compare with the above listing. If some -old file is not present, don't worry, it will be there tomorrow.

Next time, don't forget to do regular backups with a simple rsync backup script or something like Bacula / Amanda / TimeVault or Clonezilla.
 

6. Copy dpkg database from another Linux system that has a working dpkg / apt Database

Well, this was however not the end of the story … There were still many things missing from my /var/, and luckily I had another Debian 10 Buster install on another properly working machine with a similar set of .deb packages installed. Therefore, to make most of my programs work again, I copied over /var from the other machine with the similar package set to my messed-up machine with the partially deleted /var.

To do so …
On the functioning Debian 10 machine (a working host in the local network with IP 192.168.0.50), I archived the content of /var:

linux:~# tar -czvf var_backup_debian10.tar.gz /var

Then I sftp-ed it from the working host to the broken one with the deleted /var; in my case this machine's hostname is jericho, and luckily it still had the SSHD / SFTP processes loaded in memory:

jericho:~# sftp root@192.168.0.50
sftp> get var_backup_debian10.tar.gz

Now, before extracting the archive, it is a good idea to back up the remains of the old /var somewhere, for example in /root,
just in case we need a copy of the dpkg backup dir /var/backups:

jericho:~# cp -rpfv /var /root/var_backup_damaged

 
jericho:~# tar -zxvf /root/var_backup_debian10.tar.gz 
jericho:/# mv /root/var/ /

Then, to make my /var/lib/dpkg contain the list of packages from my broken Linux install, I overwrote /var/lib/dpkg with the files backed up earlier, before the .tar.gz was extracted.

jericho:~# cp -rpfv /root/var_backup_damaged/lib/dpkg/ /var/lib/

 

7. Scripts to completely reinstall all Debian packages

 

I then tried to reinstall each and every package, first using aptitude; with aptitude this is done with:

# aptitude reinstall '~i'

However, as this failed, I tried using a simple shell loop like the one below (the echo turns it into a dry run that only prints the commands – remove the echo, or pipe the output to sh, to actually execute them):

for i in $(dpkg -l | awk '/^ii/ { print $2 }'); do echo apt-get install --reinstall --yes $i; done

Alternatively, reinstalling all .deb packages is also possible with dpkg --get-selections and awk, with the commands below:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log;
awk '$1=$1' ORS=' ' list.log > newlist.log;
apt-get install --reinstall $(cat newlist.log)

It can also be run as a one-liner for simplicity:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log; awk '$1=$1' ORS=' ' list.log > newlist.log; apt-get install --reinstall $(cat newlist.log)

This produced a lot of warning messages reporting "package has no files currently installed" (for virtually all installed packages), indicating a severe packages problem. Below is sample output produced after each and every package reinstall … :

dpkg: warning: files list file for package 'iproute' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'brscan-skey' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libapache2-mod-php7.4' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-readline' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'linux-headers-4.19.0-5-amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbonoboui2-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'gksu' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblogging-stdlog0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'python-gconf' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-cli' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mixer.app' missing; assuming package has no files currently installed

After some attempts I found a way to work around the warning message for each package, by simply reinstalling the package reporting the issue with:

# apt install --reinstall $package_name


Though reinstallation started well and many packages got reinstalled, unfortunately some packages, such as libapache2-mod-php5.6 and other PHP-related ones, started failing during reinstall, ending up in unfixable states right after the binaries from the packages had been successfully placed in their expected locations on disk. The failures occurred during the package setup stage (dpkg --configure $packagename) …

The logical thing to do is a recovery attempt with something like the usual command, well known to any Debian admin:

# apt-get install --fix-missing

Manually requesting a reconfigure (re-setup) of all installed packages also did not produce a positive result:

# dpkg --configure -a

But many packages were still failing due to dpkg's inability to execute some post-installation scripts from the respective .deb files.
To work around that and continue installing the rest of the packages, I had to manually delete all files related to the failing package located under the directory

/var/lib/dpkg/info

For example, to skip the post-installation failure of libapache2-mod-php5.6 and have a successful install of the package the next time I tried to reinstall it, I had to delete the /var/lib/dpkg/info/libapache2-mod-php5.6.postrm and /var/lib/dpkg/info/libapache2-mod-php5.6.postinst scripts, and sometimes even everything matching libapache2-mod-php5.6* present in the /var/lib/dpkg/info dir.

The problem with this solution, however, was that the package would report as properly installed, but the post-install script hooks were still not in place, and important things such as setting permissions on binaries after install or applying some configuration changes right after install were missing, leading to programs failing to fully behave properly, or even breaking, despite showing as cleanly installed …

The final solution to this problem was radical.
I used the /var/lib/dpkg database (directory) from the other working Linux machine with a healthy dpkg DB, found in var_backup_debian10.tar.gz (the linux:~# host with the working dpkg database), and then, based on that correct dpkg package database now present on jericho:~#, reinstalled each and every package on the system using the Debian System Reinstaller script taken from the internet.
The Debian System Reinstaller works, but while reinstalling many packages I was prompted again and again whether to overwrite a package's configuration or keep the present one.
To omit the annoying [Y / N] text prompts I made a slight modification to the script, so it finally looked like this:
 

#!/bin/bash
# Debian System Reinstaller
# Copyright (C) 2015 Albert Huang
# Copyright (C) 2018 Andreas Fendt

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# —
# This script assumes you are using a Debian based system
# (Debian, Mint, Ubuntu, #!), and have sudo installed. If you don't
# have sudo installed, replace "sudo" with "su -c" instead.

pkgs=`dpkg --get-selections | grep -w 'install$' | cut -f 1 | egrep -v '(dpkg|apt)'`

for pkg in $pkgs; do
    echo -e "\033[1m   * Reinstalling:\033[0m $pkg"    

    apt-get --reinstall -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install $pkg || {
        echo "ERROR: Reinstallation failed. See reinstall.log for details."
        exit 1
    }
done

 

 The working modified version of Albert Huang and Andreas Fendt's script, debian-all-packages-reinstall.sh, can also be downloaded here.
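To run it and keep a log of what happened (a sketch – the log file name is just a suggestion; the script only mentions reinstall.log in its error message and does not write it itself):

# chmod +x debian-all-packages-reinstall.sh
# ./debian-all-packages-reinstall.sh 2>&1 | tee reinstall.log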

Note! Omitting the text confirmation prompts asking whether to install the package maintainer's newest config or keep the currently installed one is handled by the argument:

 

-o Dpkg::Options::="--force-confold"


I however still got a few ncurses console selection prompts during the reinstall of the 3200+ .deb packages, so even with this mod the reinstall was not completely automatic.

Note! During the reinstall, a few of the packages from the list failed due to being old unsupported packages – these were ejabberd, ircd-hybrid and 2 or 3 more.
This failure was easily solved by completely purging those packages with the usual

# dpkg --purge $packagename

and rerunning debian-all-packages-reinstall.sh for each of the failing packages.

Note! The failing packages were just old ones left over from Debian 8 and Debian 9 before the apt-get dist-upgrade towards 10 Buster.
Eventually, by God's grace, I got success after a few hours of pains and trials, ending up with a package database in a working state and a complete set of freshly reinstalled packages.

The only thing left to do was finally 2 hours of figuring out why GNOME did not automatically start after the system reboot, due to gdm failing.
Until I fixed that, I temporarily used lightdm (x-display-manager); to reconfigure the display manager I ran:

# dpkg-reconfigure gdm3

lightdm-x-display-manager-screenshot-gdm3-reconfige

To work around this I also had to reinstall a few libraries, the Xorg server, gdm and the meta package for GNOME, using the below set of commands:
 

apt-get install --reinstall libglw1-mesa libglx-mesa0
apt-get install --reinstall libglu1-mesa-dev
apt install --reinstall gsettings-desktop-schemas
apt-get install --reinstall xserver-xorg-video-intel
apt-get install --reinstall xserver-xorg
apt-get install --reinstall xserver-xorg-core
apt-get install --reinstall task-desktop
apt-get install --reinstall task-gnome-desktop

 

As some packages did not end up reinstalled on the system (because the original host from which the /var/lib/dpkg DB was copied did not have them), I eventually had to manually trigger a reinstall for those too:

 

apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes thunderbird
apt-get install --reinstall --yes audacity
apt-get install --reinstall --yes gajim
apt-get install --reinstall --yes slack remmina
apt-get install --yes k3b
apt-get install --yes gbgoffice
apt-get install --reinstall --yes skypeforlinux
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes libcurl3-gnutls libcurl3-nss
apt-get install --yes virtualbox-5.2
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes alsa-tools-gui
apt-get install --reinstall --yes gftp
apt install ./teamviewer_15.3.2682_amd64.deb --yes

 

Note that some of the above packages require properly configured third-party repositories. Other people might have other packages missing from the dpkg list that need to be reinstalled, so just decide according to your own case, based on which binaries present on the working system don't belong to any dpkg-installed package.

After a bit of struggle everything is back to normal, thanks God! 🙂
 

 

Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

linux-jail-nginx-webserver-increase-security-by-putting-it-and-its-data-into-jail-environment

If you're sysadmining a large number of shared hosted websites which use the Nginx webserver to serve PHP output, HTML, JavaScript, CSS … whatever data,

you realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker. Compromising the Nginx webserver would automatically mean that not only all users' web data gets compromised, but the attacker would get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not such a common thing to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate virtual container; however, as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx as a load balancer, reverse proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie freak that just got some 0-day exploit off the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

linux-jail-simple-explained-diagram-chroot-jail

For those who hear about a Linux jail for the first time:
a chroot() jail is a way to isolate a process / processes and their forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (the administrator user), because the superuser can break out of (escape) the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented in the BSD operating system ver. 4.2 by Bill Joy (a renowned computer scientist and co-founder of Sun Microsystems). One of its original uses was the creation of so-called HoneyPots – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems; it appears to be a completely legitimate service or part of a website, but its only goal is to track, isolate and monitor intruders, very similar to police sting operations (baiting) of a suspect. It is pretty much like a bait set out to collect the fish (which in this case is the would-be cracker).

linux-chroot-jail-environment-explained-jailing-hackers-and-intruders-unix

Jails have nowadays become very popular thanks to the iPhone environment, where applications are deployed inside a custom-created chroot jail; the principle is exactly the same as on Linux.

But anyways, enough talk, let's create a new jail and deploy a set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to set up a directory where a copy of /bin/ls, /bin/bash, /bin/cat … the /usr/bin binaries, /lib and the other base Linux system binaries will reside.

 

server:~# mkdir -p /usr/local/chroot/nginx

 


2. You need to create the isolated environment's backbone structure: /etc/, /dev/, /var/, /usr/, /lib64/ (the latter in case you're deploying on a 64-bit architecture Operating System).

 

server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/

 

3. Create required device files for the new chroot environment

 

server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0444 $DIR_N/dev/urandom c 1 9

 

The mknod command is used instead of the usual /bin/touch command because device nodes are block or character special files, which touch cannot create.

Once created, the permissions of /usr/local/chroot/nginx/dev/{null,random,urandom} should look like so:

 

server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
cr--r--r-- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom

 

4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)
 

If your NGINX webserver was installed from source (to keep it up to date) and lives in, let's say, /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:

 

server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx

 


5. Copy the necessary Linux system libraries to the newly created jail
 

The NGINX webserver binary is linked against various libraries from the Linux system root, e.g. /lib/* and /lib64/*; therefore, in order for the server to work inside the chroot-ed environment, you need to transfer these libraries into the jail folder /usr/local/chroot/nginx.

If you are curious to find out exactly which libraries the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx

        linux-vdso.so.1 (0x00007ffe3e952000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2b4762c000)
        libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f2b473f4000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f2b47181000)
        libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8 (0x00007f2b46ddf000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2b46bc5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2b46826000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2b47849000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2b46622000)


For best security, copy only the libraries listed by the ldd command, like so:

 

server:~# mkdir -p /usr/local/chroot/nginx/lib/x86_64-linux-gnu
server:~# cp -rpf /lib/x86_64-linux-gnu/libpthread.so.0 /usr/local/chroot/nginx/lib/x86_64-linux-gnu/
server:~# cp -rpf library chroot_location

etc.
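Rather than copying each library one by one, a small shell loop can parse the ldd output and copy every resolved library (plus the dynamic loader) into the jail while preserving the directory layout. This is only a minimal sketch, assuming the jail root /usr/local/chroot/nginx and the nginx binary path /usr/local/nginx/sbin/nginx used in this article; adjust both to your own setup:

#!/bin/sh
# Sketch: copy all shared libraries the nginx binary needs into the chroot jail,
# recreating the same directory structure under the jail root.
JAIL=/usr/local/chroot/nginx
BIN=/usr/local/nginx/sbin/nginx

# ldd prints lines like "libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x...)" and
# "/lib64/ld-linux-x86-64.so.2 (0x...)"; pick out every absolute path it mentions.
for lib in $(ldd "$BIN" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }'); do
    mkdir -p "$JAIL$(dirname "$lib")"   # recreate /lib/..., /lib64/... inside the jail
    cp -pf "$lib" "$JAIL$lib"           # copy the library, preserving permissions
done

Re-running the loop after nginx is rebuilt against newer libraries keeps the jail copies in sync.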

 

However, if you're in a hurry (not a recommended practice) and you don't care about maximum security anyway (you're not worried that the jail could be exploited through one of the many lib files nginx doesn't use, and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

 

server:~# cp -rpf /lib/ /usr/local/chroot/nginx/
server:~# cp -rpf /lib64/ /usr/local/chroot/nginx/

 

NOTE! Once again, copying the whole /lib directory is a very bad practice, but when time is pressing you can sometimes get away with it …


6. Copy some base files from /etc/ plus the ld.so.conf.d and prelink.conf.d directories to the jail environment
 

 

server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf}  \
/usr/local/chroot/nginx/etc

 

server:~# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} /usr/local/chroot/nginx/etc


7. Copy the HTML, CSS, Javascript website data from the root directory to the chrooted nginx environment

 

server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/


This could take really long if the websites are multiple gigabytes and millions of files, but the nice command should at least reduce the load on the server a little. It is best practice to put up some kind of temporary maintenance page on the websites' index during the copy, so that clients accessing the server don't experience interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).
 

 

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail


a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script /etc/init.d/nginx (if you have one installed) or directly kill the running webserver with:

 

server:~# killall -9 nginx

 

b) Test the chrooted nginx installation is correct and ready to run inside the chroot environment

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx

 

c) Reload the chrooted nginx webserver, when necessary later

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload

 

d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm mentioning this in case you forget and by mistake edit the old config that usually sits under /usr/local/nginx/conf/nginx.conf).
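To avoid typing the full chroot invocation every time, a tiny wrapper script can standardize starting, testing and reloading the jailed webserver. The following is just a sketch built on the assumptions used throughout this article (jail root /usr/local/chroot/nginx, nginx binary at /usr/local/nginx/sbin/nginx as seen from inside the jail); it is not a distro init script, so adapt it to your own service conventions:

#!/bin/sh
# /usr/local/sbin/nginx-jail : sketch wrapper around the chrooted nginx (hypothetical path)
JAIL=/usr/local/chroot/nginx
NGINX=/usr/local/nginx/sbin/nginx   # path as seen from inside the jail

case "$1" in
    start)  /usr/sbin/chroot "$JAIL" "$NGINX" ;;
    test)   /usr/sbin/chroot "$JAIL" "$NGINX" -t ;;
    reload) /usr/sbin/chroot "$JAIL" "$NGINX" -s reload ;;
    stop)   /usr/sbin/chroot "$JAIL" "$NGINX" -s quit ;;
    *)      echo "Usage: $0 {start|stop|test|reload}"; exit 1 ;;
esac

With that in place, nginx-jail test && nginx-jail reload becomes a safe one-liner after every configuration change.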

 

 

How to Copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, using SSH, TAR and Unix pipes through a jump host

Monday, April 27th, 2015

how-to-copy-large-data-directories-between-2-linux-unix-servers-without-direct-ssh-ftp-access-btween-each-other

In a Web application data migration project, I've come across a situation where I have to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, and hence I cannot use sshfs to mount the remote host directory over ssh and copy the files as if they were local.

As this is a data migration project, it is nevertheless necessary to find a way to move the data … The normal way companies do it is to copy the data to an external hard disk and send it via some country's postal service, or to send an employee to the data center to attach the SAN storage to the new server where the data is being migrated. In my case, however, this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle (jump) host, from which you can reach via SSH both Server1 (from where the data is migrated) and Server2 (to where the data will be migrated).

If the hopping / jump server from which you're allowed to access the Linux servers Server1 and Server2 is not Linux, you're missing an SSH client and you can't install anything on that Windows host, just use the portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, the stream is passed through a UNIX pipe into another ssh session opened to server2, and there tar is used a second time to unarchive the content on the fly. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%


P.S. If you don't have pv installed, install it either with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or on CentOS / Fedora / RHEL etc.

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of PV manual to give you better idea of what it does:

NAME
       pv – monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that for very big transfers pv adds some overhead, because everything has to pass through an extra process in the pipeline; however, for transfers of up to a few gigabytes it's really nice to include it.
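By default pv sitting in the middle of the pipeline has no idea how much data to expect, so it cannot show a percentage or ETA. Here is a hedged variation of the same transfer (using the same server1 / server2 / /somedir/ names as above): first ask server1 for the directory size, then hand it to pv with -s. The tar stream is left uncompressed so the byte count pv sees roughly matches the du estimate; with czf compression the percentage would be off:

jump-host:~$ SIZE=$(ssh server1 "du -sb /somedir/ | cut -f1")    # total bytes to expect
jump-host:~$ ssh server1 "tar cf - /somedir/" | pv -s "$SIZE" | ssh server2 "cd /; tar xf -"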

If you only need to transfer a huge .tar.gz archive and you don't bother about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, you don't want to put the overhead of encrypting the data on both systems, and you have some unfiltered port between host 1 and host 2), you can run netcat on host 2 to listen for connections and push the .tar.gz content through netcat's port, like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345


Another way to transfer large data, when server1 and server2 can't reach each other directly but a third (jump) host can reach both, is to use rsync with good old SSH tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz root@localhost:/directory/to/copy /copy/destination/dir"

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

fix-solve-mysql-ibdata-file-size-ibdata1-file-growing-too-large-and-preventing-ibdata1-from-eating-all-your-disk-space-innodb-vs-myisam

If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. An ibdata1 file taking up hundreds of gigabytes is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The incremental ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limit on ibdata1 except the maximum file size allowed by the filesystem (and no limit option is set in my.cnf), meaning it is quite possible that under certain conditions ibdata1 grows over time and happily fills up your server's LVM (Storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file in place, and the only known workaround (that I found) is to set the innodb_file_per_table option in my.cnf, which forces the MySQL server to create a separate *.ibd file under datadir (a my.cnf variable) for each freshly created InnoDB table.
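Before going through the whole dump / drop / re-import cycle, it is worth checking whether innodb_file_per_table is already enabled on your server. A quick check from the shell (the OFF value below is just illustrative output):

server:~# mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| innodb_file_per_table  | OFF   |
+------------------------+-------+

If it already says ON, only the tables created before the option was enabled still live inside ibdata1 and would need the dump / re-import treatment.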
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb-based Linux servers the datadir is /var/lib/mysql, so the file lives at /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


The next step is to get an understanding of how many InnoDB tables exist within the database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to InnoDb variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use InnoDb Engine

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above command should return empty output once the Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server.

4. Create a Backup dump of all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total
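If disk space on the server is already tight (which is usually why you are doing all this in the first place), it may help to compress the dump on the fly. A small sketch, assuming gzip is available and /root/dump.sql.gz is an acceptable location:

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p | gzip -c > /root/dump.sql.gz
server:~# gunzip -c /root/dump.sql.gz | mysql -u root -p    # later, to re-import it in step 7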

 

If you have free space on an external backup server or remotely mounted attached storage (NFS or SAN), it is a good idea to also make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump). Copy the respective directory, which depends on the Linux distro and the install location of the SQL data files as set in my.cnf.
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora (RPM based), substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep command line.

If you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the existing MySQL users / passwords, Access Grants and Host Permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! because this would damage your user permissions to the databases.
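If the server hosts many databases, typing the DROP DATABASE statements by hand is tedious and error-prone. Below is a hedged shell sketch that generates the statements for everything except the system schemas (performance_schema is also skipped, since newer MySQL versions ship it as a system database), writes them to a file for review, and only then feeds them to mysql. The /root/drop_all.sql path is just an example; always read the generated file before executing it:

server:~# mysql -u root -p -N -e "SHOW DATABASES;" \
  | grep -vE '^(mysql|information_schema|performance_schema)$' \
  | awk '{ print "DROP DATABASE `" $1 "`;" }' > /root/drop_all.sql
server:~# cat /root/drop_all.sql     # review carefully before executing!
server:~# mysql -u root -p < /root/drop_all.sql

The backticks around the database name matter for names containing dashes, such as blog-sezoni from the listing above.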

6. Stop the MySQL server and add innodb_file_per_table plus a few more settings to prevent ibdata1 from growing infinitely in the future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Delete the files taking up too much space: ibdata1, ib_logfile0 and ib_logfile1

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

The ps command should now show running mysqld processes; the start / stop / start cycle makes MySQL recreate a fresh (small) ibdata1 and new ib_logfile0 / ib_logfile1 with the sizes configured in my.cnf, confirming the server comes up cleanly with the new settings.
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import went fine; if no errors were experienced, the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to reduce the SQL downtime, import the database like this instead:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy over sql binary files from /root/mysql-data-backup/ to /var/lib/mysql/
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, log in with the mysql CLI and check whether the databases are there with:

server:~# mysql -u root -p
SHOW DATABASES;

Next, let's see what the current size of ibdata1, ib_logfile0 and ib_logfile1 is:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but it will only contain table metadata; each InnoDB table's data will live outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

From now on the layout will look like that for each of the MySQL-stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer MySQL releases.
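To verify that the data really ended up in per-table files, and to spot which tables will eat most of the disk from now on, you can simply list the largest *.ibd files under the datadir. A small sketch, assuming the default Debian datadir /var/lib/mysql:

server:~# du -sh /var/lib/mysql/*/*.ibd | sort -rh | head -20    # top 20 biggest InnoDB tables on disk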


Now, to make sure your websites are working, take a few URLs of the hosted websites that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂