Posts Tagged ‘shell script’

Rsync copy files with root privileges between servers with root superuser account disabled

Tuesday, December 3rd, 2019

 

rsync-copy-files-between-two-servers-with-root-privileges-with-root-superuser-account-disabled

Sometimes, on servers that follow high security standards, such as those in companies subject to PCI DSS (Payment Card Industry Data Security Standard), it is necessary to live with very unusual configurations and find workarounds to do trivial things such as syncing files between servers with root privileges. This is the case, for example, when security policy requires root user logins via the SSH server to be disabled, yet you still need to synchronize files in directories such as, let's say, /etc, /usr/local/etc/ or /var/ whose files are owned by root:root.

Root user logins in sshd are controlled by a directive in /etc/ssh/sshd_config that on most default Linux OS
installations is switched on, e.g.:

grep -i permitrootlogin /etc/ssh/sshd_config
PermitRootLogin yes


Many corporations use vulnerability scanners such as Qualys, which routinely scan remote servers on SSH port 22 and keep flagging hosts until root logins are disabled with:

 

PermitRootLogin no
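For reference, flipping the directive to satisfy such scans can be done with a quick one-liner like the one below (a sketch only; the exact service name to restart differs between distributions, e.g. sshd vs ssh):

sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd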


In this article, I'll explain a scenario where two or more servers (Server A / Server B, or however many you have) already have this value turned off, but still need to
synchronize directories that are traditionally owned and writable only by the root superuser. Here are a few easy steps to achieve it.

 

1. Add rsyncuser to Source Server (Server A) and Destination (Server B)


a. Execute on Src Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files as root src_host' -d /home/rsyncuser -m rsyncuser

 

b. Execute on Dst Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files dst_host' -d /home/rsyncuser -m rsyncuser

 

2. Generate RSA SSH Key pair to be used for passwordless authentication


a. On Src Host
 

su - rsyncuser

ssh-keygen -t rsa -b 4096

 

b. Check the generated key pair under .ssh/ and make sure the directory content looks like this:

 

[rsyncuser@src-host .ssh]$ cd ~/.ssh/;  ls -1

id_rsa
id_rsa.pub
known_hosts


 

3. Copy id_rsa.pub to Destination host server under authorized_keys

 

scp ~/.ssh/id_rsa.pub  rsyncuser@dst-host:~/.ssh/authorized_keys
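Note that this scp overwrites any authorized_keys the destination account may already have; if keys might already exist there, ssh-copy-id appends instead of replacing:

ssh-copy-id -i ~/.ssh/id_rsa.pub rsyncuser@dst-host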

 

Next, fix the permissions of the authorized_keys file for rsyncuser, as anyone with a user account on the system who can read that file
could steal the key and use it to run rsync commands and overwrite files remotely, e.g. overwrite /etc/passwd and /etc/shadow with custom-crafted credentials
and hence hack you 🙂
 

Hence, on Destination Host Server B fix the permissions with:
 

su - rsyncuser
[rsyncuser@dst-host ~]$ chmod 0600 ~/.ssh/authorized_keys

 

For improved security it is good to restrict rsyncuser so it can only run one very specific command or script instead of arbitrary commands. This can be done with the little-known command= option
when creating the authorized_keys entry.
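A hedged sketch of such a restricted authorized_keys entry, assuming a hypothetical wrapper script /usr/local/bin/rsync-only.sh and with the public key material shortened:

command="/usr/local/bin/rsync-only.sh",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAB3Nza...rest-of-public-key... rsyncuser@src-host

The hypothetical wrapper itself only has to allow rsync server invocations and reject everything else, e.g.:

#!/bin/sh
# rsync-only.sh (hypothetical sketch): permit only rsync server calls over this key
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server"*|"sudo rsync --server"*) exec $SSH_ORIGINAL_COMMAND ;;
    *) echo "Rejected command: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
esac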

 

4. Test ssh passwordless authentication works correctly


For that, run a normal ssh login as rsyncuser.

On Src Host

 

[rsyncuser@src-host ~]$ ssh rsyncuser@dst-host


Perhaps this is the moment for those who think passwordless authentication is not secure enough and prefer to authorize rsyncuser via a password read from a secured file, to take a look at my prior article on how to login to a remote server with a password provided from the command line as a script argument / running the same commands on many servers.

5. Enable rsync in sudoers to be able to execute as root superuser (copy files as root)

 


For this step you will need to have sudo package installed on the Linux server.

Then, execute the following once logged in as root on the Destination Server (Server B):

 

[root@dst-host ~]# grep -q 'rsyncuser ALL' /etc/sudoers || echo 'rsyncuser ALL=NOPASSWD:/usr/bin/rsync' >> /etc/sudoers
 

 

Note that using rsync with ALL=NOPASSWD in /etc/sudoers could pose a high security risk for the system, as anyone authorized to run as rsyncuser is able to overwrite and
respectively nullify important files on Destination Host Server B and hence easily mess up the system; even shell script bugs could produce a mess. Thus perhaps a better solution to the problem
of copying files with root privileges while the root account is disabled is to rsync as a normal user to some staging location on Dst_host and have an additional script running on Dst_host (via, let's say, a cron job)
gently copy the files over on a selective basis.

Perhaps an even better solution would be, instead of granting ALL=NOPASSWD:/usr/bin/rsync in /etc/sudoers, to grant ALL=NOPASSWD:/usr/local/bin/some_copy_script.sh
that gets triggered once the files are copied with the regular rsyncuser account.
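For illustration only, such a sudo-whitelisted copy script could look something like the sketch below (the staging directory and the list of allowed files are purely hypothetical and would need to be adapted):

#!/bin/bash
# some_copy_script.sh (hypothetical sketch): rsyncuser first syncs into its own
# staging area, then this script (allowed via sudo) installs only a fixed
# whitelist of files into their final root-owned locations.
staging='/home/rsyncuser/staging';
files='etc/hosts.deny etc/sysctl.conf etc/samhainrc';

for rel in $files; do
    src="$staging/$rel";
    dst="/$rel";
    # install only when the staged file exists and differs from the live one
    if [ -f "$src" ] && ! cmp -s "$src" "$dst"; then
        install -o root -g root -m 0644 "$src" "$dst";
    fi
done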

 

6. Test rsync passwordless authentication copy with superuser works


Do a simple copy; let's say we copy the encrypted tunnel configuration files located under /etc/stunnel on Server A to /etc/stunnel on Server B.

The general command to test is like so:
 

rsync -aPz -e 'ssh' --rsync-path='sudo rsync' /var/log rsyncuser@$dst_host:/root/tmp/


This will copy the /var/log files to /root/tmp; if successful, you will get progress messages for the copy and the files will be present in the destination folder.

 

On Src_Host run:

 

[rsyncuser@src-host ~]$ dst=FQDN-DST-HOST; user=rsyncuser; src_dir=/etc/stunnel; dst_dir=/root/tmp;  rsync -aP -e 'ssh' --rsync-path='sudo rsync' $src_dir  $user@$dst:$dst_dir;

 

7. Copying files with root credentials via script


Copying a bunch of predefined files is best handled by a small shell script; the simplest version of it could look something like this:
 

#!/bin/bash
# On server1 (the source) use something like this.
# On server2 (the dst server) add in /etc/sudoers:
# rsyncuser ALL=NOPASSWD:/usr/bin/rsync

user='rsyncuser';

dst_dir="/root/tmp";
dst_host='dst-host';   # set this to the destination host name or IP
src[1]="/etc/hosts.deny";
src[2]="/etc/sysctl.conf";
src[3]="/etc/samhainrc";
src[4]="/etc/pki/tls/";
src[5]="/usr/local/bin/";

for i in "${src[@]}"; do
rsync -aPvz --delete --dry-run -e 'ssh' --rsync-path='sudo rsync' "$i" $user@$dst_host:$dst_dir"$i";
done


In the above script, as you can see, we define a bunch of files to be copied in a bash array and then run a loop to take each of them and copy it to the destination dir (remove the --dry-run option once you are happy with what it would do).
A very simple version of the script is rsync_with_superuser-while-root_account_prohibited.sh.
 

Conclusion


Let's do a short overview of what we have done here. First we created rsyncuser on SRC Server A and DST Server B, set up the key pair on both and copied the public key to make passwordless login possible,
then set up rsync to be able to write as root on Dst_Host, tested the whole setup, and sketched a small script that can be used as a backbone to develop something more complex
to sync backups or keep system configurations identical – for example if you have doubts that some user might change a config by mistake, etc.
In short, the security downsides of using rsync NOPASSWD via /etc/sudoers were pointed out, along with a few ideas that could be worked on if you target even higher
PCI standards.

 

Create and Configure SSL bundle file for GoGetSSL issued certificate in Apache Webserver on Linux

Saturday, November 3rd, 2018

gogetssl-install-certificate-on-linux-howto-sslcertificatechainfile-obsolete

I had a small task to configure a new WildCard SSL for domains on a Debian GNU / Linux Jessie running Apache 2.4.25.

The official documentation on how to install the SSL certificate on Linux given by GoGetSSL (whose certificates are issued by COMODO) was obsolete as of the time of writing this article and suggested these install instructions:
 

SSLEngine on
SSLCertificateKeyFile /etc/ssl/ssl.key/server.key
SSLCertificateFile /etc/ssl/ssl.crt/yourDomainName.crt
SSLCertificateChainFile /etc/ssl/ssl.crt/yourDomainName.ca-bundle


Adding such configuration to the domain's VirtualHost and testing with apache2ctl spits out an error like:

 

root@webserver:~# apache2ctl configtest
AH02559: The SSLCertificateChainFile directive (/etc/apache2/sites-enabled/the-domain-name-ssl.conf:17) is deprecated, SSLCertificateFile should be used instead
Syntax OK

 


Hence, to make the issued GoGetSSL certificate work on Debian Linux, here are the few things I did:

The files issued by Gogetssl.COM were the following:

 

AddTrust_External_CA_Root.crt
COMODO_RSA_Certification_Authority.crt
the-domain-name.crt


The webserver already had SSL support via the mod_ssl Apache module, e.g.:

 

root@webserver:~# ls -al /etc/apache2/mods-available/*ssl*
-rw-r--r-- 1 root root 3112 окт 21  2017 /etc/apache2/mods-available/ssl.conf
-rw-r--r-- 1 root root   97 сеп 19  2017 /etc/apache2/mods-available/ssl.load
root@webserver:~# ls -al /etc/apache2/mods-enabled/*ssl*
lrwxrwxrwx 1 root root 26 окт 19  2017 /etc/apache2/mods-enabled/ssl.conf -> ../mods-available/ssl.conf
lrwxrwxrwx 1 root root 26 окт 19  2017 /etc/apache2/mods-enabled/ssl.load -> ../mods-available/ssl.load


For those who don't have mod_ssl enabled, enable it quickly by running:

 

# a2enmod ssl


The VirtualHost used for the domains had Apache config as below:

 

 

 

NameVirtualHost *:443

<VirtualHost *:443>
    ServerAdmin support@the-domain-name.com
    ServerName the-domain-name.com
    ServerAlias *.the-domain-name.com the-domain-name.com

    DocumentRoot /home/the-domain-namecom/www
    SSLEngine On
#    <Directory />
#        Options FollowSymLinks
#        AllowOverride None
#    </Directory>
    <Directory /home/the-domain-namecom/www>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Include /home/the-domain-namecom/www/htaccess_new.txt
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

#    Alias /doc/ "/usr/share/doc/"
#   <Directory "/usr/share/doc/">
#       Options Indexes MultiViews FollowSymLinks
#       AllowOverride None
#       Order deny,allow
#       Deny from all
#       Allow from 127.0.0.0/255.0.0.0 ::1/128
#   </Directory>
SSLCertificateKeyFile /etc/apache2/ssl/the-domain-name.com.key
SSLCertificateFile /etc/apache2/ssl/chain.crt

 

</VirtualHost>

The config directives enabling and making the SSL actually work are:
 

SSLEngine On
SSLCertificateKeyFile /etc/apache2/ssl/the-domain-name.com.key
SSLCertificateFile /etc/apache2/ssl/chain.crt

 

The chain.crt file is actually a bundle containing the domain certificate plus the GoGetSSL / COMODO CA_ROOT and RSA_Certification_Authority certificates. To prepare that file, I've used the small bundle.sh script found on serverfault.com (I've made a mirror of bundle.sh on pc-freak.net here).

To prepare the chain.crt  bundle, I ran:

 

sh create-ssl-bundle.sh _iq-test_cc.crt > chain.crt
sh create-ssl-bundle.sh COMODO_RSA_Certification_Authority.crt >> chain.crt
sh create-ssl-bundle.sh AddTrust_External_CA_Root.crt >> chain.crt
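If the helper script is not at hand, essentially the same bundle can be produced with plain cat, concatenating the certificates in the same order (using the file names issued above):

cat the-domain-name.crt COMODO_RSA_Certification_Authority.crt AddTrust_External_CA_Root.crt > chain.crt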


Then I copied the file to /etc/apache2/ssl together with the the-domain-name.com.key file generated earlier using the openssl command, as explained in my article on how to install a RapidSSL certificate on Linux.

/etc/apache2/ssl was not previously existing (on Debian Linux), so to create it:

 

root@webserver:~# mkdir /etc/apache2/ssl
root@webserver:~# ls -al /etc/apache2/ssl/chain.crt
-rw-r--r-- 1 root root 20641 Nov  2 12:27 /etc/apache2/ssl/chain.crt
root@webserver:~# ls -al /etc/apache2/ssl/the-domain-name.com.key
-rw-r--r-- 1 root root 6352 Nov  2 20:35 /etc/apache2/ssl/the-domain-name.com.key

 

As I needed to add the SSL HTTPS configuration for multiple domains, I wrote and used a tiny shell script, add_new_vhost.sh, which accepts as argument the domain name I want to add. The script works with a sample Skele (Template) vhost config, which is included in the script itself and can be easily modified for the desired vhost config.
To add my multiple domains, I've used the script as follows:
 

sh add_new_vhost.sh add-new-site-domain.com
sh add_new_vhost.sh add-new-site-domain1.com


etc.

Here is the complete script as well:

 

#!/bin/sh
# Shell script to add easily new domains for virtual hosting on Debian machines
# arg1 should be a domain name
# This script takes the domain name which you type as arg1 uses it and creates
# Docroot / cgi-bin directories for the domain, and creates a separate apache log directory for the site
# then takes a skele.com file and substitutes a skele.com with your domain name and directories
# This script's aim is to easily enable sysadmin to add new domains in Debian
sites_base_dir=/var/www/jail/home/www-data/sites/;
# the directory where the skele.com file is
skele_dir=/etc/apache2/sites-available;
# base directory where site log dir to be created
cr_sep_log_file_d=/var/log/apache2/sites;
# owner of the directories
username='www-data';
# read arg0 and arg1
arg0=$0;
arg1=$1;
if [ -z "$arg1" ]; then
echo "Missing domain name";
exit 1;
fi

 

# skele template
echo "#
#  Example.com (/etc/apache2/sites-available/www.skele.com)
#
<VirtualHost *>
        ServerAdmin admin@design.bg
        ServerName  skele.com
        ServerAlias www.skele.com


        # Indexes + Directory Root.
        DirectoryIndex index.php index.htm index.html index.pl index.cgi index.phtml index.jsp index.py index.asp

        DocumentRoot /var/www/jail/home/www-data/sites/skelecom/www/docs
        ScriptAlias /cgi-bin "/var/www/jail/home/www-data/sites/skelecom/cgi-bin"
        
        # Logfiles
        ErrorLog  /var/log/apache2/sites/skelecom/error.log
        CustomLog /var/log/apache2/sites/skelecom/access.log combined
#       CustomLog /dev/null combined
      <Directory /var/www/jail/home/www-data/sites/skelecom/www/docs/>
                Options FollowSymLinks MultiViews -Includes
                AllowOverride None
                Order allow,deny
                allow from all
                # This directive allows us to have apache2's default start page
                # in /apache2-default/, but still have / go to the right place
#               RedirectMatch ^/$ /apache2-default/
        </Directory>

        <Directory /var/www/jail/home/www-data/sites/skelecom/www/docs/>
                Options FollowSymLinks ExecCGI -Includes
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

</VirtualHost>
" > $skele_dir/skele.com;

domain_dir=$(echo $arg1 | sed -e 's/\.//g');
new_site_dir=$sites_base_dir/$domain_dir/www/docs;
echo "Creating $new_site_dir";
mkdir -p $new_site_dir;
mkdir -p $sites_base_dir/$domain_dir/cgi-bin;
echo "Creating site's Docroot and CGI directory";
chown -R $username:$username $new_site_dir;
chown -R $username:$username $sites_base_dir/$domain_dir/cgi-bin;
echo "Creating site's Log files Directory";
mkdir -p $cr_sep_log_file_d/$domain_dir;
echo "Creating sites's VirtualHost file and adding it for startup";
sed -e "s#skele.com#$arg1#g" -e "s#skelecom#$domain_dir#g" $skele_dir/skele.com >> $skele_dir/$arg1;
ln -sf $skele_dir/$arg1 /etc/apache2/sites-enabled/;
echo "All Completed please restart apache /etc/init.d/apache restart to Load the new virtual domain";

# Date Fri Jan 11 16:27:38 EET 2008


Using the script saves a lot of time compared to manually copying a vhost file and then editing it to change the ServerName directive. For vhosts whose configuration is identical and where only the ServerName has to change, it is perfect for creating all the necessary domains. I've created a simple text file with each of the domains and ran the script in a loop:
 

while read -r i; do sh add_new_vhost.sh "$i"; done < domain_list.txt
 

 

Virtual Keyboard for Linux and other Freedom respecting operating Systems

Monday, July 30th, 2018

How to install and use a Virtual Keyboard on Linux and other freedom-respecting Operating Systems

  •  Looking for a quick way to use a VIRTUAL KEYBOARD on a LINUX computer operating system? You can do this one task in 3 simple steps.
    – A logical question emerges: WHY would you need a virtual keyboard on a Free Software OS such as Linux?
    Well, just because sometimes it is much more secure to use a virtual keyboard, especially if you suspect that your keyboard has been tapped or that a keylogger (sniffer) intercepting the keyboard input is installed on the computer, or you might be sitting at a friend's computer running Linux and want to make sure he did not install a sniffer to intercept your SSH login passwords and later hack into your servers after stealing the password.

 

  • Assuming you're on Debian / Ubuntu Linux, or one of the numerous other systems out there such as FreeBSD / OpenBSD etc., you can simply run these commands:

     

  •  apt-get install --yes florence
    * A. To make it easily invokable later on, create a small bash shell script at location /usr/bin/virtual-keyboard, like the one below:

    vim /usr/bin/virtual-keyboard

    * B. Inside the file place the following one-liner code:
     

    #!/bin/sh
    /usr/bin/florence
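Then make the new script executable so it can be invoked by name:

    chmod +x /usr/bin/virtual-keyboard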

     

    * C. To later invoke it at any time:
    Press ALT + F2 (or use the Run Command dialog in GNOME / KDE / Window Maker / IceWM or whatever other graphical environment of your choice) and run:

    /usr/bin/virtual-keyboard

 

How to use wget and curl via HTTP Proxy server / How to set a HTTPS proxy server on a bash shell on Linux

Wednesday, January 27th, 2016

linux-ssl-proxy-configuration-from-command-line-with-wget-and-curl-howto

I've been working a bit on a client's automation; the task is to automate the installation of Apache / Tomcat / JBoss and Java servers, so my colleagues and I don't waste too
much time on trivial things. To complete that, I've created a small repository on an Apache server with WebDAV, holding the major versions of each general branch of application servers and Javas.
In order to access the remote URL where the .tar.gz binary archives reside, I had to use a proxy server, as the client runs his whole network in a DMZ and all Web port 80 and HTTPS port 443 traffic inside the client network
has to pass through the network proxy.

Thus, to make the downloads possible from the shell script I was writing, I needed to make the script use the HTTPS proxy server. I had used proxies earlier and was well aware of the http_proxy bash shell
variable, so I tried to use it for the secured HTTPS proxy; however, the connection kept failing, and thanks to my colleague Anatoliy I realized the whole problem was that I was trying to use the http_proxy shell variable,
which is only meant for unencrypted proxy servers, while in this case the proxy server speaks the SSL-encrypted HTTPS protocol, so the right variable to use instead is:
 

https_proxy


The https_proxy var syntax goes like this:

proxy_url='http-proxy-url.net:8080';
export https_proxy="$proxy_url"

how-to-set-https_proxy_url-on-linux-freebsd-openbsd-bsd-and-unix-from-terminal-console

Once the https_proxy variable is set, UNIX's wget non-interactive download tool starts using the proxy set in the proxy_url variable, and the downloads in my script work.

Hence to make the different version application archives download work out, I've used wget like so:
 

 wget --no-check-certificate --timeout=5 https://full-path-to-url.net/file.rar


For other BSD / HP-UX / SunOS UNIX servers where the shell is different from the Bourne Again Shell (Bash), the http_proxy and https_proxy variables might not work.
In such cases, if curl (command line tool) is available instead of wget, you can script downloads with something like:
 

 curl -O -1 -k --proxy http-proxy-url.net:8080 https://full-path-to-url.net/file.rar

The http_proxy and https_proxy variables also work perfectly on Mac OS X's default bash shell, so Mac users can enjoy them too.
For some bash users in firewall-hardened environments like in my case, it's handy to permanently set a proxy for all shell activity via the auto-loaded Linux / *nix login scripts .bashrc or .bash_profile, which saves the inconvenience of always
setting the proxy, so the lynx, links and elinks text console browsers also work any time you log in to the shell.
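For instance, appending lines like these to ~/.bashrc (using the same placeholder proxy host and port as above) makes the proxy available in every new shell; no_proxy keeps local addresses from being sent through the proxy:

export http_proxy="http://http-proxy-url.net:8080"
export https_proxy="http://http-proxy-url.net:8080"
export no_proxy="localhost,127.0.0.1"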

Well that's it, my script enjoys proxying traffic 🙂
 

Run 2 and more Skypes simultaneously on Mac OS X – Run multiple Skype accounts on same Mac

Saturday, June 21st, 2014

run-2-and-more-skypes-simultaneously-on-mac-os-x-multiple-skype-account-login-on-mac
For people running Mac OS X, the question of how it is possible to use 2 Skype accounts in parallel on a Mac probably makes good sense.

I don't own a Mac notebook and therefore I'm a Mac newbie; however, I'm in a situation where my wife Svetlana and I went (for 3 days) to my hometown Dobrich and we have with us only her Mac OS X powered MacBook Air.

 

One user (my wife) is already logged in to Skype, expecting some relatives and friends to contact us, and at the same time I had to log in to check a few servers via ssh and discuss some server downtime issues from yesterday in Skype.
Thus we need 2 Skype instances running separately on her MacBook Air powered by Mac OS X Leopard.
 

Earlier I've blogged on how to make 2 and more Skype accounts work simultaneously on one Windows PC, because I had to set it up for a company; in this short article I will explain how it is possible to run many Skype clients on Mac OS X.

 

1. Open Mac Terminal from Finder

finder-terminal-screenshot-mac-os-x-leopard-run-many-skypes-mac-os

2. In Terminal run the first Skype Instance

Type in Terminal:

open /Applications/Skype.app/Contents/MacOS/Skype

3. Run Second Skype instance

In older Skype versions for Mac OS, I read that the

/secondary

Skype command option existed and could be used to run a second parallel Skype instance on a Mac; however, in newer releases this option was removed, and if you try to invoke it a warning window pops up saying an instance is already running.

mac-os-x-you-have-another-copy-of-skype-running-screenshot

To get around the issue and run the second Skype, the quickest way is to run another Skype client as a privileged user through the sudo command (this is insecure – but anyway, as Mac OS is proprietary, we don't have access to the code and there is probably plenty of spy and reporting software integrated into the OS, so it doesn't really matter).

mac-os-x-skype-run-screenshot-pic


4. Script it into 2nd_skype.sh for later use

To run and use two parallel Skypes regularly, it might be useful to make a shell script out of it and place it somewhere; the 2nd_skype.sh script should look something like:


#!/bin/bash
open /Applications/Skype.app/Contents/MacOS/Skype
sudo /Applications/Skype.app/Contents/MacOS/Skype

Then make the script executable with:

chmod a+x 2nd_skype.sh

5. Run more than 2 Skypes (Run multiple Skypes on same Mac PC hack)

There is another "hack" method involving deleting Skype.pid (the process ID file). Skype recognizes whether it is already running by checking its Skype.pid on start-up.

Deleting the pid file after each Skype client launch allows you to run as many Skypes as you want on Mac OS X, but it is not clear for how long this will keep working.

rm -f ~/"Library/Application Support/Skype/Skype.pid"

Then launch Skype again in the background from the Mac Terminal:

open -nW '/Applications/Skype.app' &

In case you wonder why the open command is used, since the above binary could also be run directly and Skype would pop up: by using open you instruct the program to detach itself from the Terminal it was run from, so if the Terminal is later closed the Skype app will not terminate.

Another approach is to create a number of users, let's say 5, and use the Skype sudo run method with a separate user for each client:

sudo -u user1 /Applications/Skype.app/Contents/MacOS/Skype
sudo -u user2 /Applications/Skype.app/Contents/MacOS/Skype
sudo -u user3 /Applications/Skype.app/Contents/MacOS/Skype
sudo -u user4 /Applications/Skype.app/Contents/MacOS/Skype
sudo -u user5 /Applications/Skype.app/Contents/MacOS/Skype

I enclose the commands in a script with a custom (Skype) icon, ready to be launched, and voilà – on script launch multiple Skype login prompts pop up.

For the lazy ones who don't want to tamper with writing scripts or doing hacks to run Skype multiple times on a Mac, there is even a Multi Skype Launcher app for Mac.

 


 

Linux: How to change recursively directory permissions to executable (+x) flag

Monday, September 2nd, 2013

change recursively permissions of directories and subdirectories Linux and Unix with find command
I had to copy a large directory from one Linux server to a Windows host via the SFTP protocol (with WinSCP). However, some of the directories to be copied lacked the executable flag, thus WinSCP failed to list and copy them.

Therefore I needed a way to recursively set all sub-directories under the directory /mirror (located on the Linux server) to the +x executable flag.

There are two ways to do that: one is directly through the find cmd, the second is by using find with xargs.
Here is how to do it with find:

# find /mirror -type d -exec chmod 755 {} +

Same done with find + xargs:

# find /path/to/base/dir -type d -print0 | xargs -0 chmod 755
To change permissions only on all files under the /mirror server directory with find:

# find /path/to/base/dir -type f -exec chmod 644 {} +

Same done with find + xargs:
# find /path/to/base/dir -type f -print0 | xargs -0 chmod 644

Also, tiny shell script that recursively changes directories permissions (autochmod_directories.sh) is here
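A minimal sketch of what such a script could look like (not necessarily identical to the linked one; it assumes the base directory is passed as the first argument, defaulting to /mirror):

#!/bin/bash
# autochmod_directories.sh (sketch): recursively set 755 on all directories below a base dir
base_dir="${1:-/mirror}";
find "$base_dir" -type d -exec chmod 755 {} +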

Make daily Linux MySQL database backups with shell script

Thursday, May 23rd, 2013

Creating database backup with MySQL with mysqlbackupper and mysqlback shell scripts easy create mysql backups

Some time ago, I wrote a tiny shell script which makes dumps of complete (SQL script) MySQL databases. There are plenty of ways to back up a MySQL database and plenty of scripts on the net, but I like doing it my own way, and I have a few backup scripts. I prefer scripted SQL dumps of the databases over keeping binary logs, or using some untraditional backup method like backing up all the binary data in /var/lib/mysql.

One was intended to back up the whole database with mysqldump and later upload it to a central server running tsh (shell). Using tsh is maybe not the best method to upload, but the script can easily be modified to use ssh passwordless authentication as the upload method instead.

I'm not a pro shell scripter, but the MySQLBackupper script can be useful for learning some simple bash shell scripting.

To use the script as intended you will have to build tsh from source. Tsh is in a very early development stage (ver 0.2), but as far as I tested it some years ago it does great what it is intended for. You can download the MySQLBackupper.sh script from here.
Earlier, I used MysqlBackupper.sh to upload all SQL dumps to the /backups directory on a central backup storage server, so I had written a secondary script to classify the uploaded backups based on the backup archive name. The script used is mysqldumps-classify.sh and can be viewed here. Though this way of making backups needs a bit of custom work, it worked well for managing backups of up to 10 / 20 servers.

I have also written another MySQL backup script which is much more simplistic: it only dumps with mysqldump and stores the copies on the hard disk in a tar.gz archive. You can download my other simple mysqkbackup.sh here.
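Not the exact content of the linked script, but a minimal sketch of such a dump-and-archive approach could look like this (credentials are expected in ~/.my.cnf so they don't show up in the process list; paths and names are illustrative):

#!/bin/bash
# Dump all MySQL databases and keep a dated .tar.gz copy on the local disk (sketch)
backup_dir='/backups/mysql';
date_str=$(date +%Y-%m-%d);
mkdir -p "$backup_dir";
# mysqldump reads user/password from the [client] section of ~/.my.cnf
mysqldump --all-databases --single-transaction > "$backup_dir/all-databases-$date_str.sql";
tar -czf "$backup_dir/all-databases-$date_str.tar.gz" -C "$backup_dir" "all-databases-$date_str.sql";
rm -f "$backup_dir/all-databases-$date_str.sql";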

The only inconvenient thing about the above scripts is that they dump all SQL databases. Hence, whenever it is necessary to extract the content of a single database from the complete all-databases SQL backup, I use a SED (stream editor) one-liner.
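The exact one-liner is not reproduced here, but a commonly used variant relies on the `-- Current Database:` markers that mysqldump writes into an --all-databases dump (my_database below is a placeholder database name):

sed -n '/^-- Current Database: `my_database`/,/^-- Current Database: `/p' all-databases.sql > my_database.sql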

It is interesting to hear how others prepare their MySQL db backups.

screen -d Fix “Must run suid root for multiuser support.” su user detach error

Thursday, March 28th, 2013

I had to make a shell script run automatically in a detached screen during Linux system boot-up via /etc/rc.local. This is needed because the server uses the tiny shell script to fetch data from a remote host's database and fill the information into the local MySQL server.

My idea was to su from root to the www-data (Apache) user – the script is required to run as the Apache user – and then run it detached using GNU screen (a multi-terminal emulator). The tiny one-line script I imagined would do the trick is like so:

# tty=$(tty); su www-data -c 'cd /home/user/www/enetpulse; screen -d /home/user/www/enetpulse/while_true.sh'; chmod 0720 $tty

I ran this as the root user to test whether it would work before putting it in /etc/rc.local, but to my surprise I got an error:
 

Must run suid root for multiuser support.

After a quick investigation into what was causing it, I came across the solution, which is to include the screen arguments (-m -S shared). The working variant that gets around the error – i.e. successfully changes user privileges to the Debian Apache user (www-data) and then detaches with screen – is:

# tty=$(tty); chmod a+rw $tty; su www-data -c 'cd /home/user/www/enetpulse; screen -d -m -S shared /home/user/www/enetpulse/while_true.sh'; chmod 0720 $tty;

That's all; now the script works out as planned on the next server reboot.

Script to Automatically change current MySQL server in wp-config.php to another MySQL host to minimize WordPress and Joomla downtimes

Friday, July 20th, 2012

I'm running two servers for a couple of home-hosted websites. One of the servers serves as the Apache host1 and has MySQL configured and running on it, and the second is used just as the database host2 (it has another MySQL configured on it).
The MySQL servers are not configured as MySQL MASTER and MySQL SLAVE (no MySQL replication); however, periodically (daily) a tiny shell script of mine actualizes the data from the active SQL host2 server to host1.

Sometimes, due to electricity problems or CPU overheating, the active MySQL host at host2 gets stoned and stops working, making the 2 WordPress-based websites and one Joomla site inaccessible.
Until I manually get to the machine and restart host2, the 3 sites are down from the net, and as you can imagine this has a very negative impact on the existing website indexing (PageRank) in Google.

When I'm at home, this is not a problem, as I have physical access to the servers and if something gets messy I fix it quickly. The problem comes when I'm travelling or in another city far from home and there is no one at home to give the hanged host a hard reboot.

Lately the hang-ups of host2 have happened 3 times or so within 2 weeks; as a result the websites were inaccessible for hours, and since there is nobody to reboot the server, the websites keep hanging until the DB host is restarted.

To work around this, I came up with the idea to write a tiny shell script that checks whether host2 is pingable, to make sure the database host is not down, and if the script determines host2 (the MySQL host) is down, it changes wp-config.php (set to use host2) to a wp-config.php which I have configured beforehand to use host1.

Using the script is a temporary solution, since I still have to find what is actually causing the hang-ups, but at least it saves me long downtimes. Here is a download link to the script, which I called change_blog_db.sh.
I've configured the script to run on the Apache node (host1) via a crontab calling the script every 10 minutes; here is the crontab entry:
 

*/10 * * * * /usr/sbin/change_blog_db.sh > /dev/null 2>&1

The script is written in such a way that if it determines host2 is reachable, a copy of wp-config.php and Joomla's configuration.php tuned to use host2 is copied over the original config files. In order to use the script, one has to configure the variables in the script's head section, e.g.:

host_to_ping='192.168.0.2';
blog_dir='/var/www/blog';
blog_dir2='/var/www/blog1';
blog_dir3='/var/www/joomla';
notify_mail='hipo@pc-freak.net';
wp_config_orig='wp-config.php';
wp_config_localhost='wp-config-localhost.php';
wp_config_other_host='wp-config-192.168.0.2.php';
joomla_config_orig='configuration.php';
joomla_config_other_host='configuration-192.168.0.2.php';

You will have to manually prepare

wp-config-localhost.php, wp-config-192.168.0.2.php and configuration-192.168.0.2.php as existing files configured with the proper host1 and host2 IP addresses.
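For reference, the core check-and-switch logic of such a script could look roughly like the sketch below (simplified to a single WordPress directory, using the head variables above; it is not the exact content of the linked change_blog_db.sh, and it assumes the mail command is available for notifications):

#!/bin/bash
# Simplified sketch: switch the active wp-config.php depending on whether
# the remote DB host answers pings.
host_to_ping='192.168.0.2';
blog_dir='/var/www/blog';
notify_mail='hipo@pc-freak.net';
wp_config_orig='wp-config.php';
wp_config_localhost='wp-config-localhost.php';
wp_config_other_host='wp-config-192.168.0.2.php';

if ping -c 3 "$host_to_ping" > /dev/null 2>&1; then
    # DB host is up, make sure the blog uses it
    cp -f "$blog_dir/$wp_config_other_host" "$blog_dir/$wp_config_orig";
else
    # DB host is down, fall back to the local MySQL and send a notification
    cp -f "$blog_dir/$wp_config_localhost" "$blog_dir/$wp_config_orig";
    echo "$host_to_ping unreachable, switched $blog_dir to localhost DB" | mail -s "DB failover on $(hostname)" "$notify_mail";
fi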
Hope the script will be useful to others experiencing database downtimes with WordPress or Joomla installs.