Posts Tagged ‘operating system’

Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

linux-jail-nginx-webserver-increase-security-by-putting-it-and-its-data-into-jail-environment

If you're sysadmining a large number of shared hosted websites which use the Nginx webserver to interpret PHP scripts and serve HTML, Javascript, CSS … whatever data, you realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker. Compromising the Nginx webserver would automatically mean that not only all users' web data gets compromised, but that the attacker would get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not so common to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate Virtual Container, however as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx either as a Load Balancer, Reverse Proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie freak who just got some 0day exploit from the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

linux-jail-simple-explained-diagram-chroot-jail

For those who hear about a Linux Jail for the first time:
a chroot() jail is a way to isolate a process / processes and its forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (administrator user), because a superuser process could break out of (escape) the jail pretty easily.
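
For those who have never touched it from the shell, the userspace tool is the chroot(8) command (usually /usr/sbin/chroot); here is a minimal hedged illustration, assuming a jail directory that already contains a shell and its libraries, which is exactly what we will build below:

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /bin/bash

Inside that shell only the files copied into the jail are visible; the server's real root filesystem is unreachable.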

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented into the BSD Operating System ver. 4.2 by Bill Joy (a notorious computer scientist and co-founder of Sun Microsystems). Its original use was the creation of so-called HoneyPots – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems, which appears to be a completely legitimate service or part of a website but whose only goal is to track, isolate, and monitor intruders, very similar to police sting operations (baiting) of a suspect. It is pretty much like а bait set to collect the fish (which in this case is the possible cracker).

linux-chroot-jail-environment-explained-jailing-hackers-and-intruders-unix

BSD Jails nowadays became very popular through the iPhone environment, where applications are deployed inside a custom-created chroot jail; the principle is exactly the same as in Linux.

But anyways, enough talk; let's create a new jail and deploy a set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to set up a directory where a copy of the base system binaries (/bin/ls, /bin/bash, /bin/cat … the needed /usr/bin binaries, /lib and other base Linux system libraries) will reside (a sketch of copying a few of them is shown after step 2 below).

 

server:~# mkdir -p /usr/local/chroot/nginx

 


2. You need to create the isolated environment backbone structure: /etc/, /dev/, /var/, /usr/, /lib64/ (the latter in case you are deploying on a 64-bit architecture Operating System).

 

server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/
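
Step 1 mentioned the base system binaries (/bin/bash, /bin/ls, /bin/cat …) that should live inside the jail; a minimal, hedged sketch of copying a few of them follows (adjust the list to what you really need – the shared libraries they depend on can be pulled in with ldd exactly as shown for the nginx binary in step 5):

server:~# mkdir -p $DIR_N/bin
server:~# cp -p /bin/bash /bin/ls /bin/cat $DIR_N/bin/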

 

3. Create required device files for the new chroot environment

 

server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0444 $DIR_N/dev/urandom c 1 9

 

The mknod command is used instead of the usual /bin/touch command because it creates block or character special files.

Once created, the permissions of /usr/local/chroot/nginx/{dev/null, dev/random, dev/urandom} have to look like so:

 

server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
cr--r--r-- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom

 

4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)
 

If your NGINX webserver was installed from source (to keep it at the latest version)
and is installed in, let's say, directory location /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:

 

server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx

 


5. Copy the necessary Linux system libraries to the newly created jail
 

The NGINX webserver is compiled to depend on various libraries from the Linux system root, e.g. /lib/* and /lib64/*, therefore in order for the server to work inside the chroot-ed environment you need to transfer these libraries to the jail folder /usr/local/chroot/nginx

If you are curious to find out which libraries exactly the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx

        linux-vdso.so.1 (0x00007ffe3e952000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2b4762c000)
        libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f2b473f4000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f2b47181000)
        libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8 (0x00007f2b46ddf000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2b46bc5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2b46826000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2b47849000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2b46622000)


For best security, the recommended way is to copy only the libraries from the ldd command output list, like so:

 

server:~# mkdir -p /usr/local/chroot/nginx/lib/x86_64-linux-gnu
server:~# cp -rpf /lib/x86_64-linux-gnu/libpthread.so.0 /usr/local/chroot/nginx/lib/x86_64-linux-gnu/
server:~# cp -rpf library chroot_location

etc.
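
Doing this by hand for every library is tedious and error-prone; here is a small hedged sketch (assuming GNU cp with the --parents option and the /usr/local/nginx/sbin/nginx binary path used above) that copies everything ldd reports while preserving the directory layout:

server:~# for lib in $(ldd /usr/local/nginx/sbin/nginx | awk '{ print $3 }' | grep '^/'); do \
cp -p --parents "$lib" /usr/local/chroot/nginx/; done
# the dynamic loader is printed by ldd without a '=>' arrow, so copy it separately
server:~# cp -p --parents /lib64/ld-linux-x86-64.so.2 /usr/local/chroot/nginx/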

 

However if you're in a hurry (not a recommended practice) and you don't care about maximum security anyway (you don't worry that the jail could be exploited through some of the many lib files not used by nginx and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

 

server:~# cp -rpf /lib/ /usr/local/chroot/nginx/lib

 

NOTE! Once again, copying the whole /lib directory is a very bad practice, but for time-pressing activities sometimes you can do it …


6. Copy some /etc/ base files and the ld.so.conf.d, prelink.conf.d directories to the jail environment
 

 

server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf}  \
/usr/local/chroot/nginx/etc

 

server:~# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} /usr/local/chroot/nginx/etc


7. Copy HTML, CSS, Javascript websites data from the root directory to the chrooted nginx environment

 

server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/


This could take really long if the websites are multiple gigabytes and millions of files, but anyway the nice command should reduce the load on the server a little bit. It is best practice to set some kind of temporary server maintenance page to show on the websites' index in order to prevent the accessing clients from seeing interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).
 

 

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail


a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script /etc/init.d/nginx (if you have one installed) or directly kill the running webserver with:

 

server:~# killall -9 nginx
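
If you prefer a graceful shutdown over killall -9, the old nginx binary can be asked to quit on its own (assuming it still lives at /usr/local/nginx/sbin/nginx):

server:~# /usr/local/nginx/sbin/nginx -s quit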

 

b) Test that the chrooted nginx installation is correct and ready to run inside the chroot environment

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx

 

c) Restart the chrooted nginx webserver – whenever necessary later

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload

 

d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm saying that in case you by mistake try to edit the old config that is usually under /usr/local/nginx/conf/nginx.conf).
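
A quick recap of the edit – test – reload cycle against the chrooted configuration (paths follow the layout built above):

server:~# vim /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload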

 

 

Install and make Apache + PHP work with PostgreSQL database server on Debian Linux and set up the PostgreSQL web interface phpPgAdmin howto

Wednesday, June 15th, 2016

make-apache-php-work-with-postgresql-pgsql-and-install-postgresql-db-web-admin-interface

In a previous article I wrote on how to install PostgreSQL on Debian Linux using the deb repository. This was necessary to import some PostgreSQL DBs, however it was not enough to run the PostgreSQL PHP based website, as the connection from the Apache / PHP module to PostgreSQL was failing. After a bit of investigation and a check in phpinfo(); I realized the PHP module for Postgres, pgsql.so, was missing; here is what I did in order to install it:
 

debian:~# apt-get install php5-pgsql phppgadmin libapache2-mod-auth-pgsql 

PHP sessions enable configuration

As it is a common problem with PHP applications written to use Postgres to lose sessions, and by default PHP does not have session.save_path configured, it is a very good practice to enable it directly in /etc/php5/apache2/php.ini. Open the file in a text editor:
 

debian:~# vim /etc/php5/apache2/php.ini


Find the commented directive line:
 

;session.save_path = "/tmp"


and uncomment it, i.e.:
 

session.save_path = "/tmp"


Quit saving vim with the usual :wq!

The three packages provide pgsql.so for PHP and mod_auth_pgsql.so for Apache2; the 3rd package, phppgadmin, provides a Web administration interface for the installed PostgreSQL server's databases – for those experienced with MySQL databases it is the same as phpMyAdmin.

 

 Here is a quick configuration for use of the phpPgAdmin interface:

By default the phppgadmin installation process configures the Apache2 server to use /etc/phppgadmin/apache.conf, which is linked as /etc/apache2/conf.d/phppgadmin.


Here is the default file content as installed by my server's package:

 

Alias /phppgadmin /usr/share/phppgadmin

<Directory /usr/share/phppgadmin>

DirectoryIndex index.php
AllowOverride None

order deny,allow
deny from all
allow from 127.0.0.0/255.0.0.0 ::1/128
# allow from all

<IfModule mod_php5.c>
  php_flag magic_quotes_gpc Off
  php_flag track_vars On
  #php_value include_path .
</IfModule>
<IfModule !mod_php5.c>
  <IfModule mod_actions.c>
    <IfModule mod_cgi.c>
      AddType application/x-httpd-php .php
      Action application/x-httpd-php /cgi-bin/php
    </IfModule>
    <IfModule mod_cgid.c>
      AddType application/x-httpd-php .php
      Action application/x-httpd-php /cgi-bin/php
    </IfModule>
  </IfModule>
</IfModule>

</Directory>

It is generally a good practice to change the default Alias location of phppgadmin, so edit the file and change it to something like:
 

Alias /phppostgresadmin /usr/share/phppgadmin

 

  • Then phpPgAdmin is available at http://servername.com/phppostgresadmin (only from localhost by default). However, in my case I wanted to be able to access it also from other hosts, so I allowed phpPgAdmin access from every host; to do so, I commented out in the above config:

 

# allow from 127.0.0.0/255.0.0.0 ::1/128

 

and uncommented the allow from all line, e.g.:
 

allow from all


Also, another thing: in whichever VirtualHost you plan to access phppgadmin from, you have to include in the config (in my case this is the file /etc/apache2/sites-enabled/000-default), before the closing </VirtualHost> line, the following Alias:
 

Alias /phpposgreadmin /usr/share/phppgadmin
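
After any of these Apache configuration changes, reload Apache so the new Alias and access rules take effect:

debian:~# /etc/init.d/apache2 reload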


Then, to access the PostgreSQL phpPgAdmin web interface in Firefox / Chrome, open the URL:

 

http://your-default-domain.com/phpposgreadmin

phpPgAdmin-postgresql-php-web-interface-debian-linux-screenshot
 

 

Configure access to a remote PostgreSQL Server

With PhpPgAdmin, you can manage many PostgreSQL servers locally (on the localhost) or on remote hosts.

First, you have to make sure that the distant PostgreSQL server can handle your request and that you can connect to it. You can do this by modifying /etc/postgresql/9.5/main/pg_hba.conf and adding a line like:

# PhpPgAdmin server access
host    all    db_admin    xx.xx.xx.xx    255.255.255.255    md5
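
For the pg_hba.conf change to take effect, reload the PostgreSQL server; on Debian something like:

debian:~# /etc/init.d/postgresql reload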

Then, you need to add your distant PostgreSQL server into the config file for PhpPgAdmin. This file is /etc/phppgadmin/config.inc.php. The default PostgreSQL port is 5432, however you might have configured it to use a different port already; if you're not sure about the port number PostgreSQL is listening on, check it out:

 

debian:~# grep -i port /etc/postgresql/*/main/postgresql.conf
/etc/postgresql/9.5/main/postgresql.conf:port = 5433                # (change requires restart)
/etc/postgresql/9.5/main/postgresql.conf:                    # supported by the operating system:
/etc/postgresql/9.5/main/postgresql.conf:                    # supported by the operating system:
/etc/postgresql/9.5/main/postgresql.conf:# ERROR REPORTING AND LOGGING
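
Inside /etc/phppgadmin/config.inc.php the remote server goes into the $conf['servers'] array; here is a hedged sketch (the index 1 and the host / port values are just examples – adjust them to your distant server and to the port found above):

$conf['servers'][1]['desc'] = 'Remote PostgreSQL';
$conf['servers'][1]['host'] = 'xx.xx.xx.xx';
$conf['servers'][1]['port'] = 5433;
$conf['servers'][1]['sslmode'] = 'allow';
$conf['servers'][1]['defaultdb'] = 'template1';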


To log in to the phppgadmin interface there is no default root administrator user such as in phpMyAdmin, so you will need to create a PostgreSQL user beforehand and later use it for the connection from the Postgres web interface.

To create a new user in Postgres from the console:
 

debian:~# su - postgres
postgres@debian:~$ psql template1
postgres@debian:~$ psql -d template1 -U postgres

 

Welcome to psql 9.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help on internal slash commands
       \g or terminate with semicolon to execute query
       \q to quit

template1=#

template1=# CREATE USER MyNewUser WITH PASSWORD 'myPassword';


To add a new database to postgres from shell:

template1=# CREATE DATABASE NewDatabase;
 

template1=# GRANT ALL PRIVILEGES ON DATABASE NewDatabase to MyNewUser;

 

template1=# \q

The last command instructs psql to quit. If you want to get more info about possible commands, instead of \q type \? for general help on the internal slash commands or \h for help on SQL commands.
If you need to connect to NewDatabase, it is good to first test from the console that it works, before trying it from phppgadmin:

postgres@debian:~$ psql -d NewDatabase -U MyNewUser

 

 

 

 

Top AIX UNIX Performance tracking commands every Linux admin / user should know

Monday, March 16th, 2015

IBM_AIX_UNIX-Performance-Tracking-every-commands-Linux-sysadmin-and-user-should-know-AIX_logo

Though IBM AIX is basically a UNIX OS and many of the standard Linux commands are the same or similar to AIX's, if you happen to be a Linux sysadmin and you've been given some 100 AIX servers, you will have to invest some time to read up on AIX. However, as a starter you should at least be able to do performance tracking on the system to prevent system overloads. If that's the case, I advise you to check thoroughly the documentation of the commands below.

fcstat – Displays statistics gathered by the specified Fibre Channel device driver

filemon – Performance statistics for files, logical/physical volumes and virtual memory segments

fileplace – Displays the placement of file blocks within logical or physical volumes.

entstat – Displays the statistics gathered by the specified Ethernet device driver

iostat – Statistics for ttys, disks and cpu

ipcs – Status of interprocess communication facilities

lsps – Statistics about paging space

netstat – Shows network status

netpmon – Performance statistics for CPU usage, network device-driver I/O, socket calls & NFS

nfsstat – Displays information about NFS and RPC calls

pagesize – Displays system page size

ps – Display status of current processes

pstat – Statistics about system attributes

sar – System Activity Recorder

svmon – Captures a snapshot of the current contents of both real and virtual memory

traceroute – intended for use in network testing, measurement, and management.

tprof – Detailed profile of CPU usage by an application

vmstat – Statistics about virtual memory and cpu/hard disk usage

topas – AIX equivalent of the Linux top command

Here are also useful examples of using the above AIX performance tracking commands.

To display the statistics for Fiber Channel device driver fcs0, enter:

fcstat fcs0

To monitor the activity at all file system levels and write a verbose report to the fmon.out file, enter:

filemon -v -o fmon.out -O all

To display all information about the placement of a file on its physical volumes, enter:

fileplace -piv data1

To display a continuous disk report at two second intervals for the disk with the logical name disk1, enter the following command:

iostat -d disk1 2

To display extended drive report for all disks, enter the following command:

iostat -D

To list the characteristics of all paging spaces, enter:

lsps -a

List All Ports (both listening and non listening ports)

netstat -a | more

The netpmon command uses the trace facility to obtain a detailed picture of network activity during a time interval.

netpmon -o /tmp/netpmon.log -O all;

netpmon is pretty much the AIX equivalent of Linux's tcpdump.

To print all of the supported page sizes with an alphabetical suffix, enter:

pagesize -af

To display the i-nodes of the system dump saved in the dumpfile core file

pstat -i dumpfile

To report current tty activity for each 2 seconds for the next 40 seconds, enter the following command:

sar -y -r 2 20

To watch system unit for 10 minutes and sort data, enter the following command:

sar -o temp 60 10

To report processor activity for the first two processors, enter the following command:

sar -u -P 0,1

To display global statistics for virtual memory in a one line format every minute for 30 minutes, enter the following command:

svmon -G -O summary=longreal -i 60 30

The traceroute command is intended for use in network testing, measurement, and management. While the ping command confirms IP network reachability, you cannot pinpoint and improve some isolated problems

traceroute aix1

Basic global program and thread-level summary / Reports processor usage

tprof -x sleep 10

Single process level profiling

tprof -u -p workload -x workload

Reports virtual memory statistics

vmstat 10 10

To display fork statistics, enter the following command:

vmstat -f

To display the count of various events, enter the following command:

vmstat -s

To display time-stamp next to each column of output of vmstat, enter the following command:

vmstat -t

To display the I/O oriented view with an alternative set of columns, enter the following command:

vmstat -I

To display all the VMM statistics available, enter the following command:

vmstat -vs
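
A few commands from the list above were left without an example, so here are quick ones (hedged – double-check the exact flags against your AIX release's man pages):

To display all statistics, including device-specific statistics, of Ethernet adapter ent0, enter:

entstat -d ent0

To display NFS client statistics only, enter:

nfsstat -c

To get a top-like, process-oriented view of the system, enter:

topas -P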


If you already have some experience with some BSD (OpenBSD or FreeBSD), you will feel much more comfortable with AIX, as both operating systems share a common ancestor OS (UNIX System V); actually IBM AIX is UNIX System V with 4.3 BSD compatible extensions. As AIX was the first OS to introduce file system journalling, the journalling capabilities on AIX are superb. AIX was and still is widely used by IBM for their mainframes, on the IBM RS/6000 series (in the 1990s); nowadays it runs fine on PowerPC-based systems and IA-64 systems.
For GUI loving users who end up on AIX, try out SMIT (the System Management Interface Tool for AIX). AIX used the Bourne shell as default in earlier versions up to AIX 3, but in recent releases the default shell is the Korn Shell (ksh88).
Nowadays AIX, just like HP-UX and the rest of the commercial UNIXes, is losing ground, as most of the functionality is provided by commercial Linux distributions like RHEL, so most clients, including banks and big business customers, are migrating to Linux.


Happy AIX-ing ! 🙂

Why I never liked Mandrake Linux / Mandrake Linux took its name from the 1930s comics Mandrake the Magician

Wednesday, May 9th, 2012

I never liked Mandrake Linux, from day 1 when I saw it.
Historically Mandrake Linux was one of the best Linux distributions available for free download in the "Linux scene" some 10 to 12 years ago.

Mandrake was simple, GUI oriented and trendy. It was also the Linux distribution with the most simplified installer program and generally a lot of GUI software for easy configuration and use by the end user.

Though, despite its nice outside look, I still had a kind of "intuition" that Mandrake was not as good as it appeared.

Now many years later I found by chance that Mandrake had been sued to change their Operating System's name to another one, due to a lawsuit request by the copyright holders of the Mandrake The Magician comics. "Mandrake the Magician" used to be very popular before the Second World War, in the 1930s.

Mandrake the Magician Comics Magazine from 1930's Cover, Mandrake the Black Magic Magician

It is obviously not a coincidence that the Mandrake name was taken after this comics and not after the mandrake herb plants available in Europe, Africa and Asia. This is clear from Mandrake Linux distro's earlier mascot, which you see below:

Mandrake Linux old distribution logo, magician penguin

Later on they changed Mandrake's logo to lose the connection with Mandrake The Magician and used another newly crafted logo:

Mandrake GNU Linux newer logo
It is quite stunning how nowadays the magician obsession has so heavily infiltrated our lives that even something like a Free Software Linux distribution might have some kind of reference to magician and occult stuff (I say this from the position of being Christian) …

Later, due to the name copyright infringement, Mandrake Linux was first renamed to Mandragora Linux.
Instead of choosing some nice name not related to occultism or magic stuff, the French commercial company behind Mandrake renamed it to another non-Christian name, Mandragora.
Interestingly, the newer name Mandragora, as one can read in Wikipedia, means:
 

Mandragora (demon), in occultism

Well apparently, someone among the head developers of this Linux distribution has a severe obsession with magic and occultism.

Later MandrakeSoft (the French company behind Mandrake Linux) finally renamed the distribution to Mandriva, under the influence of the merger of Mandrake with the Brazilian company Conectiva; this also put an end to the copyright infringement legal dispute with Hearst Corporation (owner of the rights to Mandrake the Magician).

Having in mind all the facts of the current Mandriva "dark names history", I think it is better that we Christians avoid it …

How to show country flag, web browser type and Operating System in WordPress Comments

Wednesday, February 15th, 2012

!!! IMPORTANT UPDATE: COMMENT INFO DETECTOR IS NO LONGER SUPPORTED (IT IS OBSOLETE) AND THE COUNTRY FLAGS AND OPERATING SYSTEM WILL NOT BE SHOWN ANYMORE.

!!!! TO MAKE THE COUNTRY FLAGS AND OS WP FUNCTIONALITY WORK AGAIN YOU WILL NEED TO INSTALL WP-USERAGENT !!!

I've come across a nice WordPress plugin that displays the country flag, operating system and web browser used in each of the posted blog comments.
It's a really nice plugin, since it adds some transparency and colorfulness to each of the blog comments 😉
Here is a screenshot of my blog with Comments Info Detector "in action":

Example of Comments Info Detector in Action on wordpress blog comments

Comments Info Detector, as of the time of writing, is at stable ver. 1.0.5.
The plugin installation and configuration is very easy, as with most other WP plugins. To install the plugin:

1. Download and unzip Comments Info Detector

linux:/var/www/blog:# cd wp-content/plugins
linux:/var/www/blog/wp-content/plugins:# wget http://downloads.wordpress.org/plugin/comment-info-detector.zip
...
linux:/var/www/blog/wp-content/plugins:# unzip comment-info-detector.zip
...

Just for the sake of preservation of history, I've made a mirror of comments-info-detector 1.0.5 wp plugin for download here
2. Activate Comment-Info-Detector

To enable the plugin navigate to:
Plugins -> Inactive -> Comment Info Detector (Activate)

After having enabled the plugin as a last 3rd step it has to be configured.

3. Configure comment-info-detector wp plugin

By default the plugin is disabled. To change it to enabled (and configure it), navigate to:

Settings -> Comments Info Detector

Next, a page will appear with various fields and web forms where stuff can be changed. Almost all of it should be left as it is; the only change should be in the drop down menus near the end of the page:

Display Country Flags Automatically (Change No to Yes)
Display Web Browsers and OS Automatically (Change No to Yes)

Comments Info Detector WordPress plugin configuration Screenshot

After the two menus are set to "Yes" and Save Changes is pressed, the plugin is enabled and it will immediately start showing inside each comment the GeoIP country location flag of the person who commented, as well as the OS type and Web Browser 🙂

Running the sudo command simultaneously on multiple servers with SSHSUDO

Tuesday, June 21st, 2011

ssh multiple server command execute
I was just recommended by a friend a nifty tool, which is absolutely handy for system administrators.

The tool is called sshsudo and the project is hosted on http://code.google.com/p/sshsudo/.

Let’s say you’re responsible for 10 servers with the same operating system, let’s say CentOS 4, and you want to install tcpdump and vnstat on all of them without logging in one by one to each of the nodes.

This task is really simple using sshsudo.
A typical use of sshsudo is:


[root@centos root]# sshsudo -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 yum install tcpdump vnstat

Consequently a password prompt will appear on the screen;
Please enter your password:

If all the servers are configured to have the same administrator root password, then typing the root password just once will be enough and the command will get issued on all the servers.

The program can also be used to run a custom admin script by automatically populating (uploading) the script to all the servers and then issuing it on them.

One typical use to run a custom bash shell script on ten servers would be:


[root@centos root]# sshsudo -r -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 /pathtoscript/script.sh
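
If sshsudo is not installed somewhere, a plain bash loop over ssh can serve as a rough fallback – note this is not sshsudo itself: it runs sequentially and will prompt for a password per host unless ssh keys are in place:

[root@centos root]# for srv in comp1 comp2 comp3; do ssh root@$srv "yum -y install tcpdump vnstat"; done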

I’m glad I found this handy tool 😉

Cloud Computing a possible threat to users privacy and system administrator employment

Monday, March 28th, 2011

Cloud Computing screenshot

If you’re employed in an IT branch, are an IT hobbyist or a tech geek, you should certainly have heard about the latest trend in Internet and networking technologies, the so-called Cloud Computing.

Most of the articles available in newspapers and online have seriously praised cloud computing and put their hopes in a better future through it.
But is cloud computing really as good as promised? I seriously doubt that.
Let’s think about it: what is a cloud? It’s a cluster of computers which are connected to work as one.
No person can precisely say where exactly on the cluster cloud a piece of stored information is located (not even the administrator!)

The data stored on the cluster is the property of a few single organizations, let’s say Microsoft, Amazon etc., so we as users no longer have physical possession of our data (in case we use the cloud).

On the other hand, the number of system administrators needed for the administration of a huge cluster is dramatically decreased; the everyday system administrator, who needs to check a few webservers and a mail server on a daily basis, cache web data with a squid proxy cache or just restart a server, will no longer be necessary.

Therefore a few million people would have to lose their jobs; the people necessary to administrate a cluster will probably be no more than a few thousand, as the clouds are so huge that no more than a few clouds will exist on the net.

The idea behind the cluster is that we the users store and retrieve our desktops and boot our operating system from the cluster.
Even loading a simple webpage will have to retrieve its data from the cluster.

Therefore it looks like in the future cloud computing and the internet are about to become one and the same thing. The internet might become a single super cluster where all users would connect with their user ids and have full access to the information inside.

Technologies like OpenID are trying to make user identification uniform; I assume a similar uniform user identification will be used in the future in a super cloud, where everybody entering inside will have access to his/her data and will have the option to access any other data online.

The desire of humans and business for transparency would probably end up one day in a state where people will want to share every single bit of information.
Even though it looks very cool for a sci-fi movie, it’s seriously scary!

Cloud computing expenses, as they’re really high, would be affordable only for multi-national corporations like Google and Microsoft.

Therefore small and middle IT businesses (network building, expanding, network and server system integration etc.) would gradually collapse and die.

These are only a few small concerns, but in reality the problems that cloud computing might create are way more severe.
We the people should think seriously and try to oppose cloud computing while we still can! It might even be a good idea if special legislation aiming at limiting cloud computing is passed, so that it can be used only inside the boundary of prescribed limitations.

Institutions like the European Parliament should be more concerned about the issues which the use of cloud computing will bring; EU legislation should very soon be voted on, and binding contracts should stop clouds from expanding and taking over the middle-size IT business.