Posts Tagged ‘copy’

Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

linux-jail-nginx-webserver-increase-security-by-putting-it-and-its-data-into-jail-environment

If you're sysadmining a large number of shared hosted websites that use the Nginx webserver to interpret PHP scripts and serve HTML, Javascript, CSS and other data, you realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker. Compromising the Nginx webserver would automatically mean that not only all users' web data gets compromised, but the attacker would also get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not so common to host multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate Virtual Container. However, as this is a lot of work and small and mid-sized companies refuse to spend money on mostly anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx as a Load Balancer, Reverse Proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie who just got some 0day exploit off the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.

linux-jail-simple-explained-diagram-chroot-jail

For those who hear about a Linux Jail for the first time:
a chroot() jail is a way to isolate a process / processes and its forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (administrator user), because the superuser could break out of (escape) the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented in BSD Operating System ver. 4.2 by Bill Joy (a notorious computer scientist and co-founder of Sun Microsystems). Its original use was the creation of the so-called HoneyPot – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems, appearing to be a completely legitimate service or part of a website whose only goal is to track, isolate and monitor intruders – very similar to police sting operations (baiting) of a suspect. It is pretty much like a bait set to catch the fish (which in this case is the possible cracker).

linux-chroot-jail-environment-explained-jailing-hackers-and-intruders-unix

BSD Jails nowadays became very popular through the iPhone, whose application environment is deployed inside a custom-created chroot jail; the principle is exactly the same as on Linux.

But anyway, enough talk – let's create a new jail and deploy the set of system binaries needed for our Nginx installation. Here are the things you will need:

1. You need a directory where a copy of the base system binaries (/bin/ls, /bin/bash, /bin/cat … the /usr/bin binaries, /lib and other base Linux system libraries) will reside.

 

server:~# mkdir -p /usr/local/chroot/nginx

 


2. You need to create the isolated environment backbone structure: /etc/, /dev/, /var/, /usr/, /lib64/ (the latter in case you're deploying on a 64-bit architecture Operating System).

 

server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/

 

3. Create required device files for the new chroot environment

 

server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0444 $DIR_N/dev/urandom c 1 9

 

The mknod command is used instead of the usual /bin/touch command, because block or character special (device) files have to be created with mknod.

Once created, the permissions of /usr/local/chroot/nginx/dev/{null,random,urandom} should look like so:

 

server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
cr--r--r-- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom

 

4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)
 

If your NGINX webserver was installed from source (to keep it at the latest version)
and is installed in, let's say, directory location /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:

 

server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx

 


5. Copy necessary Linux system libraries to the newly created jail
 

The NGINX webserver binary is compiled to depend on various libraries from the Linux system root, e.g. /lib/* and /lib64/*, therefore in order for the server to work inside the chroot-ed environment you need to transfer these libraries to the jail folder /usr/local/chroot/nginx.

If you are curious to find out exactly which libraries the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx

        linux-vdso.so.1 (0x00007ffe3e952000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f2b4762c000)
        libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f2b473f4000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f2b47181000)
        libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8 (0x00007f2b46ddf000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2b46bc5000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2b46826000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f2b47849000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f2b46622000)


For best security, copy only the libraries listed in the ldd command output, like so:

 

server: ~# mkdir -p /usr/local/chroot/nginx/lib/x86_64-linux-gnu
server: ~# cp -rpf /lib/x86_64-linux-gnu/libpthread.so.0 /usr/local/chroot/nginx/lib/x86_64-linux-gnu/
server: ~# cp -rpf library chroot_location

etc.
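If you don't want to hunt down each library by hand, the whole ldd output can be walked in a small loop – a minimal sketch, assuming the nginx binary lives at /usr/local/nginx/sbin/nginx and the jail at /usr/local/chroot/nginx:

# copy every shared library the nginx binary links against (including the dynamic loader)
# into the jail, preserving the original directory layout so the loader can find them
CHROOT=/usr/local/chroot/nginx
for lib in $(ldd /usr/local/nginx/sbin/nginx | grep -o '/[^ ]*'); do
    mkdir -p "$CHROOT$(dirname $lib)"
    cp -pf "$lib" "$CHROOT$lib"
done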

 

However, if you're in a hurry (not a recommended practice) and you don't care about maximum security anyway (you don't worry that the jail could be exploited through one of the many lib files not used by nginx, and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:

 

server: ~# cp -rpf /lib/ /usr/local/chroot/nginx/lib

 

NOTE! Once again, copying the whole /lib directory is a very bad practice, but for time-pressing activities sometimes you can do it …


6. Copy some base /etc/ files and the ld.so.conf.d, prelink.conf.d directories to the jail environment
 

 

server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,ld.so.cache,ld.so.conf,resolv.conf,host.conf}  \
/usr/local/chroot/nginx/etc

 

server:~# cp -avr /etc/{ld.so.conf.d,prelink.conf.d} /usr/local/chroot/nginx/etc


7. Copy HTML, CSS, Javascript websites data from the root directory to the chrooted nginx environment

 

server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/


This could take really long if the websites are multiple gigabytes and millions of files, but anyway the nice command should reduce the load on the server a little. It is best practice to set some kind of temporary server maintenance page on the websites' index in order to prevent the accessing clients from experiencing interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).
 

 

8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail


a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script in /etc/init.d/nginx (if you have such installed) or directly kill the running webserver with:

 

server:~# killall -9 nginx

 

b) Test that the chrooted nginx installation is correct and ready to run inside the chroot environment

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx
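To have the jailed webserver come back automatically after a reboot, a tiny wrapper can be hooked into your init system – a minimal sketch using the paths from this article (adapt it to rc.local, an init.d script or whatever your distro uses; this is not an official nginx init script):

#!/bin/sh
# start nginx inside the chroot jail if it is not already running
CHROOT=/usr/local/chroot/nginx
NGINX_BIN=/usr/local/nginx/sbin/nginx   # path as seen from inside the jail

if ! pgrep -x nginx > /dev/null; then
    /usr/sbin/chroot "$CHROOT" "$NGINX_BIN"
fi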

 

c) Restart the chrooted nginx webserver – when necessary later on

 

server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload

 

d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm saying this in case you by mistake forget and try to edit the old config that is usually under /usr/local/nginx/conf/nginx.conf).
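For example, a typical edit-and-reload cycle against the jailed config (same paths as above) looks like this:

server:~# vim /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload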

 

 

How to Copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, using SSH, TAR and Unix pipes

Monday, April 27th, 2015

how-to-copy-large-data-directories-between-2-linux-unix-servers-without-direct-ssh-ftp-access-btween-each-other

In a Web application data migration project, I've come across a situation where I have to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, hence I cannot use sshfs to mount the remote host dir via ssh and copy files like local ones.

As this is a data migration project, it is however necessary to find a way to migrate the data … The normal way companies do it is to copy the data to an external hard disk and ship it via some country's post service, or to send an employee to the data center to attach the SAN to the new server where the data is being migrated. However, in my case this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host from which you can access via the SSH protocol both Server1 (from where data is migrated) and Server2 (to where data will be migrated).

If the hopping / jump server from which you're allowed to access the Linux servers Server1 and Server2 is not Linux, and you're missing an SSH client and don't have access on the Windows host to install anything, just use portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, this is passed via the UNIX pipe mechanism to another opened ssh session to server2, and there the TAR archiver is used a second time to unarchive the previously archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%


P.S. If you don't have pv installed, install it either with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or on CentOS / Fedora / RHEL etc.

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of PV manual to give you better idea of what it does:

NAME
       pv – monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that with very big file transfers, using pv will slow the transfer down a bit, because everything has to pass through two more pipes; however, for file transfers up to a few gigabytes it's really nice to include it.

If you only need to transfer a huge .tar.gz archive and you don't care about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, you don't want to put the overhead of encrypting the data on both systems, and you have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and push the .tar.gz content to netcat's port like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345
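If what you need to move is a whole directory rather than a single archive file, the same netcat trick combines naturally with tar – a minimal sketch, with the host names, the path and the port number being placeholders:

# on the receiving host: listen on port 12345 and unpack whatever arrives
linux2:~$ nc -l -p 12345 | tar xzf -

# on the sending host: archive the directory and stream it to the receiver
linux1:~$ tar czf - /path/to/dir | nc desti.nation.ip.address 12345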


Another way to transfer large data without a direct connection between server1 and server2, while having connection to both from a third host, is to use rsync and good old SSH Tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

fix-solve-mysql-ibdata-file-size-ibdata1-file-growing-too-large-and-preventing-ibdata1-from-eating-all-your-disk-space-innodb-vs-myisam

If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The excessive ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limit on ibdata1 except the maximum file size allowed by the filesystem (and there is no limitation option set in my.cnf), meaning it is quite possible that under certain conditions ibdata1, growing over time, can happily fill up your server's LVM (Storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file; the only known workaround (that I found) is to set the innodb_file_per_table option in my.cnf to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.
 

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb-based Linux servers the datadir is /var/lib/mysql, so the file to check is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total


2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
password:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |


Next step is to get some understanding about how many existing InnoDB tables are present within Database server:

 

mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
+-------------+--------+
| EngineCount | engine |
+-------------+--------+
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
+-------------+--------+
3 rows in set (0.02 sec)

To get some more statistics related to InnoDb variables set on the SQL server:
 

mysqladmin -u root -p'Your-Server-Password' var | grep innodb


Here is also how to find which tables use InnoDb Engine

mysql> SELECT table_schema, table_name
    -> FROM INFORMATION_SCHEMA.TABLES
    -> WHERE engine = 'innodb';

+--------------+--------------------------+
| table_schema | table_name               |
+--------------+--------------------------+
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |


3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above command should return empty output (i.e. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).
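If some of them are still up, a quick loop over their init scripts does the job – a minimal sketch assuming sysvinit-style scripts and the service names used on this particular server (adjust to whatever you actually run):

server:~# for svc in apache2 nginx postfix dovecot bind9 freeradius; do [ -x /etc/init.d/$svc ] && /etc/init.d/$svc stop; done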

4. Create a backup dump of all MySQL tables with mysqldump

Next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total

 

If you have free space on an external backup server or remotely mounted attached storage (NFS or SAN), it is a good idea to also make a full binary copy of the MySQL data (just in case something goes wrong with the above SQL dump); copy the respective directory depending on the Linux distro and the install location of the SQL binary files set (in my.cnf).
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora (RPM based), substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep command line.

if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the existing MySQL users / passwords, Access Grants and Host Permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p
password:

 

mysql> SHOW DATABASES;

DROP DATABASE blog;
DROP DATABASE sessions;
DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP DATABASE mysql; or DROP DATABASE information_schema; !!! – this might damage your user permissions to the databases.

6. Stop MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in the future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

Delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql

 

The ps command should now show running MySQL processes again, confirming the server came back up cleanly and re-created fresh ibdata1 / ib_logfile files with the new settings.
 

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql
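If you want some progress feedback while the dump is loading (it can take a while on bigger dumps), the same import can be piped through pv – an optional extra, assuming the pv package is installed:

server:~# pv /root/dump.sql | mysql -u root -p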

Hopefully the import went fine, and if no errors were experienced the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to minimize SQL downtime, import the database instead with:

server:~# mysql -u root -p
password:
mysql>  SET FOREIGN_KEY_CHECKS=0;
mysql> SOURCE /root/dump.sql;
mysql> SET FOREIGN_KEY_CHECKS=1;

 

If something goes wrong with the import for some reason, you can always copy the binary SQL files back from /root/mysql-data-backup/ to /var/lib/mysql/.
 

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, log in with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p
SHOW DATABASES;

Next, let's see what the current sizes of ibdata1, ib_logfile0 and ib_logfile1 are:
 

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but only contain table metadata. Each InnoDB table will exist outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files
representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

From now on this layout will be used for each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer SQL releases.
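To double-check that the tables really ended up in their own tablespaces and to see which of them take up the most space, a quick query against information_schema works; this is an optional sanity check (not part of the original procedure):

# list the biggest InnoDB tables together with their approximate on-disk size in MB
server:~# mysql -u root -p -e "SELECT table_schema, table_name, ROUND((data_length + index_length)/1024/1024, 2) AS size_mb FROM information_schema.tables WHERE engine = 'InnoDB' ORDER BY size_mb DESC LIMIT 10;"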


Now, to make sure your websites are working, take a few of the hosted websites' URLs that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

How to get a list and Backup (Save Enabled Plugins) / Restore Enabled (Active) plugins in WordPress site with SQL query

Wednesday, January 14th, 2015

get-list-and-backup-restore-enabled-active-plugins-only-in-wordpress-with-sql-mysql-query

Getting a snapshot of all active plugins and keeping it for the future, in case you install some broken plugin and have to re-enable all the previously enabled plugins from scratch, is a precious thing in WordPress.

… It is really annoying when you decide to try out a few new plugins and all of a sudden your WordPress site / blog starts hanging (when accessed in a browser)…

To fix it you have to Disable All Plugins and re-enable all that used to work. However, if you don't keep a copy of the previously working plugins and you're like me and have 109 plugins installed, of which only 50 are in the (Active) state / used, it could take you a day or two until you come up with a list similar to the one you previously used … Thankfully there is some prevention you can take by dumping a list of all plugins that are currently active and, at a later time, only enabling those in the list.

 

# mysql -u root -p
Enter password:

mysql> USE blog_db;

Here is the output I get at the moment:
 

mysql> DESCRIBE wp_options;
+--------------+---------------------+------+-----+---------+----------------+
| Field        | Type                | Null | Key | Default | Extra          |
+--------------+---------------------+------+-----+---------+----------------+
| option_id    | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| option_name  | varchar(64)         | NO   | UNI |         |                |
| option_value | longtext            | NO   |     | NULL    |                |
| autoload     | varchar(20)         | NO   |     | yes     |                |
+--------------+---------------------+------+-----+---------+----------------+

 

mysql> SELECT * FROM wp_options WHERE option_name = 'active_plugins';

|        38 | active_plugins | a:50:{i:0;s:45:"add-to-any-subscribe/add-to-any-subscribe.php";i:1;s:19:"akismet/akismet.php";i:2;s:43:"all-in-one-seo-pack/all_in_one_seo_pack.php";i:3;s:66:"ambrosite-nextprevious-post-link-plus/ambrosite-post-link-plus.php";i:4;s:49:"automatic-tag-selector/automatic-tag-selector.php";i:5;s:27:"autoptimize/autoptimize.php";i:6;s:35:"bm-custom-login/bm-custom-login.php";i:7;s:45:"ckeditor-for-wordpress/ckeditor_wordpress.php";i:8;s:47:"comment-info-detector/comment-info-detector.php";i:9;s:27:"comments-statistics/dcs.php";i:10;s:31:"cyr2lat-slugs/cyr2lat-slugs.php";i:11;s:49:"delete-duplicate-posts/delete-duplicate-posts.php";i:12;s:45:"ewww-image-optimizer/ewww-image-optimizer.php";i:13;s:34:"feedburner-plugin/fdfeedburner.php";i:14;s:39:"feedburner-widget/widget-feedburner.php";i:15;s:63:"feedburner_feedsmith_plugin_2.3/FeedBurner_FeedSmith_Plugin.php";i:16;s:21:"feedlist/feedlist.php";i:17;s:39:"force-publish-schedule/forcepublish.php";i:18;s:50:"google-analytics-for-wordpress/googleanalytics.php";i:19;s:81:"google-sitemap-generator-ultimate-tag-warrior-tags-addon/UTWgoogleSitemaps2_1.php";i:20;s:36:"google-sitemap-generator/sitemap.php";i:21;s:24:"headspace2/headspace.php";i:22;s:29:"my-link-order/mylinkorder.php";i:23;s:27:"php-code-widget/execphp.php";i:24;s:43:"post-plugin-library/post-plugin-library.php";i:25;s:35:"post-to-twitter/post-to-twitter.php";i:26;s:28:"profile-pics/profile-pic.php";i:27;s:27:"redirection/redirection.php";i:28;s:42:"scripts-to-footerphp/scripts-to-footer.php";i:29;s:29:"sem-dofollow/sem-dofollow.php";i:30;s:33:"seo-automatic-links/seo-links.php";i:31;s:23:"seo-slugs/seo-slugs.php";i:32;s:41:"seo-super-comments/seo-super-comments.php";i:33;s:31:"similar-posts/similar-posts.php";i:34;s:21:"sociable/sociable.php";i:35;s:44:"strictly-autotags/strictlyautotags.class.php";i:36;s:16:"text-control.php";i:37;s:19:"tidy-up/tidy_up.php";i:38;s:37:"tinymce-advanced/tinymce-advanced.php";i:39;s:33:"tweet-old-post/tweet-old-post.php";i:40;s:33:"w3-total-cache/w3-total-cache.php";i:41;s:44:"widget-settings-importexport/widget-data.php";i:42;s:54:"wordpress-23-related-posts-plugin/wp_related_posts.php";i:43;s:23:"wp-minify/wp-minify.php";i:44;s:27:"wp-optimize/wp-optimize.php";i:45;s:33:"wp-post-to-pdf/wp-post-to-pdf.php";i:46;s:29:"wp-postviews/wp-postviews.php";i:47;s:55:"wp-simple-paypal-donation/wp-simple-paypal-donation.php";i:48;s:46:"wp-social-seo-booster/wpsocial-seo-booster.php";i:49;s:31:"wptouch-pro-3/wptouch-pro-3.php";} | yes      |

Copy and paste this serialized data to a text file or a Word document for later reference.

To later restore only the previously active WordPress plugins, first launch the following SQL query to disable all enabled WordPress plugins:

UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';

To restore the above backed-up list of active WP plugins, you have to copy the saved content and paste it into the above UPDATE query, substituting the empty option_value = 'a:0:{}' with the backed-up string.
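For clarity, the restore query would look something like this – the serialized value below is a shortened, hypothetical placeholder with just two plugins, not the full list dumped above:

UPDATE wp_options SET option_value = 'a:2:{i:0;s:19:"akismet/akismet.php";i:1;s:27:"autoptimize/autoptimize.php";}' WHERE option_name = 'active_plugins';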

P.S. – This query should work on WordPress 3.x; on the older WordPress ver. 2.x use instead:

UPDATE wp_options SET option_value = ' ' WHERE option_name = 'active_plugins';

Because pasting the backed-up active plugins list is messy and unreadable from the command line, for clarity it is recommended to use the PHPMyAdmin frontend (whenever it is available) on the server. This little hint is a real time-saver and saves a lot of headaches. Before proceeding with any DB UPDATE SQL queries, always back up your blog database; the structure of WordPress data changes with time, so in future releases this method might not work. However, if it helped you and works on your version, please drop a comment with the WordPress version on which it helped you.

Enjoy! 🙂

 

How to split / rar in parts large data archive files on Linux and Windows – Transfer big files across servers located in DMZ restricted areas

Friday, November 28th, 2014

how-to-split-rar-in-parts-large-data-archive-files-on-Linux-and-Windows-Transfer-big-files-across-servers-in-firewalled-restricted-areas

I was working on an Application Migration Project whose goal was to install a business application called Asset Guardian and then move the current company data from the old server to the new AppServer.
For that purpose the vendor of Asset Guardian shipped, to a public access FTP, a huge (12GB) ZIP archive file which had to be transferred into a well-secured DMZ-ed corporate network with various Traffic Shaping Network policies implemented and a restrictive firewall allowing access only to the Internal Network and to a few (restrictively configured) Proxy Server IPs on ports 80 and 8080.

One of the proxy servers allowed access to the Internet, so I set this one up and tried downloading the huge archive file with the Windows 2012 server default browser, Internet Explorer 10. Though the download started, it stayed slow, between ~300 – 500KB/sec, and when it reached 3.4GB the download failed. I tried resuming the download, but the remote public FTP server where the file resides doesn't support the FTP RESUME function.
I thought it might be that Internet Explorer was badly managing the download, so I went ahead and installed Portable Firefox (mirrored version 33.1.1 is here). Re-running the download with Firefox also failed, so the next logical step was to try downloading with the Windows version of Wget and with Portable Free Download Manager 3.9.14.1481 (mirrored here); using both of them I was unable to complete the download (probably due to the firewall or the Proxy screwing up the inspected traffic), thus I had to look for another way to copy the enormous archive into the company network.

To get around the issue I downloaded the file from the FTP to another server running Apache and tried re-downloading the big archive (Asset-Guardian-data.zip) from the Apache webserver via the HTTP protocol. This download method didn't work either – the download failed again once the file reached 3.4GB – thus I realized this was due to the restrictive Proxy servers dropping file downloads bigger than 3.4GB.

Then, to be able to transfer the huge 12GB file, it seemed the only option left was to chop the big file into smaller chunks and transfer them one by one.
In my case I already had Asset-Guardian-Files.zip transferred to the Apache (webserver) host, which is running Linux, so basically the task was to transfer the big archive file between a SuSE Linux Enterprise Server (SLES) 11 and a Windows 2012 Server.

The quickest way to do that is by using the UNIX split command, i.e.:

split -b 1024m Asset-Guardian-Files.zip


The output files, each 1GB in size, are named xaa, xab, xac, xad, xae, xaf, xag etc. and land in the same folder where the split command is run.
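Before pushing the chunks through the flaky proxy, it is worth recording their checksums so the reassembled archive can be verified afterwards – a small optional extra, using the chunk names produced by split above:

md5sum xa* Asset-Guardian-Files.zip > checksums.md5

After the transfer and the merge, re-running md5sum -c checksums.md5 (or comparing against certutil -hashfile <file> MD5 on the Windows side) confirms nothing got corrupted on the way.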

To later merge the files on the Windows 2012 server, the copy command is used:

copy /b file1 + file2 + file3 + file4 filetogether


In my case the command to issue on Win 2012 server was:

copy /b xaa + xab + xac + xad + xae + xaf + xag + xah + xai + xaj + xak Asset-Guardian-files.zip


This method to chop and transfer the file is most simple one and it doesn't require the two servers to have WinRAR or Console RAR / unrar installed.

If instead of copying a huge file from a Linux -> Windows host you need to copy a very big file (let's say 100GB) between 2 Windows servers (Windows server host A and Windows server host B – both situated in different firewalled corporate networks), you will need to download it to Win host A and use the Windows equivalent of UNIX split, a tool called sfk (The Swiss File Knife). sfk also has a port for Mac OS, so in case you need to migrate a huge archive file from a Mac OS X host it will serve as Linux's split – I've made an SFK (current version) mirror here.

Another way to cut the 12GB file into parts and transfer them to the destination host via HTTP was to use rar (on the Linux host), then download the parts on the Win 2012 server and use WinRAR Portable Free to extract the multiple files:

To create a multi-volume archive split into parts of a certain size out of a huge file with rar on Linux, use:

cd /var/www
rar a -v1000000k Asset_Guardian_Files.splitted.rar /var/www/Asset_Guardian_Files.zip

1000000 KB = 1000000/1024 = 976MB, hence the rar-produced parts will each be sized at roughly 976MB.

To find out archives check for *splitted*.rar in your /var/www

ls -al /var/www/*splitted*.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part1.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part2.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part3.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part4.rar

 

Then the parts can be downloaded on the M$ Win 2012 server with IE (http://my-linux-host.com/Asset-Guardian-Files.splitted.part1.rar, http://my-linux-host.com/Asset-Guardian-Files.splitted.part2.rar, etc.).

Thanks God, Problem Solved 🙂

Find all hidden files in Linux, Delete, Copy, Move all hidden files

Tuesday, April 15th, 2014

search-find-all-hidden-files-linux-delete-all-hidden-files
Listing hidden files is one of the common things to do as a sysadmin. Doing manipulations with hidden files like copy / delete / move is rarer, but still sometimes necessary; here is how to do all of this.

1. Find and show (only) all hidden files in current directory

find . -iname '.*' -maxdepth 1

-maxdepth 1 makes find show files only at 1 directory depth (only in the current directory); for instance, to also list files two subdirectory levels deep use -maxdepth 3, etc.

echo .*;

Yeah, if you're a Linux newbie it is useful to know that the echo command can be used instead of ls.
The echo * command is very useful on systems with a missing ls (for example if you mistakenly deleted it 🙂 )

2. Find and show (only) all hidden directories, sub-directories in current directory

To list all directories use cmd:

find /path/to/destination/ -iname ".*" -maxdepth 1 -type d

3. Log found hidden files / directories

find . -iname ".*" -maxdepth 1 -type f | tee -a hidden_files.log

find . -iname ".*" -maxdepth 1 -type d | tee -a hidden_directories.log
4. Delete all hidden files in current directory

cd /somedirectory
find . -iname ".*" -maxdepth 1 -type f -delete

5. Delete all hidden directories in current directory

cd /somedirectory
find . -mindepth 1 -maxdepth 1 -iname ".*" -type d -exec rm -rf '{}' \;

6. Copy all hidden files from current directory to other "backup" dir

find . -iname ".*" -maxdepth 1 -type f -exec cp -rpf '{}' directory-to-copy-to/ \;

7. Copy and move all hidden sub-directories from current directory to other "backup" dir

find . -mindepth 1 -maxdepth 1 -iname ".*" -type d -exec cp -rpf '{}' directory-to-copy-to/ \;

– Moving all hidden sub-directories from current directory to backup dir

find . -mindepth 1 -maxdepth 1 -iname ".*" -type d -exec mv '{}' directory-to-copy-to/ \;
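If you want to grab both the hidden files and the hidden directories in one go, the two find invocations above can be combined – a small sketch using the same hypothetical directory-to-copy-to destination (-mindepth 1 keeps find from matching the current directory "." itself):

find . -mindepth 1 -maxdepth 1 -iname '.*' \( -type f -o -type d \) -exec cp -a '{}' directory-to-copy-to/ \;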

 

Stop contact form spam emails in Joomla, Disable “E-mail a copy of this message to your own address.” in Joomla

Friday, April 11th, 2014

email-copy-of-this-message-to-your-own-address_Contact_email_form
If you happen to run a Joomla based website with a contact form set up, and everything worked fine until recently, but suddenly your server starts mysteriously acting as a spam relay – even though the email server is perfectly secured against spam –
you probably have an issue with the website's email contact form being abused, or some vulnerability which allowed hackers to upload a spammer PHP script.

I have a website based on Joomla and until recently everything was okay, until I noticed tons of spam flying out of my Qmail mail server (which is configured to check spam with Spamassassin, has Bayesian Filtering, Distributed Checksum Clearing House, Python Razor and plenty of custom anti-spam rules).

It was just yesterday that I ended up in that situation; after evaluating all the hosted websites, I realized the spam issues were caused by an old Joomla website contact form!

There were two issues with the contact form:

1. Well Known Joomla Form Vulnerability
Currently all Joomla versions (including 1.5.22 and 1.6) are vulnerable to a serious spam relay problem, as described on the official Joomla site.

There is a quick and dirty workaround fix for the contact form vulnerability – disable the Joomla component in ../joomla/components/com_mailto/

To disable it I had to:

cd /var/www/joomla/components
mv com_mailto com_mailtoNOT_USED

The above solution was described earlier in a post on resolving the Joomla spam relay by Anatoliy Dimitrov (after checking the website closely it turned out he is a colleague at HP 🙂 )

2. The second issue causing the high amount of spam sent over the email server
was the "E-mail a copy of this message to your own address." contact form tick, which was practically enabling any spammer with an address list to inject emails and spam via the form, sending copies to any email address out on the internet!

You would definitely want to disable "E-mail a copy of this message to your own address."
I wonder why any Joomla developer ever came up with this "spam form"?

joomla-disable-email-copy-of-this-message-to-your-own-address

Here is the solution to this:

1. Login to Joomla Admin with admin account
2. Goto Components -> Contacts -> Contacts
3. Click on the relevant Contact form
4. Under Contact Parameters go to Email Parameters
5. Change field E-mail Copy from Show to Hide and click Apply button

And hooray, the "E-mail a copy of this message to your own address" tick will be gone from the contact form! 🙂

I've already seen plenty of problematic hacked servers and scripts with Joomla in my last job at International University College – where Joomla was heavily used – but I had never experienced Joomla security issues myself 'till now; in the future I'm planning never to use Joomla again. Though it is an easy CMS system for setting up a website, it's quite complicated to learn the menus – I remember that when creating the problematic website it took me days until I properly set up all the menus and found all the Joomla components … Besides that, there is no easy way to migrate between different major releases in Joomla like there is in WordPress; I guess this mail security issue absolutely convinced me to quit using that piece of crap in the future.

In the meantime another very serious security flaw leaked on the Internet just a few days ago – the OpenSSL Heartbleed Bug. Thankfully I'm not running SSL anywhere on my website, but many systems are affected, putting much of the SSL communication with your Internet banking, E-mail etc. in danger. If you're running Apache with SSL make sure you test it for this vulnerability. Here is a description of the Heartbleed SSL Critical Vulnerability.

heartbleed_ssl_remote_vulnerability_logo

"The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users."


 

On God and computers and how computers copy God’s creation

Friday, October 25th, 2013

People are copying Gods creation the-tree model people don't invent they copy

I've been thinking for a long time about how computers and the technology around them copy God's creation. This kind of thought popped up in my mind right after I became a believer. As I have a strong IT background, I tend to view things in the world through the prism of my IT knowledge. If I have to learn a new science, my mind tends to compare how it translates to the previous knowledge I obtained in IT. Probably some other people out there have the same kind of thinking? I'm not sure if this is geek thinking or whether it is usual, and people from other fields of science also tend to understand the world using the accumulated knowledge of the profession they practice. Anyway, since the days I came to believe in Jesus Christ, I also started to compare my knowledge so far with what I've read in the Holy Bible and in the book of The Living of Saints (which, by the way, is unknown to most of the protestant world).

It is very interesting that if you look deeply into how all Information Technology knowledge is organized, you can see how computers resemble the visible part of God's creation. In reality I came to the realization of how Modern Man deceives himself. We think that with every new modern technology we achieve something new and revolutionary which didn't exist before. But is it really true? Let's take a technology like Microsoft Active Directory (using LDAP) for example. LDAP structures data in a tree form where each branch could have a number of sub-branches (variables). In reality it appears LDAP is not new; it is a translation of previously existing knowledge in the universe served in a different kind of form. Let me give some other examples: let's pick the Internet. We claim it's a new invention, and from a human point of view it is. But if we look at it through the prism of the existing created world, it is just an interconnection of "BIG DATA"; in the real world it is absolutely the same – the latest research already tells us that everything in the world is data and all data in the world is interconnected. So obviously the Internet is another copy of the wonderful things God created in the material world and, for those who can accept it, in the spiritual world.

Many who are hard-core atheists will argue that we copy things in the world, but that the whole material world is just a coincidence. Yet having in mind that the world is so perfectly tuned "for living beings to exist", it is near to impossible that all this life and perfection emerged at random. The tree structure model exists everywhere in operating systems and programming. We can see it in the hierarchy of a file system, we can see it in hashes (arrays) in programming, and all this just copies the over-simplified model of a real tree (which we know well from Biology is innumerable times more complex). Probably the future of computing is in Biotechnologies and people's attempts to copy how a living organism works. We know well from science fiction and cyberpunk that the future should be mostly in Bio-technologies rather than the computer as we know it, but even this high-tech next-generation technology will be based on existing things. Meaning man doesn't invent something entirely different; he copies a model and then modifies it according to the environment, or just combines a number of models to achieve the next one. Sorry for the rant post, but I've been thinking about this for quite a while and I thought I should spit it out here; I'm interested to hear what people think and what the arguments for or against my thesis are.

Linux: How to change recursively directory permissions to executable (+x) flag

Monday, September 2nd, 2013

change recursively permissions of directories and subdirectories Linux and Unix with find command
I had to copy a large directory from a Linux server to a Windows host via the SFTP protocol (with WinSCP). However, some of the directories to be copied lacked the executable flag, thus WinSCP failed to list and copy them.

Therefore I needed a way to recursively set all sub-directories under the directory /mirror (located on the Linux server) to the +x executable flag.

There are two ways to do that: one is directly through the find cmd, the second by using find with xargs.
Here is how to do it with find:

# find /mirror -type d -exec chmod 755 {} +

Same done with find + xargs:

# find /path/to/base/dir -type d -print0 | xargs -0 chmod 755
To change permissions only for all files under the /mirror server directory with find:

# find /path/to/base/dir -type f -exec chmod 644 {} +

Same done with find + xargs:
# find /path/to/base/dir -type f -print0 | xargs -0 chmod 644
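An alternative that avoids find altogether is chmod's symbolic X flag, which grants execute permission only to directories (and to files that already have execute set for someone), so regular files keep their permissions:

# recursively make all sub-directories under /mirror traversable
# (capital X = add execute only to directories and already-executable files)
chmod -R a+X /mirror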

Also, a tiny shell script that recursively changes directory permissions (autochmod_directories.sh) is here.

Script to Automatically change current MySQL server in wp-config.php to another MySQL host to minimize WordPress and Joomla downtimes

Friday, July 20th, 2012

I'm running two servers for a couple of home-hosted websites. One of the servers serves as the Apache host1 and has MySQL configured and running on it, and the second is used just as the database host2 (it has another MySQL configured on it).
The MySQL servers are not configured as a MySQL MASTER and MySQL SLAVE (no MySQL replication); however, periodically (daily) I have a tiny shell script that syncs the data from the active SQL host2 server to host1.

Sometimes, due to electricity problems or CPU overheating, the active MySQL host at host2 gets stuck and stops working, leaving the 2 WordPress based websites and one Joomla site inaccessible.
Until I manually get to the machine and restart host2, the 3 sites are down from the net, and as you can imagine this has a very negative impact on the existing website indexing (PageRank) in Google.

When I'm at home this is not a problem, as I have physical access to the servers and if something gets messy I fix it quickly. The problem comes when I'm travelling or in another city far from home and there is no-one at home to give the hung host a hard reboot …

Lately the hang-ups of host2 happened 3 times or so within 2 weeks; as a result the websites were inaccessible for hours, and since there was nobody to reboot the server for hours, the websites kept hanging until the DB host was restarted.

To work around this I came up with the idea to write a tiny shell script that checks whether host2 is ping-able, in order to make sure the database host is not down, and if the script determines host2 (the MySQL host) is down, it changes wp-config.php (set to use host2) to a wp-config.php which I have configured beforehand to use host1.

Using the script is a temporary solution, since I still have to find what is actually causing the hang-ups, but at least it saves me long downtimes. Here is a download link to the script I called change_blog_db.sh.
I've configured the script to be run on the Apache node (host1) via a crontab calling the script every 10 minutes; here is the crontab:
 

*/10 * * * * /usr/sbin/change_blog_db.sh > /dev/null 2>&1

The script is written in such a way that if it determines host2 is reachable, a copy of wp-config.php and Joomla's configuration.php tuned to use host2 is copied over the config originals (and when host2 is unreachable, the localhost / host1 versions are used instead). In order to use the script one has to configure the variables in the script's head section, e.g.:

host_to_ping='192.168.0.2';
blog_dir='/var/www/blog';
blog_dir2='/var/www/blog1';
blog_dir3='/var/www/joomla';
notify_mail='hipo@pc-freak.net';
wp_config_orig='wp-config.php';
wp_config_localhost='wp-config-localhost.php';
wp_config_other_host='wp-config-192.168.0.2.php';
joomla_config_orig='configuration.php';
joomla_config_other_host='configuration-192.168.0.2.php';
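For illustration, here is a simplified sketch of the kind of check the script performs, using only the WordPress config and the variables from the head section (the real change_blog_db.sh linked above also handles the Joomla configuration):

#!/bin/bash
# if host2 stops answering pings, switch the blog to the config pointing at the local (host1) MySQL
host_to_ping='192.168.0.2';
blog_dir='/var/www/blog';
wp_config_orig='wp-config.php';
wp_config_localhost='wp-config-localhost.php';
wp_config_other_host='wp-config-192.168.0.2.php';
notify_mail='hipo@pc-freak.net';

if ping -c 3 -W 5 "$host_to_ping" > /dev/null 2>&1; then
    # host2 is alive - make sure the blog talks to the remote MySQL
    cp -f "$blog_dir/$wp_config_other_host" "$blog_dir/$wp_config_orig";
else
    # host2 is down - fall back to the local MySQL copy and send a notification
    cp -f "$blog_dir/$wp_config_localhost" "$blog_dir/$wp_config_orig";
    echo "$host_to_ping unreachable, switched $blog_dir to localhost DB" | mail -s "MySQL host failover" "$notify_mail";
fi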

You will have to manually prepare wp-config-localhost.php, wp-config-192.168.0.2.php and configuration-192.168.0.2.php as existing files configured with the proper host1 and host2 IP addresses.
Hope the script will be useful to others experiencing database downtimes with WordPress or Joomla installs.