Posts Tagged ‘tar’

Tools to scan a Linux / Unix Web server for Malware and Rootkits / Lynis and ISPProtect – clean Joomla / WordPress and other CMS from malware, malicious scripts and trojan code

Monday, March 14th, 2016


If you have been hacked, or suspect that someone has broken into one of the shared web hosting servers you happen to manage, you have probably already checked the server with the rkhunter, chkrootkit and unhide tools, which give general guidance on whether a server has been compromised.

However, with the evolution of hacking tools out there and the boom of Web security exploits – XSS, SQL / database injections and vulnerable PHP scripts – catching an intruder, and especially a spammer, has become harder and harder to achieve.

Just lately the load average on a mail server of mine increased about tenfold, and the CPU and HDD I/O load jumped through the sky.
I started evaluating the situation to find out what exactly went wrong with the machine, beginning with hardware analysis tools and a physical check-up of whether all was fine with the hardware (disks / RAM etc.), just to find out the machine's hardware was working perfectly.
I also thoroughly investigated the logs of Apache, MySQL, TinyProxy, the Tor server, BIND DNS and DJBDNS, which had all been happily living there for quite some time, but didn't find anything strange.

Not in last place, I investigated the TOP processes (with the top command) and iostat, and realized the high CPU bursts lay in excessive input / output on the hard drive. Checking the Qmail mail server logs and the queue with qmail-qstat was a real surprise for me, as in the queue there were about 9800 emails hanging unsent, most of which were obviously spam. So I realized someone was heavily spamming through the server and investigated more thoroughly, ending up in a WordPress blog temp folder (writable by all system users) which existed under a Joomla directory structure – so I guess someone got hacked through the Joomla and uploaded the malicious PHP spammer scripts to the WordPress blog. I instantly stopped them (first a chmod 000 so they could no longer be executed) and, after examining them, deleted view73.php, javascript92.php and index8239.php, which were full of PHP variables holding binary encoded values; one was full of encoded strings which, after decoding, turned out to be the spammed recipients' emails.
BTW, the view*.php, javascript*.php and index*.php files were owned by www-data (the user Apache runs as), so obviously someone got in through some vulnerable Joomla or WordPress script (the Joomla there was a quite obsolete version 1.5, where Joomla is currently at version branch 3.5), hence my guess is the spamming script was uploaded through a Joomla XSS vulnerability.

As I was unsure whether the scripts were not also mirrored under other subdirectories of the Joomla or WP blog, I had to scan further to check that there were no other scripts infected with malware, trojan spammer code, webshells, rootkits etc.
And after some investigation I actually caught the 3 scripts mirrored under other website folders, with different numbering in the filenames – view34.php, javascript72.php, index8123.php etc.
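To hunt such mirrored copies down without eyeballing every folder, a couple of find one-liners can help. Below is a minimal sketch, assuming the webroot is /var/www and Apache runs as www-data (adjust the path and user to your setup): the first command lists PHP files owned by the webserver user and modified within the last week (freshly planted webshells / spammer scripts are usually recent), the second lists world-writable directories under the webroot – the usual upload targets:

server:~# find /var/www -type f -name '*.php' -user www-data -mtime -7
server:~# find /var/www -type d -perm -o+w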

I've used 2 tools to scan for and catch the malware / trojan scripts and to make sure no common rootkit was installed on the server.

1. Lynis (to check for rootkits)
2. ISPProtect (Proprietary but superb Website malware scanner with a free trial)

1. Lynis – Universal security auditing tool and rootkit scanner

Lynis is actually from the author of the well-known rkhunter, which I've used earlier to check BSD and Linux servers for rootkits.
To have an up-to-date version of Lynis, I've installed it from source:
 

cd /tmp
wget https://cisofy.com/files/lynis-2.1.1.tar.gz
tar xvfz lynis-2.1.1.tar.gz
mv lynis /usr/local/
ln -s /usr/local/lynis/lynis /usr/local/bin/lynis

 


Then, before scanning the server for rootkits, I first checked that Lynis and its definitions database were up to date with:
 

lynis update info


Then to actually scan the system:
 

lynis audit system


Plenty of things will be scanned, but you will be asked multiple times whether you would like to check different kinds of system services and log files, loadable kernel module rootkits, and common places for installed rootkits or planted backdoors. That's pretty annoying, as you will have to press Enter many times.
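If the constant keypresses get on your nerves, the scan can also be run unattended – the same --quick option used in the cron job further below skips the wait-for-Enter pauses (option name as in the Lynis versions of that era):

lynis audit system --quick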

[Screenshot: Lynis asking whether to scan for rootkits, backdoors and malware on a Linux / FreeBSD / NetBSD / UNIX server]

Once the scan is over you will get a System Scan Summary like in the below screenshot:

[Screenshot: Lynis system scan summary results]

Lynis also suggests a number of very good things that might be tightened to make the system more secure, so when I have time I'll work on hardening all servers using some of its output.

To prevent further incidents and keep an eye on the servers, I've deployed a monthly Lynis scan via a cron job on all servers. I've placed it in the root crontab, to run on the first day of every month, with the following command:

 

 

server:~# crontab -u root -e
0 3 1 * * /usr/local/bin/lynis --quick 2>&1 | mail -s "lynis output of my server" admin-mail@my-domain.com

 

2. ISPProtect – Website malware scanner

ISPProtect is a malware scanner for web servers; I've used it to scan all the installed CMS systems like WordPress, Joomla, Drupal etc.
ISPProtect is great for PHP / Python / Perl and other CMS-based frameworks.
ISPProtect contains 3 scanning engines: a signature based malware scanner, a heuristic malware scanner, and a scanner to show the installation directories of outdated CMS systems.
Unfortunately it is not free software, but I personally used the FREE TRIAL option, which can be used without registration to test it or clean an infected system.
I first scanned the webserver locally for the infected site, and then globally for all the other shared hosting websites.

As I wanted to check the rest of the hosted websites as well, I ran ISPProtect over the whole bunch of installed websites.
A prerequisite of ISPProtect is to have a working PHP CLI and the ClamAV antivirus installed on the server, so on RHEL (RPM) based servers make sure you have them installed, and if not:
 

server:~# yum -y install php

server:~# yum -y install clamav


Debian-based Linux web hosting server admins that don't have php-cli installed should run:
 

server:~# apt-get install php5-cli

server:~# apt-get install clamav


Installing ISPProtect from the upstream tarball is done with:

mkdir -p /usr/local/ispprotect
chown -R root:root /usr/local/ispprotect
chmod -R 750 /usr/local/ispprotect
cd /usr/local/ispprotect
wget http://www.ispprotect.com/download/ispp_scan.tar.gz
tar xzf ispp_scan.tar.gz
rm -f ispp_scan.tar.gz
ln -s /usr/local/ispprotect/ispp_scan /usr/local/bin/ispp_scan

 

To initiate a scan with ISPProtect just invoke it:
 

server:~# /usr/local/bin/ispp_scan

 

[Screenshot: ISPProtect scanning websites for malware, backdoors and spamming software in source code files]

I've used it in trial mode:

Please enter scan key:  trial
Please enter path to scan: /var/www

You will be shown the scan progress; be patient, because on a shared hosting server with a few hundred websites the tool will take really, really long – you might need to leave it running for an hour or even more, depending on how many source / CSS / JavaScript etc. files need to be scanned.
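Because a dropped SSH session would kill such a long-running scan, it is a good idea to start it inside a screen session that you can detach from and re-attach to later – a small sketch, assuming the screen utility is present on the server:

server:~# screen -S ispp
server:~# /usr/local/bin/ispp_scan

Detach with Ctrl-a d, and later re-attach with screen -r ispp to check on the progress.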

Once the scan is completed, the scan and infections-found logs are stored under /usr/local/ispprotect, in separate files for the different website engines and CMSes:
 

Malware => /usr/local/ispprotect/found_malware_20161401174626.txt
Wordpress => /usr/local/ispprotect/software_wordpress_20161401174626.txt
Joomla => /usr/local/ispprotect/software_joomla_20161401174626.txt
Drupal => /usr/local/ispprotect/software_drupal_20161401174626.txt
Mediawiki => /usr/local/ispprotect/software_mediawiki_20161401174626.txt
Contao => /usr/local/ispprotect/software_contao_20161401174626.txt
Magentocommerce => /usr/local/ispprotect/software_magentocommerce_20161401174626.txt
Woltlab Burning Board => /usr/local/ispprotect/software_woltlab_burning_board_20161401174626.txt
Cms Made Simple => /usr/local/ispprotect/software_cms_made_simple_20161401174626.txt
Phpmyadmin => /usr/local/ispprotect/software_phpmyadmin_20161401174626.txt
Typo3 => /usr/local/ispprotect/software_typo3_20161401174626.txt
Roundcube => /usr/local/ispprotect/software_roundcube_20161401174626.txt


ISPProtect gives really good results and is definitely the best malicious script / trojan / webshell / backdoor / spammer (hacking) script detection tool available, so if your company can afford it, you'd better buy a license and set up a periodic cron job scan of all your servers, like let's say:

 

server:~# crontab -u root -e
0 3 1 * * /usr/local/ispprotect/ispp_scan --update && /usr/local/ispprotect/ispp_scan --path=/var/www --email-results=admin-email@your-domain.com --non-interactive --scan-key=AAA-BBB-CCC-DDD


Unfortunately ISPProtect is quite expensive, so I guess most small and middle-sized shared hosting companies will be unable to afford it.
But even for a one-time run this tool is worth the try, and it will save you hours if not days of system investigation.
I'll be glad to hear from readers who are aware of any free software alternatives to ISPProtect. The only one I am aware of is Linux Malware Detect (LMD).
I've used LMD in the past, but as of the time of writing this article it doesn't seem to work any more, so I guess the tool is currently unsupported / obsolete.

 

How to copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes

Monday, April 27th, 2015


In a Web application data migration project, I came across a situation where I had to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, hence I couldn't use sshfs to mount the remote host's dir via ssh and copy the files like local ones.

As this is a data migration project, it was however necessary to find a way to migrate the data… The normal way companies do it is to copy the data to external hard disk storage and send it via some country's postal service, or to send an employee to the data center to attach the SAN to the new server to which the data is being migrated. In my case however this was not possible, so I had to do it differently.

I have access to both servers, as they're situated in the same corporate DMZ network, and I can thus access both UNIX machines via SSH.

Thankfully, there is a small hack combining the SSH protocol, the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host, from which you can access via the SSH protocol both Server1 (from where the data is migrated) and Server2 (to where the data will be migrated).

If the hopping / jump server from which you're allowed to access the Linux servers Server1 and Server2 is not Linux, and you're missing an SSH client and don't have permissions on the Windows host to install anything, just use portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:
 

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xzf -"


As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, and this is passed via the UNIX pipe mechanism to another SSH session opened to server2, where the TAR archiver is used a second time to unarchive the previously passed archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the data transfer status, like below:
 

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%


P.S. If you don't have pv installed, install it either with apt-get on Debian:

 

debian:~# apt-get install --yes pv

 

Or on CentOS / Fedora / RHEL etc.:

 

[root@centos ~]# yum -y install pv

 

Below is a small chunk of PV manual to give you better idea of what it does:

NAME
       pv – monitor the progress of data through a pipe

SYNOPSIS
       pv [OPTION] [FILE]…
       pv [-h|-V]

DESCRIPTION
       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 somewhere.com 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 somewhere.com 3000


Note that with very big file transfers, using pv will slow the data transfer a little, because everything has to pass through two more pipes; however, for file transfers of up to a few gigabytes it's really nice to include it.

If you only need to transfer a huge .tar.gz archive and you don't care about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, and don't want to put the overhead of encrypting the data on both systems), and you have some unfiltered ports between host 1 and host 2, you can run netcat on host 2 to listen for connections and forward the .tar.gz content via netcat's port like so:
 

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345


Another way to transfer large data without a direct connection between server1 and server2, but with a connection to a third host PC, is to use rsync and good old SSH tunneling, like so:
 

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

I recently had the task of transferring some huge, long-stored application server data (about 70GB, archived) between an old Linux host server and a new one, on which a new Tomcat application (Linux) server will be installed to cope with the increased site accessibility (server hardware overload).

The two systems are in a paranoid DMZ network and have no access to each other via SSH / FTP / FTPS, and not even web access on port 80 or SSL (443) between the two hosts, so in order to move the data I had to use a third HOP station: a Windows server with a huge SAN network-attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access via Citrix Receiver to the DMZ-ed network, I'm using MobaXterm, so the basic UNIX commands such as sftp / scp are already available on the Windows system through it.
Thus, to transfer the archived (.tar.gz) files of the Chronos Tomcat application, I sftp-ed into the Linux host and used the get command to retrieve it, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network turned out to have a network shaper limiting my get / secured SCP download to 2.5 MBytes/sec, so the overall file transfer seemed to require about 08:30 hours to complete. It was the middle of the day, about 13:00, and my work day ends at 18:00 – meaning I would be able to keep the file retrieval session going for a maximum of 5 hrs, and the transfer would be cancelled when I logged out of the HOP station (after 18:00). I had already left the transfer running for 2 hrs, by which point about 23% of the file had been retrieved, so I wondered whether SCP / SFTP protocol downloads can be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself; however, neither of them has a resume option. Then I thought for a while about what I could use to continue the interrupted download, and I remembered good old rsync (the versatile remote and local file copying tool), which I often use in customer backup strategies, and which has the ability to resume partially downloaded files. I wondered whether the resume could be done only if the file transfer had been initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the retrieval (rsync can continue transfers cancelled / failed due to network problems or user activity). That turned out to be pretty easy: to continue the failed download from where it was interrupted, I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


The --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode (equals -rlptgoD (no -H, -A, -X)); and the -v option shows a file transfer percentage status line and an estimated average time for the transfer to complete. An easier-to-remember rsync resume is like so:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload transfer failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work, the remote Linux system must have the rsync package installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed (a quick check is sketched below). There is also an rsync port for Windows, so to easily resume large giga- or terabyte file archive downloads between two Windows hosts, use cwRsync.
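A quick one-liner to verify the remote end actually has rsync before relying on this method (using the same example hostname as above):

ssh username@Linux-server.net 'command -v rsync || echo rsync is missing'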

Fixing Shellshock new critical remote bash shell exploitable vulnerability on Debian / Ubuntu / CentOS / RHEL / Fedora / OpenSuSE and Slackware

Friday, October 10th, 2014

If you still haven't heard about the ShellShock Bash (Bourne Again shell) remote exploit vulnerability and you admin some Linux server, you will definitely have to read about it seriously. The ShellShock Bash vulnerability became public on Sept 24 and is described in detail here.

The vulnerability allows a remote malicious attacker to execute arbitrary code under certain conditions, by passing strings of code following environment variable assignments. Affected are most bash versions, from bash 1.14 up to bash 4.3 (a quick version check is sketched below).
Even if you have patched, there are reports of other bash shell flaws in the way bash handles shell variables, so probably in the coming month there will be even more patches to follow.
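To quickly see which bash build a host is running (and whether it falls into the affected 1.14 – 4.3 range):

bash --version | head -1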

OSes affected by the bash flaw are Linux, Mac OS and the BSDs; exposed attack vectors include:

• Some DHCP clients

• OpenSSH servers that use the ForceCommand capability (in the server config)

• Apache webservers that use CGI scripts through mod_cgi and mod_cgid, as well as CGIs written in bash or launching bash subshells

• Network-exposed services that use bash somehow

Even though there is a patch, there are further reports from both Google developers and RedHat devs claiming the patch is ineffective; they say there are other flaws in how bash handles variables, which lead to the same remote code execution.

There are already a couple of online testing tools to check whether your website, or a certain script from a website, is vulnerable to the bash remote code execution; one of the few online remote bash vulnerability scanners is here and here. Also, a good usable resource to test whether your webserver is vulnerable to a ShellShock remote attack is found on ShellShocker.Net.
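Besides the online scanners, the widely circulated local one-liner test can be run straight in a shell on the server itself; a vulnerable bash prints the word vulnerable, while a patched bash prints only the trailing test echo:

server:~# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"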

As there are plenty of non-standard custom-written scripts online, there is not too much publicity about the problem, and most admins are lazy, the vulnerability will stay unpatched for a really long time, and we're about to see more and more exploit tools circulating in the script kiddies' IRC botnets.

Fixing the bash ShellShock remote vulnerability on Debian 5.0 Lenny:

Follow the article suggesting how to fix the remotely exploitable bash in a few steps on older unsupported Debian 4.0 / 3.0 releases – here.

Fixing the bash ShellShock vulnerability on Debian 6.0 Squeeze: for those who haven't heard, since April 2014 there is a Debian LTS (Long Term Support) repository. To fix bash in Debian 6.0, use the LTS package repository, as described in the following article.

If you have issues patching bash on your Debian Squeeze 6.0 Linux, it might be because you already have a newer version of bash installed and apt-get is refusing to overwrite it with the older version provided by the Debian LTS repos. The quickest and surest way to fix it is to do literally the following:


vim /etc/apt/sources.list

Paste inside, to use the following LTS repositories:

deb http://http.debian.net/debian/ squeeze main contrib non-free
deb-src http://http.debian.net/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free
deb http://http.debian.net/debian squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian squeeze-lts main contrib non-free

Further on, to check the available installable deb package versions with apt, issue:



apt-cache showpkg bash
...
...
Provides:
4.1-3+deb6u2 -
4.1-3 -
Reverse Provides:

As you see, there are two installable versions of bash: one from the default Debian 6.0 repos (4.1-3) and a second one (4.1-3+deb6u2). Another way to check the possible alternative installable versions, when more than one version of a package is available, is with:



apt-cache policy bash
...
*** 4.1-3+deb6u2 0
500 http://http.debian.net/debian/ squeeze-lts/main amd64 Packages
100 /var/lib/dpkg/status
4.1-3 0
500 http://http.debian.net/debian/ squeeze/main amd64 Packages

Then to install the LTS bash version on Debian 6.0 run:



apt-get install bash=4.1-3+deb6u2

Patching supported Ubuntu Linux versions against the ShellShock bash vulnerability:
A security notice addressing the Bash vulnerability in Ubuntu is in the Ubuntu Security Notice (USN) here.
USNs are the way Ubuntu discloses packages affected by security issues, thus Ubuntu users should try to keep a frequent eye on Ubuntu Security Notices.

apt-get update
apt-get install bash

Patching the Bash ShellShock vulnerability on EOL (End of Life) versions of Ubuntu:

mkdir -p /usr/local/src/dist && cd /usr/local/src/dist
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz.sig
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz
wget http://tiswww.case.edu/php/chet/gpgkey.asc
gpg --import gpgkey.asc
gpg --verify bash-4.3.tar.gz.sig
cd ..
tar xzvf dist/bash-4.3.tar.gz
cd bash-4.3
mkdir patches && cd patches
# download all official bash 4.3 patches (use a local GNU mirror if preferred)
wget -r --no-parent --accept "bash43-*" -nH -nd http://ftp.heanet.ie/mirrors/gnu/bash/bash-4.3-patches/
# verify the signature of every downloaded patch
echo *.sig | xargs -n 1 gpg --verify --quiet

cd ..
# apply the patches in order
echo patches/bash43-0?? | xargs -n 1 patch -p0 -i

./configure --prefix=/usr --bindir=/bin \
    --docdir=/usr/share/doc/bash-4.3 \
    --without-bash-malloc \
    --with-installed-readline

make
make test && make install

To solve the bash vuln in recent Slackware Linux:

slackpkg update
slackpkg upgrade bash

For old Slacks, either download a patched version of bash, or download the source of the currently installed package and apply the respective patch for the ShellShock vulnerability.
There is also a GitHub project collecting “ShellShock” Proof of Concept code – https://github.com/mubix/shellshocker-pocs
There are also non-confirmed speculations that the bash vulnerability bug impacts other common server services:


• XMPP(ejabberd)

• Mailman

• MySQL

• NFS

• Bind9

• Procmail

• Exim

• Juniper Google Search

• Cisco Gear

• CUPS

• Postfix

• Qmail

Fixing the ShellShock bash vulnerability on supported versions of CentOS, RedHat and Fedora

In supported versions of CentOS that have not reached EOL:

yum -y install bash

In recent RedHat and Fedora releases, to patch:

yum update bash

To patch the bash vulnerability on OpenSUSE:

zypper patch --cve=CVE-2014-7187

ShellShock is a worse vulnerability than the recent severe SSL vulnerability Heartbleed. According to RedHat and other sources, this new bash vulnerability is already actively exploited in the wild, and probably worms are already crawling the net stealing passwords and data and building IRC botnets for remote control and UDP flooding.

Fix “tar: Error exit delayed from previous errors” – its cause and solution

Monday, August 18th, 2014


tar: Error exit delayed from previous errors

is a very common error encountered when creating archives (or backing up server configurations / websites / SQL binary data). The error is quite unexplanatory, and when creating archives verbosely – in order to see the files added to the archive in "real time" – with, let's say:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


it's pretty hard to track exactly on which file the backup produces the Error exit delayed from previous errors; this is especially the case when adding to the archive directories containing millions of tiny, few-kilobyte-sized files. Many novice or incautious Linux admins might simply ignore the warning if they're in a hurry / have excessive work to be done, as a .tar.gz backup is still produced, and since most of the files are there when it's uncompressed, the backup error would seem not to be a big issue.

However, as backing up files is vital stuff – especially when moving files off a server to be decommissioned – you have to be extra careful and make the backup properly, e.g. figure out the cause of the error. To do so, log the full output of the tar operation (stderr included) with the tee command, like so:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites/ /home/sql 2>&1 | tee /tmp/backup_tar_full_output.log

Then you will have to review the file and look for errors with less's search – / (slash) – look for the "error" and "permission den" keywords, and this should point you to what is causing the error. In cases when millions of files are to be archived, the log might grow really big and be hard to process; therefore a much quicker way to see only the errors is to redirect the verbose file listing (standard output) into a log with > (shell redirect), so that only the errors (standard error) show up in the terminal:
 

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql > /tmp/backup_failure-cause.log

 

tar: www.ur-website.com-http/2.0.63/conf/tnsnames.ora.20080918: Cannot open: Permission denied
tar: Removing leading `/' from member names

The error clearly indicates the cause: lack of permissions to read the file tnsnames.ora.20080918. The solution is to either grant permissions to the non-root user with the chmod / chown cmds (in my case, grant perms to the user hipo with which tar is run), or run the website backup again as the superuser. I usually just run it as root to prevent tampering with the original permissions, e.g. to solve the error, either:

$ su root
# tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql

Or, even better, if sudo is installed and the user is added to the /etc/sudoers file:

$ sudo tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


Though permission errors are the most frequent reason for:

tar: Error exit delayed from previous errors, you should keep in mind that in some cases the error might be caused by a failing member disk of a RAID array, or a single HDD failure on systems that are not in a RAID.
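To catch permission problems before tar even starts, a quick pre-flight check can be run as the same user that will do the backup – a sketch assuming GNU find, which supports the -readable test:

hipo@server:~$ find /home/websites /home/sql ! -readable 2>/dev/null

Any file listed here is one tar will fail to read and will later report in the delayed error.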

 

Linux: List last 10 (newest) and 10 oldest modified files in a directory with ls

Tuesday, April 8th, 2014

A useful thing on GNU / Linux is sometimes to list the last (newest) or oldest modified files in a directory.

Let's say you want to list the last 10 modified files with ls, from today / yesterday. Here is how:
 

ls -1t | head -10
my.cnf
wordperss_enabled_plugins.txt
newcerts/
mysql-hipo_pcfreakbiz.dump
NewArchive-Jan-10-15.zip
hipo_pcfreakbiz-mysqldb-any-out-1389361234.tgz
Tisho_Snimki/
wordpress/
wp-cron.php?doing_wp_cron=1.1
wp-cron.php

 

To list the 10 oldest modified files on Linux:

 

ls -1t | tail -10
    my.cnf
    pcfreak_sql-15_10_05_2012
    mysql-tuning-primer*
    tuning-primer.sh*
    system-administration-services.html
    blog_backup_15_07_2012.tar.gz
    www-files/
    dump.sql
    courier-imap*
    djbdns-1.05.tar.gz
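
The same oldest-first listing can also be produced by reversing the time sort with -r, which avoids piping the whole listing through tail:

ls -1tr | head -10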


Cheers 😉

Updating Flash Player on Debian GNU / Linux / Keeping Flash player up-to-date with update-flashplugin-nonfree

Saturday, November 10th, 2012

 


Assuming you have Adobe Flash Player previously installed and running – the flashplugin-nonfree package, i.e.:

debian:~# dpkg -l |grep -i flashplugin-nonfree
ii flashplugin-nonfree 1:2.8.3 Adobe Flash Player - browser plugin

and you want to update the Flash player to the latest version provided for Linux, there is an update script that is part of the flashplugin-nonfree package: /usr/sbin/update-flashplugin-nonfree. The script updates the Flash player to the latest available Linux version, fetching it from Macromedia's website as a .tar.gz, untarring it and substituting the old flash library.

To update your Debian FlashPlayer, launch as superuser:

debian:~# update-flashplugin-nonfree --install
--2012-11-10 00:51:48-- http://fpdownload.macromedia.com/get/flashplayer/pdc/11.2.202.251/install_flash_player_11_linux_x86_64.tar.gz
Resolving fpdownload.macromedia.com... 92.123.98.70
Connecting to fpdownload.macromedia.com|92.123.98.70|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7228964 (6.9M) [application/x-gzip]
Saving to: “./install_flash_player_11_linux_x86_64.tar.gz”

0K .......... .......... .......... .......... ..........  0% 69.5K 1m41s
50K .......... .......... .......... .......... ..........  1% 91.1K 88s
100K .......... .......... .......... .......... ..........  2% 70.8K 91s
...

After a while (usually up to a minute) the update will be completed. Restart your browser of choice – IceWeasel, Epiphany, Opera, Chrome etc. – and test it with the About Flash Player page and / or YouTube. You should be on the latest Linux Flash version now.

It might be a good idea to automate future Flash player updates via a cron job; I think launching the update script every two weeks is good timing.

To do so, add to the root user's cron like so:

00 10 10,27 * * /usr/sbin/update-flashplugin-nonfree --install -q >/dev/null 2>&1

If you still haven't configured your PulseAudio to play multiple sound streams, do that too.

I've also seen on Debian's Wiki FlashPlayer page a mention that on some systems, after the update to Flash Player 11, there might be laggy performance issues due to hardware acceleration being disabled in Flash Player versions > 10. If that's the case with you, you might also need to put an mms.cfg like this one in /etc/adobe/mms.cfg:

# wget -q http://www.pc-freak.net/files/adobe-flash-player-config-for-hardware-acceleration-mms.cfg
# mv adobe-flash-player-config-for-hardware-acceleration-mms.cfg /etc/adobe/mms.cfg

Finally, if you experience some Flash video lagging issues, you could try experimenting with the OverrideGPUValidation=true Flash setting, which in some cases improves Linux Flash video performance.

Firefox users might also be interested to check out www.mozilla.org/en-US/plugincheck – the URL provides information on essential Firefox video plugins and whether the installed plugins are up to date or prone to remote web exploitation vulnerabilities.

Fix vnstat error “eth0: Not enough data available yet.” on Debian GNU / Linux

Monday, November 21st, 2011


After installing vnstat to keep an eye on server IN and OUT traffic on a Debian Squeeze server, I used the usual:

debian:~# vnstat -u -i eth0

in order to generate the initial database for the Ethernet interface used by vnstat to generate its statistics.

However, even though /var/lib/vnstat/eth0 got generated with the above command, statistics were not being generated further, and trying to check them with the command:

debian:~# vnstat --days

returned the error message:

eth0: Not enough data available yet.

To solve the eth0: Not enough data available yet. message, I tried completely removing vnstat by purging the package, e.g.:

debian:~# apt-get --yes remove vnstat
...
debian:~# dpkg --purge vnstat
...

Even though dpkg --purge was invoked, /var/lib/vnstat/ refused to be removed, since it contained vnstat's db file eth0.

Therefore I deleted it by hand before installing vnstat again:

debian:~# rm -rf /var/lib/vnstat/

Then I tried installing vnstat once again “from scratch”:

debian:~# apt-get install vnstat
...

After that I tried regenerating the vnstat db file eth0 once again with vnstat -u -i eth0, hoping this would fix the error, but it was no go, and the error:

debian:~# vnstat --hours
eth0: Not enough data available yet.

persisted.

I checked the Debian bugs mailing lists and found some people complaining about the same issue, with some suggestions on how the error can be worked around; anyway, none of the suggestions worked for me.

Being irritated, I further removed / purged vnstat once again and decided to give installing vnstat from source a try.
As of the time of writing this article, the latest stable vnstat version is 1.11.
Therefore to install vnstat from source I issued:

debian:~# cd /usr/local/src
debian:/usr/local/src# wget http://humdi.net/vnstat/vnstat-1.11.tar.gz
...
debian:/usr/local/src# tar -zxvvf vnstat-1.11.tar.gz
debian:/usr/local/src# cd vnstat-1.11
debian:/usr/local/src/vnstat-1.11# make && make all && make install
debian:/usr/local/src/vnstat-1.11# cp examples/vnstat.cron /etc/cron.d/vnstat
debian:/usr/local/src/vnstat-1.11# vnstat -u -i eth0
Error: Unable to read database "/var/lib/vnstat/eth0".
Info: -> A new database has been created.

As a last step I put in the root crontab, to execute:

debian:~# crontab -u root -e

*/5 * * * * /usr/bin/vnstat -u >/dev/null 2>&1

This line updates the vnstat eth0 database every 5 minutes. After the manual source install vnstat works just fine 😉

Convert png files to ico on Linux – Create “icon” files from pictures

Tuesday, November 2nd, 2010

You will need png2ico

First you will have to download the png2ico source.

Then compile and install the program by issuing:

debian:~# wget http://www.winterdrache.de/freeware/png2ico/data/png2ico-src-2002-12-08.tar.gz
debian:~# tar -zxvf png2ico-src-2002-12-08.tar.gz
...
debian:~# cd png2ico/
debian:/root/png2ico# make
debian:/root/png2ico# cp -rpf png2ico /usr/local/bin/

Conversion is pretty easy and comes down to simply executing:

debian:/home/hipo$ png2ico favicon.ico png_picture_to_convert.png

Note that your png_picture_to_convert.png has to be in graphic dimensions of 16×16 (a quick resize sketch follows below).
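If your source picture is bigger, it can be squeezed down to 16×16 first – a quick sketch, assuming ImageMagick's convert tool is installed:

debian:/home/hipo$ convert big_source_picture.png -resize '16x16!' png_picture_to_convert.png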
That's all – now you should have your favicon.ico created on your Linux.

Tightening PHP Security on Apache 2.2 with ModSecurity2 on Debian Lenny Linux

Monday, April 26th, 2010

In this article you'll learn how I easily installed and configured ModSecurity 2 on a Debian Lenny system.
First, let me give you a few introductory words about ModSecurity: what it is, and why it's a good idea to install and use it on your Apache webserver.

ModSecurity is an Apache module that provides intrusion detection and prevention for web applications. It aims at shielding web applications from known and unknown attacks, such as SQL injection attacks, cross-site scripting, path traversal attacks, etc.

As you can see from ModSecurity's description, it's a priceless module add-on to Apache, able to protect your PHP applications and your Apache server from a huge number of hacker attacks undertaken against your online web application or webserver.
The only thing I don't like about this module is that it is actually a 3rd party module (e.g. not officially part of Apache). Some time ago, I remember, there was even an exploit for one of the versions of the module.
So in some cases ModSecurity could also pose a security risk – beware!
However, if you know what you're doing and you keep regular track of security news on some major security websites, that shouldn't be a concern for you.
Now let's proceed to the install of the ModSecurity module itself.
The install is a piece of cake on Debian, though you'll be required to use the Debian Lenny backports.

Here is the install of the module step by step:

1. First add the GPG key of the backports repository to your APT setup:

debian-server:~# gpg --keyserver pgp.mit.edu --recv-keys C514AF8E4BA401C3
# another possible way to add the repository as the website describes is through the command
debian-server:~# wget -O - http://backports.org/debian/archive.key | apt-key add -

2. Install the libapache2-mod-security2 package from the backports Debian Lenny repository:

debian-server:~# apt-get -t lenny-backports install libapache2-mod-security2

Now, as a last step of the ModSecurity install procedure, you have to add some configuration directives to Apache and restart the server afterwards.

– Open your /etc/apache2/apache2.conf and place in it the following configuration:


<IfModule mod_security2.c>
# Basic configuration options
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess Off

# Handling of file uploads
# TODO Choose a folder private to Apache.
# SecUploadDir /opt/apache-frontend/tmp/
SecUploadKeepFiles Off

# Debug log
SecDebugLog /var/log/apache2/modsec_debug.log
SecDebugLogLevel 0

# Serial audit log
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus ^5
SecAuditLogParts ABIFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log

# Maximum request body size we will
# accept for buffering
SecRequestBodyLimit 131072

# Store up to 128 KB in memory
SecRequestBodyInMemoryLimit 131072

# Buffer response bodies of up to
# 512 KB in length
SecResponseBodyLimit 524288
</IfModule>

With this, the ModSecurity2 module is properly installed and configured as an Apache module.
3. All that's left is to restart Apache, in order for the new module and configuration to take effect:

debian-server:~# /etc/init.d/apache2 restart

Don't forget to check the Apache conf files for errors before restarting Apache with the above command; to do that, issue the command:
debian-server:~# apache2ctl -t

If all is fine you should get as an output:

Syntax OK

4. Next, to find out whether the Apache ModSecurity2 module is enabled and already used by Apache as a means of protection,
you might want to check if the log files modsec_audit.log and modsec_debug.log have grown and are being fed new content (a quick check is sketched below).
If they're growing and you see messages concerning the operation of the ModSecurity2 Apache module, that's a sure sign all is fine.
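A quick way to watch that, using the log locations from the configuration above:

debian-server:~# ls -l /var/log/apache2/modsec_*.log
debian-server:~# tail -f /var/log/apache2/modsec_audit.log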
5. As we now have the ModSecurity Apache module configured on our Debian server, we will need to apply some ModSecurity Core Rules.
In short, the ModSecurity Core Rules are critical protection rules against attacks across almost every web architecture.
Another really neat thing about the Core Rule Set (CRS) for ModSecurity is that it is written with performance in mind.
So enabling these filter rules won't be too heavy a load for your Apache server.

Here is how to install the core rules:

6. Download the latest ModSecurity Core Rules.

Download them from the following Core Rules URL.
At the time of writing this article, the latest core rules are version modsecurity-crs_2.0.6.tar.gz.

To download and install the rules, issue commands like:

debian-server:~# wget http://sourceforge.net/projects/mod-security/files/modsecurity-crs/0-CURRENT/modsecurity-crs_2.0.6.tar.gz/download
debian-server:~# cp -rpf ~/modsecurity-crs_2.0.6.tar.gz /etc/apache2/
debian-server:~# cd /etc/apache2/; tar -zxvvf modsecurity-crs_2.0.6.tar.gz

Besides physically storing the unarchived modsecurity-crs in your /etc/apache2, it's also necessary to add to your Apache IfModule mod_security2.c block of code the following two lines:

Include /etc/apache2/modsecurity-crs_2.0.6/*.conf
Include /etc/apache2/modsecurity-crs_2.0.6/base_rules/*.conf

Thus ultimately the configuration concerning ModSecurity in your Apache Server configuration should look like the following:

<IfModule mod_security2.c>
# Basic configuration options
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess Off

# Handling of file uploads
# TODO Choose a folder private to Apache.
# SecUploadDir /opt/apache-frontend/tmp/
SecUploadKeepFiles Off

# Debug log
SecDebugLog /var/log/apache2/modsec_debug.log
SecDebugLogLevel 0

# Serial audit log
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus ^5
SecAuditLogParts ABIFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log

# Maximum request body size we will
# accept for buffering
SecRequestBodyLimit 131072

# Store up to 128 KB in memory
SecRequestBodyInMemoryLimit 131072

# Buffer response bodies of up to
# 512 KB in length
SecResponseBodyLimit 524288
Include /etc/apache2/modsecurity-crs_2.0.6/*.conf
Include /etc/apache2/modsecurity-crs_2.0.6/base_rules/*.conf
</IfModule>

Once again you have to check that everything is fine with the Apache configuration with:

debian-server:~# apache2ctl -t

If it's showing an OK status once again, then you're ready to restart the webserver:
debian-server:~# /etc/init.d/apache2 restart

One example of the goodness of setting up ModSecurity + the Core Rule Set is that, once the above described installation is fully functional,

ModSecurity will be able to track if somebody tries to execute a PHP shell on your server.
ModSecurity will catch, log and block (forbid) requests to r99.txt, r57, safe0ver and possibly other hacked modifications of the PHP shell scripts (a quick test is sketched below).
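A simple way to see the blocking in action is to request one of those webshell filenames against your own site and watch for a 403 Forbidden (the URL below is hypothetical – substitute your own vhost; the exact behaviour depends on the rule set loaded):

debian-server:~# curl -I http://www.your-domain.com/r57.txt
HTTP/1.1 403 Forbidden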

That’s it! Now Enjoy your tightened Apache Security and Hopefully catch the script kiddie trying to h4x0r yoU 🙂