Posts Tagged ‘test’

Install simscan on Qmail for better Mail server performance and work around the missing suid perl in newer Debian / Ubuntu Linux servers

Tuesday, August 18th, 2015


I've been stuck with qmail-scanner-queue for a while on each and every new Qmail mail server installation I've done. This time was no different, but as time goes by and Qmail and the Qmail-Scanner wrapper are not regularly updated, it is getting harder and harder to make a fully functional Qmail on newer Linux server distribution releases.

I know many would argue QMAIL is already obsolete, but I still have plenty of old servers running QMAIL whose migration might cause more trouble than simply continuing to use QMAIL. Moreover, QMAIL, once set up, works like a charm.

I've recently been experiencing severe issues with clamdscan errors, and I tried to work around them by compiling and using a suid wrapper; however the clamdscan errors continued. As qmail-scanner is not actively developed and is much slower than simscan, I finally decided to try simscan as a means to fix the clamdscan errors, and thankfully this worked as a solution.

Here is, roughly, what I did to make simscan work on this install:
 

Make sure simscan is properly installed on Debian Linux 7 or Ubuntu servers (it should also work on other Deb-based Linuxes) by following the steps below:
 

a) Configure simscan with the following compile-time options as root (superuser)

./configure \
--enable-user=qscand \
--enable-clamav \
--enable-clamdscan=/usr/local/bin/clamdscan \
--enable-custom-smtp-reject=y \
--enable-per-domain=y \
--enable-attach=y \
--enable-dropmsg=n \
--enable-spam=y \
--enable-spam-hits=5 \
--enable-spam-passthru=y \
--enable-qmail-queue=/var/qmail/bin/qmail-queue \
--enable-ripmime=/usr/local/bin/ripmime \
--enable-sigtool-path=/usr/local/bin/sigtool \
--enable-received=y


b) Compile it

 

 make && make install-strip

c) Fix any wrong permissions of simscan queue directory

 

chmod g+s /var/qmail/simscan/

chown -R qscand:qscand /var/qmail/simscan/

d) Add some additional simscan options (defining how simscan should perform its scans)

Write the scan options into /var/qmail/control/simcontrol by running the below command (again as root); qmail itself will be restarted in step f below so it starts using simscan instead of qmail-scanner:

echo ":clam=yes,spam=yes,spam_hits=8.5,attach=.vbs:.lnk:.scr:.wsh:.hta:.pif" > /var/qmail/control/simcontrol

 

e) Run /var/qmail/bin/simscanmk in order to convert /var/qmail/control/simcontrol into the /var/qmail/control/simcontrol.cdb database

/var/qmail/bin/simscanmk
/var/qmail/bin/simscanmk -g
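To double-check that the conversion succeeded, verify that the cdb file was just (re)generated:

ls -l /var/qmail/control/simcontrol.cdb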

f) Modify /service/qmail-smtpd/run to set simscan to be default Antivirus Wrapper Scanner

vim /service/qmail-smtpd/run

I'm using thibs's run script, so I've just uncommented the line there:

QMAILQUEUE="$VQ/bin/simscan"

The below two lines should stay commented out, as qmail-scanner is no longer used:

##QMAILQUEUE="$VQ/bin/qmail-scanner-queue"
##QMAILQUEUE="$VQ/bin/qmail-scanner-queue.pl"
export QMAILQUEUE

qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.

g) Test whether simscan is properly sending / receiving emails:

echo "Testing Email" >> /tmp/mailtest.txt
env QMAILQUEUE=/var/qmail/bin/simscan SIMSCAN_DEBUG=3 /var/qmail/bin/qmail-inject hipo@my-mailserver.com < /tmp/mailtest.txt

Besides that, as I'm using qscand:qscand as the user for my overall Qmail Thibs install, I had to also do:

chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/

 

It might be a good idea to also place these lines in /etc/rc.local to auto-change permissions on Linux boot, just in case something goes wrong with the permissions.
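A minimal sketch of such an /etc/rc.local addition, using the same commands as above (on Debian rc.local runs as root at boot; keep its final exit 0 line last):

if [ -d /var/qmail/simscan ]; then
chown -R qscand:qscand /var/qmail/simscan/
chmod -R 777 /var/qmail/simscan/
fi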

Yeah, I know 777 is insecure, but without these permissions I was still getting errors; plus the server doesn't have any accounts except the administrator's, so I do not worry other system users might sniff on email 🙂

h) Test whether the Qmail mail server sends / receives fine with simscan

After that I've used the mail command from another mail server to test whether mail is received:
 

mail -s "testing email1234" hipo@new-configured-qmail-server.com
asdfadsf
.
Cc:

Then it is also necessary to install the latest clamav daemon from source, in my case on Debian GNU / Linux 7, because the Debian-shipped binary version of clamav (0.98.5+dfsg-0+deb7u2) somehow fails to scan any incoming or outgoing email with the error:
 

clamdscan: corrupt or unknown clamd scanner error or memory/resource/perms problem – exit status -1/72057594037927935

So to fix it you will have to install clamav on Debian Linux from source.


Voilà, that's all, finally it worked!


Check your Server Download / Upload Internet Speed from Console on Linux / BSD / Unix howto

Tuesday, March 17th, 2015

If you've been given a new dedicated server or VPS with Linux from a provider who guarantees a certain download speed to the server, it is useful to run a simple measurement test from the console after logging in remotely via SSH, to be sure the server's connection to the Internet is what the service provider promised.

Testing the connection from the terminal is useful because, as you probably know, most Linux / UNIX servers don't have a GUI interface, and thus it is not possible to test Internet up / down bandwidth through speedtest.net.
 

1. Testing Download Internet Speed given by ISP / Dedi-Server Provider from Linux Console

For the download (internet) speed test, the historical approach was to download the Linux kernel source code from www.kernel.org with some text browser such as lynx or links, count the seconds the download took, and then divide the kernel source archive size by those seconds to get an approximate bandwidth per second. However, as nowadays internet connection speeds are much higher, it is better to download some Linux distribution ISO file instead; you can still use the kernel tar archive, but it completes too fast to give you good (adequate) statistics on download bandwidth.

If it's a freshly installed Linux server, you will probably not have the links / elinks and lynx text internet browsers installed, so install them depending on the deb / rpm distro with:

If on a Deb-based Linux distro:

 

root@pcfreak:/root# apt-get install --yes links elinks lynx

 

On an RPM-based Linux distro:
 

 

[root@fedora ~]# yum install -y lynx elinks links

 

Conduct an Internet download speed test with links:
root@pcfreak:/root# links https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.1.tar.xz


(Note that the kernel link above is the current latest stable kernel source archive; in the future that might change, so try with the latest archive.)

You can also use a non-interactive tool such as wget, curl or lftp to measure internet download speed.

To test download internet speed with wget without saving anything to disk, set the output to go to /dev/null:

 

root@pcfreak:~# wget -O /dev/null http://pc-freak.net/~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip

 


You see the download speed is 104 Mbit/s; this is because I'm conducting the download from my local 100Mbit network.

For the test you can use my mirrored version of Hirens BootCD
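If you prefer curl over wget, here is a quick sketch that also throws the downloaded content away and prints the average download speed in bytes per second (substitute any large file URL):

root@pcfreak:~# curl -s -o /dev/null -w '%{speed_download}\n' http://pc-freak.net/~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip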

2. Testing Uplink Internet speed provided by ISP / Server Provider from Linux (SSH) Console

To test your uplink speed you will need lftp or iperf command tool.

 

root@pcfreak:~# apt-cache show lftp|grep -i descr -A 12
Description: Sophisticated command-line FTP/HTTP client programs
 Lftp is a file retrieving tool that supports FTP, HTTP, FISH, SFTP, HTTPS
 and FTPS protocols under both IPv4 and IPv6. Lftp has an amazing set of
 features, while preserving its interface as simple and easy as possible.
 .
 The main two advantages over other ftp clients are reliability and ability
 to perform tasks in background. It will reconnect and reget the file being
 transferred if the connection broke. You can start a transfer in background
 and continue browsing on the ftp site. It does this all in one process. When
 you have started background jobs and feel you are done, you can just exit
 lftp and it automatically moves to nohup mode and completes the transfers.
 It has also such nice features as reput and mirror. It can also download a
 file as soon as possible by using several connections at the same time.

 

root@pcfreak:/root# apt-cache show iperf|grep -i desc -A 2
Description: Internet Protocol bandwidth measuring tool
 Iperf is a modern alternative for measuring TCP and UDP bandwidth performance,
 allowing the tuning of various parameters and characteristics.

 

To test your upload speed to the Internet, connect to a remote FTP server and upload a file:

 

root@pcfreak:/root# lftp -u hipo pc-freak.net -e 'put Hirens.BootCD.15.0.zip; bye'

 

uploading-file-with-lftp-screenshot-test-upload-internet-speed-linux

On Debian Linux to install iperf:

 

root@pcfreak:/root# apt-get install --yes iperf

 

On the latest CentOS 7 and Fedora (and other RPM-based) Linux, you will need to add the RPMForge repository and install with yum:

 

[root@centos ~]# rpm -ivh  rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm

[root@centos ~]# yum -y install iperf

 

Once you have iperf on the server, the easiest way currently to test is to use the serverius.net speedtest server, located at the Serverius datacenters (AS50673), running on a 10GE connection with a 5GB cap.

 

root@pcfreak:/root# iperf -c speedtest.serverius.net -P 10
————————————————————
Client connecting to speedtest.serverius.net, TCP port 5001
TCP window size: 16.0 KByte (default)
————————————————————
[ 12] local 83.228.93.76 port 54258 connected with 178.21.16.76 port 5001
[  7] local 83.228.93.76 port 54252 connected with 178.21.16.76 port 5001
[  5] local 83.228.93.76 port 54253 connected with 178.21.16.76 port 5001
[  9] local 83.228.93.76 port 54251 connected with 178.21.16.76 port 5001
[  3] local 83.228.93.76 port 54249 connected with 178.21.16.76 port 5001
[  4] local 83.228.93.76 port 54250 connected with 178.21.16.76 port 5001
[ 10] local 83.228.93.76 port 54254 connected with 178.21.16.76 port 5001
[ 11] local 83.228.93.76 port 54255 connected with 178.21.16.76 port 5001
[  6] local 83.228.93.76 port 54256 connected with 178.21.16.76 port 5001
[  8] local 83.228.93.76 port 54257 connected with 178.21.16.76 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.2 sec  4.05 MBytes  3.33 Mbits/sec
[ 10]  0.0-10.2 sec  3.39 MBytes  2.78 Mbits/sec
[ 11]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec
[  4]  0.0-10.3 sec  3.43 MBytes  2.78 Mbits/sec
[ 12]  0.0-10.3 sec  3.92 MBytes  3.18 Mbits/sec
[  3]  0.0-10.4 sec  4.45 MBytes  3.58 Mbits/sec
[  5]  0.0-10.5 sec  4.06 MBytes  3.24 Mbits/sec
[  6]  0.0-10.5 sec  4.30 MBytes  3.42 Mbits/sec
[  8]  0.0-10.8 sec  3.92 MBytes  3.03 Mbits/sec
[  7]  0.0-10.9 sec  4.03 MBytes  3.11 Mbits/sec
[SUM]  0.0-10.9 sec  39.3 MBytes  30.3 Mbits/sec

 

You see my home machine currently has an uplink of 30.3 Mbit/s; that's pretty nice since I've ordered 100Mbit/s from my ISP (unguaranteed bandwidth connection speed), and as you might know it is standard practice for many Internet Providers to give an uplink speed of 1/4 of the overall provided bandwidth. 1/4 would be 25Mbit/s, meaning my ISP (Bergon.NET) is doing pretty well, providing me with even more than the promised (ordered) bandwidth.

Iperf is probably the choice of most sysadmins who have to regularly test bandwidth in local networks between 2 servers, or test Internet bandwidth speed on a heterogeneous network with Linux / BSDs / AIX / HP-UX (UNIXes). On HP-UX, AIX and other UNIXes for which iperf has no port, you have to compile it yourself.
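For testing between two of your own servers instead of serverius, a minimal sketch is to run iperf in server mode on one host and point the client at it from the other (assuming TCP port 5001 is reachable between them; host-a / host-b are placeholder hostnames):

[root@host-a ~]# iperf -s
[root@host-b ~]# iperf -c host-a -t 30 -P 4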

If you don't have root / admin permissions on the server and there is a python language interpreter installed, you can use the speedtest_cli.py script to test internet throughput connectivity.
speedtest_cli uses speedtest.net to test the server's up / down link; just in case the script is lost in the future, I've made a download mirror of speedtest_cli.py here

Quickest way to test net speed with speedtest_cli.py:

 

$ lynx -dump https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py > speedtest_cli.py
$ chmod +x speedtest_cli.py
$ python speedtest_cli.py



Create local network between virtual machines in Virtualbox VM – Add local LAN between Linux Virtual Machines

Wednesday, June 11th, 2014


I wanted to test MySQL Cluster following the MySQL Cluster Install Guide. For that purpose I've installed two instances of CentOS 6.5 inside VirtualBox, and I wanted to make the 2 Linux hosts reachable inside a local LAN network. I consulted some colleagues who advised me to configure the two Linux hosts to use VirtualBox's Bridged Adapter networking (network configuration in VirtualBox is done on a per Virtual Machine basis from):
 

Devices -> Network Settings

(Attached to: Bridged Adapter)

(Note!: by default the Cable Connected tick is not selected, so when imposing changes on the network the tick should be set.)

After specifying Attached to: Bridged Adapter, to make CentOS Linux refresh its network settings, run in gnome-terminal:

[root@centos ~]# dhclient eth0

However, CentOS failed to grab itself a DHCP IP address.
Thus I tried to assign IP addresses manually with ifconfig, hoping that at least this would work, e.g.:

on CentOS VM 1:

/sbin/ifconfig eth0 192.168.10.1 netmask 255.255.255.0

on CentOS VM 2:

/sbin/ifconfig eth1 192.168.10.2 netmask 255.255.255.0

To test whether there is a connection between the 2 VM hosts, I tried ping-ing 192.168.10.2 (from 192.168.10.1) and tested with telnet whether I can remotely access SSH (port 22), from CentOS VM 1 to CentOS VM 2 and vice versa, i.e.:

[root@centos ~]# telnet 192.168.10.2 22

 

Trying 192.168.10.2…
telnet: connect to address 192.168.10.2: No route to host

Then, after checking other options, and already knowing that with VBox's NAT network option I had access to the internet, I tried to attach standard local IP addresses to both Linuxes as virtual interfaces (e.g. eth0:1), e.g.:

On Linux VM 1:

/sbin/ifconfig eth0:0 192.168.10.1 netmask 255.255.255.0

On Linux VM 2:

/sbin/ifconfig eth1:0 192.168.10.2 netmask 255.255.255.0

Then I tested again with telnet:

[root@centos ~]# telnet 192.168.10.2 22

Then I found VirtualBox has special Internal Networking support to choose from the Attached to drop-down menu. According to the Internal Networking VirtualBox instructions, to put two Virtual Machine hosts inside an internal network, they should both be set in an Internal Network with an identical name.
P.S. It is explicitly stated that using Internal Network will enable access between Guest Virtual Machine OSes, but the hosts will not have access to the Internet (which in my case didn't really matter, as I needed the two Linux VMs just as a testbed).
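For reference, the same Internal Network attachment can also be set from the host's command line with VBoxManage; a sketch, assuming the two VMs are named centos1 and centos2 and the internal network is called intnet0:

VBoxManage modifyvm "centos1" --nic1 intnet --intnet1 "intnet0" --cableconnected1 on
VBoxManage modifyvm "centos2" --nic1 intnet --intnet1 "intnet0" --cableconnected1 on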


I tried this option, but it didn't work for me for some reason. After some time of online research on how to create a local LAN network between 2 Virtual Machines, I luckily decided to test all available VirtualBox networking choices and noticed Host-only Adapter.

Selecting Host-only Adapter and using terminal to re-fetch IP address over dhcp:


On CentOS VM1

dhclient eth0

On CentOS VM2

dhclient eth1

assigned me two adjoining IPs – (192.168.56.101 and 192.168.56.102).

Connection between the 2 IPs 192.168.56.101 and 192.168.56.102 over the TCP, UDP and ICMP protocols works; now all that's left is to install MySQL Cluster on both nodes.
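The working Host-only setup can likewise be scripted with VBoxManage on the host; a sketch, again assuming VM names centos1 / centos2 and the default vboxnet0 host-only interface:

VBoxManage hostonlyif create
VBoxManage modifyvm "centos1" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "centos2" --nic1 hostonly --hostonlyadapter1 vboxnet0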

 



Best Windows tools to Test (Benchmark) Hard Drives, SSD Drives and RAID Storage Controllers

Wednesday, April 23rd, 2014

Disk benchmarking is very useful for people involved in graphic design, 3D modelling, system administration, and for anyone willing to squeeze the maximum out of their PC hardware.

If you want to do some benchmarking on a newly built Windows server targeting hard disk performance, just bought a new SSD (Solid State Drive) and want to test how well hard drive I/O operations behave, or you want to do regular HDD benchmarking of a group of MS Windows PCs and plan hardware optimization, check out ATTO Disk Benchmark.

So why exactly ATTO Benchmark? Because it is one of the best free Windows benchmark tools on the internet.

ATTO is a widely-accepted disk benchmark freeware utility that helps measure storage system performance. Though freeware, ATTO is among the top tools utilized in the industry. It is very useful for comparing the speed of different hard disk vendors and for measuring Windows storage system performance with various transfer sizes and test lengths for reads and writes.

ATTO Disk Benchmark is used by manufacturers of hardware RAID controllers; it is a precious tool to test Windows storage controllers and host bus adapters (HBAs).

Here are the ATTO Benchmark tool specifications (quoted from their website):
 

  • Transfer sizes from 512KB to 8MB
  • Transfer lengths from 64KB to 2GB
  • Support for overlapped I/O
  • Supports a variety of queue depths
  • I/O comparisons with various test patterns
  • Timed mode allows continuous testing
  • Non-destructive performance measurement on formatted drives

Here is a mirrored latest version of ATTO Disk for download. Once you get your HDD statistics you will probably want to compare them to other people's results. On TomsHardware's world-famous hardware geek site there are plenty of hard drive performance charts.

Of course there are other GUI alternatives to ATTO Benchmark; one historically famous is NBench.

NBench

nbench_benchmark_windows_hard-drive-cpu-and-memory

Nbench is a nice little benchmarking program for Windows NT. Nbench reports the following components of performance:

CPU speed: integer and floating operations/sec
L1 and L2 cache speeds: MB/sec
main memory speed: MB/sec
disk read and write speeds: MB/sec

SMP systems and multi-tasking OS efficiency can be tested using up to 20 separate threads of execution.

For Console Geeks or Windows server admins there are also some ports of famous *NIX Hard Disk Benchmarking tools:

NTiogen

The NTiogen benchmark was written by Symbios Logic. It's a Windows NT port of their popular UNIX benchmark IOGEN. NTIOGEN is the parent process that spawns the specified number of IOGEN processes that actually do the I/O.
The program displays as output the number of processes, the average response time, the number of I/O operations per second, and the number of KBytes per second. You can download a mirror copy of Ntiogen here


There are plenty of other GUI and Console HDD Benchmarking Win Tools, i.e.:

IOMeter (ex-developed by Intel, now abandoned; available as open source on SourceForge)

 

Bench32 – comprehensive benchmark that measures overall system performance under Windows NT or Windows 95; now obsolete, no longer developed, abandoned by the producing company.

ThreadMark32 – also capable of disk benchmarking (ex-developed and supported by ADAPTEC), but already unsupported as well.

IOZone – filesystem benchmark tool. The benchmark generates and measures a variety of file operations. Iozone has been ported to many machines and runs under many operating systems.
 

N.B.! An important note to make here is that the above-suggested tools will provide you more realistic results than the proprietary tools shipped by your hardware vendor. Using proprietary software produced by a single vendor makes it impossible to analyze and compare different hardware; the above HDD benchmarking tools are for "open systems", i.e. no matter who the hardware producer is, the produced results can be checked against each other.
Another thing to consider is that even if you use any of the above tools to test and compare two storage devices, the results will still be partially imaginary; it is always best to conduct tests in real working application environments. If you're planning to launch a new services structure, always test it first and don't rely on preliminary soft benchmarks.

If you know some other useful benchmarking software I'm missing, please share.



Top Paying Google Adsense Keywords for 2013 – Increase blog earnings with high paying keywords

Wednesday, July 31st, 2013


Whether you're a blogger trying to earn some extra cash for your daily living via blogging, you already know how hard it is. It is not just hard, it is almost impossible to earn enough from a blog or personal website to make it your primary source of income. With the crisis, the CPC (Cost Per Click) rate of advertisements and the number of people willing to pay big money per click dropped drastically … This means our earnings as bloggers decreased badly as well … However, there is still a bit of hope to boost Google Adsense paying revenues a little, by trying to write articles including certain high-paying keywords. Not surprisingly, these high-pay keywords vary seriously over the years. You might be shocked to know that there are some keywords for which Google pays over 100$ per CLICK!!! OMG, 100$ per click sounds unrealistic, but according to some rumors online it's a fact. I've found on the internet a list of 70 keywords said to make your blog money per click from 179 dollars (highest clickable advertiser pay fee) down to 51 bucks at minimum. Actually, the reason to write this post was to test whether these rumors of such high CPC are true. I will update the post later to tell you whether the top words really work or it's just a big fraud. Here is the list of the 70 top-earning Google Adsense revenue keywords for 2013:

Keyword - CPC

MESOTHELIOMA LAW FIRM - $179.01
DONATE CAR TO CHARITY CALIFORNIA - $130.25
DONATE CAR FOR TAX CREDIT - $126.65
DONATE CARS IN MA - $125.58
DONATE YOUR CAR SACRAMENTO - $118.20
HOW TO DONATE A CAR IN CALIFORNIA - $111.21
SELL ANNUITY PAYMENT - $107.46
DONATE YOUR CAR FOR KIDS - $106.01
ASBESTOS LAWYERS - $105.84
STRUCTURED ANNUITY SETTLEMENT - $100.8
ANNUITY SETTLEMENTS - $100.72
CAR INSURANCE QUOTES COLORADO - $100.93
NUNAVUT CULTURE - $99.52
DAYTON FREIGHT LINES - $99.39
HARDDRIVE DATA RECOVERY SERVICES - $98.59
DONATE A CAR IN MARYLAND - $98.51
MOTOR REPLACEMENTS - $98.43
CHEAP DOMAIN REGISTRATION HOSTING - $98.39
DONATING A CAR IN MARYLAND - $98.20
DONATE CARS ILLINOIS - $98.13
CRIMINAL DEFENSE ATTORNEYS FLORIDA - $98.07
BEST CRIMINAL LAWYER IN ARIZONA - $97.93
CAR INSURANCE QUOTES UTAH - $97.92
LIFE INSURANCE CO LINCOLN - $97.07
HOLLAND MICHIGAN COLLEGE - $95.74
ONLINE MOTOR INSURANCE QUOTES - $95.73
ONLINE COLLEDGES - $95.65
PAPERPORT PROMOTIONAL CODE - $95.13
ONLINECLASSES - $95.06
WORLD TRADE CENTER FOOTAGE - $95.02
MASSAGE SCHOOL DALLAS TEXAS - $94.90
PSYCHIC FOR FREE - $94.61
DONATE OLD CARS TO CHARITY - $94.55
LOW CREDIT LINE CREDIT CARDS - $94.49
DALLAS MESOTHELIOMA ATTORNEYS - $94.33
CAR INSURANCE QUOTES MN - $94.29
DONATE YOUR CAR FOR MONEY - $94.01
CHEAP AUTO INSURANCE IN VA - $93.84
MET AUTO - $93.70
FORENSICS ONLINE COURSE - $93.51
HOME PHONE INTERNET BUNDLE - $93.32
DONATING USED CARS TO CHARITY - $93.17
PHD IN COUNSELING EDUCATION - $92.99
NEUSON - $92.89
CAR INSURANCE QUOTES PA - $92.88
ROYALTY FREE IMAGES STOCK - $92.76
CAR INSURANCE IN SOUTH DAKOTA - $92.72
EMAIL BULK SERVICE - $92.55
WEBEX COSTS - $92.38
CHEAP CAR INSURANCE FOR LADIES - $92.23
CHEAP CAR INSURANCE IN VIRGINIA - $92.03
REGISTER FREE DOMAINS - $92.03
BETTER CONFERENCING CALLS - $91.44
FUTURISTIC ARCHITECTURE - $91.44
MORTGAGE ADVISER - $91.29
CAR DONATE - $88.26
VIRTUAL DATA ROOMS - $83.18
AUTOMOBILE ACCIDENT ATTORNEY - $76.57
AUTO ACCIDENT ATTORNEY - $75.64
CAR ACCIDENT LAWYERS - $75.17
DATA RECOVERY RAID - $73.22
MOTOR INSURANCE QUOTES - $68.61
PERSONAL INJURY LAWYER - $66.53
CAR INSURANCE QUOTES - $61.03
ASBESTOS LUNG CANCER - $60.96
INJURY LAWYERS - $60.79
PERSONAL INJURY LAW FIRM - $60.56
ONLINE CRIMINAL JUSTICE DEGREE - $60.4
CAR INSURANCE COMPANIES - $58.66
BUSINESS VOIP SOLUTIONS - $51.9


Fixing QMAIL mail server SMTP auto-configure issues in Thunderbird and other mail IMAP / POP3 mobile clients

Friday, July 13th, 2012

One of the QMAIL mail servers, set up on a Debian host, has been creating some auto-configuration issues. Every time a new mail user tries to use the embedded Thunderbird client auto-configuration, the auto config fails, leaving the client unable to use his mailbox through the POP3 or IMAP protocols.

For about 2 years now, Thunderbird and many other modern POP3 and IMAP mail desktop and mobile clients have been using auto-configuration by default, hence it was unthinkable to manually change settings for new clients with the QMAIL install; besides that, most of the office users are always confused about whether they have to manually change the SMTP or POP3 host for a server.

Below is a screenshot displaying the warning during email auto-configuration:

Thunderbird new Mail account setup auto config warning SMTP not OK

The orange color in the button for the newly auto-detected smtp.mail-domain.com indicates something is not right with the SMTP host.

Obviously, something was wrong with smtp.mail-domain.com, hence I checked where smtp.mail-domain.com resolves with the host command. What I found was that the smtp.mail-domain.com Address ( A ) DNS record was pointing to an IP address our company previously used for the mail server. At present the correct mail server host name is mx.mail-domain.com, and the QMAIL installation on mx.mail-domain.com is configured to be the actual SMTP server.

By default, Thunderbird and many other POP3 / IMAP mail clients automatically assume the default SMTP host for a mail server is to be found under the host name smtp.mail-domain.com. This is really strange, especially when the primary MX record for the mail-domain.com domain is pointing to mx.mail-domain.com, e.g.:

qmail:~# host -t MX mail-domain.com
mail-domain.com mail is handled by 10 mx.mail-domain.com.
mail-domain.com mail is handled by 20 mail.mail-domain.com.
mail-domain.com mail is handled by 30 mail-domain.com.

The whole warning was caused by the fact that mx.mail-domain.com was resolving to an IP like xxx.xxx.xxx.xxx, whereas smtp.mail-domain.com was resolving to yyy.yyy.yyy.yyy.

Both the xxx.xxx.xxx.xxx and yyy.yyy.yyy.yyy hosts were configured with a different qmail SMTP host name, i.e.:

The server under IP xxx.xxx.xxx.xxx (mx.mail-domain.com) was configured in /var/qmail/control/me to be mx.mail-domain.com, and the other, old one, yyy.yyy.yyy.yyy (mail.mail-domain.com), had mail.mail-domain.com in /var/qmail/control/me.

As smtp.mail-domain.com was actually still resolving to mail.mail-domain.com, the emails were improperly being sent with a configured DNS hostname of smtp.mail-domain.com, where the actual one on the server was mail.mail-domain.com.

It took me about an hour of pondering what was causing the oddities until I figured out the issue explained here. As the DNS records for the sample domain mail-domain.com were handled by Godaddy, to fix the mess I logged in to Godaddy and:

a) Deleted the A DNS record for smtp.mail-domain.com.
b) Created a new CNAME record for smtp.mail-domain.com to be a domain alias for mx.mail-domain.com.
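To verify the change propagated, the two hostnames should now resolve to one and the same mail host; e.g. (with mail-domain.com still being the sample domain):

qmail:~# host -t CNAME smtp.mail-domain.com
qmail:~# host mx.mail-domain.com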

A few minutes afterwards I tried configuring the same email account in Thunderbird once again, and this time both imap.mail-domain.com and smtp.mail-domain.com turned green, indicating everything is configured fine.

To be 100% sure all was working fine, I first fetched all email via the IMAP protocol without hassles, and onwards sent a test email to my Gmail account; thankfully the sent email was delivered to Gmail, indicating both the Get Mail and Send Mail functions now worked fine.

Thunderbird icedove new mail account setup auto config Okay
 



Text monitoring of server connection traffic (RX / TX) load in ASCII graphs with speedometer / Easily monitor network traffic performance

Friday, May 4th, 2012

While reading some posts online related to the MS-Windows TcpView network traffic analyzing tool, I came across a very nice tool for tracking connection speed on Linux (Speedometer). If I have to compare it, speedometer is somehow similar to the nethogs and iftop bandwidth network measuring utilities.

What differentiates speedometer from iftop / nethogs / iptraf is that it is more suitable for visualizing network file or data transfers.
The graphs speedometer draws are way easier to understand than iftop's graphs.

Even complete newbies can understand it with no need for extraordinary knowledge in networking. This makes Speedometer a top tool to visually see the amount of traffic flowing through a server network interface (eth0) … (eth1) etc.

What speedometer shows is similar to the Midnight Commander's (mc) file transfer status bar, except the statistics are not only for a certain file transfer but can show overall statistics over the server's passing network traffic amount (though according to its manual it can also be used to track individual file transfers).

The simplicity of basic use makes speedometer a nice tool to track network congestion issues on Linux; therefore it is a must-have outfit for every server admin. Below you see a screenshot of my terminal running speedometer on a remote server.


1. Installing speedometer on Debian / Ubuntu and Debian derivatives

For Debian and Ubuntu server administrators speedometer is already packaged as a deb so its installation is as simple as:

debian:~# apt-get --yes install speedometer
....

2. Installing speedometer from source for other Linux distributions CentOS, Fedora, SuSE etc.

Speedometer is written in the python programming language, so in order to install and use it on other OS Linux platforms, it is necessary to have installed (preferably) an up-to-date python programming language interpreter (python ver. 2.6 or higher).
Besides that, it is necessary to have installed urwid (a console user interface library for Python), available for download via excess.org/urwid/

 

Hence to install speedometer on RedHat based Linux distributions one has to follow these steps:

a) Download & Install python urwid library

[root@centos ~]# cd /usr/local/src
[root@centos src]# wget -q http://excess.org/urwid/urwid-1.0.1.tar.gz
[root@centos src]# tar -zxvvf urwid-1.0.1.tar.gz
....
[root@centos src]# cd urwid-1.0.1
[root@centos urwid-1.0.1]# python setup.py install
running install
running build
running build_py
creating build
creating build/lib.linux-i686-2.4
creating build/lib.linux-i686-2.4/urwid
copying urwid/tests.py -> build/lib.linux-i686-2.4/urwid
copying urwid/command_map.py -> build/lib.linux-i686-2.4/urwid
copying urwid/graphics.py -> build/lib.linux-i686-2.4/urwid
copying urwid/vterm_test.py -> build/lib.linux-i686-2.4/urwid
copying urwid/curses_display.py -> build/lib.linux-i686-2.4/urwid
copying urwid/display_common.py -> build/lib.linux-i686-2.4/urwid
....

b) Download and install python-setuptools

python-setuptools is another requirement of speedometer; happily, on CentOS and Fedora the rpm package is already there and installable with yum:

[root@centos ~]# yum -y install python-setuptools
....

c) Download and install Speedometer

[root@centos urwid-1.0.1]# cd /usr/local/src/
[root@centos src]# wget -q http://excess.org/speedometer/speedometer-2.8.tar.gz
[root@centos src]# tar -zxvvf speedometer-2.8.tar.gz
.....
[root@centos src]# cd speedometer-2.8
[root@centos speedometer-2.8]# python setup.py install
Traceback (most recent call last):
File "setup.py", line 26, in ?
import speedometer
File "/usr/local/src/speedometer-2.8/speedometer.py", line 112
n = n * granularity + (granularity if r else 0)
^

While running the CentOS 5.6 installation of speedometer-2.8, I hit the "n = n * granularity + (granularity if r else 0)" error.

After consultation with some people in #python (irc.freenode.net), I figured out this error is caused by the outdated version of the python interpreter installed by default on CentOS Linux 5.6. On CentOS 5.6 the python version is:

[root@centos ~]# python -V
Python 2.4.3

As I said earlier, speedometer 2.8's minimum requirement is python v. 2.6. Happily there is a quick way to update python 2.4 to python 2.6 on CentOS 5.6, as there is an RPM repository maintained by Chris Lea which contains an RPM binary of python 2.6.

To update python 2.4 to python 2.6:

[root@centos speedometer-2.8]# rpm -Uvh http://yum.chrislea.com/centos/5/i386/chl-release-5-3.noarch.rpm
[root@centos speedometer-2.8]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CHL
[root@centos speedometer-2.8]# yum install python26

Now the newly installed python 2.6 is executable under the binary name python26, hence to install speedometer:

[root@centos speedometer-2.8]# python26 setup.py install
[root@centos speedometer-2.8]# chown root:root /usr/local/bin/speedometer
[root@centos speedometer-2.8]# chmod +x /usr/local/bin/speedometer

[root@centos speedometer-2.8]# python26 speedometer -i 1 -tx eth0

The -i will instruct speedometer to refresh the screen graphs once a second.

3. Using speedometer to keep an eye on sent / received traffic network congestion

To observe the amount of only sent traffic via a network interface eth0 with speedometer, use:

debian:~# speedometer -tx eth0

To only keep an eye on received traffic through eth0 use:

debian:~# speedometer -rx eth0

To watch over both TX and RX (Transmitted and Received) network traffic:

debian:~# speedometer -tx eth0 -rx eth0

If you want to watch TX and RX traffic in separate windows while running speedometer, you can run speedometer -tx eth0 and speedometer -rx eth0 in separate xterm windows, like in the below screenshot:


4. Using speedometer to test network maximum possible transfer speed between server (host A) and server (host B)

The speedometer manual suggests a few examples, one of which is:

How fast is this LAN?

host-a$ cat /dev/zero | nc -l -p 12345
host-b$ nc host-a 12345 > /dev/null
host-b$ speedometer -rx eth0

When I read this example in speedometer's manual, it wasn't completely clear to me what the author really meant, but a bit later, when I thought the example over, I got his point.

The idea behind this example is that a constant stream of zeros taken from /dev/zero will be streamed over a pipe (|) to nc, which will bind port number 12345; anyone connecting from another host machine, let's say a server with hostname host-b, to port 12345 on machine host-a will start receiving the /dev/zero streamed content.

Then, to finally measure the streamed traffic between the host-a and host-b machines, speedometer is started to visualize the received traffic on network interface eth0, thus measuring the amount of traffic flowing from host-a to host-b.

I gave the example a try, using as my 2 test nodes my home desktop PC running an arcane version of Ubuntu, and my Debian Linux notebook.

First on the Ubuntu PC I issued
 

hipo@hip0-desktop:~$ cat /dev/zero | nc -l -p 12345
 

Note that I had previously installed netcat, as nc is not installed by default on Ubuntu and Debian. If you don't have nc installed yet, install it with:

apt-get --yes install netcat

"cat /dev/zero | nc -l -p 12345" will not produce any output, but will display just a blank line.

Then on my notebook I ran the second command example, given in the speedometer manual:
 

hipo@noah:~$ nc 192.168.0.2 12345 > /dev/null

Here 192.168.0.2 is actually the local network IP address of my desktop PC. My desktop PC is connected via a normal 100Mbit switch to my routing machine and receives its internet via NAT. The second test machine (my laptop) gets its internet through a WI-FI connection received by a wireless router connected via a UTP cable to the same switch to which my desktop PC is connected.

Finally, to test / get my network's maximum throughput I had to use:

hipo@noah:~$ speedometer -rx wlan0

Here I monitor my wlan0 interface, as this is my (laptop) wireless card interface over which I have connectivity to my local network, and via which, through the WI-FI router, I get connected to the internet.

Below is a captured snapshot showing approximately what the max network throughput is from:

Desktop PC -> to my Thinkpad R61 laptop


As you can see in the shot, the maximum network throughput is approximately between 2.55MB/s min and 2.59MB/s max; the speed is quite low for a 100 MBit local network, but this is normal, as most laptop wireless adapters hardly transfer traffic at more than 10 to 20 MBits per sec.

If the same network throughput test is conducted between two machines both connected to the same 100 Mbit switch, the traffic should be at least 8 MB/sec.

There is something else to take into consideration that probably makes the provided example's network throughput measurement a bit inaccurate: the /dev/zero content is streamed through a pipe ( | ), and the pipe's use slows down the stream.
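To take the pipe out of the equation, nc can read /dev/zero directly via input redirection; a sketch (note some netcat variants differ in the -l / -p syntax):

host-a$ nc -l -p 12345 < /dev/zero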

5. Using speedometer to visualize maximum writing speed to a local hard drive on Linux

In the speedometer manual, I've noticed another interesting application of this nifty tool.

speedometer can be used to track and visualize the maximum writing speed a hard disk drive or hard drive partition can support on Linux OS:

A copy paster from the manual text is as follows:

How fast can I write data to my filesystem? (with at least 1GB free)
dd bs=1000000 count=1000 if=/dev/zero of=bigfile &
speedometer bigfile

However, when I tried copy/pasting the example in a terminal to test the maximum writing speed to an external USB hard drive, only the dd command was started, and speedometer failed to initialize and display graphs of the file creation speed.

I've found a little "hack" that makes the man example work by adding a 3 secs sleep like so:

debian:/media/Expansion Drive# dd bs=1000000 count=1000 if=/dev/zero of=bigfile & sleep 3; speedometer bigfile

Here is a screenshot of the bigfile created by dd and tracked "in real time" by speedometer:

How fast is writting data to local USB expandable hard disk Debian Linux speedometer screenshot

Actually the returned results from this external USB drive are quite high; the possible reason for that is that it is connected to my laptop over USB protocol version 3.

6. Using Speedometer to keep an eye on file download in progress

This application of speedometer is mostly useless on Linux used as a desktop.

However, on some occasions, if files are transferred over ssh or in non-interactive FTP / Samba file transfers between Linux servers, it can come in handy.

To visualize the download and writing speed of, let's say, an FTP-transferred .AVI movie (during the actual file transfer), issue on the download host:

# speedometer Download-Folder/What-goes-around-comes-around.avi

7. Estimating approximate time for file transfer

There is another section in the speedometer manual pointing out the program's use to calculate the time remaining for a file transfer.

The (man speedometer) provided example text is:

How long it will take for my 38MB transfer to finish?
speedometer favorite_episode.rm $((38*1024*1024))

At first glimpse it is hard to understand (like the other manual example). A bit of reasoning and I comprehended what the manual's author meant by the obscure calculation:

$((38*1024*1024))

This is a formula in which 38 has to be substituted with the exact file size of the transferred file. The manual's author used a 38MB file, so this is why he put $((38* … in the formula.

I gave it a try (just for the sake of seeing how it works) with a file with a size of 2500MB; in the below two screenshots I show my preparation to copy the file and the actual copying / "real time" transfer tracking with speedometer's completion percentage status bar.
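Following the manual's formula, for my 2500MB file the invocation would look like the below (the filename is just an example):

hipo@noah:~$ speedometer big-file.avi $((2500*1024*1024))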



 



How to delete millions of files on busy Linux servers (Working around "Argument list too long")

Tuesday, March 20th, 2012

How to delete a million or many thousands of files in the same directory on GNU / Linux and FreeBSD

If you try to delete more than 131072 files on Linux with rm -f *, where the files are all stored in the same directory, you will get the error:

/bin/rm: Argument list too long.

I've earlier blogged on deleting multiple files on Linux and FreeBSD, and this is not my first time facing this error.
Anyway, as time passed I've found a few other new ways to delete large multitudes of files from a server.
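The "Argument list too long" error, by the way, comes from the kernel's limit on the total size of arguments passed to a program via execve(); you can check that limit (in bytes) on your system with getconf:

# getconf ARG_MAX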

In this article, I will shortly explain a few approaches to delete a few million obsolete files to clean some space on your server.
Here are 4 methods to use to clean your tons of junk files.

1. Using Linux find command to wipe out millions of files

a.) Finding and deleting files using find's -exec switch:

# find . -type f -exec rm -fv {} \;

This method works fine, but it has one downside: file deletion is too slow, as for each found file an external rm command is invoked.

For half a million files or more, using this method will take "long". However, from the point of view of stressing the server hard disk it is not so bad, as the file deletion does not put too much strain on the server hard disk.
b.) Finding and deleting big number of files with find's -delete argument:

Luckily, there is a better way to delete the files, by using find's command embedded -delete argument:

# find . -type f -delete

c.) Deleting and printing out deleted files with find's -print arg

If you would like to output on your terminal, what files find is deleting in "real time" add -print:

# find . -type f -print -delete

To prevent your server hard disk from being stressed, and hence save yourself from server normal operation "outages", it is good to combine the find command with ionice, e.g.:

# ionice -c 3 find . -type f -print -delete

Just note that ionice cannot guarantee find's operations will not affect hard disk i/o requests severely. On heavily busy servers with high amounts of disk i/o writes, applying ionice will still not prevent the server from getting hanged! Be sure to always keep an eye on the server while deleting the files, no matter whether with or without ionice. If throughout find's execution the server gets lagged in serving its ordinary client requests or whatever, stop the execution of the cmd immediately by killing it from another ssh session or tty (if physically on the server).

2. Using a simple bash loop with rm command to delete "tons" of files

An alternative way is to use a bash loop, to print each of the files in the directory and issue /bin/rm on each of the loop elements (files) like so:

for i in *; do
rm -f "$i";
done

If you'd like to print what you will be deleting, add an echo to the loop:

# for i in *; do \
echo "Deleting : $i"; rm -f "$i"; \
done

The bash loop worked like a charm in my case, so I really warmly recommend this method whenever you need to delete more than 500 000+ files in a directory.

3. Deleting multiple files with perl

Deleting multiple files with perl is not a bad idea at all.
Here is a perl one liner, to delete all files contained within a directory:

# perl -e 'for(<*>){((stat)[9]<(unlink))}'

If you prefer to use a more human-readable perl script to delete a multitude of files, use delete_multple_files_in_dir_perl.pl

Using the perl interpreter to delete thousands of files is quick, really, really quick.
I did not benchmark on the server exactly how quick it is, but I guess the delete rate should be similar to the find command. It's possible that in some cases the perl loop is even quicker …

4. Using a PHP script to delete multiple files

Using a short php script to delete files one by one in a loop, similar to the above bash script, is another option.
To do the deletion with PHP, use this little PHP script:

<?php
// directory containing the files to wipe out
$dir = "/path/to/dir/with/files";
$dh = opendir( $dir);
$i = 0;
// loop over each directory entry and unlink regular files one by one
while (($file = readdir($dh)) !== false) {
    $file = "$dir/$file";
    if (is_file( $file)) {
        unlink( $file);
        // print progress every 1000 removed files
        if (!(++$i % 1000)) {
            echo "$i files removed\n";
        }
    }
}
?>

As you see, the script reads the $dir defined directory and loops through it, opening file by file and doing a delete for each of its loop elements.
You should already know PHP is slow, so this method is only useful if you have to delete many thousands of files on a shared hosting server with no (ssh) shell access.

This php script is taken from Steve Kamerman's blog. I would like also to express my big gratitude to Steve for writing such a wonderful post. His post actually became the inspiration for this article to become reality.

You can also download the php delete million of files script sample here

To use it, rename delete_millioon_of_files_in_a_dir.php.txt to delete_millioon_of_files_in_a_dir.php and run it through a browser.

Note that you might need to run it multiple times, because many shared hosting servers are configured to kill a php script which keeps running for too long.
Alternatively, the script can be run through the shell with the PHP cli:

php delete_millioon_of_files_in_a_dir.php

5. So what is the "best" way to delete millions of files on Linux?

In order to find out which method is quicker in terms of execution time, I did a home-brewed benchmark on my ThinkPad notebook.

a) Creating 509072 sample files.

Again, I used a bash loop to create many thousands of files in order to benchmark.
I didn't want to put this load on a production server, hence I used my own notebook to conduct the benchmarks. As my notebook is not a server, the benchmarks might be partially incorrect, however I believe they're still a pretty good indicator of which deletion method would be better.

hipo@noah:~$ mkdir /tmp/test
hipo@noah:~$ cd /tmp/test;
hipo@noah:/tmp/test$ for i in $(seq 1 509072); do echo aaaa >> $i.txt; done

I had to wait a few minutes until I had at hand 509072 files created. Each of the files, as you can read, contains the sample "aaaa" string.

b) Calculating the number of files in the directory

Once the command completed, to make sure all 509072 files were existing, I used a find + wc cmd to calculate the number of files contained in the directory:

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f |wc -l
509072

real 0m1.886s
user 0m0.440s
sys 0m1.332s

It's interesting that using an ls command to count the files is less efficient than using find:

hipo@noah:/tmp/test$ time ls -1 |wc -l
509072

real 0m3.355s
user 0m2.696s
sys 0m0.528s

c) benchmarking the different file deleting methods with time

– Testing delete speed of find

hipo@noah:/tmp/test$ time find . -maxdepth 1 -type f -delete
real 15m40.853s
user 0m0.908s
sys 0m22.357s

You see, using find to delete the files is neither too slow nor lightning quick.

– How fast is perl loop in multitude file deletion ?

hipo@noah:/tmp/test$ time perl -e 'for(<*>){((stat)[9]<(unlink))}'

real 6m24.669s
user 0m2.980s
sys 0m22.673s

Deleting my sample 509072 files took 6 mins and 24 secs. This is about 3 times faster than find! GO-GO perl 🙂
As you can see from the results, perl is a great and time-saving way to delete 500 000 files.

– The approximate deletion rate of the for + rm bash loop

hipo@noah:/tmp/test$ time for i in *; do rm -f $i; done

real 206m15.081s
user 2m38.954s
sys 195m38.182s

You see the execution took 206m and 15 secs real time = nearly 3 AND A HALF HOURS!!!! This is extremely slow! But it works like a charm, as running the deletion didn't impact my normal laptop browsing. While the script was running I was mostly browsing through a few not so heavy (non-flash) websites and doing some other stuff in gnome-terminal 🙂

As you can imagine, running a bash loop is a bit CPU intensive, but it puts less stress on the hard disk read/write operations. Therefore it's clear that using it is always a good practice when deletion of many files on a dedi server is required.

d) My production server file deleting experience

On a production server I only tested two of all the listed methods to delete my files. The production server where I tested is running Debian GNU / Linux Squeeze 6.0.3, and there I had the task of deleting a few million files.
The tested methods tried on the server were:

– The find . -type f -delete method.

– for i in *; do rm -f $i; done

The results from using the find -delete method were quite sad, as the server almost hanged under the heavy hard disk load the command produced.

With the for script all went smoothly. The files were being deleted for a long, long time (like a few hours), but while it was running the server continued with no interruptions.

While the bash loop was running, the server load average kept at a steady 4.
Taking my experience in mind, if you're running a production server and you're still wondering which delete method to use to wipe some multitude of files, I would recommend you go the bash for loop + /bin/rm way. Yes, it is extremely slow; expect it to run for some hours, but it puts not too much extra load on the server.

Using the PHP script will probably be slow and inefficient compared to both find and a bash loop. I didn't give it a try yet, but I suppose it will be either equal in time or at least a few times slower than bash.

If you have tried the php script and you have some observations, please drop some comment to tell me how it performs.

To sum it up:

Even though there are "hacks" to clean up some messy parsing directory full of a few million junk files, such a directory should never have existed in the first place.

Frankly, keeping millions of files within the same directory is a very stupid idea.
Doing so will have a severe negative impact on your filesystem's directory listing performance in the long term.

If you know better (more efficient) ways to delete a multitude of files in a dir, please share in the comments.



How to exclude files on copy (cp) on GNU / Linux / Linux copy and exclude files and directories (cp -r) exclusion

Saturday, March 3rd, 2012

I've recently had to make a copy of one /usr/local/nginx directory to /usr/local/nginx-bak, in order to have a working copy of nginx, just in case my nginx update to a new version from source messes up.

I did not check the size of /usr/local/nginx, so I just ran the usual:

nginx:~# cp -rpf /usr/local/nginx /usr/local/nginx-bak
...

The execution took more than 20 seconds, so I checked the size and figured out /usr/local/nginx/logs had grown to 120 gigabytes.

I didn't want to extra load the production server by copying thousands of gigabytes, so I asked myself if this is possible with the normal Linux copy (cp) command. I checked the cp manual, e.g. man cp, but there is no argument like --exclude or anything similar.

Even though the cp command exclude feature is not implemented by default, there are a couple of ways to copy a directory while excluding subdirectories or files on GNU / Linux.

Here are the 3 major ones:

1. Copy directory recursively and exclude sub-directories or files with GNU tar

Maybe the quickest way to copy and exclude directories is through a little 'hack' with GNU tar:

nginx:~# mkdir /usr/local/nginx-new
nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# tar cvf - . --exclude='./logs' \
| (cd /usr/local/nginx-new; tar -xvf - )

Copying that way, however, is slow; in my case it fits me perfectly, but for copying large chunks of data it is better not to use a pipe and instead use a regular tar operation + mv:

# cd /source_directory
# tar cvf test.tar --exclude='dir_to_exclude/*' --exclude='dir_to_exclude1/*' .
# mv test.tar /destination_directory
# cd /destination_directory
# tar xvf test.tar

2. Copy folder recursively excluding some directories with rsync

People who have experience with rsync already know how invaluable this tool is. Rsync can be used as a complete substitute for cp:

# rsync -av --exclude='path1/to/exclude' --exclude='path2/to/exclude' source destination

This can also be used as a solution to my copy-nginx-and-exclude-the-logs-directory case, like so:

nginx:~# rsync -av --exclude='logs/' /usr/local/nginx/ /usr/local/nginx-new

As you can see for yourself, this is way more readable than the tar approach; however it will not work on servers where rsync is not installed, and it is unusable if you have to do operations as a regular user; for that case the GNU tar hack is surely more 'portable' across systems.
rsync also has a Windows version, therefore the same methodology should work on MS Windows, and it's good for batch scripting.
I've not tested it myself yet, as I've never used rsync on Windows; if someone has tried it and it works, please drop me a short msg in the comments.
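Before the real rsync copy, it can be handy to preview what would be transferred without writing anything, using rsync's dry-run flag; a sketch for the same nginx case:

nginx:~# rsync -avn --exclude='logs/' /usr/local/nginx/ /usr/local/nginx-new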
3. Copy directory and exclude sub directories and files with find

find in collaboration with cp can also be used to exclude certain directories while copying. Actually this method is better than the GNU tar hack and surely more efficient. For machines where rsync is not installed, it is just a perfect way to copy files from location to location while excluding some directories. Here is an example use of find and cp for the above nginx case:

nginx:~# cd /usr/local/nginx
nginx:~# mkdir /usr/local/nginx-bak
nginx:/usr/local/nginx# find . -type d \( ! -name logs \) -print -exec cp -rpf '{}' /usr/local/nginx-bak \;

This will find all directories inside /usr/local/nginx with the find command, print them on the screen, then execute a recursive copy over each found directory, copying it to /usr/local/nginx-bak.

This example works fine in the nginx case because /usr/local/nginx does not contain any files, only sub-directories. In other occasions, where the directory does contain some files besides the sub-directories, the files have to be copied as well, e.g.:

# for i in $(ls -1p | egrep -v '/$'); do \
cp -pf "$i" /destination/directory; \
done

This will copy the files from the source directory (for instance /usr/local/nginx/my_file.txt, /usr/local/nginx/my_file1.txt etc.) which don't belong to a subdirectory.

The cmd expression:

# ls -1p | egrep -v '/$'

lists only the files while excluding all the directories (the -p flag appends a / to each directory name, which egrep -v then filters out), and in the for loop each of the files is copied to /destination/directory.

If someone has better ideas, please share with me 🙂



How to fix upside-down / inverted web camera laptop Asus K51AC issue on Ubuntu Linux and Debian GNU / Linux

Monday, February 13th, 2012

Skype Video Inverted bat like linux screenshot

Does your camera show video correctly in cheese but show captured video upside-down (inverted) in skype?
This is an issue a friend of mine experienced on his Asus K51AC-SX037D laptop on both Ubuntu and Debian Linux.
As you can see in the picture above, it is funny, as with this bug the person looks like a batman 😉
As the webcam upside-down issue was present on both the latest Ubuntu 11.10 and the latest stable Debian Squeeze 6.02, my guess was that other GNU / Linux rpm-based distros like Fedora might have applied a fix to this weird Skype inverted video (bat-human-like) issue.
Unfortunately, testing the webcam with Skype on both the latest Fedora 16 and Linux Mint 12 produced the same webcam bug.

A bit of research on the issue online and trying out a number of suggested methods to resolve it finally led to a workaround, thanks to this post
Here are a few steps to follow to make the webcam show video like it should:

1. Install libv4l-0 package

root@linux:~# apt-get --yes install libv4l-0
...

Onwards, to start skype directly from terminal and test the camera, type:

hipo@linux:~$ LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so skype

This is the workaround for 32-bit Linux installs; most people however will probably have installed 64-bit Linux, and for 64-bit Linux installs the above command should be slightly different:

hipo@linux:~$ LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so skype
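If neither of the above library paths matches your install (the location varies across Debian / Ubuntu releases), you can locate where the distro shipped the library, e.g.:

hipo@linux:~$ find /usr/lib* -name 'v4l1compat.so' 2>/dev/null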

Once skype is launched, test the camera and see if the camera capture is now uninverted, through the menus:

S -> Options -> Video Devices -> Test

Skype Options Video devices screenshot

2. Create a skype Wrapper script Launcher

To make skype launch every time with the exported shell variable:
LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so

a new skype wrapper bash shell script should be created in /usr/local/bin/skype; the file should contain:

#!/bin/sh
export LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so
/usr/bin/skype

Note the export keyword; without it the variable would not be passed on to /usr/bin/skype.

To create the script with echo in a root terminal, issue:

root@linux:~# echo '#!/bin/sh' >> /usr/local/bin/skype
root@linux:~# echo 'export LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so' >> /usr/local/bin/skype
root@linux:~# echo '/usr/bin/skype' >> /usr/local/bin/skype
root@linux:~# chmod +x /usr/local/bin/skype

3. Edit the Skype gnome menu to substitute /usr/bin/skype Skype Launcher with /usr/local/bin/skype

Gnome 2 has a handy menu editor, allowing you to edit and add new menus and submenus (menus and items) to the Applications menu. To launch the editor, click over Applications with the last mouse button (right button) and choose Edit Menus.

GNOME Edit menus screenshot

The menu editor like the one in the below screenshot will appear:

GNOME 2 Menu Editor Screenshot

In the Launcher properties window that appears, Command: skype has to be substituted with:

GNOME2 Skype screenshot Launcher properties

Command: /usr/local/bin/skype

For console freaks who don't want to bother editing the Skype launcher via GUI, the /usr/share/applications/skype.desktop file can be edited in a terminal. Inside skype.desktop substitute:

Exec=skype

with

Exec=/usr/local/bin/skype

Skype fixed inverted bat like screenshot

As one can imagine, the upside-down video image in Skype is not a problem of Linux, but rather another bug in the Skype (non-free) software program.
By the way, everyone who is using his computer with a Free Software operating system (FreeBSD, Linux etc.) knows pretty well by experience that Skype is a very problematic piece of software; it is often a cause for unexpected increased system loads, problems with the microphone (not capturing), camera issues, issues with pulseaudio, problems with audio playback … Besides the long list of bugs, there are unexpected display bugs in the skype tray icon, bugs in skype messenger windows, and on some rare occasions the program completely hangs and has to be killed with the kill command and re-launched again.

Another worrying fact is that Skype's versions available for GNU / Linux and BSD are completely out of date compared with its "competitor" operating systems MS Windows, MacOS X etc.
For people like me and my friend who want to use a free operating system, the latest available skype version is not even stable … the current version for download from skype's website is (Skype 2.2Beta)!

On FreeBSD the skype situation is even worse; FreeBSD only has the option to run Skype ver 1.3 or v. 2.0 at best, and as far as I know skype 2.2 and 2.2beta are not there.

Just as a matter of comparison, the latest Skype version on Windows is 5.x; the Windows release is ages ahead of its Linux and BSD versions. From a functional point of view the difference between Linux's 2.x and Windows' 5.x is not that big; what makes the difference is the amount of bugs which the Linux and BSD skype versions contain…
Skype was bought by Microsoft about 6 months ago, therefore the prognosis for Skype Linux support in the future is probably even darker: Microsoft will probably not bother to release new versions of Skype for their competitors' free-as-in-freedom OSes.

I would like to thank my friend and brother in Christ Stelian for supplying me with the Skype screenshots, as well as for being kind enough to share with me how he fixed his camera.

