Posts Tagged ‘transfer’

Mount a remote Linux SSHFS filesystem hard disk in Windows Explorer with the SWISH SSHFS file mounter, and a short evaluation of what is available to copy files to SSHFS from a Windows PC

Monday, February 22nd, 2016

swish-mount-and-copy-files-from-windows-to-linux-via-sshfs-mount

I'm forced to use Windows on my work notebook and I find it really irritating that I can't easily share files without DropBox, Google Drive, MS OneDrive, Amazon Storage or some other cloud-storage remote service, etc.
I don't want to use DropBox-like non-self-hosted data storage because I want to keep my data private, and therefore the only and best option for me was to make it possible to share my Linux hard disk storage
directory remotely with the Windows notebook.

I didn't want to set up some complex environment such as a Samba share server (which used to be the common option to share files from a Linux server to Windows), nor did I want to bother with installing an FTP service and FTP clients, or with configuring some other complex stuff such as WebDAV, which BTW is an accepted and heavily used solution across corporate clients to read / write files on remote Linux servers.
Hence, I did a quick research on what else could be used to easily share files and data from a Windows PC / notebook to a home-brew or professionally hosted Linux server.

It turned out there are a few pieces of software that give a similar possibility to users of a small home LAN hybrid Linux / Windows network; here are a few of the many:

  • Syncthing – Syncthing is an open-source file synchronization client/server application, written in Go, implementing its own, equally free Block Exchange Protocol. The source code's content-management repository is hosted on GitHub


    syncthing-logo

  • OwnCloud – ownCloud provides universal access to your files via the web, your computer or your mobile devices


    owncloud-logo

  • Seafile – Seafile is a file hosting software system. Files are stored on a central server and can be synchronized with personal computers and mobile devices via the Seafile client. Files can also be accessed via the server's web interface


seafile-client-in-browser

I've checked all of them and gave a quick try to Syncthing, which is really easy to start: just download the binary, launch it and configure it from a browser on the Linux server via the URL http://localhost:8384.
Syncthing seemed to be a nice and easy-to-configure solution to sync files between Server A (Windows) and Server B (Linux), and I guess many would enjoy it; if you want to give it a try you can follow this short install syncthing article.
However, what I disliked in SyncThing, OwnCloud and Seafile, and in all of the other file sync solutions, was that they only supported synchronization via the web and didn't seem to have Windows Explorer integration, and they required
the server to run more services, posing another security hole in the system, as such third-party software is not easy to update and maintain.

Because of that, finally, after rethinking some other ways to copy files to a locally mounted sync directory from the Linux server, I decided to give SSHFS a try. Mounting SSHFS between two Linux / UNIX hosts is quite an easy task with the SSHFS tool.
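
For reference, here is a minimal sketch of such a Linux-to-Linux SSHFS mount (the hostname, user and paths are placeholders to adapt):

# mount the remote directory over SSH (the sshfs package provides the tool)
mkdir -p /mnt/remote-dir
sshfs remoteuser@remote-host:/remote/dir /mnt/remote-dir
# later, to detach the mount again:
fusermount -u /mnt/remote-dir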

In Windows, however, the only way I knew to transfer files to Linux via SSHFS was with the WinSCP client and other SCP clients, as well as a few experimental ones such as ExpandDrive, NetDrive and Dokan SSHFS (mirrored for download here).

I should say that I first tried copying a few dozen gigabytes of movies, texts, books etc. using a direct WinSCP connection, but after getting a couple of timeouts I was tired of WinSCP and decided to look for a better way to copy to the remote Linux SSHFS.
The best solution I found after a bit of extensive research turned out to be:

SWISH – Easy SFTP for Windows

Swish is very straightforward to configure compared to all of them: you download the .exe, which as of the time of writing is at version 0.8.0, install it on the PC, and right in My Computer you will get a new device called Swish next to your local and remote drives (C:, D:, USBs etc.).

swish-new-device-to-appear-in-my-computer-to-mount-sshfs

As you can see in the screenshot below, two new non-standard buttons will appear in Windows Explorer that let you configure SWISH.

windows-mount-sshfs-swish-add-sftp-connection-button-screenshot

The next and final step before you have the remote Linux SSHFS filesystem visible on Windows XP / 7 / 8 / 10 is to fill in the remote Linux hostname address (or even better, fill in the IP to get rid of possible DNS issues), the username (user ID) and the directory to mount.

swish-new-fill-in-dialog-to-make-new-linux-sshfs-mount-directory-possible-on-windows

Then you will see the SSHFS mounted:

swish-sshfs-mounted-on-windows

You will be asked to accept the SSH host key, as it has been unknown so far:

swish-mount-sshfs-partition-on-windows-from-remote-linux-accept-key

That's it, now you will see the remote Linux SSHFS mounted right in Windows Explorer:

remote-sshfs-linux-filesystem-mounted-in-windows-explorer-with-swish

Once you have set up a Swish connection, to copy files directly to it you can use the embedded Windows Send To dialog, as in the screenshot below.

swish-send-to-files-windows-screenshot

The only 3 problems with SWISH are:

1. It doesn't support saving the password, so on every Windows PC reboot, when you want to connect to the remote Linux SSHFS, you will have to retype the remote login user password.
From a security standpoint this is not such a bad thing, but it is a bit irritating to type the password every time without an option to save it permanently.
The good thing here is that you can use Launch Key Agent,
as visible in the above screenshot, and set your remote host SSH key in the PuTTY key agent (Pageant), so passwordless login will work without any authentication at
all. However, this might open a security hole: if your Windows PC gets infected by a virus, it might delete something on the remotely mounted SSHFS filesystem, so I personally prefer to retype the password on every boot.
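
If you do go the key agent route, the server-side half of the setup is just appending your public key to the login account's authorized_keys; here is a minimal sketch, assuming the key pair was generated with PuTTYgen and the public key exported in OpenSSH format (the key file name is a placeholder):

# on the Linux server, as the user Swish logs in with:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat puttygen-exported-key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys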

2. It is a bit slow, so if you're planning to transfer large amounts of data, like hundreds of megabytes, expect a very slow transfer rate; even on a local 10Mbit network, transferring 20 – 30 GB of data took me about 2-3 hours or so.
SWISH is not actively supported and hasn't had a new release since the 20th of June 2013, but for the general work I need it is just perfect, as I don't tend to synchronize dozens of gigabytes all the time between my notebook PC and the Linux server.

3. If you don't use the established mounted connection for a while, or your computer goes into sleep mode, after recovery the connection to the remote Linux HDD, if opened in Windows File Explorer, will probably be dead and you will have to re-enable it.

Mac OS X users who want to mount / attach a remote directory from a Linux partition should look at Fugu – a Mac OS X SFTP, SCP and SSH frontend.

I'll be glad to hear from people about other good ways to achieve the same results as with SWISH but with a better copy speed while using SSHFS.


How to split / rar large data archive files in parts on Linux and Windows – Transfer big files across servers located in DMZ restricted areas

Friday, November 28th, 2014

how-to-split-rar-in-parts-large-data-archive-files-on-Linux-and-Windows-Transfer-big-files-across-servers-in-firewalled-restricted-areas

I was working on an application migration project whose goal was to install a business application called Asset Guardian and then move the current company data from the old server to the new AppServer.
For that purpose, the company vendor of Asset Guardian shipped, to a public access FTP, a huge (12GB) ZIP archive file which had to be transferred into a well-secured DMZ-ed corporate network with various traffic shaping network policies implemented and a restrictive firewall allowing access only to the internal network and to a few (restrictively configured) proxy server IPs on ports 80 and 8080.

One of the proxy servers allowed access to the Internet, so I set this one and tried downloading the huge archive file with the Windows 2012 server's default browser, Internet Explorer 10; though the download started, it stayed slow at around 300 – 500KB/sec, and when it reached 3.4GB the download failed. I tried resuming the download, but the remote public FTP server where the file resides doesn't support the FTP RESUME function.
I thought it might be that Internet Explorer was badly managing the download, so I went ahead and installed Portable Firefox (mirrored version 33.1.1 is here). Re-running the download with Firefox also failed, so the next logical step was to try downloading with the Windows version of wget and with Portable Free Download Manager 3.9.14.1481 (mirrored here); with both of them I was unable to complete the download (probably due to the firewall or the proxy screwing up the inspected traffic), thus I had to look for another way to copy the enormous archive into the company network.

To get around the issue, I downloaded the file from the FTP to another server running Apache and tried re-downloading the big archive file (Asset-Guardian-data.zip) from the Apache webserver via the HTTP protocol; this download method didn't work either, as the download again died when the file reached 3.4GB, thus I realized this was due to the restrictive proxy servers (dropping file downloads bigger than 3.4GB).

Then, to be able to transfer the huge 12GB file, it seemed the only option left was to chop the big file into smaller chunks and transfer them one by one.
In my case, I had Asset-Guardian-Files.zip already transferred to the Apache (webserver) host, which is running Linux, so basically the task was to transfer the big archive file between SuSE Linux Enterprise Server (SLES) 11 and Windows 2012 Server.

The quickest and easiest way to do that is by using the UNIX split command, i.e.:

split -b 1024m Asset-Guardian-Files.zip


The output files, each 1GB in size, are named xaa, xab, xac, xad, xae, xaf, xag etc. and are placed in the same folder where the split command is run.

To later merge the files on the Windows 2012 server, the copy command is used:

copy /b file1 + file2 + file3 + file4 filetogether


In my case the command to issue on Win 2012 server was:

copy /b xaa + xab + xac + xad + xae + xaf + xag + xah + xai + xaj + xak Asset-Guardian-files.zip


This method of chopping and transferring the file is the simplest one, and it doesn't require the two servers to have WinRAR or console RAR / unrar installed.
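
Since the restrictive proxies already corrupted full-size downloads, it is worth verifying that the reassembled archive is intact; one way to do that (my suggestion, not a required part of the workflow) is to compare checksums on both sides, e.g. with md5sum on Linux and the certutil tool built into Windows:

# on the Linux host, before transferring the parts:
md5sum Asset-Guardian-Files.zip

:: on the Windows 2012 server, after merging the parts:
certutil -hashfile Asset-Guardian-files.zip MD5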

If instead of copying a huge file from a Linux host to a Windows host you need to copy a too-big file (let's say 100GB) between 2 Windows servers (Windows server host A and Windows server host B, both situated in different firewalled corporate networks), you will need to download it to Win host A and use the Windows equivalent of UNIX split, a tool called sfk (The Swiss File Knife); sfk also has a port for Mac OS, so in case you need to migrate a huge archive file from a Mac OS X host, it will serve as Linux's split there too. I've made an SFK (current version) mirror here.

Another way to cut the 12GB file into parts and transfer them to the destination host via HTTP was to use rar (on the Linux host), then download the parts on the Win 2012 server and use WinRAR Portable Free to extract the multiple files:

To make rar on Linux produce a multi-part archive with parts of a certain size out of a huge file, use:

cd /var/www
rar a -v1000000k Asset_Guardian_Files.splitted.rar /var/www/Asset_Guardian_Files.zip

1000000KB = 1000000/1024 = ~976MB, hence the rar-produced parts will each be sized at about 976MB.

To find the produced archive parts, check for *splitted*.rar in your /var/www:

ls -al /var/www/*splitted*.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part1.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part2.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part3.rar
-rw-r--r-- 1 root root 1048576 Nov 28 18:34 Asset-Guardian-Files.splitted.part4.rar

 

Then download the files to the M$ Win 2012 server with IE (http://my-linux-host.com/Asset-Guardian-Files.splitted.part1.rar, http://my-linux-host.com/Asset-Guardian-Files.splitted.part2.rar, etc.) and extract them with WinRAR.
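
If you prefer a console to the WinRAR GUI, the unrar command-line tool (available for both Windows and Linux) can also reassemble and extract the multi-part archive; pointing it at the first part is enough, as it picks up the remaining volumes automatically:

unrar x Asset-Guardian-Files.splitted.part1.rar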

Thank God, problem solved 🙂


luckyBackup Linux GUI back-up and synchronization tool

Wednesday, May 14th, 2014

luckybackup_best-linux-graphical-tool-for-backup_linux_gui-defacto-standard-tool
If you're using GNU / Linux on the desktop and you're already tired of creating backups with your own terminal hacks, and you want to make your life a little bit easier and automate the backup of your important files through a GUI program, take a look at luckyBackup.

luckyBackup is a GUI frontend to the famous rsync command-line backup tool. luckyBackup is available as a package in almost all modern Linux distributions, it's very easy to set up, and it can save you a lot of time, especially if you have to manage a number of workplace desktop office Linux-based computers.
luckyBackup is an absolute must-have program for Linux desktop starting users. If you're migrating from the Microsoft Windows realm and you're used to BackupPC, luckyBackup is probably the de facto Linux BackupPC substitute.

The sad news for Linux GNOME desktop users is that luckyBackup is written in Qt, and using it will load up your notebook a bit.
It is not installed by default, so once a new Linux desktop is installed you will have to install it manually; on Debian and Ubuntu based Linux-es, to install luckyBackup, apt-get it:

debian:~# apt-get install --yes luckybackup
...

On Fedora and CentOS Linux, install luckyBackup via the yum rpm package manager:

[root@centos ~]# yum -y install luckybackup
...

luckyBackup is also ported to OpenSuSE, Slackware, Gentoo, Mandriva and ArchLinux. In 2009 luckyBackup won the SourceForge Community Choice Award for "best new project".

luckyBackup copies over only the changes you've made to the source directory and nothing more.
You will be surprised when your huge source is backed up in seconds (after the first backup).

Whatever changes you make to the source, including adding, moving, deleting or modifying files / directories etc., will have the same effect on the destination.
Owner, group, time stamps, links and permissions of files are preserved (unless stated otherwise).
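
Since luckyBackup simply drives rsync, the kind of command it runs under the hood is roughly like the sketch below (the paths are placeholders and the exact flags depend on the task options you tick in the GUI):

# archive mode (-a) preserves owner, group, permissions, time stamps and links;
# --delete mirrors deletions from the source to the backup destination
rsync -avh --delete /home/user/Documents/ /backup/Documents/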

luckyBackup creates multiple different backup "snapshots". Each snapshot is an image of the source data that refers to a specific date-time.
Easy rollback to any of the snapshots is possible. Besides that, luckyBackup supports syncing (just like rsync) of any directories, keeping the files that were most recently modified on both of them.

This is useful if you modify files on more than one PC (using a flash drive) and don't want to bother remembering which one you used last. luckyBackup is also capable of excluding certain files or directories from backups: you can exclude any file, folder or pattern from the backup transfer.

After each operation a logfile is created in your home folder. You can have a look at it any time you want.

luckyBackup can run from the command line if you wish not to use the GUI, but you first have to create the profile that is going to be executed.
Type "luckybackup --help" in a terminal to see usage and supported options.
There is also tray notification – visual feedback in the tray area informs you about what is going on.


Best Windows tools to Test (Benchmark) Hard Drives, SSD Drives and RAID Storage Controllers

Wednesday, April 23rd, 2014

atto-windows-hard-disk-benchmark-freeware-tool-screenshot-check-hard-disk-speed-windows
Disk benchmarking is very useful for people involved in graphic design, 3D modelling and system administration, and for anyone willing to squeeze the maximum out of their PC hardware.

If you want to do some benchmarking on a newly built Windows server targeting hard disk performance, have just bought a new SSD (Solid State Drive) and want to test how well hard drive I/O operations behave, or you want to do regular HDD benchmarking of a group of MS Windows PCs and plan hardware optimization, check out ATTO Disk Benchmark.

So why exactly ATTO Benchmark? – Because it is one of the best free Windows benchmark tools on the internet.

ATTO is a widely accepted disk benchmark freeware utility that helps measure storage system performance. Though freeware, ATTO is among the top tools utilized in the industry. It is very useful for comparing the speed of different hard disk vendors and for measuring Windows storage system performance with various transfer sizes and test lengths for reads and writes.

ATTO Disk Benchmark is used by manufacturers of hardware RAID controllers; it's a precious tool to test Windows storage controllers and host bus adapters (HBAs).

Here are the ATTO Benchmark tool specifications (quoted from their website):
 

  • Transfer sizes from 512KB to 8MB
  • Transfer lengths from 64KB to 2GB
  • Support for overlapped I/O
  • Supports a variety of queue depths
  • I/O comparisons with various test patterns
  • Timed mode allows continuous testing
  • Non-destructive performance measurement on formatted drives
  • See more at: http://www.attotech.com/disk-benchmark/

Here is a mirrored latest version of ATTO Disk Benchmark for download. Once you get your HDD statistics, you will probably want to compare them with other people's results. On Tom's Hardware, the world-famous hardware geek site, there are plenty of hard drive performance charts.

Of course there are other GUI alternatives to ATTO Benchmark; one historically famous is NBench.

NBench

nbench_benchmark_windows_hard-drive-cpu-and-memory

Nbench is a nice little benchmarking program for Windows NT. Nbench reports the following components of performance:

CPU speed: integer and floating operations/sec
L1 and L2 cache speeds: MB/sec
main memory speed: MB/sec
disk read and write speeds: MB/sec

SMP systems and multi-tasking OS efficiency can be tested using up to 20 separate threads of execution.

For console geeks or Windows server admins, there are also some ports of famous *NIX hard disk benchmarking tools:

NTiogen

The NTiogen benchmark was written by Symbios Logic. It's a Windows NT port of their popular UNIX benchmark IOGEN. NTIOGEN is the parent process that spawns the specified number of IOGEN processes that actually do the I/O.
The program will display as output the number of processes, the average response time, the number of I/O operations per second, and the number of KBytes per second. You can download a mirror copy of Ntiogen here.


There are plenty of other GUI and console HDD benchmarking Windows tools, e.g.:

IOMeter (formerly developed by Intel, now abandoned; available as open source on SourceForge)

iometer-benchmark-disk-storage-speed-windows
 

Bench32 – comprehensive benchmark that measures overall system performance under Windows NT or Windows 95; now obsolete, no longer developed, abandoned by the producing company.

ThreadMark32 – capable of benchmarking (formerly developed and supported by ADAPTEC), but also already unsupported.

IOZone – filesystem benchmark tool. The benchmark generates and measures a variety of file operations. Iozone has been ported to many machines and runs under many operating systems.
 

N.B.! An important note to make here is that the above suggested tools will provide you with more realistic results than the proprietary tools shipped by your hardware vendor. Using proprietary software produced by a single vendor makes it impossible to analyze and compare different hardware; the above HDD benchmarking tools are for "open systems", i.e. no matter who the hardware producer is, the produced results can be checked against each other.
Another thing to consider is that even if you use any of the above tools to test and compare two storage devices, the results will still be partially imaginary; it's always best to conduct tests in real working application environments. If you're planning to launch a new services infrastructure, always test it first and don't rely on preliminary synthetic benchmarks.

If you know some other useful benchmarking software I'm missing, please share.


How to change users' quota to NO QUOTA on Qmail with a Vpopmail mail server install / Qmail mail over-quota issue

Monday, February 20th, 2012

 

Qmail Vpopmail quota exceeded Dolphin Logo

On a couple of mailboxes located on one of the qmail-powered mail servers I administer, an over-QUOTA problem has already been encountered.

Filling up the mailbox quota is not nice, as mails start getting bounced back to the sender with a QUOTA FULL or EXCEEDED message; if such a bounced mail is a crucial one carrying some important data etc., the data is never received.
Below is a copy of the mail quota warning notification message:

Delivered-To: email_use@my-mail-domain.net
Date: Wed, 15 Feb 2012 17:40:36 +0000
X-Comment: Rename/Copy this file to ~vpopmail/domains/.quotawarn.msg, and make appropriate changes
X-Comment: See README.quotas for more information
From: Mail Delivery System <Mailer-Daemon@different.bg>
Reply-To: email@pc-freak.net
To: Valued Customer:;
Subject: Mail quota warning
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit
Your mailbox on the server is now more than 90% full. So that you can continue
to receive mail you need to remove some messages from your mailbox.

As you can read from the copy of the mail message above, the message content sent to the mail owner whose quota is getting full is read from /var/vpopmail/domains/.quotawarn.msg.

The mail-reaching-quota problem is very likely to appear in cases where a low mailbox quota is set, but it sometimes also occurs due to bugs in vpopmail's quota handling.

Various interesting configuration settings for mail quotas etc. are in the /home/vpopmail/etc/vlimits.default file (assuming vpopmail is installed in /home).

In my specific case, the default vpopmail mailbox quota size was set to only 40 megabytes.
40MB is too low compared to today's mailbox size standards, which in the Gmail and Yahoo mail services are already a couple of gigabytes.
Hence, to get around the quota troubles, I removed the quota for the mailbox.
To remove the quota size set in vpopmail for an address (email_user@my-mail-domain.net), I used the command:

qmail-server:~# vmoduser -q NOQUOTA email_user@my-mail-domain.net
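
To confirm the change took effect, vpopmail's vuserinfo tool can be used; it prints the account details, including the current quota setting (assuming vpopmail's bin directory, usually /home/vpopmail/bin, is in the PATH):

qmail-server:~# vuserinfo email_user@my-mail-domain.net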

To save myself from future quota issues, I decided to apply a permanent fix for all those over-quota-size vpopmail mailbox problems by removing the quota restriction completely for all mailboxes in my existing vpopmail mail domain.

To do so, I wrote a quick and simple bash loop one-liner:

qmail-server:~# cd /home/vpopmail/domains/my-mail-domain.net
qmail-server:/home/vpopmail/domains/my-mail-domain.net# for i in *; do \
case "$i" in vpasswd*) continue ;; esac; \
vmoduser -q NOQUOTA "$i"@my-mail-domain.net; \
done

This works only on vpopmail installations that are configured to store the mail messages directly on the filesystem (the case statement skips the vpasswd / vpasswd.cdb auth files, which are not mailboxes). Therefore this approach will not work for people who, during the vpopmail install, configured it to store mailboxes in MySQL or in another kind of SQL DB engine.

Anyway, for a vpopmail installed to use an SQL backend, the script can be changed to read a list of all the mailboxes directly from the database (an SQL query) and then loop over each of the mail addresses, applying vmoduser -q NOQUOTA mail@samplemaildomain.net.
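
As an example, here is a hedged sketch of such a variant for a MySQL-backed vpopmail (the vpopmail table and the pw_name / pw_domain columns below assume vpopmail's default MySQL schema; adjust them to your setup):

qmail-server:~# mysql -N -u vpopmail -p vpopmail -e \
"SELECT CONCAT(pw_name, '@', pw_domain) FROM vpopmail \
WHERE pw_domain = 'my-mail-domain.net';" | \
while read mailbox; do vmoduser -q NOQUOTA "$mailbox"; done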

I've also written a small shell script (remove_vpopmail_emails_domain_quota.sh); it accepts one argument, the vpopmail domain for which the admin would like to reset all applied mailbox quotas. The script is useful if you often have to remove all quotas for vpopmail domains, or have to do a quota wipe-out simultaneously for multiple email domain names located on different servers.
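
For the curious, below is a minimal sketch of what such a script can look like; note this is a hypothetical reconstruction for illustration, not the original script linked above:

#!/bin/sh
# reset quota to NOQUOTA for every mailbox of a given vpopmail domain
# (sketch assumes filesystem-stored mailboxes under /home/vpopmail)
DOMAIN="$1"
[ -z "$DOMAIN" ] && { echo "Usage: $0 mail-domain.net"; exit 1; }
cd /home/vpopmail/domains/"$DOMAIN" || exit 1
for mbox in *; do
    case "$mbox" in vpasswd*|.*) continue ;; esac  # skip vpopmail auth files
    vmoduser -q NOQUOTA "$mbox@$DOMAIN"
done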


Using rsync to copy / synchronize files or backups between Linux / BSD / Unix servers

Monday, November 21st, 2011

Rsync and Rsync over ssh logo picture

Many of us have already taken advantage of the powerful rsync program; however, I'm quite sure there are still people who have never used rsync to transfer files between servers. That's why I came up with this small post, to possibly introduce rsync to my blog readers.
Why rsync and not scp or SFTP? Well, rsync is designed from the start for large file transfers and is optimized to do the file copying job really efficiently. Some tests of scp against rsync will clearly show rsync's superiority.
Rsync is also handy to continue copying half-copied files or backups, and thus in many cases saves bandwidth and machine HDD I/O operations.

The most simple way to use rsync is:

server:~# rsync -avz -e ssh remoteuser@remotehost:/remote/directory /local/directory/

Where remoteuser@remotehost – the username and hostname of the remote server to copy files from.
/remote/directory – the remote directory from which the files will be copied.
/local/directory – the local directory where the copied files will be stored.
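
To copy in the opposite direction, i.e. push files from the local machine to the remote server, simply swap the source and destination arguments:

server:~# rsync -avz -e ssh /local/directory/ remoteuser@remotehost:/remote/directory/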

If passwordless SSH key (RSA / DSA) authentication is not configured beforehand on the remote server, the above command will prompt for a password; otherwise rsync will start the transfer straight away.

If one needs passwordless RSA or DSA (public / private key) SSH authentication, a key should first be generated and copied over to the remote server, like so:

server:~# ssh-keygen -t dsa
...
server:~# ssh-copy-id -i ~/.ssh/id_dsa.pub root@remotehost
...

That's all folks, enjoy rsyncing 😉
