Posts Tagged ‘kernel version’

How to mount NFS network filesystem to remote server via /etc/fstab on Linux

Friday, January 29th, 2016

If you have a server topology as part of a project where three servers (A, B, C) are used to deliver a service (one running an application server such as JBoss / Tomcat / Apache, a second acting purely as a Storage Server holding dozens of LVM-ed SSD hard drives plus an Oracle database backend providing data for the project), and you need server A (the application server) to access server B (the storage "monster"), one common solution is an NFS (Network FileSystem) mount.
NFS is already considered a somewhat obsolete technology, as it is generally regarded as insecure; however, if an SSHFS mount is ruled out by the initial design decision, or because both servers A and B sit in a seriously firewalled (DMZ) dedicated network, then NFS should be a good choice.
Of course, using an NFS mount should always be a carefully weighed Environment Architect decision; a remote NFS mount implies that both servers are connected via a high-speed gigabit network, i.e. that the network performance is calculated to be sufficient for the two-way communication between application server A and network storage B not to cause delays for the systems' end users.

To test whether the NFS mount from storage server B is possible on application server A, type something like:

mount -t nfs -o soft,timeo=900,retrans=3,vers=3,proto=tcp remotenfsserver-host:/home/nfs-mount-data /mnt/nfs-mount-point
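
Before attempting the mount, you can also quickly check whether server B actually exports the directory at all; a small sanity check, assuming the showmount client tool (part of the NFS utilities) is installed on server A:

showmount -e remotenfsserver-host

If /home/nfs-mount-data does not show up in the export list, the export on server B needs fixing first.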


If the mount works fine, to make it permanent on application server host A (so that it survives a server reboot), add the following to the end of /etc/fstab:

1.2.3.4:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2
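
To apply the newly added /etc/fstab entry without rebooting, something along these lines should do (assuming the local mount point directory does not exist yet):

mkdir -p /application/remote-application-dir-to-mount
mount -a

mount -a re-reads /etc/fstab and mounts everything listed there that is not mounted yet, including the new NFS entry.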


If the NFS server has a hostname, you can also use the hostname instead of the sample IP 1.2.3.4 from the example above; this is however not recommended, as it might cause mount failures in case of DNS or domain problems.
If you still want to mount by hostname (for example because the storage server IP changes frequently, being auto-assigned by a DHCP server):

server-hostA:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2

In the above example both directories need to exist: the directory exported by storage server B (the first field of the fstab line) and the local mount point directory on server A where it will be mounted (the second field).
Also, on storage server B you need to have a running NFS server that is reachable from server A through the firewall.
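
For completeness, a minimal sketch of how the export on storage server B might look; the exported path is the one from the test mount earlier, while the client name appserver.example.com and the export options are just placeholders to be adapted to the environment:

# cat /etc/exports
/home/nfs-mount-data appserver.example.com(rw,sync,no_subtree_check)
# exportfs -ra

exportfs -ra re-reads /etc/exports and applies the changes without restarting the NFS server.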

The timeo=600 option defines the timeout for the remote NFS server; note that timeo is given in tenths of a second, so 600 means 60 seconds, which helps to avoid mount failures if there is a short network outage between server A and server B. The rsize and wsize values should be fine-tuned according to the kind of files that are being read from the remote NFS server and the network speed between the two machines, i.e. they follow from the environment architecture (the type of files transferred between the two servers), as well as from the NFS server version and the Linux kernel versions in use. These particular settings are for the Linux kernel 2.6.18.x branch, which as of the time of writing this article is obsolete, so if you want to reuse them check your kernel and NFS versions, google around and experiment.
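
To see which kernel you are on and which options an already established NFS mount has actually negotiated, something like the following can be used (nfsstat ships with the nfs-utils / nfs-common packages, so it may need to be installed first):

# uname -r
# nfsstat -m

nfsstat -m prints each NFS mount together with its effective rsize, wsize, timeo and protocol values, which is handy while experimenting.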

Anyways, if you're not sure about wsize and rsize, it is perfectly safe to simply omit these two values if you're not familiar with them.

To finally check that the NFS mount is fine, grep for it in the mount output:

# mount|grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
server-hostA:/application/remote-application-dir-to-mount on /application/remote-application-dir-to-mount type nfs (rw,bg,nolock,nfsvers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr,addr=1.2.3.4)
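
You can also check how much space the remote export offers and how much of it is already in use, e.g.:

# df -h /application/remote-application-dir-to-mount

The NFS share will show up with the remote server and export path as the filesystem source.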


That's all, enjoy 🙂

ipw3945 on kernel 2.6.30

Friday, September 4th, 2009

I've lost big time trying to compile ipw3945 on my Debian amd64 system running kernel 2.6.30; unfortunately, in the end I couldn't make ipw3945 run correctly. However, I'll describe what I did, just in case somebody out there succeeds in running the ipw3945 driver on kernel 2.6.30. First I needed to compile the ieee80211-1.2.18 subsystem correctly. That gave me a hard time, because the damned thing wouldn't compile on my kernel version. I googled for a solution and had to combine a couple of fixes before the compilation succeeded. Here is what was required:
1. First, in ieee80211.module.c replace proc_net with init_net.proc_net.
2. Next, in ieee80211_crypt_wep.c and ieee80211_crypt_tkip.c replace .page with .page_link.
3. Next download ieee80211_wx.c-2.6.27.patch.txt
4. Patch ieee80211-1.2.18 e.g. in my case: # cd /usr/src/ieee80211-1.2.18; patch -p0 < ieee80211_wx.c-2.6.27.patch.txt
5. Overwrite the file ieee80211_crypt_tkip.c in /usr/src/ieee80211-1.2.18 with the following ieee80211_crypt_tkip.c file.
6. Now, with God's help, you might try: # make && make install (the whole patch-and-build sequence is condensed below).
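
For reference, here are steps 3 to 6 condensed into a single shell sequence; this assumes the patch file and the replacement ieee80211_crypt_tkip.c have already been downloaded into /usr/src (the download location is just an example) and that the manual edits from steps 1 and 2 are done:

# cd /usr/src/ieee80211-1.2.18
# patch -p0 < /usr/src/ieee80211_wx.c-2.6.27.patch.txt
# cp -f /usr/src/ieee80211_crypt_tkip.c .
# make && make install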

Now let’s proceed to the compilation of the ipw3945 driver itself.

I used the ipw3945-1.2.2 driver from the Intel PRO/Wireless 3945ABG Driver website. In kernels newer than 2.6.22, on 64-bit architectures, in order to make the driver compile you have to apply the fix-for-64-bits-2.6.22-onwards.patch patch.
Now enter the ipw3945-1.2.2 source directory and execute patch -p0 < fix-for-64-bits-2.6.22-onwards.patch; hopefully it will patch correctly. I also had to use the ipw3945-1.2.2.patch patch.
Again patch it with: # patch -p0 < ipw3945-1.2.2.patch

Next, in order to compile it, I had to execute: # make IEEE80211_INC=/usr/src/ieee80211-1.2.18 IEEE80211_IGNORE_DUPLICATE=y && make install. Next I downloaded the ipw3945d-1.7.22 daemon and untarred the archive file:
# tar -zxvf ipw3945d-1.7.22.tgz
and last but not least:
# cp -rpf x86_64/ipw3945d /etc/init.d/
The ipw3945 module loaded correctly with modprobe ipw3945, however the wireless device wasn't detected… Even though I failed to get the ipw3945 driver running, what I did gave me hope that if I invest some more time and effort into making it work I could eventually succeed and enjoy the benefits of better wireless signal strength. Until that happens I'll stick with the newer iwl3945 for my wireless.
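
For what it's worth, a quick generic way to check whether the module actually loaded and whether a wireless interface showed up (none of these commands are specific to ipw3945):

# modprobe ipw3945
# dmesg | grep -i ipw3945
# lsmod | grep ipw3945
# iwconfig

If, as in my case, no wireless device is detected, the dmesg output is usually the first place to look for hints.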

How to mount directory in memory on GNU / Linux and FreeBSD / Mount directory in RAM memory to increase performance on Linux and BSD

Tuesday, October 11th, 2011

One of the websites hosted on a server I currently manage has a cache directory which doesn't take up much space but holds tens of thousands of tiny files. Each second a dozen files are created in the cache dir. Hence keeping the directory on a hard disk puts some serious load on the server as a consequence of the many fopen and fclose HDD I/O operations.

To get around the problem, the solution was obvious: use a directory which stores its information in memory.
There are of course other benefits of using memory to store data, as we all know, since access to RAM is many times faster than access to a hard disk.

GNU/Linux has been equipped with tmpfs since kernel version 2.4.x; the primary usage of the tmpfs file system across many GNU/Linux distributions is the /tmp directory.

Some general information about tmpfs is given in mount's manual (man mount); another good read is the tmpfs kernel documentation file.

An implementation of tmpfs is /dev/shm .

/dev/shm is a standard memory-backed mount found across Linux distributions; it's actually an ordinary directory one can list with ls. /dev/shm is used as a “virtual directory” memory space. Below is the output of /dev/shm from my notebook; one can see a few files stored in memory which belong to the PulseAudio sound architecture:

linux:~$ ls -al /dev/shm
total 7608
drwxrwxrwt 2 root root 160 Oct 10 18:05 .
drwxr-xr-x 16 root root 3500 Oct 10 10:57 ..
p-w------- 1 root root 0 Oct 10 10:57 acpi_fakekey
-r-------- 1 hipo hipo 67108904 Oct 10 17:20 pulse-shm-2067018443
-r-------- 1 hipo hipo 67108904 Oct 10 10:59 pulse-shm-2840042043
-r-------- 1 hipo hipo 67108904 Oct 10 10:59 pulse-shm-3215031142
-r-------- 1 hipo hipo 67108904 Oct 10 18:05 pulse-shm-4157723670
-r-------- 1 hipo hipo 67108904 Oct 10 18:06 pulse-shm-702872358

To measure the size of /dev/shm across different Linux distributions one can use the usual df command, e.g.:

[root@centos: ~]$ df -h /dev/shm
Filesystem Size Used Avail Use% Mounted on
tmpfs 16G 0 16G 0% /dev/shm

Above is a df -h /dev/shm output from a CentOS server equipped with 32 GB of memory; as you can see, CentOS has reserved half of the system memory (16GB) for the purpose of creating files in memory through /dev/shm. The memory is assigned dynamically, so whatever /dev/shm does not actually use remains available to the services running on the machine (which by the way is very nice).

According to what I've read on Wikipedia about tmpfs, tmpfs on Linux defaults to half of the physical system memory.
However I've noticed Debian Linux hosts usually reserve less memory for /dev/shm; on my personal notebook Debian's /dev/shm is only 1 GB, while on a Debian server it is automatically set to a humble 2GB. Setting it lower, as in the Debian example, is by the way a rather good idea, since not many desktop or server applications are written to actively take advantage of the virtual /dev/shm directory.
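
If the default size does not fit your needs, a tmpfs mount such as /dev/shm can be resized on the fly with a remount; a small sketch (the 8G value below is just an example):

linux:~# mount -o remount,size=8G /dev/shm

The new size limit takes effect immediately and the files already stored inside are kept.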

One can directly drop files in /dev/shm and they will immediately be stored in memory; after a system reboot the files will disappear.
Let's say you have a zip archive, testing.zip, and you would like to store the file in memory; to do so, just copy the file to /dev/shm:

linux:~$ cp -rpf testing.zip /dev/shm
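
To confirm the file now lives in memory and to see how much of the tmpfs space it takes up, a quick check could be:

linux:~$ ls -lh /dev/shm/testing.zip
linux:~$ df -h /dev/shm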

You don't even need to be root to copy files into the “virtual memory directory”. This is one reason many crackers (script kiddies) store their cracking tools in /dev/shm 😉

A rather funny scenario I've witnessed before is when you list /dev/shm on some Linux server and suddenly see tons of brute-forcing tools and all kinds of “hack”-related stuff belonging to some system user. Sometimes these malicious script tools even belong to the root user…

Now that I've said a few words on how Linux's tmpfs works, here is how to mount a directory whose cache content will be stored in volatile memory:

linux:~# mount -t tmpfs -o size=3G,mode=0755 tmpfs /var/www/site/cache

As you can see, the above command will dynamically assign a tmpfs directory drawing from the system RAM, which can grow up to 3GB of the system memory.

Of course, before mounting it is necessary to create /var/www/site/cache and set proper permissions; in the above example I use /var/www/site/cache with default permissions of 755, owned by the user the Apache server runs as, e.g.:

linux:~# mkdir -p /var/www/site/cache
linux:~# chown -R www-data:www-data /var/www/site/cache
linux:~# chmod -R 755 /var/www/site/cache
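
Once the mount command from above has been run, you can verify that the directory is indeed backed by tmpfs and has the expected size limit:

linux:~# mount | grep /var/www/site/cache
linux:~# df -h /var/www/site/cache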

Using a tmpfs is very handy and has many advantages; however, one should be very careful with the data stored inside a tmpfs directory: all of it will be lost in case of a sudden system restart, as the data lives only in memory.
Another problem one might expect with tmpfs is the assigned virtual disk space getting completely filled with data. It has never happened to me, but I've read online stories that in the past this led to system crashes; according to the documentation I've checked, today overfilling it will start swapping, make the system terribly sluggish and eventually, after the reserved swap space is depleted, processes will start getting killed.
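
To reduce that risk, besides the size limit tmpfs also accepts an nr_inodes mount option which caps the number of files the mount may hold; both can be adjusted with a remount, for example (the values are only illustrative):

linux:~# mount -o remount,size=3G,nr_inodes=100k /var/www/site/cache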

Using tmpfs as a cache directory is very useful on servers running Apache+PHP/Perl/Python/Ruby etc., as it can be used for storing script-generated temporary data.

Using a tmpfs can significantly decrease the disk overhead created by server I/O.

Another application I can think of, though I haven't tested it, would be to use a tmpfs-mounted directory to store executable script files, copied back in after each restart. Executing a script read directly from the “virtual directory” could for sure have a very good impact, especially on huge websites.
One common service which takes advantage of the elegance of tmpfs, and which nowadays almost every modern GNU/Linux system ships, is udevd – the Linux dynamic device management daemon. By the way, (man udev) is a very good and must-read manual, especially for Linux novices, to get a basic idea of how /dev/ management happens via udev.

To make a directory contained in memory permanent on Linux, the /etc/fstab file should be used.

In order to permanently mount a directory as a memory device with a size of 3GB and 0755 permissions on /var/www/site/cache, as shown in the earlier example, one can use the command:

linux:~# echo 'tmpfs /var/www/site/cache/ tmpfs size=3G,mode=0755 0 0' >> /etc/fstab

This will assure that the directory stored in memory is recreated on the next boot.
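
To activate the new /etc/fstab entry right away without rebooting, and to check the result:

linux:~# mount -a
linux:~# df -h /var/www/site/cache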

Nowadays the use of tmpfs is constantly growing; I've seen it used, for instance in Cloud Linux OS, as a way to substitute the ordinary disk-based /tmp with a tmpfs directory contained in memory.
The applications of tmpfs are pretty much left to the imagination of whoever wants to take advantage of it; for sure, Linux GUI programs could also feel the benefit of using tmpfs.

Moving on to FreeBSD and the BSD world: tmpfs is also available there, however it is still considered a bit experimental. To make use of tmpfs to gain some performance, one should first enable it via BSD's /etc/rc.conf:

freebsd# echo 'tmpfs_load="YES"' >> /etc/rc.conf
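
The rc.conf line only makes sure the tmpfs module is loaded at boot; to start using it straight away, the module can be loaded and a directory mounted by hand (assuming the same cache directory as in the Linux examples already exists):

freebsd# kldload tmpfs
freebsd# mount -t tmpfs tmpfs /var/www/site/cache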

Mounting a directory with tmpfs permanently is again doable via /etc/fstab, by adding a line for the new in-memory directory:

freebsd# echo 'tmpfs /var/www/site/cache tmpfs rw 0 0' >> /etc/fstab

The native equivalent of tmpfs in FreeBSD is called mdmfs.
It is slower than tmpfs but rock solid.

To mount a 4-gigabyte mdmfs “ram directory” on BSD from csh:

freebsd# mdmfs -s 4g md /var/www/site/cache
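
mdmfs works by creating a memory-backed md(4) disk and putting a file system on top of it; the memory disks currently configured, and the resulting mount, can be inspected with:

freebsd# mdconfig -l
freebsd# df -h /var/www/site/cache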


To make the mdmfs mount permanent, /etc/fstab can again be used, by adding:

freebsd# echo 'md /var/www/site/cache mfs rw,-s4G 2 0' >> /etc/fstab

There are some reports of users who presumably use this to speed up ports / kernel compile times, but I haven't tried it yet, so I don't know for sure.

In huge corporations like Google and Yahoo tmpfs is certainly used a lot, as this technology can dramatically improve access times to information. I'm curious to hear about good ways to use tmpfs to improve efficiency.
If someone has some readings or some ideas, please share with me 😉