Posts Tagged ‘system administrators’

Free Software Remote Desktop for Mac OS X – CoRD simple RDP remote desktop for Mac

Monday, August 4th, 2014

free-software-remote-desktop-client-to-connect-to-windows-rdp-for-MAC-OSX-CoRD-logo
If you're an admin using Mac OS X on the desktop, or you happen to be somewhere with no access to a Windows / Linux PC (only your girlfriend's or wife's Mac OS notebook) and you need to administer Windows hosts remotely out of office hours (from home), you will need a remote desktop client for Mac OS X.

I was recently in exactly that situation: we were guests of a friend in Shabla, a village near the sea coast, and the only PC nearby was my wife's MacBook Air running Mac OS X.

I looked in Google to see if there is some default RDP (Remote Desktop Protocol) client, something like the MS Windows remote desktop command line client (yes, there is a way to invoke remote desktop on Windows from the command line 🙂 ):

mstsc [<connection file>] [/v:<server[:port]>] [/admin] [/f[ullscreen]] [/w:<width>] [/h:<height>] [/public] | [/span] [/edit "connection file"] [/migrate] [/?]

remote-desktop-run-from-windows-command-line-rdp-command-line-ms-windows-screenshot

I also checked whether there is a Mac OS X version of Linux's rdesktop (command line tool) or of the Linux RDP GUI remmina,
however I didn't find a direct port of either of them, nor is there a default integrated RDP client on Mac OS X. After researching a bit further I tried installing the first result returned by Google, which led to Apple's AppStore – Apple Remote Desktop.

I tried installing it by clicking, but it turned out my wife didn't know her AppStore password, as it was her cousin who configured her Mac at initial install time. Contacting the cousin to ask for the password would have been a time eater, and I was too lazy to create a new AppStore account (plus I always prefer to use a free software alternative when possible) … so I did a quick search in Google for an Open Source / Free Software remote desktop client for Mac OS X and found CoRD – a Mac OS X remote desktop client for Microsoft Windows computers using the RDP protocol.
CoRD was originally ported from UNIX program rdesktop.
To have CoRD working you will need as a minimum requirement Mac OS X version 10.5 or later.

CoRD-Free-Software-Open_Source-remote-desktop-client-for-mac-osx
Here is CoRD's description quoted from its SourceForge website:

CoRD: Simple RDP Remote Desktop

Macs interact well with Windows, and with CoRD the experience is a bit smoother. Great for working on the office terminal server, administrating servers or any other time you'd like your PC to be a bit closer without leaving your Mac. CoRD allows you to view each session in its own window, or save space with all sessions in one window. Scale session windows to whatever size fits you—the screen is resized automatically. Enter full screen mode and feel like you're actually at the computer. The clipboard is automatically synchronized between CoRD and the server. For system administrators, CoRD creates a simpler workflow by allowing you to save server information, then quickly connect to that server by using HotKeys or the server drawer. This makes quickly connecting to a specific server easy, even when managing many servers.

Installing CoRD is pretty straightforward: just download and unzip the archive and run it:

cord-remote-desktop-free-software-client-mac-osx-install-warning-screenshot


cord-open-source-rdp-client-mac-osx

To run CoRD later either look it up in Finder or, if like me you prefer to access it from the command line, add CoRD's location to the Mac Terminal $PATH variable:

add-cord-remote-desktop-lcient-command-to-default-path-mac-osx

As you see in the above screenshot, to find out in which directory CoRD is located I've grepped through the processes with

ps ax | grep cord

and then added it to PATH with:

export PATH=$PATH:/Users/svetlana/Application/CoRD.app/Contents/MacOS/

Having to remember to type CoRD each time is annoying, thus to make CoRD accessible like on Linux with rdesktop (an easy to remember command), I've used an alias:

alias rdesktop='CoRD'

To make the new PATH and alias permanent for the user (/Users/svetlana), I've added them to ~/.profile:

echo "export PATH=$PATH:/Users/svetlana/Application/CoRD.app/Contents/MacOS/" >> ~/.profile
echo "alias rdesktop='CoRD'" >> ~/.profile


The current CoRD Mac OS X version is 0.5.7; for personal convenience, in case I need to install it again in the future, I've made my own mirror of CoRD here.

There is also a Microsoft Remote Desktop client for Mac OS (2.1.1), however that version was released back in 2011 and is outdated (not supported for use with Mac OS X v10.7 (Lion) or later).

What is Vertical scaling and Horizontal scaling – Vertical and Horizontal hardware / services scaling

Friday, June 13th, 2014

horizontal-vs-vertical-scaling-vertical-and-horizontal-scaling-explained-diagram

If you're coming from a small or middle-sized company to a corporation like HP or IBM, you probably won't have a clearly defined idea of the two types (two dimensions) of system scaling (Horizontal and Vertical scaling). I know from my personal experience that in small companies all that is needed is to guarantee a setup with as few problems as possible, without too much defining of things and with much less planning. Being a sysadmin in a middle-sized company, on the other hand, often doesn't give you the opportunity to discuss solutions with other admins; you have to act as "one man (machine) for all" and thus often end up solving office server and services tasks with some custom solution.
Hence for novice system administrators it will probably be unclear what the difference between Horizontal and Vertical Scaling is.

horizontal-vertical-scaling-scale-up-and-scale-out-server-infrastructure-diagram

 

Vertical Scaling (scale vertically or scale up): adding more resources (CPU / RAM / disk) to your server (the database or application server still remains a single machine).
Vertical scaling is much more used in small and middle-sized companies and in applications and products of the middle range. A very common example of vertical scaling nowadays is to buy expensive hardware and use it as a Virtual Machine hypervisor (VMware ESX). Where a database is involved, vertical scaling without the use of multiple virtual machines might not be the best solution: even though the hardware might suffice, database locking can impose problems. Reasons to scale vertically include increasing IOPS (Input / Output Operations per Second), increasing CPU / RAM capacity, and increasing disk capacity.
Because vertical scaling usually means an upgrade of the server hardware whenever improved performance is targeted, even if virtualization is used, the risk of downtime with it is much higher than with horizontal scaling.

Horizontal Scaling (scale horizontally or scale out): adding more processing units (physical machines) to your infrastructure (be it application / web servers or databases).
Horizontal scaling means increasing the number of nodes in the cluster; it reduces the responsibilities of each member node by spreading the keyspace wider and providing additional end-points for client connections. The capacity of each individual node does not change, but its load is decreased (because load is distributed between separate server nodes). Reasons to scale horizontally include increasing I/O concurrency, reducing the load on existing nodes, and increasing disk capacity.
Horizontal scaling has historically been used much more for high levels of computing and for applications and services. The Internet and web services in particular gave a boom to horizontal scaling; most companies that provide well-known web services nowadays, like Google (Gmail, YouTube), Yahoo, Facebook, eBay, Amazon etc., use horizontal scaling heavily. Horizontal scaling is a must-use technology whenever high availability of (server) services is required.
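
To make the scale-out case a bit more concrete, here is a minimal, purely illustrative nginx front-end configuration (the IP addresses are made up) which spreads incoming requests over three identical web nodes; adding capacity then means simply adding one more server line and one more machine, instead of buying a bigger box:

upstream web_pool {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
    server 10.0.0.13:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://web_pool;
    }
}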

Saving multiple passwords in Linux with Revelation and Keepass2 – Keeping track of multiple passwords

Thursday, October 17th, 2013

System Administrators who use MS Windows to access multiple hosts in big companies like HP or IBM certainly use some kind of multiple password manager like PasswordSafe.

Keep multiple passwords safe in Microsoft Windows 7 passwordsafe with masterpassword

When the number of passwords you have to keep in mind grows significantly, using something like PasswordSafe becomes mandatory. The same is valid for system administrators who use GNU / Linux as a desktop environment to administer hundreds or thousands of servers. I'm one of those admins who has used Linux for years, and until recently I kept all my passwords logged in a separate directory full of text files created with vim (text editor). As the number of passwords and accesses to servers and web interfaces grew dramatically, and my security requirements rose as well, I wanted to have my passwords kept encrypted on my hard drive. For those who have never used PasswordSafe, the idea of the program is to store all the passwords you have in an encrypted database which can only be opened through PasswordSafe by providing a master password.

passwordsafe on microsoft windows keep in order multiple passwords manager

Of course having one master password imposes other security risks, as someone who knows the MasterPass can easily access all your passwords; anyway, for now such a level of security perfectly fits my needs.

PasswordSafe has recently become Open Source, so there is a Linux port, but the port is still in beta, and though I tried hard to install it using the provided .deb binaries as well as to compile it from source, I finally gave up and decided to review what kind of password managers are available in Debian Wheezy's repositories.

Here are those I found with:

apt-cache search password|grep -i manager

cpm – Curses based password manager using PGP-encryption
fpm2 – password manager with GTK+ 2.x GUI
gringotts – secure password and data storage manager
kedpm – KED Password Manager
kedpm-gtk – KED Password Manager
keepass2 – Password manager
keepassx – Cross Platform Password Manager
kwalletmanager – secure password wallet manager
password-gorilla – cross-platform password manager
revelation – GNOME2 Password manager


I didn't have the time to test each one of them, so I installed and checked only those which seemed more reliable, i.e.:
keepass2 and revelation

# apt-get install --yes fpm2 keepass2 revelation

Below is a screenshot of each one of the managers:

Revelation Linux Gnome graphic password manager program

Revelation – GNOME Password Manager

keepass2 Linux gui password manager screenshot Debian - graphic manager for storing passwords

kde password safe gui program Linux Debian screenshot

KDE QT Interface Linux GUI Password Manager (KeePass2)
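
Whichever manager you pick, it's also wise to keep a copy of the encrypted database file somewhere safe; a small cron job is enough for that. The paths and file name below are only an example (adjust them to wherever your manager actually stores its database):

# illustrative nightly backup of a KeePass2 database; % must be escaped in crontab
0 2 * * * cp /home/user/passwords.kdbx /home/user/backups/passwords-$(date +\%F).kdbx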

With one of these tools an admin's life is much easier, as you don't have to go crazy trying to remember thousands of passwords.
Hope this helps some admin out there! Enjoy ! 🙂
 

How to find out all programs bandwidth use with (nethogs) top like utility on Linux

Friday, September 30th, 2011

I just ran across a super nice top-like program for system administrators; it's called nethogs and it is definitely entering my “l337” admin outfit next to tools like iftop, nettop, ettercap, darkstat, htop, iotop etc.

nethogs is ultra easy to use; to get immediate console statistics about the UPLOAD and DOWNLOAD bandwidth consumption of running processes, just run it:

linux:~# nethogs

Nethogs screenshot on Linux Server with Nginx
Nethogs running on Debian GNU/Linux serving static web content with Nginx

If you need to check what program is using what amount of network bandwidth, you will definitely love this tool. Information about bandwidth consumption is also partially viewable with iftop, however iftop is unable to track the bandwidth consumption of each process using the network, thus it seems nethogs is unique at what it does.
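
nethogs can also be pointed at a specific network interface, and the screen refresh interval can be changed with -d; the interface name and delay below are just an example:

linux:~# nethogs -d 5 eth0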

Nethogs supports IPv4 and IPv6 as well as network traffic over ppp. The tool is available via the package repositories for Debian GNU/Linux 5 (Lenny) and Debian 6 (Squeeze).

To install Nethogs on CentOS and Fedora distributions, you will have to install it from source. On CentOS 5.7, the latest nethogs, which as of the time of writing this article is 0.8.0, compiles and installs fine with the usual make && make install commands.
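
For reference, the source build on CentOS boils down to something like the following; the download URL is only illustrative (grab the current tarball link from the nethogs project page), and the libpcap and ncurses development headers plus a C++ compiler are needed:

[root@centos ~]# yum install gcc-c++ libpcap-devel ncurses-devel
[root@centos ~]# wget http://downloads.sourceforge.net/nethogs/nethogs-0.8.0.tar.gz
[root@centos ~]# tar -zxvf nethogs-0.8.0.tar.gz
[root@centos ~]# cd nethogs && make && make install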

In the same line of thought about network bandwidth monitoring, another very handy tool that adds extra understanding of what kind of traffic is crossing a Linux server is jnettop.
jnettop shows which hosts/ports are taking up the most network traffic.
It is available for install via apt in Debian 5/6.
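
On Debian the install is a one-liner:

debian:~# apt-get install jnettop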

Here is a screenshot on jnettop in action:

Jnettop check network traffic in console

To install jnettop on the latest Fedora / CentOS / Slackware Linux it has to be downloaded and compiled from source via jnettop’s official wiki page.
I’ve tested the jnettop install from source on CentOS release 5.7 and it seems to compile just fine using the usual compile commands:

[root@prizebg jnettop-0.13.0]# ./configure
...
[root@prizebg jnettop-0.13.0]# make
...
[root@prizebg jnettop-0.13.0]# make install

If you need an idea of the network traffic passing through your Linux server, distinguished by tcp/udp/icmp network protocols and services like ssh / ftp / apache, then you will definitely want to take a look at nettop (if of course you are not familiar with it yet).
Nettop is not provided as a deb package in Debian and Ubuntu, whereas it is included as an rpm for CentOS and presumably Fedora.
Here is a screenshot on nettop network utility in action:

Nettop server traffic division by protocol screenshot
FreeBSD users should be happy to find out that jnettop and nettop are part of the ports tree and the two can be installed straight away; nethogs, however, does not work on FreeBSD. I searched for a utility capable of what nethogs does, but couldn’t find one.
It seems the only way on FreeBSD to track bandwidth to and from the originating process is to use a combination of the iftop and sockstat utilities. Probably there are other tools which people use to track network traffic back to the processes running on a host and to do general network monitoring; if anyone knows some good ones, please share them with me.

Using perl and sed to substitute strings in multiple files on Linux and BSD

Friday, August 26th, 2011

Using perl and sed to replace strings in files on Linux, FreeBSD, OpenBSD, NetBSD and other Unix
On many occasions when administering Linux, BSD, SunOS or any other *nix, there is a need to substitute strings inside a file or a group of files containing a certain string with another one.

The task is not too complex, and many of the senior sysadmins out there will certainly have already faced this requirement and probably have a good idea about file substitution with perl and sed; however I’m quite sure there are dozens of system administrators out there who do not know how, and still haven’t faced a situation where there is a requirement to substitute strings from a command shell or via a scripting language.

This article targets exactly those system administrators who are not 100% sysop gurus 😉

1. Substitute text strings inside files on Linux and BSD with perl

The Perl programming language was originally created to do a lot of text manipulation, and most Linux / Unix based hosts today have a working copy of perl installed; therefore using perl as a means of substituting one string in a file with another is maybe the best way to complete the task.
Another good thing about perl is that text processing with it is said to be, in most cases, a bit faster than with sed.
However it still depends on the string to be substituted; I haven’t done benchmark tests to say positively that perl is always quicker, but my common sense suggests perl will be.
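
If you're curious how the two compare on your own data, a rough check with the shell's time builtin is enough (the output is redirected so only the processing time is measured):

debian:~# time perl -pe 's/foo/bar/g' file1 > /dev/null
debian:~# time sed 's/foo/bar/g' file1 > /dev/null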

Now, enough talk; here is a very simple way to substitute a recurring text string inside a file with another chosen one:

debian:~# perl -pi -e 's/foo/bar/g' file1 file2

This will substitute the string foo with bar everywhere it’s matched in file1 and file2

However the above code is a bit “dangerous”, as no backup copy of the original files in which the string is substituted is made.
Therefore the above command should only be used when one is 100% sure about the string changes to be made.

Hence a better idea when conducting the text substitution is to also keep the original files as backups under, let’s say, a .bak extension. To achieve that I use perl as follows:

freebsd# perl -i.bak -p -e 's/syzdarma/magdanoz/g;' file1 file2

This command creates copies of the original files file1 and file2 under the names file1.bak and file2.bak; in the files file1 and file2 every occurrence of the text string syzdarma will get substituted with magdanoz, using the option /g which means substitute globally.

2. Substitute string in all files inside directory using perl on Linux and BSD

Every now and then there is a need to do manipulations on large numbers of files. I can’t right now remember a good scenario where I had to change all occurrences of a matching string to another one in all files located inside a directory, but anyhow I’ve done this on a number of occasions.

A good way to do a mass file string substitution on Linux and BSD hosts equipped with a bash shell is via the commands:

debian:/root/textfiles:# for i in $(echo *.txt); do perl -i.bak -p -e 's/old_string/new_string/g;' $i; done

Where the text files had the default txt file extension .txt

The above bash loop goes through each of the files located in /root/textfiles and substitutes everywhere (globally) the old_string with new_string.

Another alternative to the above example, to replace a recurring text string in all files across multiple directories, is possible using a combination of the shell commands grep, perl, sort, uniq and xargs.
Let’s say one wants to match files containing a custom string everywhere inside the root directory and all descendant directories and substitute it with another one; this can be done with the cmd:

debian:~# grep -R --files-with-matches 'old_string' / | sort | uniq | xargs perl -pi~ -e 's/old_string/new_string/g'

This command will look up the string old_string in all files under the / (root) directory and in case of an occurrence will substitute it with new_string (this command’s idea was borrowed from http://linuxadmin.org, so thanks).

Using the combination of 5 commands, however is not very wise in terms of efficiency.

Therefore, to save some system resources, it’s better in terms of efficiency to take advantage of the find command in combination with xargs; here is how:

debian:~# find / | xargs grep 'old_string' -sl |uniq | xargs perl -pi~ -e 's/old_string/new_string/g'

Once again the find command example will do exactly the same as the substitute method with grep -R …

As enough is said about the way to substitute text strings inside files using perl, I will further explain how text strings can be substituted using sed

The main reason why using sed could be a better choice in some cases is that not every Unix is equipped by default with a perl interpreter. In general the number of servers which have sed installed, compared to the ones with a perl language interpreter, is surely higher.

3. Substitute text strings inside files on Linux and BSD with sed stream editor

On many occasions, where a website is hosted, one needs to quickly change a string inside all files located in a directory, for example to resolve issues with static URLs directly encoded in the HTML.
To achieve this task here is a bit of code using two little bash loops in conjunction with the sed, echo and mv commands:

debian:/var/www/website# for i in $(ls -1); do cat $i |sed -e "s#index.htm#http://www.webdomain.com/#g">$i.new; done
debian:/var/www/website# for i in $(ls *.new); do mv $i $(echo $i |sed -e "s#.new##g"); done

The sed expression above, sed -e "s#index.htm#http://www.webdomain.com/#g", instructs sed to substitute every appearance of the text string index.htm with the new text string http://www.webdomain.com/.

The first for loop creates copies of all the files with the substituted string as file1.new, file2.new, file3.new etc.
The second for loop uses mv to overwrite the original input files file1, file2, file3, etc. with the newly created file1.new, file2.new, file3.new.

There is a way shorter way to accomplish the same text substitution task with a simpler one-liner, using only sed and the shell’s globbing; here is how:

debian:/var/www/website# sed -i 's/old_string/new_string/g' *

The above command will change old_string to new_string inside all files in the directory /var/www/website.

When the change has to be made to fewer than about 1024 files this method might be more efficient; however when a text substitution has to be done on, let’s say, 5000+ files, the above simplistic version will not work. An Argument list too long error will prevent sed -i 's/old_string/new_string/g' from completing its task.
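
A simple way around the argument list limit is to let find feed the file names to sed in batches via xargs; the -print0 / -0 pair keeps file names with spaces safe:

debian:/var/www/website# find . -type f -print0 | xargs -0 sed -i 's/old_string/new_string/g'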

The above two-line for loop should also work without problems on FreeBSD and the rest of the BSD derivatives, though I have not tested it yet, hence any feedback from FreeBSD guys is most welcome.

Consider that in order to have the for loop commands work on FreeBSD or NetBSD, they have to be run under a bash shell.
That’s all folks, thanks the Lord for letting me write this nice article; I hope it gives some insight into how multiple-file text replacement on Unix works.
Cheers 😉

How to tune MySQL Server to increase MySQL performance using mysqltuner.pl and Tuning-primer.sh

Tuesday, August 9th, 2011

MySQL Easy performance tuning with mysqltuner.pl and Tuning-primer.sh scripts

Improving MySQL performance is crucial for improving website response times, reducing server load and improving the overall work efficiency of a MySQL database server.

I’ve seen, however, many Linux system administrators who belittle or completely miss the significance of tuning a newly installed MySQL server.
The reason behind that is probably the fact that many people think tweaking MySQL config variables would not significantly improve performance and does not pay back the optimization effort. Moreover there is a bunch of system admins who have to take care of numerous services, so they don’t have the time to gain good knowledge of MySQL server optimization.
Thus many admins and webmasters nowadays think optimization depends mostly on the website programmers.
It’s also sometimes falsely believed that optimizing a MySQL server could reduce the overall server stability.

With the boom of Internet website building and internet marketing, many webmasters emerged, and almost anybody with little or no knowledge of GNU/Linux or PHP can start his online store, open a blog or create a website powered by some CMS like Joomla.
Thus nowadays many servers don’t even have a hired system administrator but are managed by people whose knowledge of *nix is next to zero; this is another reason why dozens of MySQL installations online are default ones and do not take good advantage of the server hardware.

The increase in website visitors leads people’s expectations of the hardware to grow as well, thus many companies simply buy new hardware instead of taking a little time to investigate how the current server hardware could be utilized better.
In that manner of thought I thought it would be a good idea to write this small article on tuning MySQL servers with two scripts: Tuning-primer.sh and mysqltuner.pl.
The scripts are ultra easy to use and require only minimal knowledge of MySQL and Linux (or *BSD *nix if the SQL server is running on BSD).
Tuning-primer.sh and mysqltuner.pl are therefore suitable for quick MySQL server optimizations even by people who are not computer experts.

I use these two scripts for MySQL server optimization on almost every newly configured GNU/Linux host with a MySQL backend.
Using the scripts comes down to simply downloading them with wget, lynx, curl or some other web client and executing them on the host which is already running the MySQL server.
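
Getting mysqltuner.pl onto the server is, for example, as simple as the following (the download URL is only illustrative; take the current link from the project page mentioned in the script output):

debian:~# wget http://mysqltuner.com/mysqltuner.pl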

Here is an example of how simple it is to run the scripts to Optimize MySQL:

debian:~# perl mysqltuner.pl
>> MySQLTuner 1.2.0 - Major Hayden <major@mhtx.net>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering

——– General Statistics ————————————————–
[–] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.1.49-3
[OK] Operating on 64-bit architecture

——– Storage Engine Statistics ——————————————-
[–] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[–] Data in MyISAM tables: 6G (Tables: 952)
[!!] InnoDB is enabled but isn’t being used
[!!] Total fragmented tables: 12

——– Security Recommendations ——————————————-
[OK] All database users have passwords assigned

——– Performance Metrics ————————————————-
[–] Up for: 1d 2h 3m 35s (68M q [732.193 qps], 610K conn, TX: 49B, RX: 11B)
[–] Reads / Writes: 76% / 24%
[–] Total buffers: 512.0M global + 2.8M per thread (2000 max threads)
[OK] Maximum possible memory usage: 6.0G (25% of installed RAM)
[OK] Slow queries: 0% (3K/68M)
[OK] Highest usage of available connections: 7% (159/2000)
[OK] Key buffer size / total MyISAM indexes: 230.0M/1.7G
[OK] Key buffer hit rate: 97.8% (11B cached / 257M reads)
[OK] Query cache efficiency: 76.6% (46M cached / 61M selects)
[!!] Query cache prunes per day: 1822075
[OK] Sorts requiring temporary tables: 0% (1K temp sorts / 2M sorts)
[!!] Joins performed without indexes: 63635
[OK] Temporary tables created on disk: 1% (26K on disk / 2M total)
[OK] Thread cache hit rate: 99% (159 created / 610K connections)
[!!] Table cache hit rate: 4% (1K open / 43K opened)
[OK] Open file limit used: 17% (2K/16K)
[OK] Table locks acquired immediately: 99% (36M immediate / 36M locks)

——– Recommendations —————————————————–
General recommendations:
Add skip-innodb to MySQL configuration to disable InnoDB
Run OPTIMIZE TABLE to defragment tables for better performance
Enable the slow query log to troubleshoot bad queries
Increasing the query_cache size over 128M may reduce performance
Adjust your join queries to always utilize indexes
Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
query_cache_size (> 256M) [see warning above]
join_buffer_size (> 256.0K, or always use indexes with joins)
table_cache (> 7200)

You see there are plenty of things the script reports; for the inexperienced, most of the information can happily be skipped without needing to understand the cryptic output. The section of importance here is Recommendations, which for some clarity I’ve made show up in bold.

The most important things in the Recommendations output are actually the lines which give suggestions for increasing certain MySQL variables. In this example case these are the last three variables:
query_cache_size,
join_buffer_size and
table_cache

All of these variables are tuned from /etc/mysql/my.cnf on Debian and derivative distros, and from /etc/my.cnf on RHEL, CentOS, Fedora and the other RPM based Linux distributions.

On some custom server installs my.cnf is also located in /usr/local/mysql/etc/ or some other, a bit more non-standard, location 😉

Anyway, now having the recommendations from the script, it’s necessary to edit my.cnf and try doubling the values for the suggested variables.

First, I check whether all the suggested variables are present in my config with grep; if they’re not, I’ll simply add the variable with double the suggested value.
P.S.: One note here: sometimes a configured value is the MySQL server default and does not have a record in my.cnf at all.

debian:~# grep -E 'query_cache_size|join_buffer_size|table_cache' /etc/mysql/my.cnf
table_cache = 7200
query_cache_size = 256M
join_buffer_size = 262144

All of my variables are present in the config, so now I edit my.cnf and set the values to:
table_cache = 14400
query_cache_size = 512M
join_buffer_size = 524288

However, I always preserve the old variable values, because sometimes raising a value might create a problem and the MySQL server might be unable to restart properly.
Thus before adding the new values make sure the old ones are commented out with #, e.g.:
#table_cache = 7200
#query_cache_size = 256M
#join_buffer_size = 262144

I would recommend vim as the editor of choice while editing my.cnf as vim completely rox 😉 If you’re not acquainted with vim use nano or mcedit or your editor of choice 😉 :

debian:~# vim /etc/mysql/my.cnf
...

Assuming that the changes are made, it’s time to restart MySQL to make sure the new values are read by the SQL server.

debian:~# /etc/init.d/mysql restart
* Stopping MySQL database server mysqld [ OK ]
* Starting MySQL database server mysqld [ OK ]
Checking for tables which need an upgrade, are corrupt or were not closed cleanly.

If the MySQL server fails to restart, however, immediately revert the changed variables back to the commented values and restart once again via the mysql init script to make the server load.

Afterwards add the new values back one by one until you find out which one is causing mysqld to fail.
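
When hunting for the offending variable, the MySQL error log usually says plainly which setting it choked on; on Debian it typically lives under /var/log/mysql/ (the exact path may vary between distributions):

debian:~# tail -50 /var/log/mysql/error.log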

Now, the second script (Tuning-primer.sh) is also really nice when MySQL performance optimizations are necessary; however it’s less portable (as it’s written in the bash scripting language).
Consider that running this script on different GNU/Linux distributions (especially the newer ones) might produce errors.
Tuning-primer.sh requires some minor code changes to be able to run on FreeBSD, NetBSD and OpenBSD *nices.

The way Tuning-primer.sh works is precisely like mysqltuner.pl: one runs it, it gives some info about the currently running MySQL server and, based on certain factors, gives suggestions on how increasing or decreasing certain my.cnf variables could reduce SQL query bottlenecks, solve table locking issues and generally improve INSERT and UPDATE query times.

Here is an example output from tuning-primer.sh run on another server:

server:~# wget http://www.pc-freak.net/files/Tuning-primer.sh
...
server:~# sh Tuning-primer.sh
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -

MySQL Version 5.0.51a-24+lenny5 x86_64

Uptime = 8 days 10 hrs 19 min 8 sec
Avg. qps = 179
Total Questions = 130851322
Threads Connected = 1

Server has been running for over 48hrs.
It should be safe to follow these recommendations

To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

SLOW QUERIES
Current long_query_time = 1 sec.
You have 16498 out of 130851322 that take longer than 1 sec. to complete
The slow query log is NOT enabled.
Your long_query_time seems to be fine

MAX CONNECTIONS
Current max_connections = 2000
Current threads_connected = 1
Historic max_used_connections = 85
The number of used connections is 4% of the configured maximum.
Your max_connections variable seems to be fine.

WORKER THREADS
Current thread_cache_size = 128
Current threads_cached = 84
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine

MEMORY USAGE
Tuning-primer.sh: line 994: let: expression expected
Max Memory Ever Allocated : 741 M
Configured Max Memory Limit : 5049 M
Total System Memory : 23640 M

KEY BUFFER
Current MyISAM index space = 1646 M
Current key_buffer_size = 476 M
Key cache miss rate is 1 / 56
Key buffer fill ratio = 90.00 %
You could increase key_buffer_size
It is safe to raise this up to 1/4 of total system memory;
assuming this is a dedicated database server.

QUERY CACHE
Query cache is enabled
Current query_cache_size = 64 M
Current query_cache_used = 38 M
Current Query cache fill ratio = 59.90 %

SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 256.00 K
Sort buffer seems to be fine

JOINS
Current join_buffer_size = 128.00 K
You have had 111560 queries where a join could not use an index properly
You have had 91 joins without keys that check for key usage after each row
You should enable “log-queries-not-using-indexes”
Then look for non indexed joins in the slow query log.
If you are unable to optimize your queries you may want to increase your
join_buffer_size to accommodate larger joins in one pass.

TABLE CACHE
Current table_cache value = 3600 tables
You have a total of 798 tables
You have 1904 open tables.
The table_cache value seems to be fine

TEMP TABLES
Current tmp_table_size = 128 M
1% of tmp tables created were disk based
Created disk tmp tables ratio seems fine

TABLE SCANS
Current read_buffer_size = 128.00 K
Current table scan ratio = 797 : 1
read_buffer_size seems to be fine

TABLE LOCKING
Current Lock Wait ratio = 1 : 1782
You may benefit from selective use of InnoDB.

As seen from the script output, there are certain variables which might be increased a bit for better SQL performance; one such variable, as suggested, is key_buffer_size ("You could increase key_buffer_size").

Now the steps to apply the tunings to my.cnf are precisely the same as with mysqltuner.pl, e.g.:
1. Preserve the old config variables which will be changed by commenting them out
2. Double the values of the variables in my.cnf suggested by the script
3. Restart the MySQL server via the /etc/init.d/mysql restart cmd.
4. If MySQL runs fine, monitor its performance with mtop or mytop for at least 15 minutes to half an hour (see the example below).
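
A minimal example of the monitoring mentioned in step 4, using mytop; the credentials below are placeholders, and on Debian mytop is available straight from apt:

debian:~# apt-get install mytop
debian:~# mytop -u root -p 'YOUR_MYSQL_ROOT_PASSWORD' -d mysql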

If all is fine, run the tuning scripts once again to see if there are further improvement suggestions; if there are, follow the four-step procedure described above once again.

It’s also a good idea to re-run these scripts on the server periodically, like once every few months, as changes in the amount and type of SQL queries will require changes to MySQL’s operational variables.
The authors of these nice scripts have done a great job and have saved us tons of nerves, time, downtime and money spent on meaningless hardware. So big thanks for the awesome scripts, guys 😉
Finally, after a hopefully successful deployment of the changes, enjoy the increased SQL server performance 😉

How to enable AUTO fsck (ext3, ext4, reiserfs, LVM filesystems) checking on Linux boot through /etc/fstab

Tuesday, July 12th, 2011

How to auto FSCK manual fsck screenshot

Are you an administrator of servers, and it happens that a server is DOWN?
You request the Data Center to reboot it, however suddenly the server fails to boot properly and you have to request an IPKVM or some web Java interface to directly access the server's physical terminal …

This is a very normal admin scenario, and many people who have worked in the field of remote system administration (like me) have experienced such bad times more than once.

Sadly enough, only an insignificant number of administrators try to do their best to reduce these downtimes and resolve clients' issues quickly; the rest prefer spending the time playing the ztype! game or watching some porn website 😉

Anyway, there are plenty of things, like automatic server reboot on crash with a software watchdog etc., that we as sysadmins can do to reduce server downtime and most of the manual human interaction at server boot time.

In that manner of thought, a very common thing that many server admins forget (or don't know about) when setting up a new Linux server is to enable all the server partition filesystems to be auto-fscked at server boot time.

If the auto filesystem check option is not enabled in Linux, the server filesystems are not automatically scanned and fixed for filesystem inode inconsistencies.
Even though the filesystems are tuned to get checked automatically every 38 system reboots or so, if some kind of filesystem error is found that requires manual confirmation, the boot process is interrupted and the admin ends up with a server which is not reachable remotely via ssh!
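
How often a given ext3 partition is auto-checked can be inspected and adjusted with tune2fs; e.g. to look at the current counters and then set a check every 30 mounts or every 3 months (whichever comes first) on /dev/sda1:

linux:~# tune2fs -l /dev/sda1 | grep -i 'mount count'
linux:~# tune2fs -c 30 -i 3m /dev/sda1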

For the remote system administrator, these are terrible times of waiting, prayers and hopes that the server hardware is fine 😉 as well as being on hold to get a KVM in order to enter the server manually and type the necessary input at the fsck prompt.

Many of these bad times can be completely avoided with a very simple fix through /etc/fstab, by enabling all server partitions containing any filesystem to be automatically checked and fixed if inconsistencies or errors are found by the fsck.ext3, fsck.ext4, fsck.reiserfs etc. commands.

A very typical default /etc/fstab file you will find on many servers should look something like:

/dev/sda8 / ext3 errors=remount-ro 0 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sda1 /home ext3 defaults 0 0

Notice the line:
/dev/sda1 /home ext3 defaults 0 0

The first column in the example contains the device name, the second one its mount point, third its filesystem type, fourth the mount options, fifth (a number) dump options, and sixth (another number) filesystem check options. Let’s take a closer look at this stuff.

The one which is interesting for enabling auto fsck checking and error fixing is usually the last, sixth field (the filesystem check option), which in the above example equals 0.

When the filesystem check option equals 0, the auto fsck and repair for the respective filesystem is disabled.
Some time in the past the dump backup option (the 5th field in the example) was also used, but as far as I can tell it's not that important in modern GNU/Linux distributions.

Now, with the above sample fstab, in order to enable fsck file checking on Linux boot for /dev/sda1, we will need to change that line's filesystem check option to 2, e.g. the line would afterwards look like:

/dev/sda1 /home ext3 defaults 0 2

Setting 2 as the filesystem check option is necessary for every filesystem which is not mounted as the root filesystem /

In above example /etc/fstab you already see that auto filesystem fsck is enabled for root partition:

/dev/sda8 / ext3 errors=remount-ro 0 1
(notice the 1 in the end of the line)

Finally a modified version of the default sample /etc/fstab which will check the extra /dev/sda1 /home partition would look like so:

/dev/sda8 / ext3 errors=remount-ro 0 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/sda1 /home ext3 defaults 0 2

Making sure all Linux server partitions have the auto filesystem check option enabled is something absolutely necessary!
Enabling auto fsck on servers always makes me sleep calmer 😉
Hope it helps you too. 🙂

Running sudo commands simultaneously on multiple servers with SSHSUDO

Tuesday, June 21st, 2011

ssh multiple server command execute
I was just recommended a tool by a friend which is absolutely nifty for system administrators.

The tool is called sshsudo and the project is hosted on http://code.google.com/p/sshsudo/.

Let’s say you’re responsible for 10 servers with the same operating system, let’s say CentOS 4, and you want to install tcpdump and vnstat on all of them without logging in to each of the nodes one by one.

This task is really simple with using sshsudo.
A typical use of sshsudo is:


[root@centos root]# sshsudo -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 yum install tcpdump vnstat

Consequently a password prompt will appear on the screen;
Please enter your password:

If all the servers are configured with the same administrator root password, then typing the root password just once will be enough and the command will get issued on all the servers.

The program can also be used to run a custom admin script by automatically copying (uploading) the script to all the servers and then executing it on them.

One typical use to run a custom bash shell script on ten servers would be:


[root@centos root]# sshsudo -r -u root \
comp1,comp2,comp3,comp4,comp5,comp6,comp7,comp8,comp9,comp10 /pathtoscript/script.sh
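
If sshsudo is not at hand, a plain bash loop over ssh does a similar (though more manual) job; you will be prompted for a password per host unless ssh keys are set up:

[root@centos root]# for host in comp1 comp2 comp3; do ssh root@$host 'yum -y install tcpdump vnstat'; done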

I’m glad I found this handy tool 😉

Cloud Computing a possible threat to users privacy and system administrator employment

Monday, March 28th, 2011

Cloud Computing screenshot

If you’re employed in an IT branch, are an IT hobbyist or a tech geek, you have certainly heard about the latest trend in Internet and networking technologies, the so-called Cloud Computing.

Most of the articles available in newspapers and online have seriously praised cloud computing and put their hopes in a better future through it.
But is cloud computing really as good as promised? I seriously doubt that.
Let’s think about it: what is a cloud? It’s a cluster of computers which are connected to work as one.
No person can precisely say where exactly on the cloud cluster a given piece of stored information is located (not even the administrator!)

The data stored on the cluster is the property of a few single organizations, let’s say Microsoft, Amazon etc., so we as users no longer have physical possession of our data (in case we use the cloud).

On the other hand the number of system administrators needed to administer a huge cluster is dramatically decreased; the everyday system administrator, who needs to check a few webservers and a mail server on a daily basis, cache web data with a squid proxy cache or just restart a server, will no longer be necessary.

Therefore a few million people would lose their jobs; the people necessary to administer a cluster will probably be no more than a few thousand, as the clouds are so big that no more than a few clouds will exist on the net.

The idea behind the cluster is that we the users store and retrieve our desktops and boot our operating system from the cluster.
Even loading a simple webpage will have to retrieve its data from the cluster.

Therefore it looks like in the future the cloud computing and the internet are about to become one and the same thing. The internet might become a single super cluster where all users would connect with their user ids and do have full access to the information inside.

Technologies like OpenID are trying to make user identification uniform; I assume a similar uniform user identification will be used in the future in a super cloud where everybody who enters will have access to his/her data and will have the option to access any other data online.

The desire of humans and business for transparency would probably end up, one day, in people wanting to share every single bit of information.
Even though it looks very cool for a sci-fi movie, it’s seriously scary!

Cloud computing expenses, as they’re really high, would be affordable only for multi-national corporations like Google and Microsoft.

Therefore small and middle-sized IT businesses (network building, expansion, network and server system integration etc.) would gradually collapse and die.

These are only a small handful of concerns, but in reality the problems that cloud computing might create are far more severe.
We the people should think seriously and try to oppose cloud computing while we still can! It might even be a good idea for special legislation aimed at limiting cloud computing to be introduced, so that it can be used only inside the boundary of prescribed limitations.

Institutions like the European Parliament should be more concerned about the issues which the use of cloud computing will bring; EU legislation should be voted on very soon, and binding contracts should stop clouds from expanding and taking over the middle-sized IT business.

How to set password on a mysql server with no password via mysql command line interface

Monday, March 28th, 2011

The MySQL server offered by many Linux distributions comes without a default password set; in practice you can freely log in to the MySQL server on a plain installation on Debian, Ubuntu or Fedora by simply issuing:

linux:~# mysql -u root
Enter password:

Pressing Enter will let you straight into the MySQL server. The same kind of behaviour is probably also true on BSD based systems and many other Unixes which have MySQL pre-installed or available as a package.

I remember that in the past I’ve even seen production MySQL servers, running on hosts serving CMS based websites, which didn’t have a root password set.

Some administrators don’t take the time to think about the implications of a password-less MySQL installation and, being in a hurry, simply leave the server without an administrator password.
This is very common for the most lame and uneducated ones. Many novice system administrators think that installing phpMyAdmin and configuring a password on its web interface is equal to setting a password on the MySQL server (daemon) itself.

Thus, for all the uneducated ones and for all those who have already noticed that their newly installed MySQL server doesn’t have a password set, I’ve decided to give an example of how a new MySQL server password can be set and how an existing MySQL server password can be changed to a new one.

For any password manipulation the mysql-client package provides a very handy instrument called mysqladmin; among mysqladmin’s many capabilities are setting a new MySQL server admin (root) password and changing a previously set MySQL server password to a new one.

1. Here is how you can set a new MySQL server password:

mysqladmin -u root password 'YOURasddsaPASSWORDjqweHERE'

2. If you need to change an already existing mysql password you need to provide just one more argument to mysqladmin:

mysqladmin -u root -p password 'YOURasdfdsaNEWasddsaPASSWORD_HERE'
Enter password:

When the Enter password: prompt appears you will be required to fill in the original MySQL server root password, after which the password will be changed to the string passed on the mysqladmin command line: ‘YOURasdfdsaNEWasddsaPASSWORD_HERE’.
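
For completeness, the same password change can also be done from inside the mysql command line client itself (syntax valid for the MySQL 5.x servers of that era; the new password string is of course a placeholder):

linux:~# mysql -u root -p
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('YOUR_NEW_PASSWORD_HERE');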

That’s all; now you have either set a new password for the MySQL server or changed your previous one.