Posts Tagged ‘run’

Mail send from command line on Linux and *BSD servers – useful for scripting

Monday, September 10th, 2018

mail-send-email-from-command-line-on-linux-and-freebsd-operating-systems-logo

Historically, email sending was very different from what most people use in the office today: there were no heavy email clients such as Outlook Express, no MS Exchange, no client capabilities for Calendar and Meeting scheduling as found in most modern corporate offices that depend on products such as Office 365 (I would call it a connectedHell 365 days a year!).

There were no free webmail and POP3 / IMAP providers such as Mail.Yahoo.com, Gmail.com, Hotmail.com, Yandex.com, RediffMail, Mail.com; the list goes on and on.
Nope, back in the day emails did what they were originally supposed to do, like the postal service in real life: simply send and receive messages.

For those who remember those charming times, people used BBS-es (which were basically shared home systems set up as servers), one of the few University internal student email accounts, or, in the case of crazy sysadmins, local DMZ-ed network email servers that received their notification and warning logs about daemon (service) messages, and it was common to read the email directly with the mail (mailx) text command or with custom written scripts. It was also not uncommon for mailx to be used heavily to send notification messages on events triggered from logs. Oh, life was simple and clear back then, and even though today email can still be used in a similar fashion by hard-core old school sysadmins and DevOps for simple shell scripting tasks or report cron jobs, such usage already belongs to deep history.

There are plenty of ways to send email in text format directly from a GNU / Linux / *BSD server to another remote MTA node (assuming it has a properly configured relay server, be it Exim or Postfix).

In this article I will try to rewind some of the UNIX history by pinpointing a few of the most common ways to send quick emails directly from a remote server connection terminal, let's say a cheap few-cents VPS, over something like SSH or Telnet.
 

1. Using the mail command client (part of bsd-mailx on Debian).
 

In my previous article Linux: "bash mail command not found" error fix
I ended the article with a short explanation of how this is done, but I will repeat myself one more time here for the sake of clarity.

root@linux:~# echo "Your Sample Message Body" | mail -s "Whatever … Message Subject" remote_receiver@remote-server-email-address.com


The mail command will connect to the locally configured MTA on TCP port 25 and send through it. If the local MTA is misconfigured, doesn't have proper MX / PTR DNS records, or is not configured as a relay SMTP, the mail will not get delivered; otherwise the sent email should arrive properly at the remote recipient address.
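If you are not sure whether a local MTA is listening at all, a quick check (a minimal sketch, assuming netstat or the newer ss tool is present on the box) is:

root@linux:~# netstat -tlpn | grep ':25'
root@linux:~# ss -tlnp | grep ':25'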

How to send HTML formatted emails using the mailx command from the Linux console / terminal shell on a remote server through SSH?

Connect to remote SSH server (VPS), dedicated server, home Linux router etc. and run:

 

root@linux:~# mailx -a 'Content-Type: text/html' \
      -s "This is advanced mailx indeed!" \
      "first_email_to_send_to@gmail.com, mail_recipient_2@yahoo.com" < email_content.html

 


email_content.html should be properly formatted (ideally W3C standards compliant) HTML.

Here is an example email_content.html (skeleton file)

 

    To: your_customer@gmail.com
    Subject: This is an HTML message
    From: marketing@your_company.com
    Content-Type: text/html; charset="utf8"

    <html>
    <body>
    <div style="
        background-color:
        #abcdef; width: 300px;
        height: 300px;
        ">
    </div>
Whatever text mixed with valid email HTML tags here.
    </body>
    </html>


The above command sends to two email addresses; however, if you have a text file with a list of recipients, you can easily loop over it with a bash shell for loop and send to multiple addresses read from, let's say, email_addresses_list.txt, as shown in the sketch below.
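A minimal sketch of such a loop (assuming email_addresses_list.txt holds one address per line; the file name is just an example):

while read -r recipient; do
    echo "Your Sample Message Body" | mail -s "Whatever … Message Subject" "$recipient"
done < email_addresses_list.txt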

To further advance the one-liner you may also want to provide an email attachment, let's say the file email_archive.rar, by using the -A email_archive.rar argument.

 

root@linux:~# mailx -a 'Content-Type: text/html' \
      -s "This is advanced mailx indeed!" -A ~/email_archive.rar \
      "first_email_to_send_to@gmail.com, mail_recipient_2@yahoo.com" < email_content.html

 

For those familiar with Dan Bernstein's Qmail MTA (which, even though a bit obsolete, is still a security and stability beast among email servers), the mailx command had to be substituted with a qmail-specific one in order to be able to send via the qmail MTA daemon.
 

2. Using sendmail command to send email
 

Do you remember that heavy, hard to configure MTA monster sendmail? It was, and to this very day still is, the default Mail Transport Agent for Slackware Linux.

Here is how we were supposed to send mail with it:

 

[root@sendmail-host ~]# vim email_content_to_be_delivered.txt

 

Content of file should be something like:

Subject: This Email is sent from UNIX Terminal Email

Hi, this Email was typed in a file and sent via the sendmail console email client
(part of the sendmail mail server)

It is really fun to go back in the pre-history of Mail Content creation 🙂

 

[root@sendmail-host ~]# sendmail -v user_name@remote-mail-domain.com  < /tmp/email_content_to_be_delivered.txt

 

The -v argument makes the communication between the mail server and your mail transfer agent visible.
 

3. Using ssmtp command to send mail
 

The ssmtp MTA and its included shell command were used historically because they are pretty straightforward: you just launch it on the command line, type your whole email and subject, and ship it (by pressing the CTRL + D key combination).

To give it a try you can do:

 

root@linux:~# apt-get install ssmtp
Reading package lists… Done
Building dependency tree       
Reading state information… Done
The following additional packages will be installed:
  libgnutls-openssl27
The following packages will be REMOVED:
  exim4-base exim4-config exim4-daemon-heavy
The following NEW packages will be installed:
  libgnutls-openssl27 ssmtp
0 upgraded, 2 newly installed, 3 to remove and 1 not upgraded.
Need to get 239 kB of archives.
After this operation, 3,697 kB disk space will be freed.
Do you want to continue? [Y/n] Y
Get:1 http://ftp.us.debian.org/debian stretch/main amd64 ssmtp amd64 2.64-8+b2 [54.2 kB]
Get:2 http://ftp.us.debian.org/debian stretch/main amd64 libgnutls-openssl27 amd64 3.5.8-5+deb9u3 [184 kB]
Fetched 239 kB in 2s (88.5 kB/s)         
Preconfiguring packages …
dpkg: exim4-daemon-heavy: dependency problems, but removing anyway as you requested:
 mailutils depends on default-mta | mail-transport-agent; however:
  Package default-mta is not installed.
  Package mail-transport-agent is not installed.
  Package exim4-daemon-heavy which provides mail-transport-agent is to be removed.

 

(Reading database … 169307 files and directories currently installed.)
Removing exim4-daemon-heavy (4.89-2+deb9u3) …
dpkg: exim4-config: dependency problems, but removing anyway as you requested:
 exim4-base depends on exim4-config (>= 4.82) | exim4-config-2; however:
  Package exim4-config is to be removed.
  Package exim4-config-2 is not installed.
  Package exim4-config which provides exim4-config-2 is to be removed.
 exim4-base depends on exim4-config (>= 4.82) | exim4-config-2; however:
  Package exim4-config is to be removed.
  Package exim4-config-2 is not installed.
  Package exim4-config which provides exim4-config-2 is to be removed.

Removing exim4-config (4.89-2+deb9u3) …
Selecting previously unselected package ssmtp.
(Reading database … 169247 files and directories currently installed.)
Preparing to unpack …/ssmtp_2.64-8+b2_amd64.deb …
Unpacking ssmtp (2.64-8+b2) …
(Reading database … 169268 files and directories currently installed.)
Removing exim4-base (4.89-2+deb9u3) …
Selecting previously unselected package libgnutls-openssl27:amd64.
(Reading database … 169195 files and directories currently installed.)
Preparing to unpack …/libgnutls-openssl27_3.5.8-5+deb9u3_amd64.deb …
Unpacking libgnutls-openssl27:amd64 (3.5.8-5+deb9u3) …
Processing triggers for libc-bin (2.24-11+deb9u3) …
Setting up libgnutls-openssl27:amd64 (3.5.8-5+deb9u3) …
Setting up ssmtp (2.64-8+b2) …
Processing triggers for man-db (2.7.6.1-2) …
Processing triggers for libc-bin (2.24-11+deb9u3) …

 

As you can see from the above output, the local default Debian Linux MTA Exim is removed …

Let's send a simple test email …

 

hipo@linux:~# ssmtp user@remote-mail-server.com
Subject: Simply Test SSMTP Email
This Email was send just as a test using SSMTP obscure client
via SMTP server.
^d

 

What is notable about ssmtp is that, even though it is so obsolete today, it supports STARTTLS (encryption of the email communication), which is configured via its config file:

 

/etc/ssmtp/ssmtp.conf
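A minimal /etc/ssmtp/ssmtp.conf sketch with STARTTLS enabled might look like the one below (the mail hub host and credentials are placeholders you have to adapt):

# /etc/ssmtp/ssmtp.conf - minimal example, all values are placeholders
root=postmaster@your-domain.com
mailhub=mail.your-provider.com:587
UseSTARTTLS=YES
AuthUser=your_login@your-domain.com
AuthPass=your_password
FromLineOverride=YES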

 

4. Send Email from terminal using Mutt client
 

Mutt was, and still is, one of the Swiss Army knives among the most used console text email clients, along with Alpine and Fetchmail; to learn more about it read here.

Mutt supports reading / sending mail from multiple mailboxes, can fetch mail over the IMAP and POP3 protocols, and was a serious step forward compared to mailx. Its syntax pretty much resembles the mailx commands.

 

root@linux:~# mutt -s "Test Email" user@example.com < /dev/null

 

Send an email including an attachment, for example a 15 megabyte MySQL backup of the SquirrelMail webmail database:

 

root@linux:~# mutt  -s "This is last backup small sized database" -a /home/backups/backup_db.sql user@remote-mail-server.com < /dev/null

 


5. Using simple telnet to test and send email (verify existence of email on remote SMTP)
 

As a mail server sysadmin this is one of my favourite ways to test whether I have a server properly configured, and sometimes, just for the sake of fun, I have used it as a hack to send my mail 🙂
telnet is and will always be a great tool for troubleshooting SMTP issues.
 

It is very useful for testing whether a remote SMTP TCP port 25 is open or whether a local / remote server firewall prevents connections to the MTA.

Below is a connect-and-send example using telnet to my local SMTP on pc-freak.net (QMail powered (R) 🙂 )

sending-email-using-telnet-command-howto-screenshot

 

root@pcfreak:~# telnet localhost 25
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP
HELO mail.pc-freak.net
250 This is Mail Pc-Freak.NET
MAIL FROM:<hipo@pc-freak.net>
250 ok
RCPT TO:<roots_bg@yahoo.com>
250 ok
DATA
354 go ahead
Subject: This is a test subject

 

This is just a test mail send through telnet
.
250 ok 1536440787 qp 28058
^]
telnet>

 

Note that the returned messages are native to qmail; a Postfix server would return slightly different content. Here is another test example against a remote SMTP running Sendmail or Postfix.

 

root@pcfreak:~# telnet mail.servername.com 25
Trying 127.0.0.1…
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 mail.servername.com ESMTP Sendmail 8.13.8/8.13.8; Tue, 22 Oct 2013 05:05:59 -0400
HELO yahoo.com
250 mail.servername.com Hello mail.servername.com [127.0.0.1], pleased to meet you
mail from: systemexec@gmail.com
250 2.1.0 hipo@pc-freak.net… Sender ok
rcpt to: hip0d@yandex.ru
250 2.1.5 hip0d@yandex.ru… Recipient ok
data
354 Enter mail, end with "." on a line by itself
Hey
This is test email only

 

Thanks
.
250 2.0.0 r9M95xgc014513 Message accepted for delivery
quit
221 2.0.0 mail.servername.com closing connection
Connection closed by foreign host.


Telnet is also handy if you want to know whether a remote MTA server has a certain mailbox: simply try to send to that email address and check the output returned by the mail server (note that the returned message depends on the remote MTA version, and many qmails are configured not to give out information during the initial SMTP handshake, instead returning a MAILER DAEMON failure error to your sender address). Some MX servers are still vulnerable to this kind of probing; historically dreamhost.com was. The attack screenshot below was made before dreamhost.com fixed the brute force email enumeration issue; a rough illustration of such a check follows after the screenshot.

Terminal-Verify-existing-Email-with-telnet
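For illustration only, a typical Postfix style rejection for a non-existing mailbox might look roughly like this (host names and addresses below are placeholders):

root@pcfreak:~# telnet mail.remote-server.com 25
220 mail.remote-server.com ESMTP
HELO mail.pc-freak.net
250 mail.remote-server.com
MAIL FROM:<hipo@pc-freak.net>
250 2.1.0 Ok
RCPT TO:<nonexistent_user@remote-server.com>
550 5.1.1 <nonexistent_user@remote-server.com>: Recipient address rejected: User unknown in local recipient table
QUIT
221 2.0.0 Bye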

6. Using simple netcat TCP/IP Swiss Army Knife to test and send email in console

netcat-logo-a-swiff-army-knife-of-the-hacker-and-security-expert-logo
Another tool besides telnet for testing a remote / local SMTP is netcat (a tool for reading and writing data across TCP and UDP connections).

The way to do it is analogous, but since netcat is not present on most Linux OSes by default, you need to install it through the package manager first, be it apt or yum etc.

# apt-get --yes install netcat


 

First let's create a new file test_email_content.txt using bash's echo command.
 

 

# echo 'EHLO hostname
MAIL FROM: hip0d@yandex.ru
RCPT TO:   solutions@pc-freak.net
DATA
From: A tester <hip0d@yandex.ru>
To:   <solutions@pc-freak.net>
Date: date
Subject: A test message from test hostname

 

Delete me, please
.
QUIT
' >>test_email_content.txt

 

# netcat -C localhost 25 < test_email_content.txt

 

220 This is Mail Pc-Freak.NET ESMTP
250-This is Mail Pc-Freak.NET
250-STARTTLS
250-SIZE 80000000
250-PIPELINING
250 8BITMIME
250 ok
250 ok
354 go ahead
451 See http://pobox.com/~djb/docs/smtplf.html.

Because of its simplicity and the fact that it has a few more capabilities for reading / writing data over the network, it is no surprise netcat was among the favorite tools not only of crackers and penetration testers but also a precious debugging tool for the average sysadmin. netcat's advantage over telnet is that you can push a non-interactive input over the remote SMTP port (25).


7. Using openssl to connect and send email via encrypted channel

 

root@linux:~# openssl s_client -connect smtp.gmail.com:465 -crlf -ign_eof

    ===
               Certificate negotiation output from openssl command goes here
        ===

        220 smtp.gmail.com ESMTP j92sm925556edd.81 – gsmtp
            EHLO localhost
        250-smtp.gmail.com at your service, [78.139.22.28]
        250-SIZE 35882577
        250-8BITMIME
        250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
        250-ENHANCEDSTATUSCODES
        250-PIPELINING
        250-CHUNKING
        250 SMTPUTF8
            AUTH PLAIN *passwordhash*
        235 2.7.0 Accepted
            MAIL FROM: <hipo@pcfreak.org>
        250 2.1.0 OK j92sm925556edd.81 – gsmtp
            rcpt to: <systemexec@gmail.com>
        250 2.1.5 OK j92sm925556edd.81 – gsmtp
            DATA
        354  Go ahead j92sm925556edd.81 – gsmtp
            Subject: This is openssl mailing

            Hello nice user
            .
        250 2.0.0 OK 1339757532 m46sm11546481eeh.9
            quit
        221 2.0.0 closing connection m46sm11546481eeh.9
        read:errno=0


8. Using the curl (URL transfer) tool to send emails over an SSL / TLS encrypted channel via Gmail / Yahoo servers and the MailGun mail send API service


Using curl, the advanced URL transfer tool best known for downloading web pages, to send email might come as shocking news to many, as its whole idea is just to transfer data from a server.
curl is mostly used in conjunction with PHP website scripts because it has a native PHP implementation, and many PHP based websites widely use it for download / upload of user data.
Interestingly, besides support for HTTP and FTP it supports the POP3 and SMTP email protocols as well.
If you don't have it installed on your server and you want to give it a try, install it first with apt:
 

root@linux:~# apt-get install curl

 


To learn more about curl's capabilities make sure you check the curl --manual output.
 

root@linux:~# curl --manual

 

a) Sending Emails via Gmail and other Mail Public services

curl is capable of sending emails from the terminal using the Gmail and Yahoo Mail services, if you want to give that a try.

gmail-settings-google-allow-less-secure-apps-sign-in-to-google-screenshot

Go to the myaccount.google.com URL, log in from the web interface, choose Sign-in & Security and set Allow less secure apps to ON to turn on access for less secure apps in Gmail. Though I have not tested it myself so far with Yahoo! Mail, I suppose it should have a similar security setting somewhere.

Here is how to use curl to send email via Gmail.

Gmail-password-Allow-less-secure-apps-ON-screenshot-howto-to-be-able-to-send-email-with-text-commands-with-encryption-and-outlook

 

 

root@linux:~# curl --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
  --mail-from 'your_email@gmail.com' --mail-rcpt 'remote_recipient@mail.com' \
  --upload-file mail.txt --user 'your_email@gmail.com:your_account_password'


b) Sending Emails using Mailgun.com (Transactional Email Service API for developers)

To use Mailgun to script sending automated emails, go to Mailgun.com, create an account and generate a new API key.

Then use curl in a similar way like below example:

 

curl -sv --user 'api:key-7e55d003b…f79accd31a' \
    https://api.mailgun.net/v3/sandbox21a78f824…3eb160ebc79.mailgun.org/messages \
    -F from='Excited User <developer@yourcompany.com>' \
    -F to=sandbox21a78f824…3eb160ebc79.mailgun.org \
    -F to=user_acc@gmail.com \
    -F subject='Hello' \
    -F text='Testing Mailgun service!' \
   --form-string html='<h1>EDMdesigner Blog</h1><br /><cite>This tutorial helps me understand email sending from Linux console</cite>' \
    -F attachment=@logo_picture.jpg

 

The -F option, heavily present in the above command, lets curl emulate a filled-in web form in which the user has pressed the submit button.
For more info on the options check out man curl.
 

 

9. Using the swaks command to send emails from the terminal

 

root@linux:~# apt-cache show swaks|grep "Description" -B 10
Package: swaks
Version: 20170101.0-1
Installed-Size: 221
Maintainer: Andreas Metzler <ametzler@debian.org>
Architecture: all
Depends: perl
Recommends: libnet-dns-perl, libnet-ssleay-perl
Suggests: perl-doc, libauthen-sasl-perl, libauthen-ntlm-perl
Description-en: SMTP command-line test tool
 swaks (Swiss Army Knife SMTP) is a command-line tool written in Perl
 for testing SMTP setups; it supports STARTTLS and SMTP AUTH (PLAIN,
 LOGIN, CRAM-MD5, SPA, and DIGEST-MD5). swaks allows one to stop the
 SMTP dialog at any stage, e.g to check RCPT TO: without actually
 sending a mail.
 .
 If you are spending too much time iterating "telnet foo.example 25"
 swaks is for you.
Description-md5: f44c6c864f0f0cb3896aa932ce2bdaa8

 

 

 

root@linux:~# apt-get install --yes swaks

root@linux:~# swaks --to mailbox@example.com -s smtp.gmail.com:587 \
      -tls -au <user-account> -ap <account-password>

 


The -tls argument is needed in order to use Gmail's encrypted TLS channel on port 587.

If you want to hide the password and not provide it on the command line (so that it is not logged in the user's shell history), add the -a (authentication) option without -ap, and swaks will prompt for the missing credentials.

10. Using qmail-inject on Qmail mail servers to send simple emails

Create new file with content like:
 

root@qmail:~# vim email_file_content.text
To: user@mail-example.com
Subject: Test


This is a test message.
 

root@qmail:~# cat email_file_content.text | /var/qmail/bin/qmail-inject


qmail-inject is part of the ordinary qmail installation, so it is very simple; it doesn't even return error codes, it just ships whatever content it is given to the remote MTA.
If the Linux host where you invoke it has a properly configured qmail installation, the email will get delivered immediately. The advantage of qmail-inject over the other tools is that it is really lightweight and will deliver a simple message more quickly than the prior, heavier tools, but again it is more of a quick debugging injector (for checking whether the MTA works) than a tool for daily email writing.

It is also very useful for simply testing whether email sending works properly, without providing any email content (I have used qmail-inject to test that local email delivery works like so):
 

root@linux:~# echo 'To: mailbox_acc@mail-server.com' | /var/qmail/bin/qmail-inject

 

11. Debugging why an email sent with a text tool is not delivered properly to the remote recipient

If you use some of the above described methods and the email is not delivered to the remote recipient address, check /var/log/mail.log (the general mail log for Postfix MTAs, present on many Linux distributions), /var/log/messages, /var/log/qmail (on Qmail installations) or /var/log/exim4 (on servers running Exim as MTA).
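For example, a quick way to watch what happens while re-sending the test message (the exact log file name varies per distribution as noted above):

root@linux:~# tail -f /var/log/mail.log
root@linux:~# grep -i 'remote_receiver@remote-server-email-address.com' /var/log/mail.log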

http://pc-freak.net/images/linux-email-log-debug-var-log-mail-output

 Closure

The ways to send email via the Linux terminal are probably innumerable, as there are plenty of scripted tools in various programming languages; I am sure this article is also missing a lot of pre-bundled installable distro packages. If you know other interesting ways / tools to send email via the terminal, I would like to hear about them.

Hope you enjoyed, happy mailing !

The Best Most Effective Search Engine Optimization SEO tips or how to stay ahead of your competitors

Friday, October 27th, 2017

 

The 16 most effective search engine optimization tips

I've found an infographic showing the best practices of Search Engine Optimization; as SEO today depends strongly on these factors, I suggest you closely check your site to see whether all of the 16 pinpointed tips are already implemented, and if not, you had better implement them before the robots (Machine Learning), Cloud Computing and the rest of the modern tech-savvy mumbo jumbo take over SEO ranking in Google. If you run a start-up business like me, these tips will definitely help you keep your place in the listings of Google, Bing and Yahoo ahead of your competitors.

Enjoy Learning and please share anything you find missing on the diagram which you already do to Boost Up your SEO!

Check Windows Operating System install date, Full list of installed and uninstalled programs from command line / Check how old is your Windows installation?

Tuesday, March 29th, 2016

when-was-windows-installed-check-howto-from-command-line
Sometimes when you have inherited Windows / Linux OS servers or desktops it is useful to know the Operating System install date. Usually the install date of the OS is close to the date of purchase of the system; this is especially true for Windows, but not necessarily true for Linux based installs.

Knowing the install date is useful especially if you're not sure how outdated a certain operating system is. Knowing how long ago the current installation was performed can give you some hints on whether to make re-install plans in order to keep system security up to date, and can give you an idea whether the system is prone to some common errors or security flaws from the time of installation.

 

1. Check out how old the Windows install is

Finding out the age of a Windows installation can be done across almost all NT 4.0 based Windowses and onwards; getting the Winblows install date is obtained the same way on Windows XP / Vista / 7 and 8.

Besides many useful things such as detailed information about the configuration of your PC / notebook, systeminfo can also provide you with the install date; to do so just run from the command line (cmd.exe):
 

C:\Users\hipo> systeminfo | find /i "install date"
Original Install Date:     09/18/13, 15:23:18 PM


check-windows-os-install-date-from-command-line-howto-screenshot

If you need to get the initial Windows system install date, however, it might be much better to use the WMIC command to get the info:

 

 

C:\Users\hipo>WMIC OS GET installdate
InstallDate
20130918152318.000000+180


The only downside of using WMIC, as you can see, is that it provides the Windows OS install date in a raw, unparsed format, but for scripters that's great.

2. Check Windows installed and uninstalled software and uptime from command line

Another common thing to check, next to the Windows install date, is the Windows uptime; the easiest way to get that is to open Task Manager (from the command line, run taskmgr).

windows-task-manager-how-to-check-windows-operating-system-uptime-easily

For those who want to get the uptime from the Windows command line for scripting purposes, this can be done again with the systeminfo command, i.e.:

 

C:\> systeminfo | find "System Boot Time:"
System Boot Time:          03/29/16, 08:48:59 AM


windows-os-command-to-get-system-uptime-screenshot

Another helpful Windows one-liner you might want to know about is getting the list of all installed and uninstalled programs from the command line; this again is done with WMIC:

 

C:\> wmic /OUTPUT:my_software.txt product get name

 


get-a-full-list-of-installed-software-programs-on-windows-xp-vista-7-8-command-howto-screenshot

An alternative way to get a full list of installed software on Windows OS is to use the Microsoft / SysInternals psinfo command:

 

C:\> psinfo -s > software.txt
C:\> psinfo -s -c > software.csv


If you need to get a complete list of uninstalled software using the command line (e.g. for batch scripting purposes), you can query that from the Windows registry, like so:

 

C:\>reg query HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall


Command Output will be something like on below shot:

windows-OS-show-get-full-list-of-uninstalled-programs-using-a-command-line-screenshot

Well that's all folks 🙂

 

Where is Firefox plugin / bookmarks / temp files directory on Windows?

Saturday, February 13th, 2016

windows-appdata-mozilla-plugins-how-to-check-the-extensions-folder-firefox-windows-screenshot

If you want to quickly find out where Firefox downloads and keeps installed extensions, just press together:
KBD Windows flag button + R

This shortcut will open the Windows Run prompt.
Paste inside the run prompt:

%appdata%\Mozilla\plugins

 

The %appdata% is a Windows internal variable that holds the path to C:\Users\Your-Username\AppData\Roaming

On my work PC this contains:

C:\Users\georgi7>echo %appdata%
C:\Users\georgi7\AppData\Roaming

 

mozilla-plugins-folder-in-windows-explorer-screenshot

Enjoy 🙂

Start Event Viewer from Command Line (Prompt) – eventvwr.msc to Debug Windows server issues

Friday, November 6th, 2015

eventvwrmsc-event-viewer-windows-7-screenshot-view-windows-log-and-dianose-errors

If you're a sysadmin who needs to deal with Microsoft Windows servers locally or remotely via the Remote Desktop RDP client (MSTSC.EXE), or inside a Windows Domain Controller, you will frequently have to debug Windows issues or application errors by reviewing the debug information stored in the Event Logs.

Event Viewer is a precious tool for debugging frequent errors such as missing libraries or programs failing on Windows boot, and thus on M$ Windows it is the Swiss Army knife of the sysadmin.
However, starting Event Viewer through the GUI menus takes a lot of steps and loses you time, e.g. you have to navigate the menus:

1. Click the Start button
2. Click Control Panel
3. Click System and Security
4. Click Administrative Tools
5. Double-click Event Viewer
6. Grant Administrator permission if you're prompted for an administrator password or confirmation

It is much handier to just start it with a shortcut:

Press Windows (Button) + R to invoke the Run prompt

and type:

eventvwr.msc

In case you're running eventvwr.msc to connect to a remote Windows Server, run from the command prompt (cmd.exe):

eventvwr-run-from-command-prompt-with-a-smart-shortcut-to-save-time-when-administrating-windows-servers

eventvwr.msc /computer=OTHER_Computer_Name

event-viewer-log-reader-and-debug-tool-for-windows-PC-and-windows-servers-adminsitration

How to update macos from terminal / Check and update remotely Mac OS X software from console

Friday, October 23rd, 2015

../files/how-to-update-mac-osx-notebook-from-terminalsoftware-update-command-line-mac-screnshot-1

If you happen to have to deal with a Mac OS X (Apple) notebook or desktop PC (Hackintosh) etc., and you're a sysadmin or console freak pissed off by Mac's GUI App Store update interface and you want to "keep it simple stupid" (KISS) in a Debian Linux apt-get like manner, then you can also use Mac's console terminal application (cli) to do the updates manually from the command line with the:

softwareupdate

command.

how-to-update-mac-osx-notebook-from-terminalsoftware-update-command-line-mac-screnshot

To get help about softwareupdate pass it the -h flag:

softwareupdate -h

1. Get a list of available Mac OS updates

Though not a very likely scenario, of course, before installing it is always a wise thing to see what is being updated, to make sure you will not upgrade something that you don't want to.
This is done with:

softwareupdate -l

However, in most cases you can simply skip this step, as updating every package installed on the Mac directly to the newer version from Apple will not affect your PC.
Anyway, it is always a good idea to keep a backup image of your OS before proceeding with updates, for example with the Time Machine Mac OS backup app.

2. Install only recommended Updates from Apple store

softwareupdate -irv


The above will download all updates that are critical and thus a must-have in order to keep Mac OS security adequate.
Translated into Debian / Ubuntu Linux language, the command does pretty much the same as Linux's:

apt-get update && apt-get --yes upgrade

3. Install All Updates available from AppleStore

To install absolutely all updates provided by Apple’s package repositories run:

softwareupdate -iva

One note to make here: whenever you are updating, make sure your notebook is plugged into the electricity grid, because if it shuts off due to battery discharge during the update your Mac will crash into a very crappy, hard to recover state that might even cost you a complete re-install or a visit to the Mac Store technical support guy, so beware, you're warned!

4. Installing all updates except specific software from the Terminal

Often, if you have a cracked software or a software whose GUI interface changed too much and you don't want to upgrade it, but an update for it is offered by the Apple repos, you can add it to the ignore list with the --ignore option and then install the rest as usual:

softwareupdate --ignore [update_name(s)]

For example:

softwareupdate --ignore Safari-version-XXXX

5. View Mac OS Software Update History

The quickest way to see the update history is with the System Information app, e.g.:

/Applications/Utilities/System Information.app

Check your Server Download / Upload Internet Speed from Console on Linux / BSD / Unix howto

Tuesday, March 17th, 2015

tux-check-internet-network-download-upload-speed-on-linux-console-terminal-linux-bsd-unix
If you've been given a new dedicated server or a VPS with Linux from a dedicated server provider and you were told that a certain download speed to the server is guaranteed, then in order to verify that the server's connection to the Internet matches what the service provider promises, it is useful to run a simple measurement test from the console after logging in remotely to the server via SSH.

Testing the connection from the terminal is useful because, as you probably know, most Linux / UNIX servers don't have a GUI interface and thus it is not possible to test Internet up / down bandwidth through speedtest.net.
 

1. Testing Download Internet Speed given by ISP / Dedi-Server Provider from Linux Console

For the download (internet) speed test the historical approach was to just try downloading the Linux kernel source code from www.kernel.org with some text browser such as lynx or links, count the seconds until the download completed and then divide the kernel source archive size by the number of seconds to get an approximate bandwidth per second. However, as nowadays internet connection speeds are much higher, it is better to try to download some Linux distribution ISO file; you can still use the kernel tar archive, but it completes too fast to give you good (adequate) statistics on download bandwidth.
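If you prefer to skip the manual math, a minimal sketch using curl's --write-out option prints the average download speed (in bytes per second) directly; curl is assumed to be installed, and the URL is the same kernel archive used below:

root@pcfreak:/root# curl -s -o /dev/null -w '%{speed_download}\n' https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.1.tar.xz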

If it is a freshly installed Linux server you will probably not have the links / elinks and lynx text internet browsers installed, so install them depending on the deb / rpm distro with:

If on a Deb based Linux distro:

 

root@pcfreak:/root# apt-get install --yes links elinks lynx

 

On an RPM based Linux distro:
 

 

[root@fedora ~]# yum install -y lynx elinks links

 

Conduct an Internet download speed test with links:
root@pcfreak:/root# links https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.1.tar.xz

check_your_download_speed-from-console-linux-with-links-text-browser

(Note that the kernel link points to the latest stable kernel source archive as of the time of writing; in the future that might change, so try with the latest archive.)

You can also use a non-interactive tool such as wget, curl or lftp to measure internet download speed.

To test download Internet speed with wget without saving anything to disk, set the output to go to /dev/null:

 

root@pcfreak:~# wget -O /dev/null http://pc-freak.net/~hipo/hirens-bootcd/HirensBootCD15/Hirens.BootCD.15.0.zip

 

check_bandwidth_download-internet-speed-with-wget-from-console-non-interactively-on-linux

You can see the download speed is 104 Mbit/s; this is so because I'm conducting the download from my local 100Mbit network.

For the test you can use my mirrored version of Hirens BootCD

2. Testing Uplink Internet speed provided by ISP / Server Provider from Linux (SSH) Console

To test your uplink speed you will need lftp or iperf command tool.

 

root@pcfreak:~# apt-cache show lftp|grep -i descr -A 12
Description: Sophisticated command-line FTP/HTTP client programs
 Lftp is a file retrieving tool that supports FTP, HTTP, FISH, SFTP, HTTPS
 and FTPS protocols under both IPv4 and IPv6. Lftp has an amazing set of
 features, while preserving its interface as simple and easy as possible.
 .
 The main two advantages over other ftp clients are reliability and ability
 to perform tasks in background. It will reconnect and reget the file being
 transferred if the connection broke. You can start a transfer in background
 and continue browsing on the ftp site. It does this all in one process. When
 you have started background jobs and feel you are done, you can just exit
 lftp and it automatically moves to nohup mode and completes the transfers.
 It has also such nice features as reput and mirror. It can also download a
 file as soon as possible by using several connections at the same time.

 

root@pcfreak:/root# apt-cache show iperf|grep -i desc -A 2
Description: Internet Protocol bandwidth measuring tool
 Iperf is a modern alternative for measuring TCP and UDP bandwidth performance,
 allowing the tuning of various parameters and characteristics.

 

To test Upload Speed to Internet connect remotely and upload any FTP file:

 

root@pcfreak:/root# lftp -u hipo pc-freak.net -e 'put Hirens.BootCD.15.0.zip; bye'

 

uploading-file-with-lftp-screenshot-test-upload-internet-speed-linux

On Debian Linux to install iperf:

 

root@pcfreak:/root# apt-get install --yes iperf

 

On the latest CentOS 7 and Fedora (and other RPM based) Linux, you will need to add the RPMForge repository and install with yum:

 

[root@centos ~]# rpm -ivh  rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm

[root@centos ~]# yum -y install iperf

 

Once you have iperf on the server, the easiest way currently to test it is to use the
serverius.net speedtest server, located in the Serverius datacenters, AS50673, running on a 10GE connection with a 5GB cap.

 

root@pcfreak:/root# iperf -c speedtest.serverius.net -P 10
————————————————————
Client connecting to speedtest.serverius.net, TCP port 5001
TCP window size: 16.0 KByte (default)
————————————————————
[ 12] local 83.228.93.76 port 54258 connected with 178.21.16.76 port 5001
[  7] local 83.228.93.76 port 54252 connected with 178.21.16.76 port 5001
[  5] local 83.228.93.76 port 54253 connected with 178.21.16.76 port 5001
[  9] local 83.228.93.76 port 54251 connected with 178.21.16.76 port 5001
[  3] local 83.228.93.76 port 54249 connected with 178.21.16.76 port 5001
[  4] local 83.228.93.76 port 54250 connected with 178.21.16.76 port 5001
[ 10] local 83.228.93.76 port 54254 connected with 178.21.16.76 port 5001
[ 11] local 83.228.93.76 port 54255 connected with 178.21.16.76 port 5001
[  6] local 83.228.93.76 port 54256 connected with 178.21.16.76 port 5001
[  8] local 83.228.93.76 port 54257 connected with 178.21.16.76 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.2 sec  4.05 MBytes  3.33 Mbits/sec
[ 10]  0.0-10.2 sec  3.39 MBytes  2.78 Mbits/sec
[ 11]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec
[  4]  0.0-10.3 sec  3.43 MBytes  2.78 Mbits/sec
[ 12]  0.0-10.3 sec  3.92 MBytes  3.18 Mbits/sec
[  3]  0.0-10.4 sec  4.45 MBytes  3.58 Mbits/sec
[  5]  0.0-10.5 sec  4.06 MBytes  3.24 Mbits/sec
[  6]  0.0-10.5 sec  4.30 MBytes  3.42 Mbits/sec
[  8]  0.0-10.8 sec  3.92 MBytes  3.03 Mbits/sec
[  7]  0.0-10.9 sec  4.03 MBytes  3.11 Mbits/sec
[SUM]  0.0-10.9 sec  39.3 MBytes  30.3 Mbits/sec

 

You can see that currently my home machine has an uplink of 30.3 Mbit/s, which is pretty nice since I've ordered 100Mbit/s from my ISP (unguaranteed bandwidth connection speed), and as you might know it is a standard practice for many Internet providers to give an uplink speed of 1/4 of the overall bandwidth provided by the ISP; 1/4 would be 25Mbit/s, meaning my ISP (Bergon.NET) is doing pretty well, providing me with even more than the promised (ordered) bandwidth.

iperf is probably the tool of choice for most sysadmins who have to do regular bandwidth tests between two servers in local networks, or test Internet bandwidth speed on a heterogeneous network with Linux / BSDs / AIX / HP-UX (UNIXes). On HP-UX, AIX and other UNIXes for which iperf doesn't have a port, you have to compile it yourself.
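For a plain point-to-point test between two of your own servers (instead of a public speedtest host), a minimal sketch is to run iperf in server mode on one machine and point the client at it (192.168.1.10 below is just an example address):

# on the first server (listens on TCP port 5001 by default)
root@server1:~# iperf -s

# on the second server
root@server2:~# iperf -c 192.168.1.10 -t 30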

If you don't have root / admin permissions on the server and there is a Python language interpreter installed, you can use the speedtest_cli.py script to test internet throughput connectivity.
speedtest_cli uses speedtest.net to test the server up / down link; just in case the script is lost in the future, I've made a download mirror of speedtest_cli.py here.

Quickest way to test net speed with speedtest_cli.py:

 

$ lynx -dump https://raw.github.com/sivel/speedtest-cli/master/speedtest_cli.py > speedtest_cli.py
$ chmod +x speedtest_cli.py
$ python speedtest_cli.py

speedtest_cli_pyhon_script_screenshot-on-gnu-linux-test-internet-network-speed-on-unix

WordPress Security: Fix WordPress wp-config.php improper permissions to protect your sites from Database password steal / Website deface

Thursday, March 12th, 2015

wordpress-security-Fix-wordpress-wp-config-improper-permissions-to-protect-your-sites-from-Database-pass-steal
Keeping WordPress Site / Blog and related installed plugins up-to-date
is essential to prevent an attacker to hack into your Site / Database and deface your site, however if you're a company providing shell access from Cpanel / Plesk / Kloxo Panel to customers often customers are messing up permissions leaving important security credential files such as wp-config.php (which is storing user / pass credentials about connection to MySQL / PostgreSQL to have improper permissions and be world readable e.g. have permissions such as 666 or 777 while in reality the WordPress recommended permissions for wp-config.php is 600. I will skip here to explain in details difference between file permissions on Linux as this is already well described in any Linux book, however I just will recommend for any Share hosting Admin where Wordperss is hosted on Lighttpd / Apache Webserver + Some kind of backend database to be extra cautious.

Hence it is very useful to locate all wp-config.php files of your WordPress sites on the server with find like this:

 

find / -iname 'wp-config.php' -print

 

I find it a generally good practice to also automatically set all wp-config.php permissions to 600 (6 = read / write permissions only for the file owner, 0 = no permissions for the group, 0 = no permissions for all other users).

If checking one of the found files with ls -al shows you permissions such as:
 

ls -al /var/www/wordpress-bak/wp-config.php
-rw-rw-rw- 1 www-data www-data 2654 jul 28  2009 wp-config.php

 

i.e. the file has 666 permissions (readable and writable by all users), then it is wise to fix this with:
 

chmod 600 /var/www/wordpress-bak/wp-config.php


It is generally a very good practice to also run a chmod 600 on each and every wp-config.php file found on the server:
 

find / -iname 'wp-config.php' -print -exec chmod 600 '{}' \;


The above command will also print each file whose permissions are being set (this is done with the -print option).

It is a good practice on a shared hosting server to always configure a root cronjob to run the above find + chmod command at least once daily (especially whenever the server hosts 50 - 100 or more WordPress sites).
 

crontab -u root -l | { cat; echo "05 03 * * * find / -iname 'wp-config.php' -print -exec chmod 600 '{}' \;"; } | crontab -


If you don't have the 600 permissions set for all wp-config.php files, this security "backdoor" can be used by any existing non-root user to read the file and break into (crack) your database, and when deface bot-nets are involved, to deface all the WordPress sites hosted on the server, as illustrated below.
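To illustrate the risk, with a world readable wp-config.php any unprivileged local shell user can pull out the database credentials in a single line (the path below is the example file from above):

grep -E "DB_(NAME|USER|PASSWORD|HOST)" /var/www/wordpress-bak/wp-config.php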

One of my servers with WordPress recently suffered from exactly this small but very important security hole, due to a WordPress site directory backup with improper permissions which allowed anyone to enter the MySQL database, so I guess there are plenty of servers with this hidden vulnerability silently living on.

Many thanks to my dear friend Dimitar Paskalev (Nomen) for sharing this vulnerability with me! A very important note to make here is that admins who are using security enhancement modules such as SuPHP (which makes the Apache webserver run separate website instances under different users) should be careful when changing the owner of all wp-config.php files, as an improper wp-config.php owner change can make customer WP based websites inaccessible.

Another good security measure to protect your server's WordPress based sites from malicious theme template injections (both for a personally hosted WordPress blog / site and for a WordPress hosting company) is to install and activate the WordPress Antivirus plugin.

Speed up WordPress / Joomla CMS and MySQL server on Linux with tmpfs ram file system / Decrease Website pageload times with RAM caching

Wednesday, March 4th, 2015

speed-up-accelerate-wordpress-joomla-drupal-cms-and-mysql-server-with-tmpfs_ramfs_decrease-pageload-times-with-ram-caching
As a WordPress blog owner and a sysadmin who has to deal with servers running a lot of WordPress / Joomla / Drupal and other custom CMS installations that perform slowly or are big enough to put a significant load on the servers, and since I love efficiency and hardware cost saving is essential for my daily job, I'm constantly trying to find new ways to optimize customer websites (WordPress) and the rest of the sites, in order to better utilize our servers and improve our clients' site speed (and hence satisfaction).

There are plenty of little things to do on servers, but probably among the most crucial ones we use nowadays, and one that saves us a lot of money, is tmpfs (and earlier ramfs, previously known as shmfs).
TMPFS (Temporary File Storage Facility) is a Linux kernel technology based on ramfs (used by the Linux kernel initrd / initramfs at boot time to load and store the Linux kernel in memory before the system hard disk partition file systems are mounted), and it is heavily used by virtually all modern popular Linux distributions.

ramfs (and its cramfs variation, the Compressed ROM filesystem) has been used to store various kernel and desktop components of many Linux environments / applications, and is used by a lot of Linux BootCDs such as the most famous (Klaus Knopper's) KNOPPIX LiveCD and Trinity Rescue Kit Linux (TRK uses /dev/shm, which by the way can be seen on most modern Linux distros and is actually just another mounted tmpfs).
If you haven't tried a Live Linux yet, try it out, as I and a lot of sysadmins out there use some kind of Live Linux at least a few times a year to recover unbootable Linux servers after some remotely applied updates, as well as to rescue (save) data from Linux servers failing to boot properly because of hard disk (bad block) failures. As I said earlier, TMPFS is also used on almost every distribution for the /dev/ filesystem, which is kept in memory.

You can see which tmpfs partitions are in use on your Linux server with:

 

debian-server:~# mount |grep -i tmpfs
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)

 

Above is the output from a standard Debian Linux server. On CentOS 7 the standard mounted tmpfs filesystems are as follows:

 

[root@centos ~]# mount |grep -i tmpfs
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1016332k,nr_inodes=254083,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)

 

[root@centos ~]# df -h|grep -i tmpfs
devtmpfs                 993M     0  993M   0% /dev
tmpfs                   1002M   92K 1002M   1% /dev/shm
tmpfs                   1002M  8.8M  993M   1% /run
tmpfs                   1002M     0 1002M   0% /sys/fs/cgroup

The /run tmpfs mounted directory can also be seen on the latest Ubuntus and Fedoras and is actually the good old /var/run (where applications keep their PIDs and some small app related files), stored in a tmpfs filesystem kept in memory.

If you're wondering what /dev/shm is and why it appears mounted on every single Linux server / desktop you've ever used: this is a special shared memory filesystem which various running programs (processes) can use to transfer data between each other quickly and efficiently, to prevent slow disk swapping. People using Linux for the last 15 years should remember that /dev/shm has been a target of a lot of kernel exploits, as historically it had a lot of security issues.

While writing this article I've just checked on KNOPPIX development, and just for info, as of the time of writing this distro already has 1000+ programs on the CD version and 2600+ packages / applications on the DVD version.
Nowadays Knoppix is mostly used as a USB Live Flash drive, as a lot of people are dropping CD / DVD use (many servers don't have a CD / DVD drive), and for USB Live Flash Linux distros tmpfs is also a key technology, as it gives the end user an amazingly fast experience (desktop applications run much faster on Live USBs when tmpfs is used than when slow 7200 RPM HDDs are used).

Loading big parts of the distribution into RAM (with tmpfs, from Linux kernel 2.4+ onwards) is also heavily used by a lot of cluster vendors in most clustered (cloud) Linux based environments, because TMPFS often gives speed improvements of up to x30 and greatly decreases HDD I/O. FreeBSD users will be happy to know that TMPFS has already been ported and can be used from FreeBSD 7.0+ onwards.

In this small article I will give you an example of how I use tmpfs to speed up our WordPress websites, which use WP caching plugins such as W3 Total Cache (W3TC), WP Super Cache and Hyper Cache for disk caching, and MySQL server as a database backend.
The below example is WordPress specific, but it can be easily applied to Joomla, Drupal or any other CMS out there that uses a MySQL server making a lot of CPU expensive, memory hungry (LEFT JOIN) queries which end up hitting a slow 7200 RPM hard disk.


 

1. Preparing tmpfs partitions for WordPress File Cache directory
 

If you want to give tmpfs a test drive, I recommend you try to create / mount a small test partition (the examples below use 256 MB). To create a tmpfs partition you don't need to use a tool like mkfs.ext3 / mkfs.ext4, as TMPFS is in reality a virtual filesystem that is mapped in the server's physical RAM (volatile memory). TMPFS is also nice because if you run out of free RAM the system starts using a combination of RAM plus some hard disk SWAP.
The great thing about TMPFS is that it never uses all of the available RAM and SWAP, which would otherwise halt your server; if the TMPFS partition gets filled you will instead start getting the usual "Insufficient Disk Space" errors, just like with a physical HDD partition. RAMFS cares much less about the server compared to TMPFS, because RAMFS is historically older.

ramfs file systems cannot be limited in size like a disk based file system, which is limited by its capacity; thus ramfs will continue using memory storage until the system runs out of RAM and likely crashes or becomes unresponsive. This is a problem if the application writing to the file system cannot be limited in total size, so in my opinion you had better stay away from RAMFS unless you have a good idea of what you're doing. Another disadvantage of RAMFS compared to TMPFS is that you cannot see the size of the file system in df; it can only be estimated by looking at the cached entry in free.

Note that before proceeding to use TMPFS or RAMFS you should know that, besides the advantages, there is a serious disadvantage: if a server using tmpfs (in RAM) to store files crashes, the customer might lose their data. Therefore, using RAM filesystems on production servers is best limited to caching folders which are regularly synchronized (with rsync) to some on-disk folder, to ensure no data is lost on server reboot or crash.

Memory and fast storage areas are ideally suited for applications which repetitively need small data areas for caching or for temporary space, such as Jira (Issue and Project Tracking Software) indexing. As the data is lost when the machine reboots, data stored on tmpfs must not be data of high importance, as even scheduled backups cannot guarantee that all the data will be replicated in the event of a system crash.

To test mounting a tmpfs virtual (memory stored) filesystem issue:
 

mount -t tmpfs tmpfs -o size=256m /mnt/tmpfs


If you want to test mount a ramfs instead:

 

 mount -t ramfs -o size=256m ramfs /mnt/ramfs

 

debian-server:~#  mount |grep -i -E "ramfs|tmpfs"
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /mnt/tmpfs type tmpfs (rw,size=256m)
ramfs on /mnt/ramfs type tmpfs (rw,size=256m)

 

Once mounted, tmpfs can be used in the same way as any ext4 / reiserfs filesystem. In the same way, to make the mounts permanent it is necessary to add a line to /etc/fstab (see the example line below).
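For example, to make the test mount from above survive a reboot, a line like the following sketch (same 256 MB size) would go into /etc/fstab:

tmpfs   /mnt/tmpfs   tmpfs   defaults,size=256m   0 0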

To illustrate a tmpfs use case better: on my blog running WordPress with the W3TotalCache (W3TC) plugin, the cache folder is /var/www/blog/wp-content/w3tc, and we will take advantage of tmpfs to store the w3tc files there.

a) Stop Apache

On Debian
 

debian-server:~# /etc/init.d/apache2 stop


On CentOS 
 

[root@centos ~]# /etc/init.d/httpd stop


b) Move w3tc dir to w3tc-bak

 

debian-server:~# cd /var/www/blog/wp-content/
debian-server:~# mv w3tc w3tc-bak

 

c) Create w3tc directory
 

debian-server:/var/www/blog/wp-content# mkdir w3tc
debian-server:/var/www/blog/wp-content# chown -R www-data:www-data w3tc


d) Add tmpfs record to /etc/fstab

My W3TC cache didn't grow bigger than 2 Gigabytes, so I create a 2 Gigabyte tmpfs for it by adding the following in /etc/fstab:
 

debian-server:~# vim /etc/fstab

 

tmpfs /var/www/blog/wp-content/w3tc tmpfs defaults,size=2g,noexec,nosuid,uid=33,gid=33,mode=1755 0 0


You might also want to add the nr_inodes option to tmpfs while mounting. nr_inodes is the maximum number of inodes for the instance; the default is half of the number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM pages, whichever is lower. A common option that should work is nr_inodes=5k; if you're unsure what this option does you can safely skip it 🙂

e) Mount new added tmpfs folder

Then to mount the newly added filesystem issue:
 

mount -a


If you're on a CentOS / RHEL server, use the httpd Apache user instead (and adapt the path to wherever your document root and WordPress are installed):

 

[root@centos ~]# chown -R apache:apache w3tc


If you're using Apache SuPHP, use whatever UID / GID is appropriate.

On CentOS you will need to set the proper UID and GID (UserID / GroupID); to find out which ones to use, check in /etc/passwd:
 

[root@centos ~]# grep -i apache /etc/passwd
apache:x:48:48:Apache:/var/www:/sbin/nologin


f) Move old w3tc cache from w3tc-bak to w3tc

 

debian-server:/var/www/blog/wp-content# mv w3tc-bak/* w3tc/

 

g) Start again Apache

On Debian:

 

debian-server:~# /etc/init.d/apache2 start

 


On CentOS:
 

[root@centos~]# /etc/init.d/httpd start

h) Keeping w3tc cache site folder synced

As I said earlier, the biggest problem with caching in RAM (the reason why many hosting providers and site admins refuse to use it) is that they might lose some data. To prevent data loss, or at least limit it to a few minutes' worth, it is a good idea to synchronize tmpfs kept folders to disk somewhere with rsync.

To achieve that use a cronjob like this:
 

debian-server:~# crontab -u root -e
*/5 * * * * /usr/bin/ionice -c3 -n7 /usr/bin/nice -n 19 /usr/bin/rsync -ah --stats --delete /var/www/blog/wp-content/w3tc/ /backups/tmpfs/cache/ 1>/dev/null


Note that you will need to have the /backups/tmpfs/cache folder existing, create it with:

 

debian-server:~# mkdir -p /backups/tmpfs/cache


You will also need to add an rsync synchronization from the backed up folder back to the tmpfs (in case the server gets accidentally rebooted because it hung or due to a power outage); place the following in

/etc/rc.local

 

ionice -c3 -n7 nice -n 19 rsync -ahv --stats --delete /backups/tmpfs/cache/ /var/www/blog/wp-content/w3tc/ 1>/dev/null


(somewhere before the exit 0 line)
 

0 05 * * * /usr/bin/ionice -c3 -n7 /bin/nice -n 19 /usr/bin/rsync -ah --stats --delete /var/www/blog/wp-content/w3tc/ /backups/tmpfs/cache/ 1>/dev/null

 

 

2. Preparing tmpfs partitions for MySQL server temp File Cache directory


It is common for MySQL servers to have to serve a lot of long and heavy SQL JOIN queries, mostly issued by related posts WP plugins such as Zemanta Related Posts and Contextual Related Posts. Even though MySQL can be well optimized with mysqltuner (tuning-primer), SQL servers still end up creating a lot of temporary tables on disk; about 25% to 30% of all SQL queries somehow use the HDD to serve queries, and as this is very slow and file locks get created, overall MySQL performance becomes sluggish at times. To fix (resolve) that without touching the SQL code to optimize the slow queries, the best way I found is to use TMPFS as the MySQL temp folder.
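Before deciding whether this is worth doing on your server, you can check how often MySQL actually creates on-disk temporary tables; a minimal sketch, assuming you can run the mysql client as an administrative user:

debian-server:~# mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%';"

A high Created_tmp_disk_tables value compared to Created_tmp_tables is a good hint that moving tmpdir to tmpfs will help.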

To do so I create a TMPFS, usually 256 MB in size, because this is usually enough for us, but other hosting companies might want to add a bigger virtual temp disk:

a) Add tmpfs new dir to /etc/fstab

In /etc/fstab add below record with vim editor:
 

debian-server:~# vim /etc/fstab

 

tmpfs /var/mysqltmp tmpfs rw,gid=111,uid=108,size=256M,nr_inodes=10k,mode=0700 0 0

 

Note that the uid 108 and gid 111 are taken again from /etc/passwd:

On Debian

debian-server:~# grep -i mysql /etc/passwd
mysql:x:108:111:MySQL Server,,,:/var/lib/mysql:/bin/false


On CentOS
 

[root@centos ~]# grep -i mysql /etc/passwd
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash


b) Create the folder /var/mysqltmp, or wherever you want to place the tmpfs memory kept SQL temp folder

 

debian-server:~# mkdir /var/mysqltmp
debian-server:~# chown mysql:mysql /var/mysqltmp

 

debian-server:~# mount|grep -i tmpfs
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /var/www/blog/wp-content/w3tc type tmpfs (rw,noexec,nosuid,size=2g,uid=33,gid=33,mode=1755)
tmpfs on /var/mysqltmp type tmpfs (rw,gid=108,uid=111,size=256M,nr_inodes=10k,mode=0700)


c) Add new path to tmpfs created folder in my.cnf 

Then  edit /etc/mysql/my.cnf

 

debian-server:~# vim /etc/mysql/my.cnf

[mysqld]
#
# * Basic Settings
#
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /var/mysqltmp

 

On CentOS edit and change tmpdir in same way within /etc/my.cnf


d) Finally Restart Apache and MySQL to make mysql start using new set tmpfs memory kept folder

On Debian:
 

debian-server:~# /etc/init.d/apache2 stop; /etc/init.d/mysql restart; /etc/init.d/apache2 start

On CentOS:
 

[root@centos ~]# /etc/init.d/httpd stop; /etc/init.d/mysqld restart; /etc/init.d/httpd start


Now monitor your server and check your pagespeed increase; for me such an optimization usually improves site performance so that the site becomes about 50% faster. To see the difference you can test your website before applying tmpfs caching and after, using the Google PageSpeed Insights online test (or from the console, as sketched below). Though this example is for MySQL and WordPress, you can easily adapt the same for Joomla if you have Joomla caching enabled to some folder; the same goes for any other CMS, such as Drupal, that can make use of disk caching. Actually, it is a small secret of many hosting providers that allow clients to create sites via CPanel and Kloxo that these tmpfs optimizations are already used for sites, and this is how the provider is able to offer better website service at lower prices. VPS hosting providers also use heavy caching. A lot of people also use TMPFS to accelerate sites that have Google PageSpeed enabled as a cacher and accelerator, as the PageSpeed module puts a heavy HDD I/O load on the server that can easily stone it. Many admins also choose to use TMPFS for the /tmp, /var/run and /var/lock directories, as this often leads to a significant improvement in overall server service operation.
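A simple way to compare page load times from the console before and after the change is curl's --write-out timing variable (the URL below is just a placeholder for your own site):

debian-server:~# curl -s -o /dev/null -w 'total: %{time_total}s\n' https://your-blog-domain.com/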
Once you have tmpfs enabled, it is a good idea to periodically monitor your tmpfs used space with df -h (and your swap with free), because if you allocate bigger tmpfs partitions than your physical memory and the tmpfs's full size starts to be used, your machine will start swapping heavily and this could have a very negative performance effect.
 

debian-server:~# df -h|grep -i tmpfs
tmpfs            3,9G     0   3,9G   0% /lib/init/rw
tmpfs            3,9G     0   3,9G   0% /dev/shm
tmpfs            2,0G  1,4G   712M  66% /var/www/blog/wp-content/w3tc
tmpfs            256M     0   256M   0% /mnt/tmpfs
tmpfs            256M  236K   256M   1% /var/mysqltmp
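
If you want to automate that check, below is a minimal cron-able sketch (the 85% threshold, the mount point and the recipient address are only example values):

#!/bin/sh
# mail a warning when the MySQL tmpfs gets close to full
USAGE=$(df -P /var/mysqltmp | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$USAGE" -gt 85 ]; then
    echo "/var/mysqltmp is ${USAGE}% full on $(hostname)" | mail -s "tmpfs almost full" admin@your-domain.com
fi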

The applications of tmpfs to accelerate services are up to your imagination, so I will be glad to hear from other admins about any other interesting applications or about problems faced while using TMPFS.

 Enjoy! 🙂

Howto install XCache Debian on GNU / Linux to accelerate Apache Webserver – XCache Best alternative to outdated PHP cacher EAccelerator

Thursday, February 26th, 2015

xcache_install-and-enable-best-alternative-php-cacher-to-eaccelerator-logo
I was using eAccelerator until recently as the caching engine (webserver accelerator) on all Apache / PHP / MySQL (LAMP) webservers, across all our Debian GNU / Linux Lenny / Squeeze / Etch servers.
However, recently I noticed in the phpinfo output on some of the Debian hosts that eaccelerator was loaded but showed:

 Caching Enabled false

eaccelerator_caching_enabled_false-phpinfo-screenshot-apache-debian-linux.

 

Our servers are quite busy, serving about 50 000 to 100 000 requests, and thus not having caching enabled puts a lot of extra load on the CPU and eats a lot of memory that eAccelerator used to save.
Logically I tried fixing the issue following some StackOverflow thread recommendations such as this one, but that didn't work. I spent hours playing manually, trying to make eAccelerator run again, and as a final measure I even tried to upgrade eAccelerator to a newer version, but noticed the latest available eaccelerator version 0.9.6 was 2.5 years old (from 03.09.2012). Thus, since there is no new release, and just to make sure I hadn't broken the module shipped with the default Debian bundled distribution package (which is also installed on the servers), I re-installed eAccelerator from source.

This didn't work either, and since I was totally pissed off by the worsened system performance (CPU load increased by 10-30% per server), I looked for some alternatives I could use. In the meantime I learned a bit more about the history of PHP accelerators, including some interesting facts such as that ionCube (PHPA) was the first PHP accelerator Apache-like module (encoding PHP code), created in 2001, which later became the inspiration for the birth of the PHP-APC (Alternative PHP Cache) Apache module.
There is also the Zend OPcache PHP accelerator (available from PHP 5.5 onwards), but since the servers run PHP 5.4 and the sites are not written on the Zend PHP Framework, this was not an option.
Further investigation led me to MMCache, which is already too obsolete (the latest release is from 2013); PHPExpress – PHP Encoder (which was said to run on Windows, Linux, FreeBSD, NetBSD, Mac OS X and Solaris, but already looks dead as there have been no new releases since January 2012); and finally Lighttpd's XCache.

To give you an idea of the difference between an Apache webserver with PHP-APC caching (or another PHP cacher) enabled and the standard way PHP interprets PHP scripts, below is a diagram:

php-apc-cache-how-php-caching-works-with-and-without-encoding-php-code-diagram

My short research showed that of all the available PHP cache encoders / accelerators, the only ones that seemed to be recently updated (under active development) are APC and XCache.
I had already used PHP-APC earlier on some servers and was having some random Apache webserver crashes and weird empty pages on some PHP pages; besides that, APC is known to give lower PHP caching speed than eAccelerator and XCache, leaving me with the only logical choice: XCache.

Here is how the XCache developers describe their opcode cacher:
 

XCache is a free, open source operation code cacher, it is designed to enhance the performance of PHP scripts execution on servers. It optimizes the performance by eliminating the compilation time of PHP code by caching the compiled version of code into the memory and this way the compiled version loads the PHP script directly from the memory. This will surety accelerate the page generation time by up to 5 times faster and also optimizes and increases many other aspects of php scripts and reduce website/server load.

 


Thankfully XCache is shipped by default with all Debian releases (Etch / Lenny / Squeeze / Wheezy), so to install it just run the standard apt command:
 

apt-get install --yes php5-xcache
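
You can also confirm where the Debian package placed its default configuration (the exact path may differ slightly between releases, so treat this as a hint):

debian-server:~# dpkg -L php5-xcache | grep -i ini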


Then to enable XCache, all I had to do was edit /etc/php5/apache2/php.ini and place the configuration below:
 

debian-server:~# vim /etc/php5/apache2/php.ini

 

[xcache-common]
;; install as zend extension (recommended), normally "$extension_dir/xcache.so"
;;zend_extension = /usr/lib/php5/20100525/xcache.so

 

[xcache.admin]
xcache.admin.enable_auth = On
; Configure this to use admin pages
; xcache.admin.user = "mOo"
; xcache.admin.pass = md5($your_password)
; xcache.admin.pass = ""

[xcache]
; ini only settings, all the values here is default unless explained

; select low level shm/allocator scheme implemenation
xcache.shm_scheme =        "mmap"
; to disable: xcache.size=0
; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows
xcache.size  =                16M
; set to cpu count (cat /proc/cpuinfo |grep -c processor)
xcache.count =                 1
; just a hash hints, you can always store count(items) > slots
xcache.slots =                8K
; ttl of the cache item, 0=forever
xcache.ttl   =                 0
; interval of gc scanning expired items, 0=no scan, other values is in seconds
xcache.gc_interval =           0
; same as aboves but for variable cache

Note that on Debian the file which instructs XCache to load in Apache as a module is xcache.ini – e.g. /usr/share/php5/xcache/xcache.ini – so instead of placing the above configuration right into php.ini you might prefer to place it in xcache.ini (though I personally prefer php.ini, because it is easier for me to later control how PHP behaves from a single location).
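
Whichever of the two files you pick, Apache has to be restarted so that the newly added module configuration gets loaded:

debian-server:~# /etc/init.d/apache2 restart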

To test whether XCache is enabled for Apache Webserver:

Create phpinfo.php somewhere in DocumentRoot (in my case this was /var/www/php_info.php)

debian-server:~# vim /var/www/php_info.php

 

<?php
phpinfo();
?>


When you access php_info.php in a browser you will see XCache loaded, as in the screenshot below:

 

xcache_loaded-in-php-apache-phpinfo-output-debian-gnu-linux-server

To test whether XCache is also enabled for PHP CLI (applications set to run as cron jobs):
 

debian-server:~# php -v
PHP 5.4.37-1~dotdeb.0 (cli) (built: Feb  2 2015 05:03:00)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2014 Zend Technologies
    with XCache v3.2.0, Copyright (c) 2005-2014, by mOo
    with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo
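
Another quick CLI check is to simply grep the loaded modules list:

debian-server:~# php -m | grep -i xcache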

 

Once you have verified the install is successful, you might want to enable the XCache admin panel (which is disabled by default). To enable XCache Admin on Debian you first need to generate a new password hash for it like so:

 

echo -n "xcache_rulez" | md5sum
acbf5ba4a44f03058aa0ad11e0a6b645

 


Then you need to add the following in /etc/php5/mods-available/xcache.ini

debian-server:~# vim /etc/php5/mods-available/xcache.ini
[xcache.admin]
xcache.admin.enable_auth = On
; Configure this to use admin pages
 xcache.admin.user = "admin"
; xcache.admin.pass = md5($your_password)
 xcache.admin.pass = "change_with_above_generated_password_here"

 

To enable the admin panel and be able to access it in a browser (assuming you're using /var/www/ as a documentroot, the docroot supports interpreting PHP scripts, and AllowOverride All is enabled so htaccess authentication also works), do:
 

debian-server:~# cd /var/www/
debian-server:~# ln -sf /usr/share/xcache/htdocs/ xcache

When you access http://your-url-address.com/xcache/ you should see in the browser some statistics along with all configured XCache options:

xcacher-admin-on-debian-gnu-linux-in-chrome-browser-screenshot-enable-xcacher-admin-howto

If you have time you can play with the options and get some minor additional speed improvements. The overall increase in page opening speed XCache should give you is between 100% and 190%!
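
If you want a rough before / after comparison from the command line as well, the ab benchmarking tool from apache2-utils can be used; the URL and the request / concurrency counts below are only example values:

debian-server:~# apt-get install --yes apache2-utils
debian-server:~# ab -n 200 -c 10 http://your-url-address.com/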

Enjoy 🙂