Posts Tagged ‘clue’

How to find out how much power (electricity) a server or PC consumes?

Friday, November 2nd, 2012

Kill-A-Watt tracks system power (electricity) consumption on GNU / Linux servers and FreeBSD
A friend of mine asked me today if I have a clue whether it is possible to track his home computer's power consumption with some piece of software.

The question is quite interesting, since I run a home server with Linux myself and it would be nice to know exactly how much electricity it consumes per month.

Not knowing the answer, I first checked online for some kind of software, but the only thing I could find that does something remotely similar is powertop.

Though powertop is a nice Linux tool to keep an eye on which programs and kernel modules consume the biggest share of the PC's power and to order them by consumption, it does not provide information on the overall electricity consumption.
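
For reference, here is how I usually run it; the HTML report option below is from powertop 2.x, so treat it as an assumption and check your version's man page:

# powertop
# powertop --html=powertop-report.html

The first command gives the interactive per-process / per-device overview, the second writes a one-off report you can send to someone else.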

As the topic seemed interesting, I decided to ask in irc.freenode.net #debian
Here is a paste from the irssi channel log:

17:21 < hipodilski> hi any idea, how can I find how much electricity a server conmuses per month
17:21 < hipodilski> is there some some kind of software
17:21 -!- digdilem [~digdilem@plague.digdilem.org] has joined #debian
17:22 < babilen> hipodilski: I would recommend an electricity meter rather than software
17:22 -!- tommy_e [~tommy@81.27.221.202] has quit [Ping timeout: 260 seconds]
17:22 < jelly-home> watt meters ftw
17:22 -!- msx [~msx@190.194.114.10] has joined #debian
17:22 -!- blackshirt [~najwa@103.3.223.5] has left #debian []
17:23 < hipodilski> yes but i don't have electricity metter, if there is software it would be interesting to try it
17:23 -!- badiane [~gdurand@D8FF67fa.cst.lightpath.net] has quit [Remote host closed the connection]
17:23 < xand> hipodilski: no, you need a hardware device.
17:23 < jelly-home> now everything can be solved in software, hipodilski
17:23 < jelly-home> not*
17:23 < jelly-home> dammit
17:23 < xand> unless you have a very fancy PSU, software can't find that out
17:23 < babilen> jelly-home: hehe, nice typo !
17:23 < vacuous> hipodilski yes
17:24 < HelloShitty> nsadmin, are you out of ideas for me?
17:24 < vacuous> there's various devices that do it
17:24 -!- firecode [~irc@unaffiliated/firecode] has joined #debian
17:24 < vacuous> you can either get a killawat which are highly innacurate but it might give you a clue
17:24 < vacuous> and they're very cheap too
17:25 < vacuous> you can get a device which measures your entire houses electric, then you just turn off all the appliances and run the
                 server only
17:25 -!- trysten [~trysten@37-251-103-145.FTTH.ispfabriek.nl] has quit [Quit: be back]
17:25  * babilen likes that approach
17:25 < babilen> But this is getting a bit too off-topic. Maybe hipodilski wants to take it to #debian-offtopic
17:25 < vacuous> or you can keep all fridges on, check what the reading is and then negate that from the total
17:25 < hipodilski> yes thanks 🙂
 

The answers make it clear that, as of the time of writing this post, there is no software for Linux or BSD that keeps track of electricity consumption daily or monthly.

I googled to see what the Kill-A-Watt hardware is and found the funnily named device for sale on ThinkGeek's website for the not-so-expensive $24.99.

To use the Kill-A-Watt device, you connect it to the wall socket and then plug the PC or server into the Kill-A-Watt itself. While researching I also read that many intelligent UPS devices keep a log of discharged energy, so buying a good UPS with a web administration interface, or even a cheap one providing statistical information via serial port, is another alternative to track your server's consumption.
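
For instance, with an APC UPS monitored through apcupsd, something like the following can give a rough idea of the draw (a sketch only; the NOMPOWER field is reported just by some models, so verify it on your unit):

# apcaccess status | egrep 'LOADPCT|NOMPOWER'

Multiplying NOMPOWER by LOADPCT / 100 gives an approximate wattage for everything plugged into the UPS, and running this periodically from cron and logging the output is a poor man's way to get daily or monthly figures.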

Linux: Fixing Qmail server qmail-smtpd port 25 slow (lagged) connect problem

Thursday, May 16th, 2013

qmail logo fixing qmail mail SMTP port 25 connect delays

After updating my Debian Squeeze to the latest stable packages from the repository with the standard:
# apt-get update && apt-get upgrade

I routinely checked whether all was fine with Qmail afterwards, just to find out that connecting to port 25 was heavily delayed: about 40-50 seconds passed before qmail responded with its standard mail greeting.
I googled for a long time to see if I could find a post or forum thread discussing the exact issue, but though I found similar discussions I didn't find anything that exactly matched the problem. Thus I decided to follow the good old experimental try / fail method to figure out what causes it.

Below are pastes from telnet, illustrating the delay in Qmail's SMTP greeting response (the 220 banner simply does not show up for a long while):

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
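
A quick way to actually measure the delay (assuming netcat is installed) is to time how long it takes for the 220 banner and the QUIT reply to come back:

# time ( echo QUIT | nc -w 90 localhost 25 )

On a healthy setup this finishes within a second or two; here it was taking the 40-50 seconds mentioned above.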

I spent about 2 hours checking Qmail for the standard, most common errors that usually cause it to misbehave, following my previous article on testing qmail installation problems.

After going through all of the possible causes, the only clue pointing to a problem was some slowness in SpamAssassin. This brought me to the idea that something was wrong with SpamAssassin. I tried disabling SpamAssassin's Razor and Pyzor and restarting spamd (in my case not via the standard Debian start/stop script but through daemontools with svc) and qmailctl, i.e.:

# svc -d /service/spamd
# svc -u /service/spamd
# svc -a /service/spamd

# qmailctl restart
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.
This didn't help, so I continued trying to figure out what was wrong. One assumption for the slow qmail-smtpd response was of course slow DNS resolving. I checked /etc/resolv.conf and found the server is configured to use a locally configured DJBDNS server as first-line DNS resolver. I use djbdns because it is simple and easy to configure, however it is a bit obsolete, so it was a possible bottleneck. After commenting out the line pointing to localhost 127.0.0.1 and setting Google Public DNS 8.8.8.8 as the primary DNS, the problem persisted, so host resolving was obviously not the cause.

I pondered for about 30 minutes, checking again all the logs and the machine's processes, until I remembered that I had experienced similar issues before, caused by unresolvable RBL (IP blacklist) hosts. I checked the RBL hosts configured in the qmail-smtpd invocation (visible in the process list) and noticed the following 4 hosts:

# ps auxwwf

7190 ?        S      0:00 tcpserver -vR -l /var/qmail/control/me -c 30 -u 89 -g 89 -x /etc/tcp.smtp.cdb 0 25 rblsmtpd -t0 -r zen.spamhaus.org -r dnsbl.njabl.org -r dnsbl.sorbs.net -r bl.spamcop.net qmail-smtpd /var/qmail/control/me /home/vpopmail/bin/vchkpw /bin/true
 

I checked the hosts one by one and found that the first two in the line no longer resolve (the blacklists are no longer accessible):

zen.spamhaus.org, dnsbl.njabl.org
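
A quick manual way to test each blacklist is to query the standard 127.0.0.2 test address against it and to check whether the zone still has nameservers at all, e.g.:

# host 2.0.0.127.zen.spamhaus.org
# host -t ns dnsbl.njabl.org

A working DNSBL returns a listing for the test address almost instantly; a timeout or NXDOMAIN for the zone itself is a strong hint the list is dead and should be dropped from the rblsmtpd arguments.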

DNSBL (DNS blocklist) filtering is configured on this host via /service/qmail-smtpd/run, hence to remove the two unresolvable hosts forcing the weird qmail-smtpd connect delay I had to change in it:

RBL_BAD="zen.spamhaus.org dnsbl.njabl.org dnsbl.sorbs.net bl.spamcop.net"

to

RBL_BAD="dnsbl.sorbs.net bl.spamcop.net"

After a closer examination of the mail server config /var/qmail/control/spfrules, I found one other unresolvable SPF host configured:
# cat /var/qmail/control/spfrules
include:spf.trusted-forwarder.org

To remove that one I nulled the file:

# cat /dev/null > /var/qmail/control/spfrules

Finally, for all the changes to take effect, I restarted Qmail:

# qmailctl restart
Restarting qmail:
* Stopping qmail-smtpdssl.
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.
* Restarting qmail-smtpdssl.
* Restarting qmail-pop3d.

To check all was fine afterwards, I again used telnet:

# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP

The mail greeting now appears in about 2-3 seconds.

How to fix encoding problems with umlauts not showing up after import of SQL data to MySQL

Thursday, October 1st, 2009

I'm restoring some websites from backups these days. One of the Swiss websites had a serious problem with umlauts not showing up.
This happened right after I used an old dump from a MySQL server running version 4.x and imported the data into a MySQL server version 5. The problem was that everywhere an umlaut was placed, the content shown was ü.

You can imagine how annoying and ugly that looked, the whole text was crappy.
After some googling, with the help of one of my colleagues (a programmer), I was pointed to this nice article: Mysql Latin1 Utf8 Conversion.
What happens is that for some reason the dump I'd made had the latin1 character set even though the data inside was in utf8.
Thus importing the dump would treat the data as latin1 and make a mess out of it. The fix is as simple as substituting latin1 with utf8 in your MySQL dump file and then re-importing it.
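
A minimal way to do the substitution from the shell is something like this (dump.sql is just a placeholder name; keep a copy of the original, since a careless replace could also touch data that happens to contain the string latin1):

# cp dump.sql dump.sql.orig
# sed -i 's/CHARSET=latin1/CHARSET=utf8/g; s/SET NAMES latin1/SET NAMES utf8/g' dump.sql
# mysql -u root -p mydatabase < dump.sql
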
In my case the browser displayed the website characters in iso8859 instead of utf8 by default, so I had to specifically change the browser encoding to UTF8 to realize all was okay.
Then it was necessary to modify all the templates to use UTF8 instead of the wrong character encoding. I have no clue how the same umlaut encoding worked on the old server; what I suspect is that it had something to do with Apache's default character encoding, which I probably had set to utf8 there.
Well so far so good, let’s see how much trashy stuff I have to deal with today.

How to list and exclude table names from a database in MySQL (exclude table names from a SHOW TABLES in MySQL) by using information_schema

Wednesday, March 30th, 2011

Listing all table names from a MySQL database is a very easy and trivial task that every sql or system administrator out there is aware of.

However excluding certain table names from a whole list of tables belonging to a database is not that commonly used and therefore I believe many people have no clue how to do it when they have to.

Today, for one of my SQL backup scripts, it was necessary for certain tables to be excluded from the whole list of tables of a database I'm backing up.
My example database has the sample name exampledatabase and usually I list all the tables from that database with the well-known command:

mysql> SHOW tables from exampledatabase;

However, as my desire was to exclude certain tables from the list (preferably with a single SQL query), I had to ask around in irc.freenode.net for some hints on ways to achieve my table exclude goal.

I was advised by some people in #mysql that what I need to achieve my goal is MySQL's information_schema structure, which is available since MySQL version 5.0.

After a bit of looking around in the information_schema and the respective documentation on mysql.com, thankfully I could comprehend the idea behind the information_schema, though to be honest the first time I saw the documentation it was completely foggy how to use it.
It turns out using the information_schema is very easy and not much different from the normal query syntax used for trivial operations on the MySQL server.

If you wonder, just like I did, what MySQL's information_schema is, go and use the information_schema database (which I believe is a virtual database stored in system memory).

For instance:

mysql> use information_schema;
Database changed
mysql> show tables;
+---------------------------------------+
| Tables_in_information_schema |
+---------------------------------------+
| CHARACTER_SETS |
| COLLATIONS |
| COLLATION_CHARACTER_SET_APPLICABILITY |
| COLUMNS |
| COLUMN_PRIVILEGES |
| KEY_COLUMN_USAGE |
| PROFILING |
| ROUTINES |
| SCHEMATA |
| SCHEMA_PRIVILEGES |
| STATISTICS |
| TABLES |
| TABLE_CONSTRAINTS |
| TABLE_PRIVILEGES |
| TRIGGERS |
| USER_PRIVILEGES |
| VIEWS |
+---------------------------------------+
17 rows in set (0.00 sec)

To get a general view of what each of the tables in the information_schema database contains, I used a normal SELECT command, for example:

mysql> select * from TABLES limit 10;

I used the LIMIT clause in order to prevent being flooded with data, while still seeing the table field names and a few rows, to get an idea what kind of information the TABLES table contains.

If you haven't got any experience with using the information_schema, I would advise you to follow my example SELECT and look around through all the listed tables in the information_schema database.

That will also give you a few hints about the exact way MySQL works and organizes its contained data structures.

In short, the information_schema virtual database and its tables provide very thorough information, and if you're an SQL admin you certainly want to look over it every now and then.

A bit of playing with it led me to a query which is actually a good substitute for the normal SHOW TABLES; MySQL command.
To achieve a SHOW TABLES FROM exampledatabase via the information_schema structure you can for example issue:

select TABLE_NAME from TABLES where TABLE_SCHEMA='exampledatabase';

Now that I've said a few words about the information_schema, let me go back to the main topic of this small article, which is how to exclude table names from a SHOW TABLES list.

Here is how to exclude a number of tables from the complete list of tables belonging to a database:

select TABLE_NAME from TABLES where TABLE_SCHEMA='exampledatabase'
AND TABLE_NAME not in
('mysql_table1_to_exlude_from_list', 'mysql_table2_to_exclude_from_list', 'table3_to_exclude');

In this example the above MySQL query will list all the table names belonging to exampledatabase and instruct the MySQL server not to list the tables named mysql_table1_to_exlude_from_list, mysql_table2_to_exclude_from_list and table3_to_exclude.

If you need to exclude more tables from your MySQL table listing, just add more names to the list, e.g. ... 'table3_to_exclude', 'new_table4_to_exclude', ...);

Of course this example can easily be adapted to a MySQL backup script which requires the exclusion of certain tables from a backed-up database.

An example of how you can use the above table exclude query straight from the bash shell would be:

debian:~# echo "use information_schema; select TABLE_NAME from TABLES where
TABLE_SCHEMA='exampledatabase' AND TABLE_NAME not in
('mysql_table1_to_exlude_from_list', 'mysql_table2_to_exclude_from_list', 'table3_to_exclude');"
| mysql -u root -p

Now this little bash one-liner can easily be turned into a backup script that dumps a database with certain tables excluded from the backup, as sketched below.
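
A rough sketch of such a script (the database name, credentials, excluded table names and dump path are placeholders) could first build the table list via the information_schema and then hand it to mysqldump, which accepts a list of table names after the database name:

DB='exampledatabase'
SQL="SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA='$DB' AND TABLE_NAME NOT IN ('mysql_table1_to_exlude_from_list','mysql_table2_to_exclude_from_list','table3_to_exclude');"
TABLES=$(mysql -N -u root -p -e "$SQL")
mysqldump -u root -p "$DB" $TABLES > /backups/$DB-partial-dump.sql

$TABLES is left unquoted on purpose so each table name becomes a separate argument to mysqldump. Note also that -p will prompt for the password twice here; in a real script you would rather keep the credentials in a ~/.my.cnf.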

It's seriously a pity that the mysqldump command does not have a simple option to exclude a set of tables while making a database dump.
I've seen such a mysqldump exclude option being suggested somewhere online as a future feature, and I've also seen it reported in mysql.com's bug database; I truly hope that in upcoming releases we will see an exclude option appear as a possible mysqldump argument.
 

How to fix bug with WordPress domain extra trailing slash (Double wordpress trailing slash)

Monday, July 9th, 2012

How to fix bug with wordpress extra slash, domain double slash issue pic

An annoying bug in 2 of the WordPress installations I take care of was reported to me today by some colleagues.
The bug consisted of a double trailing slash at the end of the domain URL, e.g.:

http://our-company-domainname.com//

As a result, the double slash was appearing everywhere in the URLs, i.e.:

http://our-company-domainname.com//contact-us/
http://our-company-domainname.com//languages/

etc.

The bug was reported to happen in the multilingual version of the WordPress-based sites; as the qTranslate plugin is used on these installations to achieve multiple languages, it seemed at first logical that the double slash domain and URL issues were for some reason caused by qTranslate.

Therefore, I initially looked for the cause of the problem within the WordPress admin settings for the qTranslate plugin. After not finding any clue pointing the bug to be related to qTranslate, I then checked the settings for each individual WordPress Page and Post (there one can usually set manually the exact URL pointing to each post and page).
The double slash also appeared in each Post and Page and it wasn't possible to edit the complete URL address to remove the double trailing slashes. My next assumption was that the cause for the double slash appearing on each site link was something wrong with the site's .htaccess, so I checked the .htaccess in the WP site's main directory.
Strangely, the .htaccess seemed OKAY and there wasn't any rule that might somehow lead to double slashes in the URL. The WP site's .htaccess looked like so:
 

server:/home/wp-site1/www# cat .htaccess
RewriteEngine On
RewriteBase /

# Rewrite rules for new content and scripts folder
RewriteRule ^jscripts/(.*)$ wp-includes/js/$1
RewriteRule ^gallery/(.*)$ wp-content/uploads/$1
RewriteRule ^modules/(.*)$ wp-content/plugins/$1
RewriteRule ^gui/(.*)/(.*)$ wp-content/themes/$1/$2 [L]

# Disable direct acceees to wp files if referer is not valid
#RewriteCond %{THE_REQUEST} .wp-*
#RewriteCond %{REQUEST_URI} .wp-*
#RewriteCond %{REQUEST_URI} !.*media-upload.php.*
#RewriteCond %{HTTP_REFERER} !.*cadia.*
#RewriteRule . /error404 [L]

# Standard WordPress rewrite
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]

Onwards, I thought of a possible way to fix the bug by adding a mod_rewrite rule to .htaccess which would redirect all requests to http://www.our-company-domainname.com//contact-us/ to http://www.our-company-domainname.com/contact-us/ etc., like so:

RewriteRule ^/(.*)$ /$1

This, for reasons unknown to me, didn't work either. Finally, thank God, I remembered to check the variables in wp-config.php (a month or so ago I added some variables there in order to improve the WordPress websites' opening times).

I figured out I had made a mistake in one of those variables by adding a trailing slash to the URL. The variable added was:

define('WP_HOME','http://our-company-domainname.com/');

whereas instead it should be without the trailing slash, like so:

define('WP_HOME','http://our-company-domainname.com');

Removing the trailing slash fixed the issue.
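
A quick check I now do on such WordPress installs is to grep wp-config.php for these constants and make sure none of the configured URLs ends with a slash (the path below is simply the one from this particular setup):

server:/home/wp-site1/www# grep -E "WP_HOME|WP_SITEURL" wp-config.php
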
Cheers 😉

How to list enabled VirtualHosts in Apache on GNU / Linux and FreeBSD

Thursday, December 8th, 2011

How Apache processes vhost requests picture, how to list Apache virtualhosts on Linux and FreeBSD

I decided to start this post with this picture I found in an onlamp.com article called "Simplify Your Life with Apache VirtualHosts". I put it here because I think it illustrates quite well the Apache webserver's internal processes. The picture also gives a good clue about when Virtual Hosts get loaded. Anyway, back to the main topic of this article, hoping the above picture gives some more insight on how Apache works.
Here is how to list all the enabled virtualhosts in Apache serving pages on Debian GNU / Linux:

server:~# /usr/sbin/apache2ctl -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:* is a NameVirtualHost
default server exampleserver1.com (/etc/apache2/sites-enabled/000-default:2)
port * namevhost exampleserver2.com (/etc/apache2/sites-enabled/000-default
port * namevhost exampleserver3.com (/etc/apache2/sites-enabled/exampleserver3.com:1)
port * namevhost exampleserver4.com (/etc/apache2/sites-enabled/exampleserver4.com:1)
...
Syntax OK

The line *:* is a NameVirtualHost means Apache's virtual host support will accept VirtualHosts listening on any IP address configured on the host and on any port the respective VirtualHost is configured to listen on.

The next output line:

port * namevhost exampleserver2.com (/etc/apache2/sites-enabled/000-default

shows that requests to the domain will be accepted by the webserver on any port (port *); the number after the colon in a path such as /etc/apache2/sites-enabled/000-default:2 indicates the line on which the corresponding <VirtualHost> definition starts (line 2 in this case).

To see the same list of enabled VirtualHosts on FreeBSD, the command to be issued is:

freebsd# /usr/local/sbin/httpd -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:80 is a NameVirtualHost
default server www.pc-freak.net (/usr/local/etc/apache2/httpd.conf:1218)
port 80 namevhost www.pc-freak.net (/usr/local/etc/apache2/httpd.conf:1218)
port 80 namevhost pcfreak.afraid.org (/usr/local/etc/apache2/httpd.conf:1353)
...
Syntax OK

On Fedora and the other Red Hat based Linux distributions, apachectl -S (or httpd -S) should display the enabled VirtualHosts.

One might wonder what the reason could be for someone to want to check which VirtualHosts are loaded by the Apache server, since this could also be checked by reviewing Apache's config files. Well, the main advantage is that checking directly in the files might sometimes take more time, especially if a file contains thousands of similarly named virtual host domains. Another case where using the -S option is better is when some enabled VirtualHost in a config file seems not to be accessible; checking directly whether Apache has properly loaded the VirtualHost directives ensures there is no problem with loading it. Another scenario is when there are multiple Apache config files / installs located on the system and you're unsure which one to check for the exact list of virtual domains loaded.
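
If you still want to cross-check against the configuration files themselves, a recursive grep over the enabled sites directory (Debian-style paths assumed here) gives a rough list of the configured server names to compare with the apache2ctl -S output:

server:~# grep -RiE 'ServerName|ServerAlias' /etc/apache2/sites-enabled/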

How to Prevent Server inaccessibility by using a secondary SSH Server access port

Monday, December 12th, 2011

One of the Debian servers' SSH daemon suddenly became inaccessible today. While trying to ssh in, I experienced the following error:

$ ssh root@my-server.net -v
OpenSSH_5.8p1 Debian-2, OpenSSL 0.9.8o 01 Jun 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to mx.soccerfame.com [83.170.104.169] port 22.
debug1: Connection established.
debug1: identity file /home/hipo/.ssh/id_rsa type -1
debug1: identity file /home/hipo/.ssh/id_rsa-cert type -1
debug1: identity file /home/hipo/.ssh/id_dsa type -1
debug1: identity file /home/hipo/.ssh/id_dsa-cert type -1
...
Connection closed by remote host

Interestingly, only the SSH server and sometimes the mail server were failing to respond, and therefore any means to access the server was lost. Anyway, some of the services on the server, for example Nginx, continued working just fine.
Some time ago, while still working for the design.bg web development company, I experienced similar errors with SSH servers, so I already had a clue about a way to work around the issue and secure myself against losing access to a remote server because the secure shell daemon has broken.

My workaround is actually very simple: I run a secondary sshd (a different sshd instance) listening on a different port number.

To do so I invoke the sshd daemon on port 2207 like so:

debian:~# /usr/sbin/sshd -p 2207
debian:~#

Besides that, to ensure my sshd -p 2207 will be running on the next boot, I add:

/usr/sbin/sshd -p 2207

to /etc/rc.local (before the script's final exit 0 line). I set sshd -p 2207 to run via /etc/rc.local on purpose, instead of directly adding a Port 2207 line to /etc/ssh/sshd_config. The reason why I'm not using /etc/ssh/sshd_config is that I'm not sure whether setting a secondary port in the sshd config runs that port under a different sshd parent process. If it doesn't, then once the main parent hangs, the secondary port would become inaccessible as well.
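
A variation of the same trick which I find a bit cleaner (just a sketch, the file names are my own choice) is to give the second daemon its very own config and PID file, so the two instances are guaranteed to be completely independent:

debian:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-2207
(edit the copy: set Port 2207 and a PidFile like /var/run/sshd-2207.pid in it)
debian:~# /usr/sbin/sshd -f /etc/ssh/sshd_config-2207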