Posts Tagged ‘developers’

Monitoring Linux hardware Hard Drives / Temperature and Disk with lm_sensors / smartd / hddtemp and Zabbix Userparameter lm_sensors report script

Thursday, April 30th, 2020

monitoring-linux-hardware-with-software-temperature-disk-cpu-health-zabbix-userparameter-script

I'm part of a SysAdmin Team that is doing some minor Zabbix improvements on a custom corporate Zabbix installation, as part of an ongoing project to substitute the previous HP OpenView monitoring for a bunch of Legacy Linux hosts.
As one of the necessary checks to have concerns system Hardware, the task was to invent some simplistic way to monitor hardware with the Zabbix Monitoring tool. Monitoring the hardware of Bare Metal servers from HP / Dell / Fujitsu etc. in Linux is usually done with third party software provided by the Hardware vendor. But as this requires additional services to run, it is sometimes not desired, so it was interesting to find out some alternative Linux native ways to do the System hardware monitoring.
Monitoring statistics from the system hardware components can be obtained directly from the server components with ipmi / ipmitool (for more info on it check my previous article Reset and Manage intelligent Platform Management remote board).
With ipmi, hardware health info could be received straight from the ILO / IDRAC / HPMI of the server. However, as the Admin-Lan of the server is often in a separate DMZ secured network and available only via a certain set of routed IPs, ipmitool can't always be used.

So what are the other options to use to implement Linux Server Hardware Monitoring?

The tools to use are perhaps many, but I know of two which give you most of the information you would ever need as a preliminary hardware damage warning system before a crash. These are:
 

1. smartmontools (smartd)

Smartd is part of the smartmontools package, which contains two utility programs (smartctl and smartd) to control and monitor storage systems using the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into most modern ATA/SATA, SCSI/SAS and NVMe disks.

Disk monitoring is handled by a special service the package provides called smartd, which queries the Hard Drives periodically, aiming to find warning signs of hardware failures.
The downside of smartd is that it implies a little bit of extra load on Hard Drive reads / writes and, if misconfigured, could reduce the Hard disk's lifetime.

 

linux:~#  /usr/sbin/smartctl -a /dev/sdb2
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-5-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     KINGSTON SA400S37240G
Serial Number:    50026B768340AA31
LU WWN Device Id: 5 0026b7 68340aa31
Firmware Version: S1Z40102
User Capacity:    240,057,409,536 bytes [240 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 4
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Thu Apr 30 14:05:01 2020 EEST
SMART support is: Available – device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (  120) seconds.
Offline data collection
capabilities:                    (0x11) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Suspend Offline collection upon new
                                        command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        No Selective Self-test supported.
SMART capabilities:            (0x0002) Does not save SMART data before
                                        entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  10) minutes.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   100   100   000    Old_age   Always       –       100
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       –       2820
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       –       21
148 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       0
149 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       0
167 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       0
168 Unknown_Attribute       0x0012   100   100   000    Old_age   Always       –       0
169 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       0
170 Unknown_Attribute       0x0000   100   100   010    Old_age   Offline      –       0
172 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       –       0
173 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       0
181 Program_Fail_Cnt_Total  0x0032   100   100   000    Old_age   Always       –       0
182 Erase_Fail_Count_Total  0x0000   100   100   000    Old_age   Offline      –       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       –       0
192 Power-Off_Retract_Count 0x0012   100   100   000    Old_age   Always       –       16
194 Temperature_Celsius     0x0022   034   052   000    Old_age   Always       –       34 (Min/Max 19/52)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       –       0
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       –       0
218 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       –       0
231 Temperature_Celsius     0x0000   097   097   000    Old_age   Offline      –       97
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always       –       2104
241 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       –       1857
242 Total_LBAs_Read         0x0032   100   100   000    Old_age   Always       –       1141
244 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       32
245 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       107
246 Unknown_Attribute       0x0000   100   100   000    Old_age   Offline      –       15940

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

Selective Self-tests/Logging not supported
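
Apart from ad-hoc smartctl runs, the smartd daemon itself is told what to watch via /etc/smartd.conf. Below is a minimal sketch of such a directive; the device name, self-test schedule, temperature thresholds and mail recipient are example values of mine, adjust them to your environment:

# monitor all attributes (-a), enable offline data collection and attribute
# autosave (-o on -S on), run a short self-test daily at 02:00 and a long one
# on Saturday at 03:00 (-s), warn on temperature changes / thresholds (-W)
# and mail warnings to root (-m)
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -W 4,45,55 -m root

After changing smartd.conf the smartd service has to be restarted for the new directive to take effect.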

 

2. hddtemp

 

Usually if smartd is used, it is also useful to use hddtemp, which relies on the same S.M.A.R.T. data. The hddtemp program monitors and reports the temperature of PATA, SATA or SCSI hard drives by reading Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T.) information on drives that support this feature.
 

linux:~# /usr/sbin/hddtemp /dev/sda1
/dev/sda1: Hitachi HDS721050CLA360: 31°C
linux:~# /usr/sbin/hddtemp /dev/sdc6
/dev/sdc6: KINGSTON SV300S37A120G: 25°C
linux:~# /usr/sbin/hddtemp /dev/sdb2
/dev/sdb2: KINGSTON SA400S37240G: 34°C
linux:~# /usr/sbin/hddtemp /dev/sdd1
/dev/sdd1: WD Elements 10B8: S.M.A.R.T. not available
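
For continuous polling (for example from a monitoring agent) hddtemp can also be started in daemon mode, in which, as far as I remember, it listens on TCP port 7634 by default. A quick sketch, assuming the listed drives exist on the host, start it with:

linux:~# /usr/sbin/hddtemp -d /dev/sda /dev/sdb

and query it over the socket (it replies with a pipe-delimited line per monitored drive):

linux:~# nc localhost 7634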

 

 

3. lm-sensors / i2c-tools 

Lm-sensors is a hardware health monitoring package for Linux. It allows you to access information from temperature, voltage, and fan speed sensors.
i2c-tools was historically bundled in the same package as lm_sensors but has been separated, because not all hardware monitoring chips are I2C devices, and not all I2C devices are hardware monitoring chips.
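
Note that on a freshly installed host the sensors command usually shows little or nothing until the bundled sensors-detect tool has been run once as root; it probes for sensor chips and suggests which kernel modules to load (the interactive defaults are generally safe):

linux:~# sensors-detect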

The most basic use of lm-sensors is with the sensors command

 

linux:~# sensors
i350bb-pci-0600
Adapter: PCI adapter
loc1:         +55.0 C  (high = +120.0 C, crit = +110.0 C)

 

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 0:         +26.0 C  (high = +78.0 C, crit = +88.0 C)
Core 1:         +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 2:         +28.0 C  (high = +78.0 C, crit = +88.0 C)
Core 3:         +28.0 C  (high = +78.0 C, crit = +88.0 C)

 


On CentOS Linux a useful tool is also lm_sensors-sensord.x86_64: a daemon that periodically logs sensor readings to syslog or a round-robin database, and warns of sensor alarms.

In Debian Linux there is also the psensor-server package (an HTTP server providing a JSON Web service which can be used by the Psensor GTK+ application to remotely monitor sensors), useful for developers.

psensor-linux-graphical-tool-to-check-cpu-hard-disk-temperature-unix

If you have an X server installed on the Server, accessed with an X client or via VNC (though quite rare),
you can use xsensors or Psensor, a GTK+ (Widget Toolkit for creating Graphical User Interfaces) application.

With these 3 tools it is pretty easy to script one-liners and use the Zabbix UserParameter functionality to send hardware report data to the company's Zabbix Server. Zabbix already has some templates to do so, but in my case I couldn't import those templates because I don't have Zabbix Super-Admin credentials, thus a sample work-around is to use a script that monitors for temperatures considered high or critical.
Here is a tiny sample script I came up with in 1 minute of time; it can be used as a 1-liner UserParameter and built upon for something more complex.

SENSORS_HIGH=`sensors | awk '{ print $6 }' | grep '^+' | uniq`;
SENSORS_CRIT=`sensors | awk '{ print $9 }' | grep '^+' | uniq`;
SENSORS_STAT=`sensors | grep -E 'Core\s' | awk '{ print $1" "$2" "$3 }' | grep -E "$SENSORS_HIGH|$SENSORS_CRIT"`;
if [ ! -z "$SENSORS_STAT" ]; then
echo 'Temperature HIGH';
else
echo 'Sensors OK';
fi
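
To wire this into Zabbix, the logic above (saved for example as /usr/local/bin/sensors_check.sh, a hypothetical path and key name of my choosing) is exposed through the agent configuration roughly like this:

UserParameter=hw.sensors.status,/usr/local/bin/sensors_check.sh

After adding the line to zabbix_agentd.conf (or a file under zabbix_agentd.d/) and restarting the zabbix-agent service, the hw.sensors.status key can be used as a normal Zabbix item and alarmed on with a trigger.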

Of course there is much more sophisticated stuff to use for monitoring out there.


The above script can be easily adapted and used on other Monitoring Platforms such as Nagios / Munin / Cacti / Icinga, and there are plenty of paid solutions as well, but for anyone who wants to develop something from scratch just like me, I hope this article will be a good short introduction.
If you know some other Linux hardware monitoring tools, please share.

Change website .JS .PHP Python Perl CSS etc. file permissions recursively for Better Tightened Security on Linux Webhosting Servers

Friday, October 30th, 2015

change-permissions-recursively-on-linux-to-protect-website-against-security-breaches-hacks

It is a common security mistake of developers or a web design studio working with dedicated or shared hosted websites to forget to set nice restrictive file permissions.

This is so because most people (and especially nowadays) developers are not security freaks, and the important thing for a programmer is to get the result running in the shortest time without caring much about how secure that is.
Permissions issues are common among sites written in PHP / Perl / Python with some CSS and JavaScript, but my observation is that JavaScript-heavy websites, especially those using frameworks such as Zend / Smarty etc. together with jQuery, are the most susceptible to permission security holes such as the classic 777 file permissions. This happens because developers who are overworked and pushed towards deadlines to include new functionality often publish their experimental code on Production systems without serious testing, by directly uploading it via FTP / WinSCP.

Such scenarios are very common for the websites of small and middle sized companies, as well as for many hobbyist developers' websites running on ready CMS platforms such as Joomla and WordPress.
I know pretty well from experience that this is so. Often a lot of the servers where websites are hosted are just shared servers without a dedicated sysadmin, thus no routine security audits are made, and the permission issues might lead to a serious website compromise by a cracker and quickly get your website banned from Google / Yahoo / Ask Jeeves / Yandex and virtually most Search Engines for being marked as a spammer or hacked website in some of the multiple website blacklists available nowadays.

Thus it is always a good idea to keep your server files (especially if you're the sysadmin) with restrictive permissions, by making the files owned by the superuser (root), in order to prevent some XSS or a vulnerable PHP / Python / Perl script from being abused to easily inject and overwrite code on your website.

1. Checking whether you have files with all-users read, write, execute permissions with the find command

The first thing to do on your server to assure you don't have files with too loose security permissions is:

find /home/user/website -type f -perm 777 -print

You will get some file as an output like:

./www/tpl/images/js/ajax-dynamic-list/js/ajax-dynamic-list.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax_admin.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax_teams.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax.js
./www/tpl/images/js/ajax-dynamic-list/js/ajax-dynamic-list_admin.js
./www/tpl/images/js/ajax-dynamic-list/lgpl.txt
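
Besides exact 777 matches, it may also be worth listing anything that is world-writable at all, directories included; a small sketch along the same lines:

find /home/user/website -type f -perm -0002 -print
find /home/user/website -type d -perm -0002 -print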

2. Change permissions recursively to read, write and exec for root and read for everybody and set all files to be owned by (root) superuser

Then to fix the messy permissions, a commonly recommended permission set is 744 (i.e. Read / Write / Execute for the owner and Read-only for the group and everyone else).
Let's say you want to set permissions to 744 just for all JavaScript (jQuery) files of a website, here is how:

find . -iname '*.js' -type f -print -exec chown root:root '{}' \;
find . -iname '*.js' -type f -print -exec chmod 744 '{}' \;

The first find makes all JavaScript files owned by the root user / group and the second one sets their permissions to 744.

To make all files on the server 744 (including JPEG / PNG pictures etc.):

find /home/users/website -type f -print -exec chown root:root '{}' \;
find /home/users/website -type f -print -exec chmod 744 '{}' \;
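
Note that plain web content usually does not need the execute bit at all, while directories do need it in order to be traversable. So a slightly more granular sketch of the same idea (paths are example values, adjust to your docroot) would be:

find /home/users/website -type d -print -exec chmod 755 '{}' \;
find /home/users/website -type f -print -exec chmod 644 '{}' \;

Scripts interpreted by the webserver (PHP etc.) normally keep working this way, as they don't need the filesystem execute bit to be served.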

Create SSH Tunnel to MySQL server to access remote filtered MySQL port 3306 host through localhost port 3308

Friday, February 27th, 2015

create_ssh_tunnel_to-mysql_server-to-access-remote-filtered-mysql-on-port-3306-secure_ssh_traffic
On our Debian / CentOS / Ubuntu Linux and Windows servers we're running multiple MySQL servers and our customers sometimes need to access these servers.
This is usually a problem, because the MySQL DB servers are running in a DMZ Zone with a strong firewall, and besides that, for security reasons the SQL servers are configured to only listen for connections coming from localhost. I mean that in the config files across our Debian Linux servers and CentOS / RHEL Linux machines (/etc/mysql/my.cnf and /etc/my.cnf) the setting for bind-address is 127.0.0.1:
 

[root@centos ~]# grep -i bind-address /etc/my.cnf 
bind-address            = 127.0.0.1
##bind-address  = 0.0.0.0


For source code developers who are accessing development SQL servers only through a VPN-secured DMZ Network, there are a few MySQL servers with remote access allowed from all hosts, e.g. on those I have configured:
 

[root@ubuntu-dev ~]# grep -i bind-address /etc/my.cnf 

bind-address  = 0.0.0.0


However, even though clients insisted on having remote access to their MySQL Databases, since this is pretty insecure we decided not to configure the MySQL servers to listen on all available IP addresses / network interfaces.
MySQL access is allowed only through PhpMyAdmin, accessible via the Client's Web interface, which on some servers is CPanel and on others Kloxo (an open source CPanel-like, very nice webhosting platform).

For some stubborn clients who wanted to have mysql CLI and MySQL Desktop client access, to be able to easily analyze their databases with Desktop clients such as MySQL WorkBench, there is a "hackers"-like work-around: create and use an SSH Tunnel to the SQL server from their local Windows PCs, using the standard OpenSSH Linux Client from Cygwin, MobaXterm (which already comes with the SSH client pre-installed and has an easy GUI interface to create SSH tunnels), or eventually Putty's Plink (Command Line Interface) to create the tunnel.

Anyways, the preferred and recommended (easiest) way to achieve a tunnel between MySQL and the local PC (no matter whether a Windows or Linux client system) is to use the standard ssh client with the below command:
 

ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com
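
Once the tunnel is established, the remote MySQL becomes reachable on the local end of the tunnel, i.e. on localhost port 3308. A quick usage sketch with the stock mysql CLI client (the user name is just an example):

mysql -h 127.0.0.1 -P 3308 -u your_db_user -p

Using 127.0.0.1 instead of localhost forces the client to connect over TCP rather than the local UNIX socket; GUI clients like MySQL WorkBench are pointed the same way (host 127.0.0.1, port 3308).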


By default an idle SSH tunnel will be kept open for only a few minutes and, if not used, it will automatically close. To get around this issue you might want to make the client send keep-alive probes. To do so, the user has to add in his home directory:
 

~/.ssh/config

ServerAliveInterval 15
ServerAliveCountMax 4


Note that sometimes, even though the ssh keep-alive values are raised, it is possible they do not take effect if there is some NAT (Network Address Translation) with a low timeout setting on a firewall level. If you face constant SSH Tunnel timeouts, you can use the below few lines of bash code to auto-respawn the SSH tunnel connection (Windows users can use MobaXterm or install the cygwin bash shell package in advance):
 

while true
do
  ssh -o ServerAliveInterval=10 -M -T -N -L 3308:localhost:3306 your-server.your-domain.com
  sleep 15
done


Below is a MySQL WorkBench screenshot, connected through the server where this blog is located, after establishing an ssh tunnel to the remote mysql server via port 3308 on localhost:

mysql-workbench-database-analysis-and-management-gui-tool-convenient-for-data-migratin-and-queries-screenshot-

There is also another alternative way to access remote firewall-filtered mysql servers without running complex commands to set up a tunnel, which we recommend for clients (sql developers / sql designers): using HeidiSQL (a useful tool for webdevelopers who have to deal with MySQL and MSSQL hosted DBs).

heidisql-show-host_processlist-screenshot

To connect to remote MySQL server through a Tunnel using Heidi:

mysql_connection_configuration-heidi-mysql-gui-connect-tool

 

In the ‘Settings’ tab

1. In the dropdown list of ‘Network type’, please select SSH tunnel

2. Hostname/IP: localhost (even if you are connecting remotely)

3. Username & Password: your mysql user and password

Next, in the tab SSH Tunnel:

1. specify plink.exe (or download it first if you don't have it) and where it's located

2. Host + port: the remote IP of your SSH server (should be the MySQL server as well), port 22 if you don't change anything

3. Username & password: SSH username (not MySQL user)

 

heidi-connection_ssh_tunnel_configuration-heidi-sql-tool-screenshot
 

How to Turn Off, Suppress PHP Notices and Warnings – PHP error handling levels via php.ini and PHP source code

Friday, April 25th, 2014

php-logo-disable-warnings-and-notices-in-php-through-htaccess-php-ini-and-php-code

PHP Notices commonly occur after PHP version upgrades or when obsolete PHP code is moved from an old PHP version to a new one. This is a common error in web software using Frameworks which have been abandoned by their developers.

Having PHP Notices appear on a webpage is pretty ugly and gives a lot of information which might be used by malicious crackers to try to break your site, thus it is always a good idea to disable PHP Notices. There are plenty of ways to do that.

The easiest way to disable it globally for the whole Webserver PHP library is via php.ini (/etc/php.ini): open it and make sure display_errors is disabled:

display_errors = 0

or

display_errors = Off

Note that some claim that in PHP 5.3 setting display_errors to Off will not work as expected. Anyways, to make sure whether display_errors is ON or OFF for your loaded PHP Version, use phpinfo();
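
A quick way to check the currently effective values from the shell is (assuming the php CLI binary is installed; keep in mind the CLI may load a different php.ini than the webserver's module):

php -i | grep -E 'display_errors|error_reporting'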

It is also possible to disable PHP Notices and error reporting straight from the PHP code; you need code like:

 

<?php
// Turn off all error reporting
error_reporting(0);
?>

 

or through code:

 

ini_set('display_errors',0);


PHP has different levels of error reporting; here is a complete list of the possible error reporting settings:

 

 

 

<?php
// Report simple running errors

error_reporting(E_ERROR | E_WARNING | E_PARSE);

// Reporting E_NOTICE can be good too (to report uninitialized
// variables or catch variable name misspellings …)

error_reporting(E_ERROR | E_WARNING | E_PARSE | E_NOTICE);

// Report all errors except E_NOTICE
// This is the default value set in php.ini

error_reporting(E_ALL ^ E_NOTICE);
// Report all PHP errors (see changelog)

error_reporting(E_ALL);
// Report all PHP errors
error_reporting(-1);
// Same as error_reporting(E_ALL);

ini_set('error_reporting', E_ALL); ?>

The level of error reporting can also be tuned on Debian Linux via /etc/php5/apache2/php.ini, or, if necessary for the PHP CLI, through /etc/php5/cli/php.ini, with:

error_reporting = E_ALL & ~E_NOTICE

 

If you need to suppress an exact warning or notice in PHP without changing the way the PHP library behaves, you can put @ in front of the variable or function call that is causing the NOTICE or WARNING.
For example:
 

@yourFunctionHere();
@$var = …;


It's also possible to disable PHP Notices and Warnings using a .htaccess file (useful in shared hosting where you don't have access to the global php.ini), here is how:

# PHP error handling for development servers
php_flag display_startup_errors off
php_flag display_errors off
php_flag html_errors off
php_flag log_errors on
php_flag ignore_repeated_errors off
php_flag ignore_repeated_source off
php_flag report_memleaks on
php_flag track_errors on
php_value docref_root 0
php_value docref_ext 0
php_value error_log /home/path/public_html/domain/php_errors.log
php_value error_reporting -1
php_value log_errors_max_len 0

This way, though PHP Notices and Warnings will be suppressed, errors will still get logged into php_errors.log.
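
With that in place it is handy to watch the log live while testing, with the usual tail (path as defined in the .htaccess example above):

tail -f /home/path/public_html/domain/php_errors.log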

Configure GNOME 3 to support dual / multiple monitors / Fix broken workspaces

Sunday, September 22nd, 2013

gnome3 dual 2 monitors not showing right workspace display issue how to fix

If you're using some GNU / Linux distribution with GNOME 3 and you would like to show the screen output on two Monitors connected to the machine, you will stumble upon really unusual behavior. For some unknown reason the GNOME environment developers made the second monitor stay fixed on the First Workspace, so however you try changing Desktops to the second / third etc. Virtual Desktop, you end up with your secondary monitor focused on Workspace 1. Logically the use of a Dual monitor configuration is to show all GUI output identically on both monitors, so this behavior is "wrong" ….

Fortunately there is a setting that controls this weird behavior in GNOME through gconf-editor, and simply changing it makes the monitors show workspaces properly.

To fix it:

Start the Run Command dialog by pressing Alt + F2 to invoke the GNOME Run menu and launch gconf-editor

Navigate to the registry path Desktop -> Gnome -> Shell -> Windows and Uncheck the selection on workspaces_only_on_primary

gconf-editor-gnome3-fix-dual-monitor-improperly-showing-workspaces

To make the new changes take effect it's necessary to Log Off or Restart the PC.

There is another, easier way for command line oriented people to apply the change without using / having installed gconf-editor, by issuing:

gsettings set org.gnome.shell.overrides workspaces-only-on-primary false 
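
To double-check what the override is currently set to (before or after the change), the matching read-out command should be:

gsettings get org.gnome.shell.overrides workspaces-only-on-primary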

Capturing Video from WebCamera in Console and Terminal on Linux with good old ffmpeg

Tuesday, December 18th, 2012

 

Capturing video from webcamera in Skype and Desktop on Debian Ubuntu Fedora Linux Desktop - tux director webcamera recording from skype and desktop ffmpeg

Two articles ago I blogged on how one can take pictures from the console / terminal with ffmpeg. As an interesting fact, I've stumbled upon ffmpeg also being capable of capturing video when executed from a terminal or a plain console TTY.

 

The command to do so is:

# ffmpeg -f video4linux2 -r 25 -s 640x480 -i /dev/video0 webcam-movie.avi
FFmpeg version SVN-r25838, Copyright (c) 2000-2010 the FFmpeg developers
  built on Sep 20 2011 17:00:01 with gcc 4.4.5
  configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libschroedinger --disable-encoder=libschroedinger --enable-version3 --enable-libopenjpeg --enable-libvpx --enable-librtmp --extra-libs=-lgcrypt --disable-altivec --disable-armv5te --disable-armv6 --disable-vis
  libavutil     50.33. 0 / 50.43. 0
  libavcore      0.14. 0 /  0.14. 0
  libavcodec    52.97. 2 / 52.97. 2
  libavformat   52.87. 1 / 52.87. 1
  libavdevice   52. 2. 2 / 52. 2. 2
  libavfilter    1.65. 0 /  1.65. 0
  libswscale     0.12. 0 /  0.14. 1
  libpostproc   51. 2. 0 / 51. 2. 0


As you can see, in accordance with the WebCamera's maximum supported resolution, one can change 640×480 to something higher in case an expensive HD webcam is attached.
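
If you're unsure which resolutions and pixel formats the camera actually advertises, they can be listed with the v4l2-ctl tool (assuming the v4l-utils package is installed):

v4l2-ctl --device=/dev/video0 --list-formats-ext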

Note that the webcamera should not be in use when issuing the command, otherwise, because /dev/video0 is busy, you will get:

[video4linux2 @ 0x633160] Cannot find a proper format for codec_id 0, pix_fmt -1. /dev/video0: Input/output error

Another interesting topic I thought about is whether it is possible to somehow capture the Video currently being streamed, for example while a Skype conference is established; unfortunately it is not possible to do it with ffmpeg, because /dev/video0 is in use while the Skype Video stream flows.

There is another way to record Skype and other Programs using the WebCam (e.g. Cheese), by using a small command line tool called recordmydesktop.

To use recordmydesktop to save (record) a Skype Video Conference just run it in advance and afterwards make your Skype call. To capture input from the WebCam while it is in use there are two other GUI instruments capturing the Active Desktop: Istanbul and vnc2swf. If you have never used any of those and you want to read a short review on them, check out my older article – Best Software Available Today for GNU / Linux Desktop capturing on Debian

The little problem with recording the desktop is that if you want to record the Skype conference and use the software straight away, you will also catch the rest of the Desktop; however it is possible to set recordmydesktop to record content only from a Window with a specific ID, so recording only the Skype Video should be possible too.

I was intrigued by the question whether, after all, Video Capturing is possible while Video is streamed from the WebCam with ffmpeg, so I did a quick research for the command line freaks, here is how:
 

ffmpeg -f x11grab -s `xdpyinfo | grep -i dimensions: | sed 's/[^0-9]*pixels.*(.*).*//' | sed 's/[^0-9x]*//'` -r 25 -i :0.0 -sameq recorder-video-from-cam.avi

The only problem with this command line is that the captured video will be without sound. To take the Video and Sound input with ffmpeg use:

ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 1024x768 -i :0.0 -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0 mydesktop.mov

 

On Debian and Ubuntu Linux, there is also a GUI for recordmydesktop; the package name to install is gtk-recordmydesktop. GTK-RecordMyDesktop works pretty well, so for people looking for convenience and for ex-Windows, GUI oriented Linux users it is probably the best choice.

To use it on Debian:

# apt-get --yes install gtk-recordmydesktop

and launch it with cmd:

# gtk-recordmydesktop

recording Skype and Desktop Webcam Video on Windows program allowing capture / record content from webcam from certain Window

As you can see in the above screenshot, GTK-RecordMyDesktop can select a Certain Window on the Desktop to record, so with it it is a piece of cake to:

1. start the Skype Video  conference
2. Launch gtk-recordmydesktop
3. Press Select Window and Select Skype Video Stream

I'm curious if the pointed out Skype + gtk-recordmydesktop method to capture active Skype videos will be working on FreeBSD. Unfortunately I don't have a FreeBSD Desktop with an attached WebCam to give it a try; I will be very thankful if someone using FreeBSD / NetBSD happens to read this article and takes a few minutes to test if it works and drop a comment below.

That's all, Enjoy, your captured video with sound 😉

How to debug mod_rewrite .htaccess problems with RewriteLog / Solve mod_rewrite broken redirects

Friday, September 30th, 2011

It's a common thing that CMS systems and many developers' custom .htaccess rules cause issues where websites depending on mod_rewrite fail to work properly. The most common issues are broken redirects or mod_rewrite rules which behave differently among the different mod_rewrite versions which come with different versions of Apache.

Every time there are such problems it is necessary that mod_rewrite's RewriteLog functionality is used.
Even though the RewriteLog mod_rewrite config variable is well described on httpd.apache.org, I decided to drop a little post here as I'm pretty sure many novice admins might not know about the RewriteLog config var and might benefit from this small article.
Enabling logging of the requests the webserver processes via mod_rewrite rules is done via httpd.conf, apache2.conf etc., depending on which Apache config file naming the Linux / BSD distribution uses (the rewrite rules being debugged themselves usually live in the specific website's .htaccess, located in the site's root directory).

To enable RewriteLog, near the end of the Apache configuration file it is necessary to place these variables in the apache conf:

1. Edit RewriteLog and place following variables:

RewriteLogLevel 9
RewriteLog /var/log/rewrite.log

RewriteLogLevel defines the level of logging that should get logged in /var/log/rewrite.log.
The higher the RewriteLogLevel number defined, the more debugging related to mod_rewrite request processing gets logged.
RewriteLogLevel 9 is actually the highest log level there is. Setting the RewriteLogLevel to 0 will instruct mod_rewrite to stop logging. In many cases a RewriteLogLevel of 3 is also enough to debug most of the redirect issues, however I prefer to see more, so almost always I use a RewriteLogLevel of 9.
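
As a rough orientation of where these directives usually sit, here is a minimal sketch of an Apache 2.2-era VirtualHost carrying them (ServerName, DocumentRoot and the log path are example values):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
    RewriteEngine On
    RewriteLog /var/log/rewrite.log
    RewriteLogLevel 9
</VirtualHost>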

2. Create /var/log/rewrite.log and set writable permissions

a. Create /var/log/rewrite.log

freebsd# touch /var/log/rewrite.log

b. Set writable permissions

Either chown the file to the user with which the Apache server is running, or chmod it to permissions of 777.

On FreeBSD, the chown to allow the webserver to write to the file should be:

freebsd# chown www:www /var/log/rewrite.log

On Debian and alike distros:

debian:~# chown www-data:www-data /var/log/rewrite.log

On CentOS, Fedora etc.:

[root@centos ~]# chown apache:apache /var/log/rewrite.log

On any other distribution, if you don't want to bother to check the uid:gid, the permissions can be set with chmod 777, e.g.:

linux# chmod 777 /var/log/rewrite.log

Next, after RewriteLog is in the conf, the usual webserver restart is required to make the configs active.

To restart Apache On FreeBSD:

freebsd# /usr/local/etc/rc.d/apache2 restart
...

To restart Apache on Debian and derivatives:

debian:~# /etc/init.d/apache2 restart
...

On Fedora and derivative distros:

[root@fedora ~]# /etc/init.d/httpd restart
...

It's a common error to forget to set proper permissions on /var/log/rewrite.log; this has puzzled me many times when enabling RewriteLog's logging.

Another important note: when debugging for mod_rewrite is enabled, one often forgets to disable the logging afterwards, and after a while, if /var/log is placed on a small partition or the server is an old one with little free space, the RewriteLog quickly fills the disk and might create website downtimes. Hence always make sure RewriteLog is disabled once the rewrite debugging work is no longer needed.

The way I use to disable it is by commenting it in conf like so:

#RewriteLogLevel 9
#RewriteLog /var/log/rewrite.log

Finally, to check what the mod_rewrite processor is doing on the fly, it's handy to use the well known tail -f:

linux# tail -f /var/log/rewrite.log

A bit of time spent watching the requests should be enough to point to the exact problem causing the broken redirects or general website malfunction.
Cheers 😉