Posts Tagged ‘linux distributions’

Linux: how to Start multiple X sessions – Connect to remote Linux GNOME with no need for VNC by exporting display

Thursday, October 10th, 2013

It is sometimes useful in Linux to run multiple X servers and start a few different Window Managers on them (let's say one with Window Maker, one with GNOME and one with Fluxbox). Running a second / 3rd etc. X session is especially nice when you'd like to access your Desktop remotely (let's say from another Linux machine).

To start a second X session with only a terminal, from which you can then invoke any GUI environment, use:

xinit -- :1

For a third one do:

xinit -- :2
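
To go straight into a particular window manager on the new display instead of a bare terminal, you can pass its binary to xinit. A small sketch, assuming Window Maker is installed and its binary lives at /usr/bin/wmaker:

xinit /usr/bin/wmaker -- :1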

 

The first X session runs on display :0 (i.e. xinit -- :0). To switch between the running X sessions later, use the virtual console shortcuts. Note that from inside an X session you also have to hold CTRL. Depending on the Linux distribution and on which consoles it is configured to start new sessions, that is:

CTRL + ALT + F5, CTRL + ALT + F6, CTRL + ALT + F7.

On GNU / Linux distributions where the default Xorg server runs on TTY7, to switch to the 2nd and 3rd Window Manager use instead:

CTRL + ALT + F7, CTRL + ALT + F8, CTRL + ALT + F9

An alternative command to launch multiple sessions with, let's say, GNOME (if that is the default GUI environment) is:

startx -- :1

and

startx -- :2

If you want to launch a GUI environment from another Linux machine after connecting to it through an SSH or telnet terminal client (i.e. you have an old machine with Linux and no graphical environment and would like to use a 2nd machine with decent hardware where Xorg + GNOME run fine), the way to do it is via:

xhost +

and exporting DISPLAY to point at the host whose X server will display the programs.

Here is an example of how to launch a GUI environment from a remote Linux host by exporting the display.
For the example we assume the host that will run the GNOME session (the X clients) is 192.168.1.1, and the host whose Xorg server and screen will display it is 192.168.1.2.

a. On 192.168.1.2 (the machine running the Xorg server) allow remote hosts to connect to its X server:

xorg-machine:~# xhost +

The above command allows all hosts to connect to the X server on 192.168.1.2.

To allow only the single remote host to connect to the Xorg server on 192.168.1.2:

xorg-machine:~# xhost +192.168.1.1

b. SSH from 192.168.1.2 to 192.168.1.1 and, on 192.168.1.1, export the display back to 192.168.1.2:

# ssh user@192.168.1.1
remote-machine:~# export DISPLAY=192.168.1.2:0.0

c. To make the default GUI environment pop up on 192.168.1.2's screen, start the desktop session clients on 192.168.1.1 (for GNOME that is gnome-session):

remote-machine:~# gnome-session
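
Note that on many recent Linux distributions Xorg is started with the -nolisten tcp option, in which case direct remote connections like the above are simply refused. A quick way to check whether the X server on 192.168.1.2 accepts TCP connections at all (display :0 listens on TCP port 6000) is, for example:

xorg-machine:~# netstat -lnt | grep ':6000'

If nothing is printed, the X server is not listening on TCP and you will have to re-enable TCP listening (or tunnel the session over SSH with X forwarding) first.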

Client Denied By Server Configuration Linux Apache error solution

Thursday, August 1st, 2013


If you run an Apache server on Debian Linux / Ubuntu / CentOS or whatever Linux OS, and after installing a new PHP application under, let's say, /var/www/ you get an error in Apache's error.log like:
 

[Wed Jul 31 03:36:21 2013] [error] [client 192.168.10.2] client denied by server configuration: /var/www/vea/index.php, referer: http://192.168.10.9/vea/


This is due to misconfigured AllowOverrides in some of your main configuration files.

So what is causing the error?

The reason is that by default most current Linux distributions configure Apache with a restrictive policy, following good security practice (restrictive by default).
Apache is configured by default not to accept overrides, i.e. AllowOverride None for the DocumentRoot /, because plenty of administrators run Apache without a profound understanding of it, and letting it blindly interpret mod_rewrite rules from .htaccess files and the like would be risky.

To fix this issue you therefore have to add extra AllowOverride configuration for the directory giving the error, in this case /vea:

<Directory /var/www/vea/>
Options -Indexes +FollowSymLinks
AllowOverride AuthConfig FileInfo
Order allow,deny
Allow from all
</Directory>

The above rules are a bit restrictive and will allow .htaccess to be used only for things like protecting the directory with an htaccess password (AuthUserFile, AuthGroupFile, AuthName, AuthType) and for FileInfo directives.
-Indexes instructs directory listing for /var/www/vea to be disabled, and the two lines below:

Order allow,deny
Allow from all

make the directory visible (allowed) to everybody. Note, however, that some other Apache configuration file may have preventive (default deny) rules configured for the /vea directory under the /var/www/ DocumentRoot; if that is the case, just walk through all Apache configs and change them to Allow from all where necessary.

In some cases the web application or website placed there may require the AllowOverride All directive instead. If the above <Directory> block does not help and you continue to get:

[Wed Jul 31 03:36:21 2013] [error] [client xxx.xxx.xx.x] client denied by server configuration: /var/www/php-application/index.php, referer: http://xxx.xxx.xx.xx/php-application/  

try setting the Directory rules with AllowOverride All:

<Directory /var/www/php-application/>
Options -Indexes +FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
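
A side note: on Apache 2.4 and newer the old Order / Allow / Deny access control directives are replaced by Require (mod_authz_core), so a roughly equivalent block there would look like this (a sketch, reusing the example path from above):

<Directory /var/www/php-application/>
Options -Indexes +FollowSymLinks
AllowOverride All
Require all granted
</Directory>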

Debian / Ubuntu server admins should check closely for AllowOverride rules in files /etc/apache2/conf.d/*

as well as in /etc/apache2/mods-available/*:

Usually there are AllowOverride rules set from files:

/etc/apache2/conf.d/apache2-doc
/etc/apache2/conf.d/localized-error-pages
/etc/apache2/conf.d/security

and also in

/etc/apache2/mods-available/alias.conf
/etc/apache2/mods-available/userdir.conf

On Debian GNU / Linux, a very common reason for getting client denied by server configuration is the AllowOverride definition in /etc/apache2/conf.d/security, where the default is AllowOverride None, i.e.:

<Directory />
AllowOverride None
Order Deny,Allow
Deny from all
</Directory>

If that's the case on your server, comment the block out so the rules become:

# <Directory />
# AllowOverride None
# Order Deny,Allow
# Deny from all
# </Directory>

A very useful command to find every occurrence of AllowOverride across Apache's many configs is:

root@linux:~# cd /etc/apache2
root@linux:/etc/apache2# grep -rli AllowOverride *
apache2.conf
conf.d/localized-error-pages
conf.d/apache2-doc
conf.d/security
mods-available/userdir.conf
mods-available/alias.conf
sites-available/www.website1.com
sites-available/www.website2.com
...


 

Once you have made all the necessary config changes, restart Apache:

root@linux:~# /etc/init.d/apache2 restart
....

Merging pictures on the Linux command shell with ImageMagick montage

Friday, May 17th, 2013


It is often useful to combine multiple pictures into a single one. An example case where merging pictures on Linux is necessary is if you previously used ImageMagick's convert command line tool to convert a PDF file's pages to JPEG / PNG pictures. Unfortunately convert (as far as I know) is only capable of generating multiple picture files instead of one single file, so you additionally need montage to merge the per-page pictures into one. In my case I had my Curriculum Vitae in PDF and I needed the same PDF as a single photo for my applications on the Belarusian online job employment portal rabota.tut.by.
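
For reference, the PDF-to-pictures step mentioned above can also be done with convert itself; a small sketch (the input file name is simply the CV used later in this post, and the %d placeholder numbers the output pages):

linux:~$ convert CV_Georgi_Georgiev_bg.pdf CV_Georgi_Georgiev_bg-%d.png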

montage is one of the numerous command line tools in the ImageMagick package.
On all major Linux distributions (Debian / Ubuntu, Fedora, CentOS, RHEL, SuSE) montage comes installed together with the imagemagick deb / ImageMagick rpm package.

If you don't have montage on Debian / Ubuntu and deb derivatives, install it via:

linux:~# apt-get install --yes imagemagick
....

On CentOS, Fedora, RHEL, SuSE to install montage:

[root@centos ~]# yum -y install ImageMagick
....

To merge two JPEG photos into a single PNG format picture:

linux:~$ montage -geometry +2+2 Picture-1.jpeg Picture-2.jpeg Merged-picture.png

Combining more photos, let's say the 8 page pictures output from the earlier PDF-to-pictures conversion, is done with:

linux:~$ montage -geometry +8+8 CV_Georgi_Georgiev_bg-0.png \
CV_Georgi_Georgiev_bg-1.png \
CV_Georgi_Georgiev_bg-2.png \
CV_Georgi_Georgiev_bg-3.png \
CV_Georgi_Georgiev_bg-4.png \
CV_Georgi_Georgiev_bg-5.png \
CV_Georgi_Georgiev_bg-6.png \
CV_Georgi_Georgiev_bg-7.png \
CV_Georgi_Georgiev_bg.png

montage has plenty of other useful options for doing various photo montages from the command line. Another way to merge photos with montage is by using:

linux:~$ montage -mode concatenate -tile 1x input-pic*.jpg out.jpg

Merging photos is also possible by using convert directly.

Combining multiple photos into a single JPEG or PNG with ImageMagick's convert is done with:

linux:~$ convert -append input-pic-*.jpg combined-picture.jpg
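
A small detail worth knowing about convert here: -append stacks the input pictures vertically (one below the other), while +append places them side by side horizontally, e.g.:

linux:~$ convert +append input-pic-*.jpg combined-horizontal.jpg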

More example uses of montage can be found on ImageMagick's montage usage page.

 

Linux Mint 14 – “Nadia”: how to Display Trash icon on Desktop

Wednesday, March 13th, 2013

Recently Linux Mint has been taking the lead among preferred Linux distributions. From my little experience with it, mainly installing it on friends' PCs, I should say the Mint developers have done a great job making it graphically convenient for users migrating from Windows OS.

Though it is generally intuitive, there is one little thing that might be useful for the novice Mint user: where to enable the Trash icon from.

There are two ways to do it.

1. By installing / launching gnome-tweak-tool

Linux Mint 12 desktop trashbin screenshot

I personally prefer gnome-tweak-tool, because it has plenty of nice options related to how the GUI environment behaves. I believe even non-Linux-Mint GNOME 3 users should take a look at gnome-tweak-tool if they haven't already, as it allows the user to tailor plenty of nice desktop settings.
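
If gnome-tweak-tool is not already present, on Mint / Ubuntu it can usually be installed straight from the standard repositories, for example:

mint:~# apt-get install --yes gnome-tweak-tool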

2. Through [Main Menu]

-> Preferences -> Desktop Settings -> Desktop -> Desktop icons

Linux Mint Desktop Settings, enabling the Trash icon on the desktop (screenshot)

Get more peaceful night sleep on Ubuntu, Mint and Xubuntu Linux using gtk-redshift

Monday, March 11th, 2013

gtk-redshift running on Xubuntu Linux (screenshot)

If you want a more peaceful night's sleep when working late on Ubuntu or another Debian based Linux distro, be sure to have gtk-redshift installed.
It is a little program that simply changes the color gamma of the screen and makes it look more reddish at night. According to plenty of scientific research on how we humans react when using a computer late at night, less bright colors, and especially a reddish color gamma, relax eye strain and thus make it easier to fall asleep quickly once in bed. gtk-redshift is available in the latest Ubuntu 12.04 as well as in other Ubuntu derivatives (Xubuntu, Linux Mint) etc.

The easiest way to install it is via the respective GUI package manager or via the good old Synaptic (GUI apt frontend).
I personally prefer to always install Synaptic on new desktop Linux PCs and use it as the package GUI frontend, for the simple reason that it offers a very similar "unified" package installer look across different Linux distros.

The quickest way to get the GUI version of Redshift, though, is to install it with apt:

root@xubuntu:~# apt-get install --yes gtk-redshift
....

To actually use it, it needs to be run once with location parameters (latitude:longitude) so it knows how to adjust the color gamma over the day; launch it the first time via a terminal with (substitute your own coordinates):

user@xubuntu:~$ gtk-redshift -l 52.5:13.4

It is a good idea to make a tiny shell script wrapper with good settings for gtk-redshift and later use this shell wrapper as the launcher. Note that the wrapper has to call the real binary by its full path (/usr/bin/gtk-redshift), otherwise it would just re-invoke itself:

root@xubuntu:~# echo '#!/bin/sh' > /usr/local/bin/gtk-redshift
root@xubuntu:~# echo 'exec /usr/bin/gtk-redshift -l 52.5:13.4' >> /usr/local/bin/gtk-redshift
root@xubuntu:~# chmod +x /usr/local/bin/gtk-redshift

From then on, you can launch it by running the wrapper directly from a terminal:

user@xubuntu:~$ /usr/local/bin/gtk-redshift

To make the program work permanently, make it run via the respective GUI environment's startup applications. In GNOME you can add it as a start-up program from:

user@xubuntu:~$ gnome-session-properties
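
Alternatively (this works in Xfce, LXDE and most other desktops as well), you can drop a small .desktop file into ~/.config/autostart; a minimal sketch, assuming the wrapper created above is used as the command:

user@xubuntu:~$ mkdir -p ~/.config/autostart
user@xubuntu:~$ cat > ~/.config/autostart/gtk-redshift.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=gtk-redshift
Exec=/usr/local/bin/gtk-redshift
EOF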

An important note to make about gtk-redshift is that on some older monitor screens, very early in the morning the screen becomes too red, making it look like an old, long-used CRT monitor. For people working in fields like web design, architecture or any kind of drawing, the twisted colors effect will be annoying and will probably interfere with their perception of colors. However, for programmers, system administrators and people who use the computer mainly for typing and reading, gtk-redshift is a huge blessing.

Enjoy ! 🙂
 

How to change default port 25 to 995 or any other on Postfix mail server

Wednesday, February 27th, 2013

If you need to change the default mail delivery port of the Postfix mail server from 25 to any other port, let's say to 995, here is how.

Edit /etc/postfix/master.cf. A small note to make here is that master.cf is located in the same place on nearly all Linux distributions as well as on FreeBSD.

Find line with:

smtp      inet  n       -       n       -       -       smtpd

Change smtp, which is actually a reference to port 25, to whatever port you need, i.e. 995. After the change the line should look like:

995      inet  n       -       n       -       -       smtpd

Save the config file and restart Postfix to load the new settings:

# /etc/rc.d/postfix restart
postfix/postfix-script: stopping the Postfix mail system
postfix/postfix-script: starting the Postfix mail system
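
To double check that Postfix really listens on the new port after the restart, something like this can be used (netstat is from the net-tools package; ss -lnt works as well on newer distributions):

# netstat -tnlp | grep ':995'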

Secure delete files irreversibly on Debian and Fedora GNU / Linux

Thursday, February 21st, 2013

I just read an article in Linux-Magazine on advanced file management named "Beyond the Basics". Most of what the article says is pretty trivial and known by any average Linux enthusiast user or administrator. There was one command mentioned, though, shred, which is probably not so well known among Free Software users. shred allows the user to "securely delete" files from the hard disk irreversibly.

The tool is part of the coreutils package and is available across mostly all Linux distributions, including Debian / Ubuntu and other Debian derivatives as well as the RedHat based distros CentOS, Fedora, RHEL etc.

Just for info, for those who don't know how to check to which package a command belongs with rpm and dpkg, here is how:

[hipo@centos ~]$ rpm -qf /usr/bin/shred
coreutils-5.97-23.el5_4.2

hipo@debian:~$ dpkg -S /usr/bin/shred
coreutils: /usr/bin/shred

Here is how to delete a sample file, overwriting it twice with random data (-n2 means two overwrite passes); the -z option adds a final pass that fills the file with zeros (which is why the paste below shows 3 passes in total), -v shows verbosely what shred is doing and -u truncates and removes the file after overwriting:

noah:/var/tmp# shred -n2 -zvu crash20121113021508.txt
shred: crash20121113021508.txt: pass 1/3 (random)…
shred: crash20121113021508.txt: pass 2/3 (random)…
shred: crash20121113021508.txt: pass 3/3 (000000)…
shred: crash20121113021508.txt: removing
shred: crash20121113021508.txt: renamed to 00000000000000000000000
shred: 00000000000000000000000: renamed to 0000000000000000000000
shred: 0000000000000000000000: renamed to 000000000000000000000
shred: 000000000000000000000: renamed to 00000000000000000000
shred: 00000000000000000000: renamed to 0000000000000000000
shred: 0000000000000000000: renamed to 000000000000000000
shred: 000000000000000000: renamed to 00000000000000000
shred: 00000000000000000: renamed to 0000000000000000
shred: 0000000000000000: renamed to 000000000000000
shred: 000000000000000: renamed to 00000000000000
shred: 00000000000000: renamed to 0000000000000
shred: 0000000000000: renamed to 000000000000
shred: 000000000000: renamed to 00000000000
shred: 00000000000: renamed to 0000000000
shred: 0000000000: renamed to 000000000
shred: 000000000: renamed to 00000000
shred: 00000000: renamed to 0000000
shred: 0000000: renamed to 000000
shred: 000000: renamed to 00000
shred: 00000: renamed to 0000
shred: 0000: renamed to 000
shred: 000: renamed to 00
shred: 00: renamed to 0
shred: crash20121113021508.txt: removed
 

One common use of shred is by sysadmins who have to prepare an old server containing, let's say, client data (SQL), mail boxes or just file data before selling it to third parties, making sure the data will be unrestorable for the new owner. shred is also used a lot by crackers who set up "time bombs" triggered on user activity or inactivity to destroy evidence in case the cracker's PC gets captured by the police. Though shred cannot guarantee 100% that deleted data can't be recovered in a specialised data recovery lab, in most cases it is enough to assure that data wiped with it will be almost impossible to recover.
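
For the old-server scenario above, shred can also be pointed at a whole block device instead of a single file; a sketch (the device name /dev/sdb is only an example, triple check it is the right disk before running anything like this):

noah:~# shred -n2 -vz /dev/sdb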

How to keep track of all user accounts' executed commands, highest CPU consumers and user login times on Linux

Tuesday, February 5th, 2013


For people interested in statistics on how a Linux system's existing users are spending their login time and what kind of commands each user is executing, take a look at acct.
acct exists on all mainstream Linux distributions and is a great sysadmin tool. It is especially useful when you have a system where a multitude of users you don't fully trust has to be monitored. It is an absolute must-have for anyone willing to run, let's say, an experimental honeypot or a free shell host. acct is also useful for paranoid sysadmins who like to always know what their users are running, as well as in situations where one of the users is suspected to be a potential cracker trying to root the host.

Below is description of acct package on Debian:

# apt-cache show acct| grep -i description -A 8
Description: The GNU Accounting utilities for process and login accounting
 GNU Accounting Utilities is a set of utilities which reports and summarizes
 data about user connect times and process execution statistics.
 .
 "Login accounting" provides summaries of system resource usage based on connect
 time, and "process accounting" provides summaries based on the commands
 executed on the system.
 .
 The 'last' command is provided by the sysvinit package and not included here.

To start using acct, just install it with the usual:

# apt-get install --yes acct

(on Debian / Ubuntu Linux).

On Fedora, CentOS, RHEL and other RPM based Linuxes issue:

# yum -y install psacct

On deb based Linux distributions, whether acct collects statistics is controlled via:

/etc/default/acct

# cat /etc/default/acct
# Defaults for acct

# If you want to keep acct installed, but not started automatically, set this
# variable to 0. Because /etc/cron.daily/acct calls the initscript daily, it is
# not sufficient to stop acct once after booting if your machine remains up.
ACCT_ENABLE="1"

# Amount of days that the logs are kept.
ACCT_LOGGING="30"

After it is installed, to start collecting user "process accounting" data, run acct via its init script:

# /etc/init.d/acct start
Turning on process accounting, file set to '/var/log/account/pacct'.
Done..

The file gathering the info on system usage, CPU load and user-run commands, /var/log/account/pacct, is a binary file and cannot be read just by tailing it with tail -f.
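
The acct package does, however, ship a small helper called dump-acct which decodes the binary accounting file into human readable text (the exact output columns differ a bit between versions):

# dump-acct /var/log/account/pacct | tail -n 10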

On CentOS / Fedora Linux, to enable acct statistics gathering on future boots and from the present moment on, do:

# chkconfig psacct on
# /etc/init.d/psacct start

1. Find out all commands executed by Linux user account (lastcomm)

Once user accounting is running, to get information on every command executed from a user's shell use the lastcomm command. For example:

# lastcomm hipo

bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.03 secs Tue Feb  5 00:20
sed                    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
uname                  hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
dircolors              hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
uname                  hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.03 secs Tue Feb  5 00:20
sed                    hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
uname                  hipo     pts/1      0.00 secs Tue Feb  5 00:20
bash              F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
id                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
mesg                   hipo     pts/1      0.00 secs Tue Feb  5 00:20
verse                  hipo     pts/1      0.00 secs Tue Feb  5 00:20
cowrand                hipo     pts/1      0.00 secs Tue Feb  5 00:20
cowsay                 hipo     pts/1      0.03 secs Tue Feb  5 00:20
cowrand           F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
head                   hipo     pts/1      0.00 secs Tue Feb  5 00:20
tail                   hipo     pts/1      0.00 secs Tue Feb  5 00:20
head                   hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
cowrand           F    hipo     pts/1      0.00 secs Tue Feb  5 00:20
awk                    hipo     pts/1      0.00 secs Tue Feb  5 00:20
wc                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20

A lot of the initial commands shown to run on pts/1 are not commands actually typed by the user, but just things run at login time via /etc/bash.bashrc, /etc/profile, ~/.bashrc and ~/.bash_profile.

The 2nd column of lastcomm's output is a special flag giving more information on how and for what purpose the command was executed. In the above output:

F – indicates the command ran after a fork (without a following exec).
X – is returned if the command was terminated with SIGTERM (the kill signal).
D – is set if the command generated a core dump (D is a good one to look for when checking a suspicious user profile, as it is so common for exploits to use core dumping to get root superuser access).
S – means the command was run with superuser privileges (you will usually see this when inspecting the profile of a cracker who ran an exploit using a core dump: a lot of Ds followed by some shell code run as superuser).

2. Get statistics on CPU use time of services (daemons) and user accounts

psacct is very handy when a server's CPU is overloaded and you have difficulty finding out which are the "CPU hungry" processes. To get those, use the summarized accounting information tool sa:

# sa -m
                                     2619      31.06re       0.54cp         0avio      2907k
root                                 2448      30.19re       0.52cp         0avio      2817k
www-data                               33       0.06re       0.02cp         0avio      3687k
hipo                                   72       0.15re       0.01cp         0avio      6217k
qscand                                 11       0.36re       0.00cp         0avio      5326k
vpopmail                               48       0.25re       0.00cp         0avio      1486k
qmails                                  6       0.00re       0.00cp         0avio       968k
sshd                                    1       0.04re       0.00cp         0avio     12632k

-m (prints user summary).
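
Run without -m, sa instead summarizes per command, which is often the quicker way to spot what exactly is eating the CPU:

# sa | head -n 10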

3. Find all system users running certain commands

Another good use of the lastcomm command is to grep over all users' executed commands for a precise command of interest. One very good use case is if you catch a system abuser running a certain exploit or DoS tool on the host and you want to make sure no one else on the system tries running it.

# lastcomm ls
ls                     www-data __         0.00 secs Tue Feb  5 00:40
ls                     www-data __         0.00 secs Tue Feb  5 00:30
ls                     hipo     pts/7      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     hipo     pts/1      0.00 secs Tue Feb  5 00:20
ls                     www-data __         0.00 secs Tue Feb  5 00:20
ls                     root     pts/0      0.00 secs Tue Feb  5 00:10
ls                     root     pts/0      0.00 secs Tue Feb  5 00:10
ls                     www-data __         0.00 secs Tue Feb  5 00:10
 

4. Get statistics of most active system users in hours

There is a tool called ac which is similar in what it does to the last command; just like last, it uses the /var/log/wtmp binary log file to get its user login time stats. The difference is that ac provides better structured info about the length of user logins.

It's very useful if you want to get an idea of which user spends the most time connected to the host.

$ ac -p
    sic                                  4.86
    hipo                                 4.80
    root                                25.80
    play                                 0.02

To get general info on how many hours in total all existing users have spent doing stuff on the node:

$ ac
total 35.61

To know which days from the month users were most active:

$ ac -d
Feb 1 total 14.54
Feb 2 total 0.97
Feb 3 total 12.47
Feb 4 total 5.96
Today total 1.73

How to record microphone input sound (only) using good old ffmpeg

Tuesday, December 25th, 2012

The good old ffmpeg, along with being able to capture sound and video from your Linux desktop, a certain window, Skype or whatever webcamera input, is also able to record sound alone from either a camera or the embedded laptop microphone. Here is how:

# ffmpeg -f alsa -ac 2 -i pulse   -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0  -y  myVOICE.wav

As you can see from the arguments, this uses GNOME's pulseaudio (audio service) and ALSA: the sound is first captured through ALSA and the sound inflow is then passed on to pulseaudio, which processes and multiplexes it in a separate sound channel (the -vcodec / -vpre options are leftovers from a screen capture command line and are simply ignored here, since there is no video stream). This method, though said to be working fine on Ubuntu Linux, does not work well on some other Linux distributions like Debian if ALSA is configured to use a software sound multiplexer via the so called dsnoop interface (I previously wrote an article on how to use it in order to make Skype and other programs use the SoundBlaster properly).
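
If you prefer to capture from a specific ALSA device rather than going through the pulse default, you can list the capture hardware first and then hand the device to ffmpeg; a sketch (the hw:0,0 device name is only an example and depends on your sound card):

hipo@noah:~$ arecord -l
hipo@noah:~$ ffmpeg -f alsa -ac 2 -i hw:0,0 -acodec pcm_s16le -y myVOICE.wav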

Below is the output warning I got when trying ffmpeg with the -f alsa and -i pulse arguments:

hipo@noah:~/Desktop$ ffmpeg -f alsa -ac 2 -i pulse   -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0  -y  myVOICE.wav
FFmpeg version SVN-r25838, Copyright (c) 2000-2010 the FFmpeg developers
  built on Sep 20 2011 17:00:01 with gcc 4.4.5
  configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libschroedinger --disable-encoder=libschroedinger --enable-version3 --enable-libopenjpeg --enable-libvpx --enable-librtmp --extra-libs=-lgcrypt --disable-altivec --disable-armv5te --disable-armv6 --disable-vis
  libavutil     50.33. 0 / 50.43. 0
  libavcore      0.14. 0 /  0.14. 0
  libavcodec    52.97. 2 / 52.97. 2
  libavformat   52.87. 1 / 52.87. 1
  libavdevice   52. 2. 2 / 52. 2. 2
  libavfilter    1.65. 0 /  1.65. 0
  libswscale     0.12. 0 /  0.14. 1
  libpostproc   51. 2. 0 / 51. 2. 0
[alsa @ 0x633160] capture with some ALSA plugins, especially dsnoop, may hang.

The hang warned about above happens when concrete programs are running which make use of OSS (Open Sound System), an already obsolete sound architecture. By the way, on current Debian / Fedora etc. Linuxes OSS sound is only handled and played if a few compatibility kernel modules are pre-loaded; below are the ones loaded, as pasted from my Debian Squeeze:

# lsmod | grep -i oss
snd_pcm_oss            32591  0
snd_mixer_oss          12606  1 snd_pcm_oss
snd_pcm                60487  3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
snd                    46526  15 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device

The OSS-processed sound recording from ffmpeg does not work well on my Linux, because I have a custom (non-Debian) native Firefox binary downloaded and installed from Firefox's website. The browser is compiled to automatically open /dev/dsp, which in practice uses the above-mentioned OSS modules; when they are in use they break the sound otherwise processed by ALSA and, respectively, pulseaudio (those who have used Linux for a longer time should remember that in the times of OSS only one sound stream could be processed / played at a time, historically, before ALSA came onto the scene to become the "de facto" standard kernel sound layer). Of course, the Firefox developers who compiled Firefox for Linux were probably using Slackware or some other Linux distro which still played sound via OSS, or maybe they compiled it that way thinking that OSS, because of its historical importance, is still supported by more Linux distributions than ALSA is. I like the custom compiled Firefox running on my Debian instead of the default Debian Squeeze one (IceWeasel), because the firefox.org version is much newer, supports the latest HTML5 better and is able to download and apply automatic updates to the latest version provided by the Firefox team.

Thus, for Linux users like me who run the latest Firefox binary from firefox.org (in parallel) and want to record sound from the webcam or the embedded notebook mic while the Firefox browser is open, the obsolete OSS has to be used. Here is how:

# ffmpeg -f oss -ac 2 -i /dev/dsp   -acodec pcm_s16le -vcodec libx264 -vpre lossless_ultrafast -threads 0  -y  my-recorder-VOICE.wav

Enjoy ;)

Installing XMMS on Debian Squeeze from a Package / Installing XMMS on Debian – the debian way

Tuesday, July 17th, 2012

XMMS playing on Debian Squeeze Linux (screenshot)

I have been using Debian Linux on my desktop for quite some time. Even though there are plenty of MP3 / CD players around in Debian, I'm used to the good old XMMS, hence I often prefer XMMS to play my music instead of newer players like Rhythmbox or Audacious.
Actually Audacious is not a bad substitute for XMMS and is part of Debian by default, but to me it seems more buggy and tends to crash while playing some music formats more often than xmms does ….

As most people might know, XMMS is no longer shipped in almost any modern Linux distribution, so anyone using Debian, Ubuntu or another deb derivative Linux would normally have to compile it from source.
Compiling from source is time consuming and I think it often doesn't pay back the effort. Thankfully, though not officially supported by the Debian crew, XMMS can still be installed using a prebuilt xmms deb package repository kindly provided by a hacker fellow, knuta.

Using the pre-build deb packages, installing xmms on new Debian installs comes to:

debian:~# echo 'deb http://www.pvv.ntnu.no/~knuta/xmms/squeeze ./' >> /etc/apt/sources.list
debian:~# echo 'deb-src http://www.pvv.ntnu.no/~knuta/xmms/squeeze ./' >> /etc/apt/sources.list
debian:~# apt-get update && apt-get -y install xmms

There are also xmms deb builds for Ubuntu, so Ubuntu users can install xmms using the repositories:

deb http://www.pvv.ntnu.no/~knuta/xmms/karmic ./
deb-src http://www.pvv.ntnu.no/~knuta/xmms/karmic ./
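
After adding those two lines to /etc/apt/sources.list, the install step is the same as on Debian:

ubuntu:~# apt-get update && apt-get -y install xmms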

That's all, now xmms is ready to use. Enjoy 🙂