
How to redirect TCP port traffic from an Internet public IP host to a remote local LAN server – redirect traffic for Apache webserver, MySQL, or other TCP services to a remote host

Thursday, September 23rd, 2021

 

 


 

 

1. Use the good old rinetd – internet "redirection server" service


Perhaps many younger people won't remember rinetd; its use was pretty common on old Linux systems in the age when iptables was not yet on the scene and its predecessor ipchains was the norm.
With the rise of mass internet use rinetd started losing its popularity, because the service was exposed to the outside world and, due to security holes and the many exploits circulating in the script kiddie communities,
many servers got hacked – "pwned" in the jargon of the script kiddies.

rinetd is still available even in modern Linux distributions, and over the last years I have not heard of any severe security concerns regarding it, but the old paranoia and the fact it has been set to oblivion perhaps make it an unpopular solution for port redirection even today, in the year 2021.
However, for local secured DMZ LANs I can tell you its use is mostly handy, and I choose to use it myself every now and then due to its simplicity to configure and use.
rinetd is pretty standard among Unixes and is also available on old SunOS / Solaris, the BSDs and pretty much everything else on the Unix scene.

Below is excerpt from 'man rinetd':

 

DESCRIPTION
     rinetd redirects TCP connections from one IP address and port to another. rinetd is a single-process server which handles any number of connections to the address/port pairs
     specified in the file /etc/rinetd.conf.  Since rinetd runs as a single process using nonblocking I/O, it is able to redirect a large number of connections without a severe
     impact on the machine. This makes it practical to run TCP services on machines inside an IP masquerading firewall. rinetd does not redirect FTP, because FTP requires more than
     one socket.

     rinetd is typically launched at boot time, using the following syntax:

          /usr/sbin/rinetd

     The configuration file is found in the file /etc/rinetd.conf, unless another file is specified using the -c command line option.

To use rinetd on any Linux distro you have to install and enable it with apt or yum as usual. For example, on my Debian GNU / Linux home machine I had to install the .deb package, then enable and start it via systemd:

 

server:~# apt install --yes rinetd

server:~#  systemctl enable rinetd


server:~#  systemctl start rinetd


server:~#  systemctl status rinetd
● rinetd.service
   Loaded: loaded (/etc/init.d/rinetd; generated)
   Active: active (running) since Tue 2021-09-21 10:48:20 EEST; 2 days ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 1 (limit: 4915)
   Memory: 892.0K
   CGroup: /system.slice/rinetd.service
           └─1364 /usr/sbin/rinetd


rinetd does the traffic redirection via a separate daemon process, so once the service is up check that the daemon process is running as well:

root@server:/home/hipo# ps -ef|grep -i rinet
root       359     1  0 16:10 ?        00:00:00 /usr/sbin/rinetd
root       824 26430  0 16:10 pts/0    00:00:00 grep -i rinet

+ Configuring a new port redirect with rinetd

 

Configuring a new port redirect is pretty straightforward – everything is handled via a single configuration file, /etc/rinetd.conf.

The format (syntax) of a forwarding rule is as follows:

     [bindaddress] [bindport] [connectaddress] [connectport]


Besides that, rinetd could be used as a primitive firewall substitute for iptables; the general syntax for allowing or denying an IP address uses the allow and deny keywords:
 

allow 192.168.2.*
deny 192.168.2.1?


To enable logging to an external file, you'll have to include in the configuration:

# logging information
logfile /var/log/rinetd.log

Here is an example rinetd.conf configuration redirecting TCP MySQL port 3306, nginx on ports 80 / 443, a second web service frontend for ILO reachable via port 8888, and a redirect from the external IP to a local LAN SMTP server.

 

#
# this is the configuration file for rinetd, the internet redirection server
#
# you may specify global allow and deny rules here
# only ip addresses are matched, hostnames cannot be specified here
# the wildcards you may use are * and ?
#
# allow 192.168.2.*
# deny 192.168.2.1?


#
# forwarding rules come here
#
# you may specify allow and deny rules after a specific forwarding rule
# to apply to only that forwarding rule
#
# bindadress    bindport  connectaddress  connectport


# logging information
logfile /var/log/rinetd.log
83.228.93.76        80            192.168.0.20       80
192.168.0.2        3306            192.168.0.19        3306
83.228.93.76        443            192.168.0.20       443
# enable for access to ILO
83.228.93.76        8888            192.168.1.25 443

127.0.0.1    25    192.168.0.19    25


83.228.93.76 is my external (public) internet IP address, while 192.168.0.20, 192.168.0.19 and 192.168.1.25 are the DMZ-ed LAN internal IPs running the various services.

To identify the services for which rinetd is configured to redirect / forward traffic, check with netstat or the newer ss command:
 

root@server:/home/hipo# netstat -tap|grep -i rinet
tcp        0      0 www.pc-freak.net:8888   0.0.0.0:*               LISTEN      13511/rinetd      
tcp        0      0 www.pc-freak.n:http-alt 0.0.0.0:*               LISTEN      21176/rinetd        
tcp        0      0 www.pc-freak.net:443   0.0.0.0:*               LISTEN      21176/rinetd      
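The same check can be done with ss; a minimal equivalent (assuming the same listeners as shown above), listing listening TCP sockets and filtering for the rinetd process:

root@server:/home/hipo# ss -tlnp | grep -i rinetd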

 

+ Using rinetd to redirect External interface IP to loopback's port (127.0.0.1)

 

If you need to redirect an externally connectable running service, be it Apache, MySQL, Privoxy, Squid or whatever, rinetd is perhaps the tool of choice (especially since doing this with iptables alone is not straightforward).

If you want all traffic that reaches the host's external IP 11.5.8.1 on TCP ports 1083 and 1888 to be forwarded to services listening only on the loopback interface (127.0.0.1), use the config below:

# bindadress    bindport  connectaddress  connectport
11.5.8.1        1083            127.0.0.1       1083
11.5.8.1        1888            127.0.0.1       1888
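Keep in mind that after any change to /etc/rinetd.conf the service has to be restarted for the new forwarding rules to take effect; on a systemd machine that would be:

server:~# systemctl restart rinetd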

 

For a quick and dirty traffic redirection solution rinetd is very useful; however, keep in mind that if you redirect traffic for tens of thousands of connections constantly originating from the internet, you might end up with some disconnects, as well as notice increased rinetd CPU usage as the number of forwarded connections grows.

 

2. Redirect TCP / IP port using DNAT iptables firewall rules

 

Let's say you have some proxy, web service or whatever other service running on port 5900 to be redirected with iptables.
The easiest legacy way is to simply add the redirection rules to /etc/rc.local. In newer Linux distributions rc.local is not enabled by default, so if you decide to use it
you'll have to enable rc.local first; I've written earlier a short article on how to enable rc.local on newer Debian, Fedora, CentOS.

 

# redirect 5900 TCP service
sysctl -w net.ipv4.conf.all.route_localnet=1
iptables -t nat -I PREROUTING -p tcp --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -I OUTPUT -p tcp -o lo --dport 5900 -j REDIRECT --to-ports 5900
iptables -t nat -A OUTPUT -o lo -d 127.0.0.1 -p tcp --dport 5900 -j DNAT --to-destination 192.168.1.8:5900
iptables -t nat -I OUTPUT --source 0/0 --destination 0/0 -p tcp --dport 5900 -j REDIRECT --to-ports 5900

 

Here are another two example rules which redirect port 2208 (on which an SSH listener is configured on internal host 192.168.0.209:2208) from the external internet IP address (XXX.YYY.ZZZ.XYZ):
 

# Port redirect for SSH to VM on openxen internal Local lan server 192.168.0.209 
-A PREROUTING  -p tcp --dport 2208 -j DNAT --to-destination 192.168.0.209:2208
-A POSTROUTING -p tcp --dst 192.168.0.209 --dport 2208 -j SNAT --to-source 83.228.93.76
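For DNAT / SNAT rules like the above to actually forward packets between interfaces, IP forwarding has to be enabled in the kernel, and the rules need to be saved so they survive a reboot. A minimal sketch, assuming a Debian-like host where the iptables-persistent package is available:

# enable kernel IP forwarding now and persistently
sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
# save the currently loaded rules so they are restored on boot
apt-get install --yes iptables-persistent
iptables-save > /etc/iptables/rules.v4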

 

3. Redirect TCP traffic connections with redir tool

 

If you are looking for an easy, straightforward way to redirect TCP traffic, installing and using redir (a ready compiled program) might be a good idea.


root@server:~# apt-cache show redir|grep -i desc -A5 -B5
Version: 3.2-1
Installed-Size: 60
Maintainer: Lucas Kanashiro <kanashiro@debian.org>
Architecture: amd64
Depends: libc6 (>= 2.15)
Description-en: Redirect TCP connections
 It can run under inetd or stand alone (in which case it handles multiple
 connections).  It is 8 bit clean, not limited to line mode, is small and
 light. Supports transparency, FTP redirects, http proxying, NAT and bandwidth
 limiting.
 .
 redir is all you need to redirect traffic across firewalls that authenticate
 based on an IP address etc. No need for the firewall toolkit. The
 functionality of inetd/tcpd and "redir" will allow you to do everything you
 need without screwy telnet/ftp etc gateways. (I assume you are running IP
 Masquerading of course.)

Description-md5: 2089a3403d126a5a0bcf29b22b68406d
Homepage: https://github.com/troglobit/redir
Tag: interface::daemon, network::server, network::service, role::program,
 use::proxying
Section: net
Priority: optional

 

 

server:~# apt-get install --yes redir

Here is a short description taken from its man page 'man redir'

 

DESCRIPTION
     redir redirects TCP connections coming in on a local port, [SRC]:PORT, to a specified address/port combination, [DST]:PORT.  Both the SRC and DST arguments can be left out,
     redir will then use 0.0.0.0.

     redir can be run either from inetd or as a standalone daemon.  In --inetd mode the listening SRC:PORT combo is handled by another process, usually inetd, and a connected
     socket is handed over to redir via stdin.  Hence only [DST]:PORT is required in --inetd mode.  In standalone mode redir can run either in the foreground, -n, or in the
     background, detached like a proper UNIX daemon.  This is the default.  When running in the foreground log messages are also printed to stderr, unless the -s flag is given.

     Depending on how redir was compiled, not all options may be available.

 

+ Use redir to redirect TCP traffic one time

 

Let's say you have MySQL running on a remote machine on some internal or external IP address, say 192.168.0.200, and you want connections to localhost TCP port 3306 on the machine where your Apache webserver runs (192.168.0.50) to be redirected to it, so Apache can be configured to use MySQL as if it were a local service on TCP port 3306.

This assumes there are no firewall restrictions between Host A (192.168.0.50) and Host B (192.168.0.200), i.e. connectivity on TCP port 3306 between the two machines is already permitted.

To open redirection from localhost on 192.168.0.50 -> 192.168.0.200:

 

server:~# redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306

 

If you need other third-party hosts to additionally reach 192.168.0.200 via 192.168.0.50 on TCP port 3306:

root@server:~# redir --laddr=192.168.0.50 --lport=3306 --caddr=192.168.0.200 --cport=3306


Of course, once you close the /dev/tty or /dev/vty console, the connection redirect will be cancelled.

 

+ Making TCP port forwarding from Host A to Host B permanent


One solution to make the redir setup rules permanent is to use the --rinetd option or simply background the process; nevertheless, I prefer to use GNU Screen instead.
If you don't know it, screen is a virtual console emulation manager with VT100/ANSI terminal emulation, so if you don't have screen present on the host, install it with whatever Linux OS package manager is present and run:

 

root@server:~# screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306'

 

That would run it inside a screen session and detach, so you can connect to it later; if you want, you can also make redir log connections via syslog with the -s option.
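An alternative to screen, if you also want the redirect to come back after a reboot, is a small systemd service unit. This is only a rough sketch, assuming the redir binary lives in /usr/bin/redir and using the foreground -n mode mentioned in the man page excerpt above; the unit name redir-mysql.service is made up for the example:

# /etc/systemd/system/redir-mysql.service
[Unit]
Description=redir TCP forward 127.0.0.1:3306 -> 192.168.0.200:3306
After=network.target

[Service]
ExecStart=/usr/bin/redir -n --laddr=127.0.0.1 --lport=3306 --caddr=192.168.0.200 --cport=3306
Restart=on-failure

[Install]
WantedBy=multi-user.target

root@server:~# systemctl daemon-reload && systemctl enable --now redir-mysql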

I also found it useful to be able to track in real time what's going on with the opened redirect socket by changing the redir log level.

Accepted log level is:

 

  -l, --loglevel=LEVEL
             Set log level: none, err, notice, info, debug.  Default is notice.

 

root@server:/ # screen -dm bash -c 'redir --laddr=127.0.0.1 --lport=3308 --caddr=192.168.0.200 --cport=3306 -l debug'

 

To test connectivity works as expected use telnet:
 

root@server:/ # telnet localhost 3308
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
g
5.5.5-10.3.29-MariaDB-0+deb10u1-log�+c2nWG>B���o+#ly=bT^]79mysql_native_password

6#HY000Proxy header is not accepted from 192.168.0.19 Connection closed by foreign host.

once you attach to screen session with

 

root@server:/home #  screen -r

 

You will see the connectivity attempt from localhost logged:
 

redir[10640]: listening on 127.0.0.1:3306
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10640]: target is 192.168.0.200:3306
redir[10640]: Waiting for client to connect on server socket …
redir[10793]: peer IP is 127.0.0.1
redir[10793]: peer socket is 25592
redir[10793]: target IP address is 192.168.0.200
redir[10793]: target port is 3306
redir[10793]: Connecting 127.0.0.1:25592 to 127.0.0.1:3306
redir[10793]: Entering copyloop() – timeout is 0
redir[10793]: Disconnect after 1 sec, 165 bytes in, 4 bytes out

The downside of using redir is that the redirection is handled by a separate process hanging in the process list all the time; also, the redirection of incoming connections might be at least about 30% slower than simply using a software (firewall) redirect such as iptables, especially when combined with kernel IP sets (ipset). If you hear of ipset for the first time and wonder what it is, below is a short package description.

 

root@server:/root# apt-cache show ipset|grep -i description -A13 -B5
Maintainer: Debian Netfilter Packaging Team <pkg-netfilter-team@lists.alioth.debian.org>
Architecture: amd64
Provides: ipset-6.38
Depends: iptables, libc6 (>= 2.4), libipset11 (>= 6.38-1~)
Breaks: xtables-addons-common (<< 1.41~)
Description-en: administration tool for kernel IP sets
 IP sets are a framework inside the Linux 2.4.x and 2.6.x kernel which can be
 administered by the ipset(8) utility. Depending on the type, currently an
 IP set may store IP addresses, (TCP/UDP) port numbers or IP addresses with
 MAC addresses in a  way which ensures lightning speed when matching an
 entry against a set.
 .
 If you want to
 .
  * store multiple IP addresses or port numbers and match against the
    entire collection using a single iptables rule.
  * dynamically update iptables rules against IP addresses or ports without
    performance penalty.
  * express complex IP address and ports based rulesets with a single
    iptables rule and benefit from the speed of IP sets.

 .
 then IP sets may be the proper tool for you.
Description-md5: d87e199641d9d6fbb0e52a65cf412bde
Homepage: http://ipset.netfilter.org/
Tag: implemented-in::c, role::program
Section: net
Priority: optional
Filename: pool/main/i/ipset/ipset_6.38-1.2_amd64.deb
Size: 50684
MD5sum: 095760c5db23552a9ae180bd58bc8efb
SHA256: 2e2d1c3d494fe32755324bf040ffcb614cf180327736c22168b4ddf51d462522

Share SCREEN terminal session in Linux / Screen share between two or more users howto

Wednesday, October 11th, 2017


 

1. Short Intro to Screen command and what is Shared Screen Session

Do you have friends who want to learn some GNU / Linux or BSD basics remotely? Do you have people willing to share a terminal session together for educational purposes within a different network? Do you just want to have some fun and show off yourself between two or more users?

If the answer to the questions is yes, then continue on reading, otherwise if you're already aware how this is being done, just ignore this article and do something more joyful.

So let me start.

A long time ago, when I was just starting out as a Free Software user and dedicated enthusiast, a friend gave me access to an interesting freeshell hosting service, and I stumbled upon / observed an interesting phenomenon: multiple users, 5 or 10 of them, were connected simultaneously to the same shell, sharing their command line.

I can't remember what kind of shell I happened to be sharing with the other users logged in with the same account – bash / csh / zsh or another one – but it doesn't matter; it was really cool to find out that multiple users could stand together on GNU / Linux and *BSD with the same account and use the regular shell for chatting or teaching each other new Linux / Unix commands, i.e. being able to type in the shell simultaneously.

The multiple shared shell session was possible thanks to the screen command

For those who hear about screen for a first time, here is the package description:

 

# apt-cache show screen|grep -i desc -A 2
Description-en: terminal multiplexer with VT100/ANSI terminal emulation
 GNU Screen is a terminal multiplexer that runs several separate "screens" on
 a single physical character-based terminal. Each virtual terminal emulates a

Description-md5: 2d86b86ed6058a04c540802e49312f40
Homepage: https://savannah.gnu.org/projects/screen
Tag: hardware::input:keyboard, implemented-in::c, interface::text-mode,

There are plenty of things to use screen for, as it provides a way to open multiple virtual terminals inside a single SSH or physical console TTY login session, and I've been in love with the screen command since day 1 of finding out about it.

To start using screen, just invoke it from a shell and use the screen key combinations that do various things for you.

 

2. Some of the most useful Daily Screen Key Combinations for the Sys Admin


To use the various screen options, type the escape sequence (Ctrl + a), followed by the command key. For a full list of all available commands run man screen; however,
for the sake of brevity, the short listing below shows some of the most useful screen key combinations:

 

 

Ctrl-a a Passes a Ctrl-a through to the terminal session running within screen.
Ctrl-a c Create a new Virtual shell screen session within screen
Ctrl-a d Detaches from a screen session.
Ctrl-a f Toggle flow control mode (enable/disable Ctrl-Q and Ctrl-S pass through).
Ctrl-a k Detaches from and kills (terminates) the screen session.
Ctrl-a q Passes a Ctrl-q through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a s Passes a Ctrl-s through to the terminal session running within screen (or use Ctrl-a f to toggle whether screen captures flow control characters).
Ctrl-a :kill Also detaches from and kills (terminates) the screen session.
Ctrl-a :multiuser on Make the screen session a multi-user session (so other users can attach).
Ctrl-a :acladd USER Allow the user specified (USER) to connect to a multi-user screen session.
Ctrl-a p Move around multiple opened Virtual terminals in screen (Move to previous)
Ctrl-a n Move forward to the next of multiple opened virtual terminals under a single shell connection


I have to underline that, for me personally, the combinations I use the most are:

 

CTRL + A + D (to detach session),

CTRL + A + C to open new session within screen (I tend to open multiple sessions for multiple ssh connections with this),

CTRL + A + P, CTRL + A + N – I use these two to move around all my open screen virtual sessions.
 

3. HOW TO ACTUALLY SHARE TERMINAL SESSION BETWEEN MULTIPLE USERS?


3.1 Configuring Shared Sessions so other users can connect

You need to have a single user account on the Linux or Unix-like server – let's say a regular /etc/passwd, /etc/shadow, /etc/group account named screen – and you have to give its password to all the users who will participate in the shared screen shell session.

E.g. create new system account screen

root@jericho:~# adduser screen
Adding user `screen' …
Adding new group `screen' (1001) …
Adding new user `screen' (1001) with group `screen' …
The home directory `/home/screen' already exists.  Not copying from `/etc/skel'.
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for screen
Enter the new value, or press ENTER for the default
    Full Name []: Screen user to give users shared access to /bin/bash
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
Is the information correct? [Y/n] Y

Now distribute the user / pass pair around all users who are to be sharing the same virtual bash session via screen and instruct each of them to run:

hipo@jericho:~$  screen -d -m -S shared-session
hipo@jericho:~$

hipo@jericho:~$ screen -list

There is a screen on:
    4095.shared-session    (10.10.2017 20:22:22)    (Detached)
1 Socket in /run/screen/S-hipo.


3.2. Attaching to just created session
 

Simply login with as many users you need with SSH to the remote server and instruct them to run the following command to re-attach to the just created new session by you:

hipo@jericho:~$ screen -x

That's all folks now everyone can type in simultaneously and enjoy the joys of the screen shared session.

If for some reasons more than one session is created by the simultaneously logged in users either as an exercise or by mistake i.e.:

hipo@jericho:~$ screen -list

There are screens on:
    4880.screen-session    (10.10.2017 20:30:09)    (Detached)
    4865.another-session    (10.10.2017 20:29:58)    (Detached)
    4847.hey-man    (10.10.2017 20:29:49)    (Detached)
    4831.another-session1    (10.10.2017 20:29:45)    (Detached)
4 Sockets in /run/screen/S-hipo.

You have to instruct everyone to connect to the exact session needed, as a plain screen -x will ask them which session they would like to connect to.

In that case to connect to screen-session, each user has to run with their account:

hipo@jericho:~$ screen -x shared-session


If under some circumstances there is more than one opened shared screen virtual session, for example screen -list returns:

 

hipo@jericho:~$ screen -list
There are screens on:
    5065.screen-session    (10.10.2017 20:33:20)    (Detached)
    4095.screen-session    (10.10.2017 20:30:08)    (Detached)

All users have to connect to the exact screen-session created name and ID, like so:

hipo@jericho:~$ screen -x 4095.screen-session
 


Here is the meaning of used options

 

-d -m used together start screen in detached mode, i.e. the session is created but not attached to
-S argument is just to give the screen session a name
-list shows the running sessions and their IDs

Once you're done with the screen session (e.g. all the users you were teaching and showing stuff to have completed their scheduled tasks), to kill it just press CTRL + A + K.
 

4. Share screen /bin/bash shell session with another user

Sharing a screen session between different users is even more useful than the shared session of one user, as you might have a *nix server with many users who can attach to your opened session directly with their own accounts, instead of being instructed beforehand to connect with a single shared user.

That's perfect also for educational purposes if you want to learn some Linux to a class of people, as you can use their ordinary accounts and show them stuff on a Linux / BSD  machine.

Assuming that you followed along and have already created a screen session with the screen command:

hipo@jericho:~$ screen -list
There is a screen on:
        5560.screen-session      (10.10.2017 20:41:06)   (Multi, attached)
1 Socket in /run/screen/S-hipo.

hipo@jericho:~$

Next, the session owner attaches to the session and, from inside screen, turns on multiuser mode and grants access to the second user:

hipo@jericho:~$ screen -r shared-session
(inside screen) Ctrl-a :multiuser on
(inside screen) Ctrl-a :acladd bunny

Then the second user can attach to the owner's session:

bunny@jericho:~$ screen -x hipo/shared-session
 

Here are 2 screenshots on what should happen if you had done above command combinations correctly:

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal2

screen-share-session-to-multi-users-screenshot-multiuser-on-on-gnome-terminal3

In order to share screen virtual terminal (VTY) sessions between separate (different) logged-in users, the screen command has to be SUID root (the SUID bit on screen is disabled in most Linux distributions for security reasons).

Without making the screen binary SUID, you will get the error:

hipo@jericho:/home/hipo$ screen -x hipo/shared

Must run suid root for multiuser support.

If you are absolutely sure you know what you're doing, here is how to set the SUID bit on the screen command:

 

root@jericho:/home/hipo# which screen
/usr/bin/screen
root@jericho:/home/hipo# chmod u+s $(which screen)
root@jericho:/home/hipo# chmod 755 /var/run/screen
root@jericho:/home/hipo# rm -fr /var/run/screen/*
root@jericho:/home/hipo# exit
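To double-check that the SUID bit really got set, list the binary and look for the s in the owner execute position (-rwsr-xr-x):

root@jericho:/home/hipo# ls -l $(which screen)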

Howto Fix “sysstat Cannot open /var/log/sysstat/sa no such file or directory” on Debian / Ubuntu Linux

Monday, February 15th, 2016

I really love sysstat and, as a console maniac, I tend to install it on every server; however, by default there is some sysstat tuning needed after install to make it work. For those unfamiliar with sysstat I warmly recommend checking it out; here, in short, is the package description:
 

server:~# apt-cache show sysstat|grep -i desc -A 15
Description: system performance tools for Linux
 The sysstat package contains the following system performance tools:
  – sar: collects and reports system activity information;
  – iostat: reports CPU utilization and disk I/O statistics;
  – mpstat: reports global and per-processor statistics;
  – pidstat: reports statistics for Linux tasks (processes);
  – sadf: displays data collected by sar in various formats;
  – nfsiostat: reports I/O statistics for network filesystems;
  – cifsiostat: reports I/O statistics for CIFS filesystems.
 .
 The statistics reported by sar deal with I/O transfer rates,
 paging activity, process-related activities, interrupts,
 network activity, memory and swap space utilization, CPU
 utilization, kernel activities and TTY statistics, among
 others. Both UP and SMP machines are fully supported.
Homepage: http://pagesperso-orange.fr/sebastien.godard/

 

If you happen to install sysstat on a Debian / Ubuntu server with:

server:~# apt-get install --yes sysstat


and you try to get some statistics with the sar command, but get an ugly error like:

 

server:~# sar
Cannot open /var/log/sysstat/sa20: No such file or directory


And you wonder how to resolve it so that the server periodically logs into its text databases the nice sar stats – load averages, %idle, %iowait, %system, %nice, %user. To fix the "Cannot open /var/log/sysstat/sa20: No such file or directory" error:

You need to:

server:~# vim /etc/default/sysstat


By default you will find that sysstat stats collection is disabled, e.g.:

ENABLED="false"

Switch the value to "true"

ENABLED="true"


Then restart sysstat init script with:

server:~# /etc/init.d/sysstat restart
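On newer systemd-based Debian / Ubuntu releases the equivalent should be:

server:~# systemctl restart sysstat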

However, for those who prefer to do things from menu ncurses interfaces and are not familiar with Vi Improved, the easiest way is to run dpkg-reconfigure for sysstat:

server:~# dpkg-reconfigure sysstat


sysstat-reconfigure-on-gnu-linux

 

root@server:/# sar
Linux 2.6.32-5-amd64 (pcfreak) 15.02.2016 _x86_64_ (2 CPU)

00:00:01 CPU %user %nice %system %iowait %steal %idle
00:15:01 all 24,32 0,54 3,10 0,62 0,00 71,42
01:15:01 all 18,69 0,53 2,10 0,48 0,00 78,20
10:05:01 all 22,13 0,54 2,81 0,51 0,00 74,01
10:15:01 all 17,14 0,53 2,44 0,40 0,00 79,49
10:25:01 all 24,03 0,63 2,93 0,45 0,00 71,97
10:35:01 all 18,88 0,54 2,44 1,08 0,00 77,07
10:45:01 all 25,60 0,54 3,33 0,74 0,00 69,79
10:55:01 all 36,78 0,78 4,44 0,89 0,00 57,10
16:05:01 all 27,10 0,54 3,43 1,14 0,00 67,79
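Besides reading the collected history, sar can also sample live; for example, to print CPU utilization every 2 seconds, 5 times:

root@server:/# sar -u 2 5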


Well, that's it – the sysstat error is resolved and text stats reporting works again. Hooray! 🙂

Tracking multiple log files in real time in Linux console / terminal (MultiTail)

Monday, July 29th, 2013

Multitail multiple tail Debian GNU Linux viewing Apache access and error log in shared screen
Whether you have to administer Apache, Nginx or Lighttpd, or whatever other kind of daemon which interactively logs user requests or errors, you probably already know well the tail command (tail -f /var/log/apache2/access.log) – something a webserver Linux admin can't live without. Sometimes, however, you have a number of VirtualHosts (domains), each configured to log site activity in a separate log file. One solution to the problem is to use GNU Screen (screen – terminal emulator) to launch multiple screen windows and run a separate tail -f /var/log/apache2/domain1/access.log, tail -f /var/log/apache2/domain2/access.log etc. This however is a bit of a hack, and unless you configure screen to show multiple windows split on one virtual terminal (tty or vty in GNOME), you can't really see the output simultaneously in one window.

Here is where multitail comes handy. MultiTail is tool to visualize in real time log records output of multiple logs (tails) in one shared terminal Window. MultiTail is written to use ncurses library used by a bunch of other useful tools like Midnight Command so output is colorful and very nice looking.

Here is MultiTail package description on Debian Linux:

linux:~# apt-cache show multitail|grep -i description -A 1
Description-en: view multiple logfiles windowed on console
 multitail lets you view one or multiple files like the original tail

Description-md5: 5e2f688efb214b063bdc418a705860a1
Tag: interface::text-mode, role::program, scope::utility, uitoolkit::ncurses,
 

Multiple Tail is available across most Linux distributions to install on Debian / Ubuntu / Mint etc. Linux:

debian:~# apt-get install --yes multitail
...

On recent Fedora / RHEL / CentOS etc. RPM based Linuces to install:

[root@centos ~]# yum -y install multitail
...

On FreeBSD multitail is available to install from ports:

freebsd# cd /usr/ports/sysutils/multitail
freebsd# make install clean
...

Once installed to display records in multiple files lets say Apache domain name access.log and error.log

debian:~# multitail -f /var/log/apache2/access.log /var/log/apache2/error.log

It has very extensive help invoked by simply pressing h while running

multtail-viewing-in-gnome-shared-screen-debian-2-log-files-screenshot

Even better multitail is written to already have integrated color schemes for most popular Linux services log files

multitail multiple tail debian gnu linux logformat different color schemes screenshot
The list of supported multitail log color schemes as of the time of writing this article is:

acctail, acpitail, apache, apache_error, argus, asterisk, audit, bind, boinc, boinctail ,checkpoint, clamav, cscriptexample, dhcpd, errrpt, exim, httping, ii, inn, kerberos, lambamoo, liniptfw, log4j, mailscanner, motion, mpstat, mysql, nagtail, netscapeldap, netstat, nttpcache, ntpd, oracle, p0f, portsentry, postfix, pptpd, procmail, qmt-clamd, qmt-send, qmt-smtpd, qmt-sophie, qmt-spamassassin, rsstail, samba, sendmail, smartd, snort spamassassin, squid, ssh, strace, syslog, tcpdump, vmstat, vnetbr, websphere, wtmptail

To tell it what kind of log Color scheme to use from cmd line use:

debian:~# multitail -Csapache /var/log/apache2/access.log /var/log/apache2/error.log

multiple tail with Apache highlight on Debian Linux screenshot

A useful feature is to run a command whose output is displayed in a separate window while still following the log output, i.e.:

[root@centos:~]# multitail /var/log/httpd.log -l "netstat -nat"
...

Multitail can also merge output from files in one Window, while in second window some other log or command output is displayed. To merge output from Apache access.log and error.log:

debian:~# multitail /var/log/apache2/access.log -I /var/log/apache2/error.log

When merging the output of two log files into one window, it is useful to display each file's output in a different color for the sake of readability.

For example:
 

debian:~# multitail -ci green /var/log/apache/access.log -ci red -I /var/log/apache/error.log

multitail merged Apache access and error log on Debian Linux

To display output from 3 log files in 3 separate shared Windows in console use:

linux:~# multitail -s 2 /var/log/syslog /var/log/apache2/access.log /var/log/apache2/error.log

For some more useful examples, check out MultiTail's official page examples
There is plenty of other useful things to do with multitail, for more RTFM 🙂

Linux: Generating Web statistics from Old Apache logs with Webalizer

Thursday, July 25th, 2013


It often happens that some old hosted websites were created in a way that no web statistics are available. Almost all websites created nowadays are already set up to use Google Analytics. Anyhow, every now and then I stumble upon hosting clients whose website creator didn't think about how to track how many hits or unique visitors the site gets per month / year etc.
Thankfully this is solvable by the good "uncle" admin with the help of Webalizer (with a custom configuration) and a little bit of shell scripting.

The idea is simple: we take the old website logs located in, let's say,
/var/log/apache2/www.website-access.log*,
move the files to some custom created new directory, let's say /root/www.website-access-logs/, and then configure webalizer to read them and generate statistics based on the logs in there.

For the purpose, we have to have webalizer installed on Linux system. In my case this is Debian GNU / Linux.

For those who hear of Webalizer for the first time, here is a short package description:

debian:~# apt-cache show webalizer|grep -i description -A 2

Description-en: web server log analysis program
The Webalizer was designed to scan web server log files in various formats
and produce usage statistics in HTML format for viewing through a browser.

If webalizer is not installed yet, install it with:

debian:~# apt-get install --yes webalizer
...
.....

Then make a backup copy of the original / default webalizer.conf (a very important step, especially if the server is already processing Apache log files with some custom webalizer configuration):

debian:~# cp -rpf /etc/webalizer/webalizer.conf /etc/webalizer/webalizer.conf.orig

The next step is to copy webalizer.conf to a name reminding of the website whose logs will be processed, e.g.:

debian:~# cp -rpf /etc/webalizer/webalizer.conf /etc/webalizer/www.website-webalizer.conf

In the www.website-webalizer.conf config file it's necessary to edit at least 4 variables:

LogFile /var/log/apache2/access.log
OutputDir /var/www
#Incremental no
ReportTitle Usage statistics for

Make sure that after modifying, the variables read something like:
LogFile /root/www.website/access_log_merged_1.log
OutputDir /var/www/www.website
Incremental yes
ReportTitle Usage statistics for Your-Website-Host-Name.com

Next create /root/www.website and /var/www/www.website, then copy all files you need to process from /var/log/apache2/www.website* to /root/www.website:

debian:~# mkdir -p /root/www.website
debian:~# cp -rpf /var/log/apache2/www.website* /root/www.website

On Debian Apache uses logrotate to archive old log files, so all logs except www.website-access.log and www.website-access.log.1 are gzipped:

debian:~#  cd /root/www.website
debian:~# ls 
www.website-access.log.10.gz
www.website-access.log.11.gz
www.website-access.log.12.gz
www.website-access.log.13.gz
www.website-access.log.14.gz
www.website-access.log.15.gz
www.website-access.log.16.gz
www.website-access.log.17.gz
www.website-access.log.18.gz
www.website-access.log.19.gz
www.website-access.log.20.gz
...
 

Then we have to un-gzip the zipped logs and create one merged file from all of them, ready to be read later by Webalizer. To do so I use a tiny shell script like so:

for n in {52..1}; do gzip -d www.dobrudzhatour.net-access.log.$n.gz; done
for n in {52..1}; do cat www.dobrudzhatour.net-access.log.$n >> access_log_merged_1.log; done

The first loop de-gzips the files and the second one creates a merged file from all of them, named access_log_merged_1.log. The range of log files in my case is from www.website-access.log.1 to www.website-access.log.52, thus the loop counts back from 52 to 1.
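A slightly shorter alternative (a sketch assuming the same 52-to-1 rotation order; it also leaves the .gz files intact) is to let zcat decompress on the fly while merging:

for n in {52..1}; do zcat www.website-access.log.$n.gz; done > access_log_merged_1.log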

Once access_log_merged_1.log is ready we can run webalizer to process the file (incrementally) and generate all-time statistics for www.website:

debian:~# webalizer -c /etc/webalizer/www.website-webalizer.conf

Webalizer V2.01-10 (Linux 2.6.32-27-server) locale: en_US.UTF-8
Using logfile /root/www.website/access_log_merged_1.log (clf)
Using default GeoIP database
Creating output in /var/www/webalizer-www.website
Hostname for reports is 'debian'
Reading history file… webalizer.hist
Reading previous run data.. webalizer.current
333474 records (333474 ignored) in 37.50 seconds, 8892/sec

To check out just generated statistics open in browser:

http://yourserverhost/webalizer-www.website/

or

http://IP_Address/webalizer-www.website

You should see the statistics pop up; below is a screenshot with my currently generated stats:

Webalizer website access statistics screenshot Debian GNU Linux

Create Easy Data Backups with Rsnapshot back-up tool on GNU / Linux

Monday, April 15th, 2013

 

rsnapshot Linux and FreeBSD easy data backup tool logo
Backing up information on Linux servers is an essential part of the routine system administrator job. Thus I decided to write, for those interested, how one can easily create backups of important data through a tiny tool called rsnapshot, which I have used to make periodic incremental data backups on a few of the Debian Linux servers I manage. In case you wonder why use rsnapshot and not just rsync – the reasons are two:
a. Rsnapshot is very easy to configure and use and you don't need a deep understanding of rsync's numerous options to use it.
b. Rsnapshot supports incremental data backups – saving a lot of disk space on the backup host.

 

 

 

Mentioning incremental data backups – for some, the term might be news, so I will shortly explain what incremental data backups are.

Incremental data backups are backups in which new copies of the scheduled files are created only when there are changes in the files to back up or new files are added to the directory / directories set to be routinely backed up. Incremental backups are often desirable as they consume minimal storage space and are quicker to perform than periodically archiving the whole data set (full / differential backups). rsync also has support for incremental backups, but configuring it to do so takes time and requires extra reading and understanding of how they work, so I personally prefer the simplicity rsnapshot brings.

1. Installing rsnapshot with apt-get

Here is the rsnapshot Debian package description:

debian:~#  apt-cache show rsnapshot|grep -i description -A 5

 

Description: local and remote filesystem snapshot utility
 rsnapshot is an rsync-based filesystem snapshot utility. It can take
 incremental backups of local and remote filesystems for any number of
 machines. rsnapshot makes extensive use of hard links, so disk space is
 only used when absolutely necessary.
Homepage: http://www.rsnapshot.org/

As you can read from the description, rsnapshot is a frontend command using rsync to make data backups.

Installing rsnapshot is done with:

 debian:~# apt-get install --yes rsnapshot

Reading package lists… Done
Building dependency tree      
Reading state information… Done
The following NEW packages will be installed:
  rsnapshot
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/140 kB of archives.
After this operation, 598 kB of additional disk space will be used.
Selecting previously deselected package rsnapshot.
(Reading database … 87026 files and directories currently installed.)
Unpacking rsnapshot (from …/rsnapshot_1.3.1-1_all.deb) … –
Processing triggers for man-db …
Setting up rsnapshot (1.3.1-1) …

2. Rsnapshot  package content and Documentation

Once installed, here is the file content of the rsnapshot deb package:

debian:~# dpkg -L rsnapshot

 

/.
/usr
/usr/share
/usr/share/doc-base
/usr/share/doc-base/rsnapshot
/usr/share/doc
/usr/share/doc/rsnapshot
/usr/share/doc/rsnapshot/TODO
/usr/share/doc/rsnapshot/changelog.gz
/usr/share/doc/rsnapshot/Upgrading_from_1.1.gz
/usr/share/doc/rsnapshot/examples
/usr/share/doc/rsnapshot/examples/rsnapshot.conf.default.gz
/usr/share/doc/rsnapshot/examples/utils
/usr/share/doc/rsnapshot/examples/utils/backup_mysql.sh
/usr/share/doc/rsnapshot/examples/utils/mysqlbackup.pl
/usr/share/doc/rsnapshot/examples/utils/random_file_verify.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapreport.pl.gz
/usr/share/doc/rsnapshot/examples/utils/make_cvs_snapshot.sh
/usr/share/doc/rsnapshot/examples/utils/backup_pgsql.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/CHANGES.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.pl.gz
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/INSTALL.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/TODO.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.xsd
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/rsnapshotDB.conf.sample
/usr/share/doc/rsnapshot/examples/utils/rsnapshotdb/README.txt
/usr/share/doc/rsnapshot/examples/utils/rsnapshot-copy
/usr/share/doc/rsnapshot/examples/utils/backup_rsnapshot_cvsroot.sh
/usr/share/doc/rsnapshot/examples/utils/backup_dpkg.sh
/usr/share/doc/rsnapshot/examples/utils/sign_packages.sh
/usr/share/doc/rsnapshot/examples/utils/mkmakefile.sh
/usr/share/doc/rsnapshot/examples/utils/rsnaptar
/usr/share/doc/rsnapshot/examples/utils/rsnapshot_invert.sh
/usr/share/doc/rsnapshot/examples/utils/rsnapshot_if_mounted.sh
/usr/share/doc/rsnapshot/examples/utils/README
/usr/share/doc/rsnapshot/examples/utils/debug_moving_files.sh
/usr/share/doc/rsnapshot/examples/utils/backup_smb_share.sh
/usr/share/doc/rsnapshot/README.gz
/usr/share/doc/rsnapshot/changelog.Debian.gz
/usr/share/doc/rsnapshot/copyright
/usr/share/doc/rsnapshot/README.Debian
/usr/share/doc/rsnapshot/html
/usr/share/doc/rsnapshot/html/rsnapshot-HOWTO.en.html
/usr/share/doc/rsnapshot/NEWS.Debian.gz
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/rsnapshot
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/rsnapshot.1.gz
/usr/share/man/man1/rsnapshot-diff.1.gz
/usr/bin
/usr/bin/rsnapshot-diff
/usr/bin/rsnapshot
/var
/var/cache
/var/cache/rsnapshot
/etc
/etc/cron.d
/etc/cron.d/rsnapshot
/etc/rsnapshot.conf
/etc/logrotate.d
/etc/logrotate.d/rsnapshot

To get a basic idea of rsnapshot, how it can be configured and run manually, as well as how it can be set up to run periodically via a cron job, the README shipped with the package is a good starting point.

debian:~# zless /usr/share/doc/rsnapshot/README.gz
....

It is also useful to check the program documentation in HTML, if you have some text browser installed – e.g. lynx or links:

debian:~# links /usr/share/doc/rsnapshot/html/rsnapshot-HOWTO.en.html

Note that much of the information in the rsnapshot-HOWTO relates to installing rsnapshot manually from source, so for Deb-based distro users those sections can be safely skipped; it is useful to read the HOWTO from section 4.A onwards. The Examples section of man rsnapshot is very good reading too, as it gives a lot of usage scenarios necessary in more complicated backup situations.

3. Configuring Rsnapshot – Setting Data Directories to Backup

Configuration of rsnapshot is done through the /etc/rsnapshot.conf file. There are plenty of comments in the file, so opening it in a text editor and taking a few minutes to read the commented lines is necessary. Configuration, just like with most Linux tool config files, is done through uncommented config directives.

debian:~# cat /etc/rsnapshot.conf |grep -v "#"|uniq

 

 

config_version    1.2

snapshot_root    /var/cache/rsnapshot/

cmd_rm        /bin/rm

cmd_rsync    /usr/bin/rsync

cmd_logger    /usr/bin/logger

interval    hourly    6
interval    daily    7
interval    weekly    4

verbose        2

loglevel    3

lockfile    /var/run/rsnapshot.pid

backup    /home/        localhost/
backup    /etc/        localhost/
backup    /usr/local/    localhost/

 

 

The above config options are easy to understand: there are the backup intervals to set (hourly, daily, weekly), the verbosity level of the rsnapshot backup operation log, the lockfile which rsnapshot uses to prevent duplicate runs, and finally the backup directives in which you specify what needs to be backed up. In the config file there is also a commented variable for creating an rsnapshot backup once a month:

#interval   monthly 3

If you need to create backups once a month uncomment it.

In backup directives, add all directories from the filesystem which need routine backups. For example, I keep my Apache webserver files in /var/www/, store various installed software in /root/, and keep a backup of old Qmail (Vpopmail) emails kept in /var/vpopmail.
To make rsnapshot back those up, I add after the rest of the backup directives:

backup  /var/www/   localhost/
backup  /var/vpopmail/  localhost/
backup  /root/  localhost/


It is good practice to change the snapshot_root directive to /root/.backups, or, if you prefer to keep snapshot_root at the default /var/cache/rsnapshot, at least link /var/cache/rsnapshot to /root/.backups with the ln command:

debian:~# ln -sf /var/cache/rsnapshot /root/.backups

If you change snapshot_root to /root/.backups, don't forget to create /root/.backups and set the directory permissions readable only to the owner, i.e.:

debian:~# mkdir /root/.backups
debian:~# chmod -R 700 /root/.backups

Note that it is important to use tab delimiters everywhere in /etc/rsnapshot.conf; if you use space delimiters instead of tabs, you will end up with errors preventing rsnapshot from running.
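A quick way to spot accidental spaces is to view the file with tabs made visible – cat -A prints tabs as ^I, so every field separator on the backup lines should show up as ^I:

debian:~# cat -A /etc/rsnapshot.conf | grep '^backup'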

4. Testing rsnapshot configuration and launching it first time

I will say it once again: use the Tab key for delimiters in the config. On my first rsnapshot launch my mistake was to use spaces to delimit my config options, so when testing my configuration rsnapshot printed an error and failed:

debian:~# rsnapshot configtest

 

----------------------------------------------------------------------------
rsnapshot encountered an error! The program was invoked with these options:
/usr/bin/rsnapshot configtest
----------------------------------------------------------------------------
ERROR: /etc/rsnapshot.conf on line 199:
ERROR: backup /var/www/ localhost/
ERROR: ---------------------------------------------------------------------
ERROR: Errors were found in /etc/rsnapshot.conf,
ERROR: rsnapshot can not continue. If you think an entry looks right, make
ERROR: sure you don't have spaces where only tabs should be.

After changing the space delimiters to tabs and re-running rsnapshot configtest, if all is fine you get:

debian:~# rsnapshot configtest
Syntax OK

Once the config is all good, before launching rsnapshot to do its first complete incremental data backup, you can display what rsnapshot would back up and the exact rsync invocations it would use by typing:


debian:~# rsnapshot -t hourly

echo 5644 > /var/run/rsnapshot.pid
mv /var/cache/rsnapshot/hourly.2/ /var/cache/rsnapshot/hourly.3/
mv /var/cache/rsnapshot/hourly.1/ /var/cache/rsnapshot/hourly.2/
native_cp_al("/var/cache/rsnapshot/hourly.0", \
    "/var/cache/rsnapshot/hourly.1")
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /home \
    /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc \
    /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /var/www /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /var/vpopmail /var/cache/rsnapshot/hourly.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /root \
    /var/cache/rsnapshot/hourly.0/localhost/
touch /var/cache/rsnapshot/hourly.0/

To launch backup first time manually:

debian:~# rsnapshot hourly

Depending on the size of the backed-up data (mega / giga / terabytes) and the number of files to be backed up, the backup takes from minutes to hours.
Note that it is always a good idea to create backups on a separate hard disk, preferably configured in some kind of RAID array (RAID 1 or RAID 5). Creating backups on a separate hard disk has numerous advantages, the most important being that it doesn't put too much Input / Output (I/O) stress on the main disk and thus will not cause downtime on high-traffic, busy servers with slow old hard disks or a big amount of HDD I/O reads / writes.

5. Enabling Rsnapshot to create backups via scheduled cron job

On package install rsnapshot creates a skeleton file for running via cron job in /etc/cron.d/rsnapshot.

debian:~# cat /etc/cron.d/rsnapshot

 

 

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

# 0 */4        * * *        root    /usr/bin/rsnapshot hourly
# 30 3      * * *        root    /usr/bin/rsnapshot daily
# 0  3      * * 1        root    /usr/bin/rsnapshot weekly
# 30 2      1 * *        root    /usr/bin/rsnapshot monthly
 

To make hourly, daily, weekly or monthly backups, uncomment one of the above 4 lines. For paranoid admins scared to lose even a bit of data, hourly backups are a good solution. Personally, I prefer configuring weekly backups, for the reason that I routinely monitor the servers – keeping an eye regularly on dmesg and checking the Linux smartd / smartmontools logs to find out whether a hard disk or RAID has bad blocks.
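For example, the uncommented weekly line in /etc/cron.d/rsnapshot (taken straight from the commented sample above) would read:

0  3      * * 1        root    /usr/bin/rsnapshot weekly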

6. Checking backup size / backup difference and backup structure

Checking size of backups can be done by using standard du command on backup directory:

debian:~# du -hsc /var/cache/rsnapshot/*
4.3G /var/cache/rsnapshot/hourly.0
4.5M /var/cache/rsnapshot/hourly.1
68M /var/cache/rsnapshot/hourly.2
4.4G total

rsnapshot also has du argument via which backup size can be viewed:

debian:~# rsnapshot du
4.3G /var/cache/rsnapshot/hourly.0/
4.5M /var/cache/rsnapshot/hourly.1/
68M /var/cache/rsnapshot/hourly.2/
4.4G total

As you can see, each new incremental backup gets a new number suffix: hourly.0, hourly.1, hourly.2 etc.

To check difference between two different backups:

debian:~# rsnapshot diff /var/cache/rsnapshot/hourly.0/ /var/cache/rsnapshot/hourly.1/
Comparing /var/cache/rsnapshot/hourly.1 to /var/cache/rsnapshot/hourly.0
Between /var/cache/rsnapshot/hourly.1 and /var/cache/rsnapshot/hourly.0:
660 were added, taking 3728377727 bytes;
492 were removed, saving 17623 bytes;

The structure of the backed-up files is identical to a normal copy of the files, without any compression:

debian:~# cd /root/.backups/hourly.0/localhost/
debian:~/.backups/hourly.0/localhost# ls

etc/ home/ root/ usr/ var/

 

7. Restoring files or directories from an rsnapshot backup

To restore, let's say, the /var directory, cd into it:

debian:~/.backups/hourly.0/localhost# cd var
debian:~/.backups/hourly.0/localhost/var#

Then use rsync as follows:

debian:~/.backups/hourly.0/localhost/var# rsync -avr * /
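If only a single file needs to be restored instead of the whole tree, a plain copy from the snapshot directory is enough; a hypothetical example restoring one file from the www tree:

debian:~/.backups/hourly.0/localhost/var# cp -a www/index.html /var/www/index.html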
 

 

8. Creating rsnapshot backups from remote server via SSH protocol

In /etc/rsnapshot.conf you should set the SSH port on which the remote server is accepting SSH connections. The standard port is 22; however, it is wise to configure SSH on the backup server to listen on some other, non-standard port.

The config variables to look at are:

ssh_args -p 22

and cmd_ssh. To enable remote login via SSH, uncomment in /etc/rsnapshot.conf:

# cmd_ssh /usr/bin/ssh

to

cmd_ssh /usr/bin/ssh

Before starting rsnapshot to create backups of the remote host2, you need to configure automatic SSH passwordless login by generating a DSA or RSA key pair between host1 and host2, where host1 is the machine on which rsnapshot runs and to which the backups will be copied from host2.
Once passwordless SSH to the remote host is active, to make rsnapshot create the backups on host1 you will need to add near the end of /etc/rsnapshot.conf:

backup  root@host2.com:/root/ host2.com/

The same way you can add a number of remote hosts from which periodic backups will be created on the central host1; the only condition is that passwordless SSH login is configured for each node – host3, host4, host5:

backup  root@host3.com:/root/root/ host3.com
backup  root@host4.com:/home/ host4.com
backup  root@host4.com:/var/ host4.com

To create the public key (id_dsa.pub) file on host1 and distribute it to the nodes, use:

debian:~# ssh-keygen -t dsa
...
....
debian:~# ssh-copy-id -i ~/.ssh/id_dsa.pub root@host3

Once all hosts that need to get backed up to the central backup host (host1) are set up, to test whether the backups get pulled in, manually issue:

debian:~# rsnapshot -v hourly
...

Rsnapshot has a number of other scripts which can be easily integrated with it in /usr/share/doc/rsnapshot/examples/utils.
Inside you can find example scripts on how to create MySQL / PostgreSQL database backup, Samba Share backups, backup CVS repositories and so on. The scripts can be easily modified and work with mostly any data or protocol with a bit of tweaking. Short description of each of example scripts can be found in /usr/share/doc/rsnapshot/examples/utils/README

HasciiCAM supposed to stream ASCII video over the network on GNU / Linux

Tuesday, May 22nd, 2012

Richard M. Stallman (RMS) Face portrait rendered in ASCII art from a video with hasciicam
To continue with my lately ASCII-centered articles, I found hasciicam.
hasciicam is a program to stream ASCII video over the network on Linux and probably can easily be made to work on FreeBSD too.

The project concept is interesting from a fun (play) point of view, however not too usable, as we all know faces rendered in ASCII characters don't look too pretty.

Below is the Debian (Squeeze) package description:

noah:~# apt-cache show hasciicam|grep -i description -A 7
Description: (h)ascii for the masses: live video as text
Hasciicam makes it possible to have live ASCII video on the web. It
captures video from a tv card and renders it into ascii, formatting the
output into an html page with a refresh tag or in a live ASCII window or
in a simple text file as well, giving the possibility to anybody that has a
bttv card, a Linux box and a cheap modem line to show a live ASCII video
feed that can be browsable without any need for plugin, java etc.
Homepage: http://ascii.dyne.org/

On the hasciicam project webpage, the hardware you need is stated as:
 

"As hardware you need to have a webcam or a videocard supported by "video 4 linux", most of the gear you can buy around should work well."

To install and test it I run:

noah:~# apt-get --yes install hasciicam

Though the project website states it is supposed to display video fine with most 'Linux-ready' webcams, it didn't with this very standard one.

Here is the exact WebCamera model as identified to the kernel:

noah:~# dmesg|grep -i camera
[ 1.433661] usb 2-2: Product: USB2.0 Camera
[ 10.107840] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1e4e:0102)
[ 10.110660] input: USB2.0 Camera as /devices/pci0000:00/0000:00:1d.7/usb2/2-2/2-2:1.0/input/input11

By the way, I use the very same cam daily for Skype video calls, and the camera works with no problems for saving video or pictures inside Cheese.


The just-installed deb has only one binary file, /usr/bin/hasciicam. To test it with the camera I issued:

noah:~# hasciicam -d /dev/video0
HasciiCam 1.0 - (h)ascii 4 the masses! - http://ascii.dyne.org
(c)2000-2006 Denis Roio < jaromil @ dyne.org >
watch out for the (h)ASCII ROOTS

Device detected is /dev/video0
USB2.0 Camera
1 channels detected
max size w[640] h[480] - min size w[48] h[32]
Video capabilities:
VID_TYPE_CAPTURE can capture to memory
!! error in ioctl VIDIOCGMBUF: : Invalid argument

Unfortunately, as you can see from the output, hasciicam detects the device but fails on the VIDIOCGMBUF ioctl (an old Video4Linux 1 call), so it cannot grab frames from this camera.
The exact camera, besides its kernel detection naming, is a cheap external USB 2.0 (fake brand / no-name) "universal" Web PC Camera (SUPER .3mega pixel).

For those who have further interest in building and installing hasciicam on Linux platforms other than Debian and Ubuntu, or whoever wants to look at the code, check the project webpage. For those who are not much of programmers (like me): the project is written in the C programming language and uses aa-lib to render the video to ASCII.

On the site you will notice two totally schizophrenic looking pictures of presumably the project head developer …

hasciiart video streamed ASCII screenshot of some crazy looking guy smoking marijuanna or smth

As I read in the hasciicam manual page, it is said to be able to generate ASCII plain text and HTML files, as well as to write the output directly to the console, which can then probably be streamed over the network.
Sadly, as it didn't work with my camera, I couldn't do any testing of its network capabilities.

Streaming of the ASCII output could be done by pushing the .html output to a webserver and setting up some PHP or JavaScript to loop over and refresh the uploaded files in the browser every second or so.

Also, I assume the ASCII video output saved as plain text could be streamed via netcat or some tiny Perl or Bash script and observed directly over a telnet or ssh connection, as in the rough sketch below.
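A very rough sketch of such a netcat based stream (the output file name, port and host are made up for the example; also note that the traditional Debian netcat listens with -l -p PORT, while the OpenBSD flavour expects just -l PORT) could be:

# on the machine running hasciicam, assuming it keeps refreshing /tmp/hascii.txt
noah:~# while true; do cat /tmp/hascii.txt; sleep 1; done | nc -l -p 5000

# on the viewing machine
$ nc noah 5000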
One playful way I can think of to check a stored ASCII "video" file without the use of FTP is to log in via ssh and do:

$ ssh someuser@somehost
$ watch -n 1 "cat video-ascii.html"

🙂

Well, something disturbing about hasciicam from a (purely Christian) point of view is that it was developed by some kind of non-profit organization called RastaSoft; on the project website some of its authors have written JAH BLESS.

As I didn't succeed in seeing it work, I'll be interested to hear from anyone who reads this article, gives it a try and can report the web camera model used.

How to copy / clone installed packages from one Debian server to another

Friday, April 13th, 2012

1. Dump all installed server packages from Debian Linux server1

First it is necessary to dump a list of all installed packages on the server from which the installed deb package 'selection' will be replicated.

debian-server1:~# dpkg --get-selections \* > packages.txt

The produced packages.txt file has only two columns: column 1 holds the package name and column 2 the selection state of the package, e.g. install or deinstall.
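A few illustrative lines from such a packages.txt (the package names here are only examples) look like:

apache2						install
mysql-server					install
some-old-package				deinstall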

Note that while dumping the selection with --get-selections normally works as a regular user, writing a selection back with --set-selections requires root; running it as a non-privileged user fails with:

hipo@server1:~$ dpkg --set-selections > packages.txt
dpkg: operation requires read/write access to dpkg status area

2. Copy packages.txt file containing the installed deb packages from server1 to server2

There are many ways to copy the packages.txt package selection file; one can use ftp, sftp, scp, rsync, lftp, or even fetch it via wget if it is placed in some Apache-served directory on server1.

A quick and convenient way to copy the file from Debian server1 to server2 is scp, as it can also easily be used in an automated script to copy packages.txt around (if, for instance, you have to implement package cloning on multiple Debian Linux servers).

root@debian-server1:~# scp ./packages.txt hipo@server-hostname2:~/packages.txt
The authenticity of host '83.170.97.153 (83.170.97.153)' can't be established.
RSA key fingerprint is 38:da:2a:79:ad:38:5b:64:9e:8b:b4:81:09:cd:94:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '83.170.97.153' (RSA) to the list of known hosts.
hipo@83.170.97.153's password:
packages.txt

As this is the first time I connect to server2 from server1, I'm prompted to accept the host's unique RSA fingerprint.

3. Install the copied selection from server1 on server2 with apt-get or dselect

debian-server2:/home/hipo# apt-get update
...
debian-server2:/home/hipo# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
debian-server2:/home/hipo# dpkg --set-selections < packages.txt
debian-server2:/home/hipo# apt-get -u dselect-upgrade --yes

The apt-get update command refreshes the package lists and the following apt-get upgrade brings the currently installed packages up to their latest versions; this saves you from ending up with outdated versions of the installed packages on debian-server2.

Bear in mind that using apt-get this way might sometimes create dependency issues, depending on the exact packages being replicated between the servers.

Therefore it can be better to use another approach, a bash for loop, to "replicate" the installed packages between the two servers, like so:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude install $i; done

If you want aptitude to proceed without asking for confirmation, pass the -y switch:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude -y install $i; done

Be cautious when -y is passed, as sometimes packages might be removed from the server to resolve dependency issues; if you need those packages you will have to install them again manually.
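To see in advance what aptitude would do, one can first make a dry run with its -s (simulate) switch and only then repeat the loop with -y:

debian-server2:/home/hipo# for i in $(cat packages.txt |awk '{ print $1 }'); do aptitude -s install $i; done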

4. Mirroring package selection from server1 to server2 using one liner

A quick one-liner that replicates the set of selected packages from server1 to server2 is also possible, either with a combination of apt-get, ssh, dpkg and awk, or with ssh + dpkg + dselect:

a) One-liner code with apt-get unifying the installed packages between 2 or more servers

debian-server2:~# apt-get --yes install `ssh root@debian-server1 "dpkg -l | grep -E ^ii" | awk '{print $2}'`
...

If it is necessary to install on more than just debian-server2, copy-paste the above code on every server that should have the same installed packages as debian-server1, or use a short for loop to run the command against each host in the group (see the sketch below).
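A rough sketch of such a loop, run from debian-server1 and pushing its package list to a few placeholder hosts, could be:

debian-server1:~# pkgs=$(dpkg -l | grep -E ^ii | awk '{print $2}')
debian-server1:~# for host in debian-server2 debian-server3 debian-server4; do ssh root@$host "apt-get --yes install $pkgs"; done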

In some cases it might be better to use dselect instead, as apt-get might not always resolve the package dependencies correctly; if you run into dependency problems, better run:

debian-server2:/home/hipo# ssh root@debian-server1 'dpkg --get-selections' | dpkg --set-selections && dselect install

As you can see, this second, dselect-based package mirroring is also way easier to read and understand than the prior "cryptic" apt-get method, hence I personally think the dselect method is the better one.

Well, that's basically it. If you also need to synchronize configurations, either an rsync/scp shell script listing the relevant server1 config files should be used, or, in case a full clone between identical server machines is necessary, dd or some other imaging tool like Norton Ghost could be used.
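For instance, a minimal sketch of pushing a couple of config directories from server1 to server2 with rsync over ssh (the directories here are just examples, pick whatever your services actually use) could be:

root@debian-server1:~# rsync -av /etc/apache2/ root@debian-server2:/etc/apache2/
root@debian-server1:~# rsync -av /etc/mysql/ root@debian-server2:/etc/mysql/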
Hope this helps, someone.

How to make Video from your Linux Desktop with xvidcap / Capture desktop output in a video on Linux

Wednesday, April 6th, 2011

If you have ever wondered how to create videos for manuals showing how to do certain things on Linux, let's say related to programming or system administration, then you should definitely check out

xvidcap

Below is the package description as taken from apt-cache show xvidcap

A screen capture enabling you to capture videos off your X-Window desktop
for illustration or documentation purposes. It is intended to be a
standards-based alternative to tools like Lotus ScreenCam.

On Debian-based Linux systems (e.g. Debian, Ubuntu) xvidcap is available straight from the package repositories. To install and test it you can simply issue:

linux:~# apt-get install xvidcap
...

To start using xvidcap, either launch it with Alt+F2 in GNOME or start it from the applications menu via:

Applications -> Sound & Video -> xvidcap

Here is how the xvidcap program looks right after you start it:
xvidcap screenshot main menu

As you can see in the screenshot, xvidcap's interface is extraordinarily simple.

It only has stop, pause, rec, back and forward buttons, a capture area selector and a movie editor.
Sadly xvidcap does not support sound capturing, but at least for me that's not such an issue.

If you click on the test-0000.mpeg[0000] field with the right mouse button, you will see a drop-down menu with an option for xvidcap's preferences.

Take the time to play with the preferences, since there are quite a few of them.

The most important preference that you might want to adjust right away, in my view, is:

Preferences -> Multi-Frame tab -> File Name:

The default file name pattern that xvidcap uses to store its captures, as you will see in the preferences, is test-%04d.mpeg

If you want to change the output file format to, let's say, flv, change the File Name: value to test-%04d.flv
The next time you record with xvidcap, the file will be stored in flv format.

The red lines which you see in the above screenshot mark the capture area; you will also have to adjust the capture area before you can proceed with recording a video from your desktop.

The way to capture your desktop in fullscreen is a bit unusual: you first need to mark the whole visible desktop with the capture selector, and before that you will have to enable the following in xvidcap's preferences:

Preferences -> General -> Minimize to System Tray

With this option selected, each time you press xvidcap's record button the controller interface will be minimized to the tray and capturing of the region previously marked with the capture selector will start.