Posts Tagged ‘Install’

Install and configure rkhunter for improved security on PCI DSS Linux / BSD servers with no access to the Internet

Wednesday, November 10th, 2021

install-and-configure-rkhunter-with-tightened-security-variables-rkhunter-logo

rkhunter, or Rootkit Hunter, scans systems for known and unknown rootkits. The tool is not new, and most system administrators who have to maintain servers with good security probably already use it in their daily sysadmin tasks.

It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default rootkit directories, wrong permissions, hidden files, suspicious strings in kernel modules, common backdoors, sniffers and exploits, as well as running other special tests, mostly for Linux and FreeBSD, though ports for other UNIX operating systems such as Solaris may be available. rkhunter is notable for its inclusion in popular mainstream FOSS operating systems (CentOS, Fedora, Debian, Ubuntu etc.).

Even though rkhunter has not seen much development over the last 3 years (its last official release was on the 20th of February 2018), it is a good tool that helps to further strengthen security, and it is often a requirement for Unix server systems that have to follow the PCI DSS standard (Payment Card Industry Data Security Standard).

Configuring rkhunter is pretty straightforward if you don't have too many requirements, but I decided to write this article because there are a few interesting configuration options you might want to adopt, both to whitelist files that are reported as Warnings and to set stricter security checks than the installation defaults.

1. Install rkhunter .deb / .rpm package depending on the Linux distro or BSD

  • If you have to install it on a RedHat based distro (CentOS / RedHat / Fedora):

[root@Centos ~]# yum install -y rkhunter

 

  • On Debian based distros the package name is the same, so install it as usual:

root@debian:~# apt install --yes rkhunter

  • On FreeBSD / NetBSD or other BSD forks you can install it from the BSD ports system or as a precompiled binary package.

freebsd# pkg install rkhunter

One important note to make here: to get functional alarming from rkhunter, you will need a properly configured postfix / exim / qmail (or any other) mail server that relays via an official SMTP server, so the Warning alarm emails can reach your preferred alarm email address. If you haven't installed and configured postfix, for example, you can do so:

– On RPM based distros

[root@Centos ~]# yum install postfix


– On Deb based distros

root@debian:~# apt-get install --yes postfix


and as a minimum, configure a functional email relay server within /etc/postfix/main.cf:
 

# vi /etc/postfix/main.cf
relayhost = [relay.smtp-server.com]
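
To quickly verify the relay works before relying on rkhunter alerts, you could send a test mail from the console (a rough check, assuming a mail / mailx command is installed; watch /var/log/mail.log on Debian or /var/log/maillog on RHEL for the delivery result):

# echo "rkhunter mail relay test" | mail -s "rkhunter test alert" Your-Customer@Your-Email-Server-Destination-Address.com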

2. Prepare rkhunter.conf initial configuration


Depending on what kind of files are present on the filesystem, for some reason certain standard package binaries may have to be excluded from package manager verification, for example because they have unusual permissions due to manual sysadmin modification. This is done with the rkhunter variable PKGMGR_NO_VRFY.

If remote logging is configured on the system via something like rsyslog, you will want to explicitly tell rkhunter about it so that this check is not flagged as a possible security issue, via ALLOW_SYSLOG_REMOTE_LOGGING=1.

In case remote root login over SSH is disabled via the PermitRootLogin no directive in /etc/ssh/sshd_config, the variable to include is ALLOW_SSH_ROOT_USER=no.

It is also useful to strengthen the hashing algorithm used for file property checks: instead of the default SHA256 you might want to switch to SHA512, which is done via the rkhunter.conf variable HASH_CMD=SHA512.

Email notifications on new Warnings have to be configured so you receive mails at a preconfigured mailbox of your choice, via the variable
MAIL-ON-WARNING=SetMailAddress

 

# vi /etc/rkhunter.conf

PKGMGR_NO_VRFY=/usr/bin/su

PKGMGR_NO_VRFY=/usr/bin/passwd

ALLOW_SYSLOG_REMOTE_LOGGING=1

# Needed for corosync/pacemaker since update 19.11.2020

ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# enabled ssh root access skip

ALLOW_SSH_ROOT_USER=no

HASH_CMD=SHA512

# Email addresses to send alerts to in case of Warnings

MAIL-ON-WARNING=Your-Customer@Your-Email-Server-Destination-Address.com

MAIL-ON-WARNING=Your-Second-Personal-Email-Address@SMTP-Server.com

DISABLE_TESTS=os_specific


Optionally, if you're using something specific such as a corosync / pacemaker High Availability cluster or some other software that creates /dev/ files identified as potential risks, you might want to add more rkhunter.conf options like:
 

# Allow PCS/Pacemaker/Corosync
ALLOWDEVFILE=/dev/shm/qb-attrd-*
ALLOWDEVFILE=/dev/shm/qb-cfg-*
ALLOWDEVFILE=/dev/shm/qb-cib_rw-*
ALLOWDEVFILE=/dev/shm/qb-cib_shm-*
ALLOWDEVFILE=/dev/shm/qb-corosync-*
ALLOWDEVFILE=/dev/shm/qb-cpg-*
ALLOWDEVFILE=/dev/shm/qb-lrmd-*
ALLOWDEVFILE=/dev/shm/qb-pengine-*
ALLOWDEVFILE=/dev/shm/qb-quorum-*
ALLOWDEVFILE=/dev/shm/qb-stonith-*
ALLOWDEVFILE=/dev/shm/pulse-shm-*
ALLOWDEVFILE=/dev/md/md-device-map
# Needed for corosync/pacemaker since update 19.11.2020
ALLOWDEVFILE=/dev/shm/qb-*/qb-*

# tomboy creates this one
ALLOWDEVFILE="/dev/shm/mono.*"
# created by libv4l
ALLOWDEVFILE="/dev/shm/libv4l-*"
# created by spice video
ALLOWDEVFILE="/dev/shm/spice.*"
# created by mdadm
ALLOWDEVFILE="/dev/md/autorebuild.pid"
# 389 Directory Server
ALLOWDEVFILE=/dev/shm/sem.slapd-*.stats
# squid proxy
ALLOWDEVFILE=/dev/shm/squid-cf*
# squid ssl cache
ALLOWDEVFILE=/dev/shm/squid-ssl_session_cache.shm
# Allow podman
ALLOWDEVFILE=/dev/shm/libpod*lock*
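
Once the configuration is in place, you can validate the syntax of rkhunter.conf before running a scan with rkhunter's built-in configuration check option:

# rkhunter -C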

 

3. Set the proper mirror database URL location to an internal network repository

 

Usually the file /var/lib/rkhunter/db/mirrors.dat contains the Internet server address from which the latest version of the rkhunter database files can be fetched; below is how it looks by default on Debian 10 Linux.

root@debian:/var/lib/rkhunter/db# cat mirrors.dat 
Version:2007060601
mirror=http://rkhunter.sourceforge.net
mirror=http://rkhunter.sourceforge.net

As you can guess, a machine placed in a paranoid Demilitarized Zone (DMZ) network behind many firewalls doesn't have access to the Internet, neither directly nor via some kind of secure proxy. What you can do then is set up another mirror server (Apache / Nginx) within the local PCI secured LAN that regularly fetches the database from the official location at http://rkhunter.sourceforge.net/ (by installing rkhunter and running the rkhunter --update command on the mirror webserver and copying the data under some directory structure on that locally accessible server). To keep the DB up to date, you might want to set up a cron job that periodically copies the latest available rkhunter database towards http://mirror-url/path-folder/.
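
A minimal sketch of such a sync cron job on the internal mirror host (the mirror document root /var/www/html/rkhunter/1.4/, the log path and the nightly schedule are assumptions, adjust them to your layout):

# /etc/cron.d/rkhunter-db-sync  (runs on the internal mirror webserver which has Internet access)
20 3 * * * root /usr/bin/rkhunter --update > /var/log/rkhunter-db-sync.log 2>&1; cp -a /var/lib/rkhunter/db/* /var/www/html/rkhunter/1.4/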

# vi /var/lib/rkhunter/db/mirrors.dat

local=http://rkhunter-url-mirror-server-url.com/rkhunter/1.4/


A mirror copy of the entire set of db files from Debian 10.8 (Buster), ready for download, is available here.

Then update the entire file properties DB and check for rkhunter DB updates:

 

# rkhunter --update && rkhunter --propupd

[ Rootkit Hunter version 1.4.6 ]

Checking rkhunter data files…
  Checking file mirrors.dat                                  [ Skipped ]
  Checking file programs_bad.dat                             [ No update ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ No update ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ No update ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ No update ]
  Checking file i18n/tr.utf8                                 [ No update ]
  Checking file i18n/zh                                      [ No update ]
  Checking file i18n/zh.utf8                                 [ No update ]
  Checking file i18n/ja                                      [ No update ]

 

rkhunter-update-propupdate-screenshot-centos-linux


4. Initiate a first time check and see whether anything triggers Warnings

# rkhunter --check

rkhunter-checking-for-rootkits-linux-screenshot

As you might have to run rkhunter multiple times, the Press Enter prompt between checks gets annoying. The idea of it is that you're able to inspect what went on, but since inspecting /var/log/rkhunter/rkhunter.log is usually much easier, I prefer to skip it with the --skip-keypress option.

# rkhunter --check --skip-keypress
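
For recurring unattended scans you can also schedule rkhunter from cron using its cron-friendly options (--cronjob and --report-warnings-only are standard rkhunter options; the schedule and log path below are only an example):

# /etc/cron.d/rkhunter-scan
# nightly rootkit scan at 02:30, report only the warnings
30 2 * * * root /usr/bin/rkhunter --cronjob --report-warnings-only >> /var/log/rkhunter/rkhunter-cron.log 2>&1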


5. Whitelist additional files and devices triggering false warning alerts


You have to keep in mind that many files which are considered not officially PCI compatible and potentially dangerous, such as the lynx browser, curl, telnet etc., might trigger a Warning. After checking them thoroughly with some antivirus software such as ClamAV, and comparing their MD5 checksums against a cleanly installed .deb / .rpm package on another rootkit / virus / spyware free system (be it a virtual machine or a Testing / Staging machine), you might want to simply whitelist the files which are incorrectly detected as dangerous for the system security.

Again this can be achieved with

PKGMGR_NO_VRFY=

Some cluster software that creates its own temporary /dev/ files, such as Pacemaker / Corosync, might also trigger alarms, so you might want to suppress this as well with ALLOWDEVFILE:

ALLOWDEVFILE=/dev/shm/qb-*/qb-*


If Warnings are found, check what the issue is and, if necessary, whitelist the files with incorrect permissions in /etc/rkhunter.conf.

rkhunter-warnings-found-screenshot

Re-run the check until all appears clean as in below screenshot.

rkhunter-clean-report-linux-screenshot

Fixing Checking for a system logging configuration file [ Warning ]

If you happen to get a message like the one below (it appears when rkhunter -C is run on legacy CentOS release 6.10 (Final) servers):

[13:45:29] Checking for a system logging configuration file [ Warning ]
[13:45:29] Warning: The 'systemd-journald' daemon is running, but no configuration file can be found.
[13:45:29] Checking if syslog remote logging is allowed [ Allowed ]

To fix it, you will have to disable the SYSLOG_CONFIG_FILE check entirely:
 

SYSLOG_CONFIG_FILE=NONE

Install and enable the sysstat I/O / Disk / CPU / Network monitoring console suite on RedHat 8.3, and a few useful sar command examples

Tuesday, September 28th, 2021

linux-sysstat-monitoring-logo

 

Why monitor CPU, Memory, Hard Disk, Network usage etc. with the sysstat tools?
 

Using system monitoring tools such as Zabbix, Nagios or Monit is a good approach, however sometimes due to monitoring server interruptions you might not be able to track certain aspects of system performance on time. Thus it is always a good idea to gain more insight into system performance from the command line. Of course there are command line tools such as iostat, top, free and vnstat that provide plenty of useful info on system performance issues or bottlenecks. However, from my experience, to have better historical data that is systematized and accessible from the console at all times, it is a great thing to have the sysstat package in place. For many years, on almost every server I administer, I've been using sysstat to monitor what is going on over short time frames, and I'm quite happy with it. In my current company we're using RedHat and CentOS, and I had to install sysstat on RedHat 8.3. I've done it multiple times before on Debian / Ubuntu Linux, and since on some .deb distributions I've faced complications making sysstat collect statistics, I've earlier written an article on how to fix sysstat "Cannot open /var/log/sysstat/sa no such file or directory" on Debian / Ubuntu Linux.
 

Sysstat contains the following tools related to collecting I/O and CPU statistics:

  • iostat: displays an overview of CPU utilization, along with I/O statistics for one or more disk drives.
  • mpstat: displays more in-depth CPU statistics.

Sysstat also contains tools that collect system resource utilization data and create daily reports based on that data. These tools are:

  • sadc: known as the system activity data collector, sadc collects system resource utilization information and writes it to a file.
  • sar: producing reports from the files created by sadc, sar reports can be generated interactively or written to a file for more intensive analysis.

In my experience, installing sysstat on CentOS 7 and Fedora was pretty straightforward: I just had to install it via yum install sysstat, wait for some time and use the sar (System Activity Reporter) tool to report the collected system activity stats over time.
Unfortunately it seems that on RedHat 8.3 as well as on CentOS 8.XX installing sysstat does not work out of the box.

To complete a successful installation of it on RHEL 8.3, I had to:

[root@server ~]# yum install -y sysstat


To make sysstat start on boot and run, I've enabled its service:

[root@server ~]# systemctl enable sysstat


Running the sar command immediately afterwards, I faced the shitty error:


Cannot open /var/log/sysstat/sa18:
No such file or directory. Please check if data collecting is enabled

 

Once installed I waited for about 5 minutes, hoping that sysstat would somehow handle it automatically, but it didn't.

To solve it, I had to additionally create the file /etc/cron.d/sysstat (weirdly, the RPM's post-install scripts do not create it automatically):

[root@server ~]# vim /etc/cron.d/sysstat

# collect system activity data every minute for 59 iterations, started at the top of every hour
0 * * * * root /usr/lib64/sa/sa1 60 59 &
# generate a daily summary of process accounting at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A &

 

  • /usr/lib64/sa/sa1 is a shell script that we schedule via cron; it collects the data and stores it in the daily binary log file.
  • /usr/lib64/sa/sa2 is a shell script that converts the binary log file into a human-readable daily summary report.

 

[root@server ~]# chmod 600 /etc/cron.d/sysstat

[root@server ~]# systemctl restart sysstat
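
As a side note, depending on how the sysstat RPM was built, RHEL 8 may also ship systemd timer units that do the same job as the cron entries above; if they are present on your system, a quick way to check and use them instead is:

[root@server ~]# systemctl list-timers 'sysstat*'
[root@server ~]# systemctl enable --now sysstat-collect.timer sysstat-summary.timer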


After a while, if sysstat is working correctly, its data history logs should show up inside /var/log/sa:

[root@server ~]# ls -al /var/log/sa 


Note that the standard sysstat history files on Debian and other modern .deb based distros such as Debian 10 (as of 2021) are stored under /var/log/sysstat.
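
Once the history files exist, sar can read a particular day's file directly with the -f option; for example, to see the CPU utilization history stored in the sa18 file (pick a file name that actually exists on your system):

[root@server ~]# sar -u -f /var/log/sa/sa18 | head -n 10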

Here are a few useful uses of the sysstat commands:


1. Check the machine's SWAP and RAM memory usage history with sysstat


To, let's say, check the SWAP memory use over the last 10 one-minute samples:

[hipo@server yum.repos.d] $ sar -W | tail -n 10
 

Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM  pswpin/s pswpout/s
12:00:01 AM      0.00      0.00
12:01:01 AM      0.00      0.00
12:02:01 AM      0.00      0.00
12:03:01 AM      0.00      0.00
12:04:01 AM      0.00      0.00
12:05:01 AM      0.00      0.00
12:06:01 AM      0.00      0.00

[root@ccnrlb01 ~]# sar -r | tail -n 10
14:00:01        93008   1788832     95.06         0   1357700    725740      9.02    795168    683484        32
14:10:01        78756   1803084     95.81         0   1358780    725740      9.02    827660    652248        16
14:20:01        92844   1788996     95.07         0   1344332    725740      9.02    813912    651620        28
14:30:01        92408   1789432     95.09         0   1344612    725740      9.02    816392    649544        24
14:40:01        91740   1790100     95.12         0   1344876    725740      9.02    816948    649436        36
14:50:01        91688   1790152     95.13         0   1345144    725740      9.02    817136    649448        36
15:00:02        91544   1790296     95.14         0   1345448    725740      9.02    817472    649448        36
15:10:01        91108   1790732     95.16         0   1345724    725740      9.02    817732    649340        36
15:20:01        90844   1790996     95.17         0   1346000    725740      9.02    818016    649332        28
Average:        93473   1788367     95.03         0   1369583    725074      9.02    800965    671266        29

 

2. Check system load? Are my processes waiting too long to run on the CPU?

[root@server ~ ]# sar -q |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

12:00:00 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
12:00:01 AM         0       272      0.00      0.02      0.00         0
12:01:01 AM         1       271      0.00      0.02      0.00         0
12:02:01 AM         0       268      0.00      0.01      0.00         0
12:03:01 AM         0       268      0.00      0.00      0.00         0
12:04:01 AM         1       271      0.00      0.00      0.00         0
12:05:01 AM         1       271      0.00      0.00      0.00         0
12:06:01 AM         1       265      0.00      0.00      0.00         0


3. Show various CPU statistics per CPU core
 

On a multiprocessor, multi-core server it is sometimes useful (e.g. for scripting) to fetch per-processor usage history data;
this can be attained with:

 

[hipo@server ~ ] $ mpstat -P ALL
Linux 4.18.0-240.el8.x86_64 (server)       09/28/2021      _x86_64_        (8 CPU)

06:08:38 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
06:08:38 PM  all    0.17    0.02    0.25    0.00    0.05    0.02    0.00    0.00    0.00   99.49
06:08:38 PM    0    0.22    0.02    0.28    0.00    0.06    0.03    0.00    0.00    0.00   99.39
06:08:38 PM    1    0.28    0.02    0.36    0.00    0.08    0.02    0.00    0.00    0.00   99.23
06:08:38 PM    2    0.27    0.02    0.31    0.00    0.06    0.01    0.00    0.00    0.00   99.33
06:08:38 PM    3    0.15    0.02    0.22    0.00    0.03    0.01    0.00    0.00    0.00   99.57
06:08:38 PM    4    0.13    0.02    0.20    0.01    0.03    0.01    0.00    0.00    0.00   99.60
06:08:38 PM    5    0.14    0.02    0.27    0.00    0.04    0.06    0.01    0.00    0.00   99.47
06:08:38 PM    6    0.10    0.02    0.17    0.00    0.04    0.02    0.00    0.00    0.00   99.65
06:08:38 PM    7    0.09    0.02    0.15    0.00    0.02    0.01    0.00    0.00    0.00   99.70


 

sar-sysstat-cpu-statistics-screenshot

Monitor processes and threads currently being managed by the Linux kernel.

[hipo@server ~ ] $ pidstat

pidstat-various-random-process-statistics

[hipo@server ~ ] $ pidstat -d 2


pidstat-show-processes-with-most-io-activities-linux-screenshot

This report tells us that there are a few processes with heavy I/O use: the filesystem journalling daemon jbd2, apache, mysqld and supervise; in the 3rd column you see their respective PIDs.

To show the threads used inside a process (like pressing SHIFT + H inside the Linux top command):

[hipo@server ~ ] $ pidstat -t -p 10765 1 3

Linux 4.19.0-14-amd64 (server)     28.09.2021     _x86_64_    (10 CPU)

21:41:22      UID      TGID       TID    %usr %system  %guest   %wait    %CPU   CPU  Command
21:41:23      108     10765         –    1,98    0,99    0,00    0,00    2,97     1  mysqld
21:41:23      108         –     10765    0,00    0,00    0,00    0,00    0,00     1  |__mysqld
21:41:23      108         –     10768    0,00    0,00    0,00    0,00    0,00     0  |__mysqld
21:41:23      108         –     10771    0,00    0,00    0,00    0,00    0,00     5  |__mysqld
21:41:23      108         –     10784    0,00    0,00    0,00    0,00    0,00     7  |__mysqld
21:41:23      108         –     10785    0,00    0,00    0,00    0,00    0,00     6  |__mysqld
21:41:23      108         –     10786    0,00    0,00    0,00    0,00    0,00     2  |__mysqld

10765 is the Process ID whose threads you would like to list.

With pidstat, you can further monitor processes for memory leaks with:

[hipo@server ~ ] $ pidstat -r 2

 

4. Report paging statistics for a past period

 

[root@server ~ ]# sar -B -f /var/log/sa/sa27 |head -n 10
Linux 4.18.0-240.el8.x86_64 (server)       09/27/2021      _x86_64_        (8 CPU)

15:42:26     LINUX RESTART      (8 CPU)

15:55:30     LINUX RESTART      (8 CPU)

04:00:01 PM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
04:01:01 PM      0.00     14.47    629.17      0.00    502.53      0.00      0.00      0.00      0.00
04:02:01 PM      0.00     13.07    553.75      0.00    419.98      0.00      0.00      0.00      0.00
04:03:01 PM      0.00     11.67    548.13      0.00    411.80      0.00      0.00      0.00      0.00

 

5. Monitor received (RX) and transmitted (TX) network traffic per network interface in real time
 

To print out received and sent traffic per network interface 4 times in a row:

sar-sysstats-network-traffic-statistics-screenshot
 

[hipo@server ~ ] $ sar -n DEV 1 4


To continuously monitor the I/O traffic of all network interfaces:

[hipo@server ~ ] $ sar -n DEV 1


To only monitor a certain network interface, let's say the loopback interface (127.0.0.1), received / transmitted bytes:

[hipo@server yum.repos.d] $  sar -n DEV 1 2|grep -i lo
06:29:53 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
06:29:54 PM        lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00


6. Monitor block devices use
 

To check block device usage 3 times in a row:
 

[hipo@server yum.repos.d] $ sar -d 1 3


sar-sysstats-blockdevice-statistics-screenshot
 

7. Output server monitoring data in CSV database structured format


For preparing nice graphs with Excel from a CSV structured file, you can dump the collected data like so:

 [root@server yum.repos.d]# sadf -d /var/log/sa/sa27 -- -n DEV | grep -v lo | head -n 10
server-name-fqdn;-1;2021-09-27 13:42:26 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;-1;2021-09-27 13:55:30 UTC;LINUX-RESTART    (8 CPU)
# hostname;interval;timestamp;IFACE;rxpck/s;txpck/s;rxkB/s;txkB/s;rxcmp/s;txcmp/s;rxmcst/s;%ifutil
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth1;19.42;16.12;1.94;1.68;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth0;7.18;9.65;0.55;0.78;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:01:01 UTC;eth2;5.65;5.13;0.42;0.39;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth1;18.90;15.55;1.89;1.60;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth0;7.15;9.63;0.55;0.74;0.00;0.00;0.00;0.00
server-name-fqdn;60;2021-09-27 14:02:01 UTC;eth2;5.67;5.15;0.42;0.39;0.00;0.00;0.00;0.00

To graph the output data you can use Excel / LibreOffice's Excel equivalent Calc, or, if you need to dump the CSV sar output and generate graphs on the fly from a script, use gnuplot.
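
A minimal scripted sketch of that with gnuplot (the net.csv file name, the plotted column and the output image name are assumptions; column 5 corresponds to rxpck/s in the sadf header shown above):

# dump network stats to CSV, drop loopback and restart markers, then plot packets received per second
sadf -d /var/log/sa/sa27 -- -n DEV | grep -v ';lo;' | grep -v RESTART > net.csv
gnuplot -e "set datafile separator ';'; set terminal png size 1024,480; set output 'rxpck.png'; plot 'net.csv' using 5 with lines title 'rxpck/s'"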


What we've learned?


How to install and enable sysstat via cron on RedHat and CentOS 8 Linux?
How to continuously monitor CPU / disk and network, block devices, paging use, and the processes and threads managed by the kernel?
As well as how to export previously collected data to CSV to import into a database, or to use it later in order to generate a graphical presentation of the data.
Cheers ! 🙂

 

Install certbot on Debian, Ubuntu, CentOS, Fedora Linux 10 / Generate and use Apache / Nginx SSL Letsencrypt certificates

Monday, December 21st, 2020

letsencrypt certbot install on any linux distribution with apache or nginx webserver howto

Let's Encrypt is a free, automated, and open certificate authority brought to you by the nonprofit Internet Security Research Group (ISRG). The ISRG started the initiative with the goal to "encrypt the internet", i.e. to offer a free alternative to the overpriced certificates sold by domain registrars, so that more people can offer free SSL / TLS secured connections on their websites.
The Letsencrypt non-profit certificate authority backed by ISRG is actively supported by Internet industry giants such as Mozilla, Cisco, EFF (Electronic Frontier Foundation), Facebook, Google Chrome, Amazon AWS, OVH Cloud, RedHat, VMWare, GitHub and many of the other leading companies in IT.

Letsencrypt aims to automate the process of creation, validation, signing, installation, and renewal of certificates for secure websites. I.e. you don't have to manually type complicated openssl command lines on the console, passing Certificate CSR / KEY / PEM files etc. to generate Self-Signed Untrusted Authority Certificates (noted in my previous article How to generate Self-Signed SSL Certificates with openssl), or use a similar process to pay money, generate a secret key and submit it to a third party authority through their web admin interface in order to get an SSL certificate issued by GoDaddy or another Certificate Authority.

But of course, as you can guess, there are downsides: you rely on letsencrypt's set of SSL certificate automation scripts and on a third party Certificate Authority, Letsencrypt.org. A security intrusion into their certificate issuance infrastructure could be a catastrophe for your data, as a malicious party might, with some additional effort, be able to impersonate your services and see in plain text what is being sent to your Apache / Nginx or Mail server despite the certificate. Hence for high-standard environments such as PCI, as well as for paranoid security-minded admins who don't trust the mainstream, Letsencrypt is definitely not the choice. Anyway, for most small and mid-sized businesses that don't hold too much top secret data and want a moderate level of security, Letsencrypt is a great opportunity to try. But enough talk, let's get down to business.

How to install and use certbot on Debian GNU / Linux 10 Buster?
Certbot is not available from the Debian software repositories by default, but it’s possible to configure the buster-backports repository in your /etc/apt/sources.list file to allow you to install a backport of the Certbot software with APT tool.
 

1. Install certbot on Debian / Ubuntu Linux

 

root@webserver:/etc/apt# tail -n 1 /etc/apt/sources.list
deb http://ftp.debian.org/debian buster-backports main


If it is not there, append the repository to the file:

 

  • Add the buster-backports repository to /etc/apt/sources.list if it is missing

root@webserver:/ # echo 'deb http://ftp.debian.org/debian buster-backports main' >> /etc/apt/sources.list

 

  • Install the certbot nginx / apache deb packages

root@webserver:/ # apt update
root@webserver:/ # apt install certbot python-certbot-nginx python3-certbot-apache python-certbot-nginx-doc


This will install the /usr/bin/certbot python executable script which is used to register / renew / revoke / delete your domains' certificates.
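
A quick sanity check that the client and its webserver plugins were installed correctly (both are standard certbot subcommands):

root@webserver:/ # certbot --version
root@webserver:/ # certbot plugins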
 

2. Install letsencrypt certbot client on CentOS / RHEL / Fedora and other Linux Distributions

 


For RPM based and other Linux distributions you will have to install the snapd package (if not already installed) and use the snap command:

 

 

[root@centos ~ :] # yum install snapd
[root@centos ~ :] # systemctl enable --now snapd.socket

To enable classic snap support, enter the following to create a symbolic link between /var/lib/snapd/snap and /snap:

[root@centos ~ :] # ln -s /var/lib/snapd/snap /snap

snap command lets you install, configure, refresh and remove snaps.  Snaps are packages that work across many different Linux distributions, enabling secure delivery and operation of the latest apps and utilities.

[root@centos ~ :] # snap install core; sudo snap refresh core

Logout from console or Xsession to make the snap update its $PATH definitions.

Then use snap universal distro certbot classic package

 [root@centos ~ :] # snap install --classic certbot
[root@centos ~ :] # ln -s /snap/bin/certbot /usr/bin/certbot
 

 

If you have XOrg server access on the RHEL / CentOS box via Xming or another type of X emulator, you might also check out the snap-store, as it contains a multitude of installable packages which are not usually available in RPM distros.

 [root@centos ~ :] # snap install snap-store


how-to-install-snap-applications-on-centos-rhel-linux-snap-store

snap-store is powerful, and via it you can install a lot of software that is otherwise not easily installable on Linux, such as the famous Eclipse development IDE, Notepad++, Discord, the QA engineer's favourite API testing tool Postman, etc.

  • Installing certbot to any distribution via acme.sh script

Another often preferred solution to universally deploy and upgrade a LetsEncrypt client on any Linux distribution (e.g. RHEL / CentOS / Fedora etc.) is the acme.sh script. To install acme.sh you have to clone the repository and run the script with --install.

P.S. If you don't have git installed yet do

root@webserver:/ # apt-get install --yes git


and then the usual git clone to fetch it at your side

# cd /root
# git clone https://github.com/acmesh-official/acme.sh
Cloning into 'acme.sh'…
remote: Enumerating objects: 71, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 12475 (delta 39), reused 38 (delta 18), pack-reused 12404
Receiving objects: 100% (12475/12475), 4.79 MiB | 6.66 MiB/s, done.
Resolving deltas: 100% (7444/7444), done.

# sh acme.sh --install


To later upgrade acme.sh to latest you can do

# sh acme.sh --upgrade


In order to renew a concrete existing letsencrypt certificate:

# sh acme.sh --renew -d domainname.com


To renew all certificates using acme.sh script

# ./acme.sh --renew-all

 

3. Generate Apache or NGINX Free SSL / TLS Certificate with certbot tool

Now let's generate a certificate for a domain running on an Apache webserver with a website WebRoot directory of /home/phpdev/public/www:

 

root@webserver:/ # certbot --apache -d your-domain-name.com -d www.your-domain-name.com

root@webserver:/ # certbot certonly --webroot -w /home/phpdev/public/www/ -d your-domain-name.com -d other-domain-name.com


As you see, all the domains for which you need to generate the certificate are passed with the -d option.

Once the certificates are properly generated you can test them in a browser, and once you're sure they work as expected you can usually sleep safe for the next 3 months (90 days), which is the default validity for TLS / SSL Letsencrypt certificates; the reason behind it, of course, is security.
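
At any point you can list what certbot has issued so far and when each certificate expires with the certificates subcommand:

root@webserver:/ # certbot certificates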

 

4. Enable freshly generated letsencrypt SSL certificate in Nginx VirtualHost config

Go to your nginx VirtualHost configuration (i.e. /etc/nginx/sites-enabled/phpmyadmin.adzone.pro) and inside the server { } block, after the location { … } section, add the 443 TCP port SSL listener lines (as shown in the configuration below):
 

server {

….
   location ~ \.php$ {
      include /etc/nginx/fastcgi_params;
##      fastcgi_pass 127.0.0.1:9000;
      fastcgi_pass unix:/run/php/php7.3-fpm.sock;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
   }
 

 

 

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/phpmyadmin.adzone.pro/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/phpmyadmin.adzone.pro/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

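After adding the SSL listener lines, verify the syntax and reload nginx so the changes take effect:

# nginx -t
# systemctl reload nginx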
 

5. Enable new generated letsencrypt SSL certificate in Apache VirtualHost


In /etc/apache2/{sites-available,sites-enabled}/your-domain.com-ssl.conf you should have as a minimum a configuration setup like below:
 

 

NameVirtualHost *:443
<VirtualHost 123.123.123.12:443>
    ServerAdmin hipo@domain.com
    ServerName www.your-domain.com
    ServerAlias www.your-domain.com your-domain.com
 
    HostnameLookups off
    DocumentRoot /var/www
    DirectoryIndex index.html index.htm index.php index.html.var

 

 

CheckSpelling on
SSLEngine on

    <Directory />
        Options FollowSymLinks
        AllowOverride All
        ##Order allow,deny
        ##allow from all
        Require all granted
    </Directory>
    <Directory /var/www>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
##      Order allow,deny
##      allow from all
Require all granted
    </Directory>

Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/your-domain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/your-domain.com/privkey.pem
</VirtualHost>
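
On Debian / Ubuntu, enable the SSL module and the new vhost, then check the Apache syntax and reload it (the your-domain.com-ssl.conf site name follows the file path used above):

# a2enmod ssl
# a2ensite your-domain.com-ssl.conf
# apachectl configtest
# systemctl reload apache2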

 

6. Simulate a certificate renewal with --dry-run

Before the 90 day expiry approaches, it is a good idea to test how all the installed webserver certificates will be renewed and whether any issues are to be expected; this can be done with the --dry-run option.

root@webserver:/ # certbot renew --dry-run

 

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/adzone.pro/fullchain.pem (success)
  /etc/letsencrypt/live/cdn.natsr.pro/fullchain.pem (success)
  /etc/letsencrypt/live/mail.adzone.pro/fullchain.pem (success)
  /etc/letsencrypt/live/natsr.pro-0001/fullchain.pem (success)
  /etc/letsencrypt/live/natsr.pro/fullchain.pem (success)
  /etc/letsencrypt/live/phpmyadmin.adzone.pro/fullchain.pem (success)
  /etc/letsencrypt/live/www.adzone.pro/fullchain.pem (success)
  /etc/letsencrypt/live/www.natsr.pro/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

 

7. Renew a certificate from a list of multiple installed certificates

When the time comes to renew letsencrypt domain certificates, you can list them and manually choose which one you want to renew.

root@webserver:/ # certbot --force-renewal
Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate and install certificates?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: Apache Web Server plugin (apache)
2: Nginx Web Server plugin (nginx)
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: adzone.pro
2: mail.adzone.pro
3: phpmyadmin.adzone.pro
4: www.adzone.pro
5: natsr.pro
6: cdn.natsr.pro
7: www.natsr.pro
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 3
Renewing an existing certificate
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/phpmyadmin.adzone.pro

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: No redirect – Make no further changes to the webserver configuration.
2: Redirect – Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/phpmyadmin.adzone.pro

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Your existing certificate has been successfully renewed, and the new certificate
has been installed.

The new certificate covers the following domains: https://phpmyadmin.adzone.pro

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=phpmyadmin.adzone.pro
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

IMPORTANT NOTES:
 – Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/phpmyadmin.adzone.pro/fullchain.pem

   Your key file has been saved at:
   /etc/letsencrypt/live/phpmyadmin.adzone.pro/privkey.pem
   Your cert will expire on 2021-03-21. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 – If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

 

8. Renew all present SSL certificates

root@webserver:/ # certbot renew

Processing /etc/letsencrypt/renewal/www.natsr.pro.conf
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Cert not yet due for renewal

 

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/adzone.pro/fullchain.pem expires on 2021-03-01 (skipped)
  /etc/letsencrypt/live/cdn.natsr.pro/fullchain.pem expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/mail.adzone.pro/fullchain.pem expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/natsr.pro-0001/fullchain.pem expires on 2021-03-01 (skipped)
  /etc/letsencrypt/live/natsr.pro/fullchain.pem expires on 2021-02-25 (skipped)
  /etc/letsencrypt/live/phpmyadmin.adzone.pro/fullchain.pem expires on 2021-03-21 (skipped)
  /etc/letsencrypt/live/www.adzone.pro/fullchain.pem expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/www.natsr.pro/fullchain.pem expires on 2021-03-01 (skipped)
No renewals were attempted.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

 

 

9. Renew all existing server certificates from a cron job


The certbot package will install a cron file under /etc/cron.d/certbot that attempts a renewal every 12 hours; however, from my experience this script often does not work. The script looks similar to the one below:

# Upgrade all existing SSL certbot machine certificates

 

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew
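
If a service needs to pick up the renewed certificate, certbot's renew subcommand also supports a --deploy-hook that runs only after a successful renewal; a possible variant of the cron entry (the reload command is an assumption, adjust it to your webserver):

0 3 * * * root certbot -q renew --deploy-hook "systemctl reload nginx"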

Another approach to renew all installed certificates, if you want to pass specific options and keep a log of what happened, is to use a tiny shell script like the one in the next section:

 

10. Auto renew installed SSL / TSL Certbot certificates with a bash loop over all present certificates

#!/bin/sh
# update SSL certificates
# iterates over each certbot generated certificate (here numbered 1 to 104), triggers a renewal
# and logs what happened to a log file (an ugly hack for certbot certificate renewal)
for i in $(seq 1 104); do
    echo "Updating $i SSL Cert" | tee -a /root/certificate-update.log
    yes "$i" | certbot --force-renewal | tee -a /root/certificate-update.log 2>&1
    sleep 5
done

Note: the seq 1 104 range depends on the count of SSL certificates installed on the machine; you can see it and set the proper value for your case by running certbot --force-renewal once interactively.
 

How to Configure Nginx as a Reverse Proxy Load Balancer on Debian, CentOS, RHEL Linux

Monday, December 14th, 2020

set-up-nginx-reverse-proxy-howto-linux-logo

What is reverse Proxy?

A Reverse Proxy (RP) is a proxy server which routes all incoming traffic to a secondary webserver situated behind the reverse proxy.

Then all replies from the secondary webserver (which is not visible from the internet) get routed back through the Reverse Proxy service. The result is that it looks like all incoming and outgoing HTTP requests are served from the Reverse Proxy host, where in reality the reverse proxy host just redirects traffic. The problem with reverse proxies is that they are one more point of failure; the good side is that they can protect the server located behind them and route only certain traffic to it, shielding it from crackers' malicious HTTP requests.

The reverse proxy accepts all traffic and forwards it to a specific resource, like a server or container. Earlier I've blogged on how to create an Apache reverse proxy to Tomcat.
Having a reverse proxy with Apache is the classical scenario, however NGINX is slowly taking the lead and overtaking Apache due to its ease of configuration, its high reliability and lower consumption of resources.


One of the most common uses of a Reverse Proxy is as a software Load Balancer for traffic towards another webserver or directly to a backend server. Using an RP to mitigate DDoS attacks from hacked computers in botnets (coming from a network range) is a very common anti-DDoS protection measure.
With the bloom of VM and containerization technology such as docker, and the trend of switching services towards micro-services, a reverse proxy is often used to seamlessly redirect incoming request traffic to multiple separate docker containers etc.


Some of the other security pros of using a Reverse proxy that could be pointed are:

  • Restrict access to locations that may be obvious targets for brute-force attacks, reducing the effectiveness of DDOS attacks by limiting the number of connections and the download rate per IP address. 
  • Cache pre-rendered versions of popular pages to speed up page load times.
  • Interfere with other unwanted traffic when needed.

 


what-is-reverse-proxy-explained-proxying-tubes

 

1. Install nginx webserver


Assuming you have Debian at hand as the OS which will be used for the Nginx RP host, let's install nginx.
 

[hipo@proxy ~]$ sudo su – root

[root@proxy ~]#  apt update

[root@proxy ~]# apt install -y nginx


Fedora / CentOS / Redhat Linux could use yum or dnf to install NGINX
 

[root@proxy ~]# dnf install -y nginx
[root@proxy ~]# yum install -y nginx

 

2. Launch nginx for a first time and test


Start nginx once to test that the default configuration is reachable:
 

systemctl enable --now nginx


To test that nginx works out of the box right after the install, open a browser and go to http://localhost if you have X, or use a text based browser such as lynx or a console fetcher such as curl to verify that the web server is running as expected.
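
For example, a quick headers-only check from the console:

[root@proxy ~]# curl -I http://localhost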

nginx-test-default-page-centos-linux-screenshot
 

3. Create Reverse proxy configuration file

Remove default Nginx package provided configuration

As of 2020, by default nginx loads its configuration from the file /etc/nginx/sites-enabled/default on DEB based Linuxes and from /etc/nginx/nginx.conf on RPM based ones; therefore, to set up our desired configuration and stop the default domain config from being loaded, we have to unlink it on Debian:

[root@proxy ~]# unlink /etc/nginx/sites-enabled/default

or move out the original nginx.conf on Fedora / CentOS / RHEL:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-distro

 

Let's take a scenario where you have a local IP address that's NAT-ted or DMZ-ed, and you want to run nginx as a reverse proxy to accelerate traffic and forward all traffic to another webserver such as Lighttpd / Apache, or towards a Java application server (JBoss / Tomcat etc.) that listens on the same host on, let's say, port 8000 and serves the application under /application/.

To do so, prepare a new /etc/nginx/nginx.conf to look like so:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx.conf.bak
[root@proxy ~]# vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 10240;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

 

events {
#       worker_connections 768;
        worker_connections 4096;
        multi_accept on;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        #keepalive_timeout 65;
        keepalive_requests 1024;
        client_header_timeout 30;
        client_body_timeout 30;
        keepalive_timeout 30;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/domain.com/access.log;
        error_log /var/log/nginx/domain.com/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

 include /etc/nginx/default.d/*.conf;

upstream myapp {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
# Uncomment and use Load balancing with external FQDNs if needed
#  server srv1.example.com;
#   server srv2.example.com;
#   server srv3.example.co

}

#mail {
#       # See sample authentication script at:
#       # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}

 

In the example above, there are 4 instances of the same application running on 127.0.0.1 on different ports; just for the sake of illustration, load balancing across Fully Qualified Domain Names (FQDNs) is also supported, and you can see the 3 commented application instances srv1-srv3.
When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp, and nginx applies HTTP load balancing to distribute the requests. The reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, memcached, and gRPC.
To configure load balancing for HTTPS instead of HTTP, just use "https" as the protocol.
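
Note that the sample reverse proxy vhost further below proxies directly to a single backend; to actually balance requests across the upstream group defined above, you would point proxy_pass at the upstream name instead, roughly like this (a minimal sketch, the location path is just an example):

    location /application {
        proxy_pass http://myapp;
    }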


To download the above nginx.conf, configured for high traffic servers and with support for Nginx virtualhosts, click here.

Now let's prepare a separate file for the reverse proxy nginx configuration; all files with a .conf extension stored under /etc/nginx/default.d/ (and /etc/nginx/sites-enabled/) are read, as instructed by the include lines in /etc/nginx/nginx.conf:

We'll need to prepare a sample nginx reverse proxy VirtualHost configuration:

[root@proxy ~]# vim /etc/nginx/sites-available/reverse-proxy.conf

server {

        listen 80;
        listen [::]:80;


 server_name domain.com www.domain.com;
#index       index.php;
# fallback for index.php usually this one is never used
root        /var/www/domain.com/public    ;
#location / {
#try_files $uri $uri/ /index.php?$query_string;
#}

        location / {
                    proxy_pass http://127.0.0.1:8080;
  }

 

location /application {
proxy_pass http://domain.com/application/ ;

proxy_http_version                 1.1;
proxy_cache_bypass                 $http_upgrade;

# Proxy headers
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        "upgrade";
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host  $host;
proxy_set_header X-Forwarded-Port  $server_port;

# Proxy timeouts
proxy_connect_timeout              60s;
proxy_send_timeout                 60s;
proxy_read_timeout                 60s;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

}

##listen 443 ssl;
##    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
##    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
##    include /etc/letsencrypt/options-ssl-nginx.conf;
##    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

}

Get above reverse-proxy.conf from here

As you see, the config makes all incoming traffic towards the root ( / ) location for domain http://domain.com on port 80 on the Nginx webserver be passed on to the backend at http://127.0.0.1:8080.

      location / {
                    proxy_pass http://127.0.0.1:8080;
  }


Another part of the configuration reverse proxies domain.com/application to the backend webserver's /application/ location.

 

location /application {
proxy_pass http://domain.com/application/ ;

proxy_http_version                 1.1;
proxy_cache_bypass                 $http_upgrade;

# Proxy headers
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        "upgrade";
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host  $host;
proxy_set_header X-Forwarded-Port  $server_port;

# Proxy timeouts
proxy_connect_timeout              60s;
proxy_send_timeout                 60s;
proxy_read_timeout                 60s;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

}

– Enable new configuration to be active in NGINX

 

[root@proxy ~]# ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf

 

4. Test reverse proxy nginx config for syntax errors

 

[root@proxy ~]# nginx -t

 

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Test connectivity to the listening external IP address.
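
For example, from another host on the network (the address below is a placeholder for your proxy's external IP or hostname):

$ curl -I http://your-external-ip/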

 

5. Enable nginx SSL letsencrypt certificates support

 

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install software-properties-common

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install python-certbot-nginx

 

6. Generate NGINX Letsencrypt certificates

 

[root@proxy ~]# certbot --nginx

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for your.domain.com
Waiting for verification…
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/reverse-proxy.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: No redirect – Make no further changes to the webserver configuration.
2: Redirect – Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/reverse-proxy.conf

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Congratulations! You have successfully enabled https://your.domain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

 

7. Set NGINX Reverse Proxy to auto-start on Linux server boot

On most modern Linux distros use systemctl; for legacy machines, depending on the Linux distribution, use the adequate runlevel symlink under /etc/rc3.d/ on Debian based distros, and on Fedora / CentOS / RHEL and other RPM based ones use the RedHat chkconfig command.

 

[root@proxy ~]# systemctl start nginx
[root@proxy ~]# systemctl enable nginx

 

8. Fixing weird connection permission denied errors


If you get weird permission denied errors right after you have configured the proxy_pass on Nginx and you're wondering what is causing them, you have to know that by default on CentOS 7 and RHEL you'll get these errors due to the automatically enabled SELinux OS security hardening.

If this is the case, after you set up Nginx + HTTPD or whatever application server, you will get errors in the nginx error log (e.g. /var/log/nginx/error.log) like:

2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:02 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"


The solution to the weird proxy_pass permission denied errors is to allow nginx to make network connections via the SELinux boolean:

[root@proxy ~]# setsebool -P httpd_can_network_connect 1

To permanently allow nginx and httpd through SELinux, generate and load custom policy modules from the audit log:

[root@proxy ~]# cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx
[root@proxy ~]# semodule -i mynginx.pp

 

[root@proxy ~]# cat /var/log/audit/audit.log | grep httpd | grep denied | audit2allow -M myhttpd
[root@proxy ~]# semodule -i myhttpd.pp


Then, to let nginx and httpd (or whatever other app you run) become aware of the new settings, restart both:

[root@proxy ~]# systemctl restart nginx
[root@proxy ~]# systemctl restart httpd

VIM Project (VI Improved) IDE Editor extension to facilitate web development with the vi enhanced editor

Wednesday, August 25th, 2010

I use VIM as an editor of choice for many years already.
Yet it's only recently that I started using it for PHP ZF (Zend Framework) web development.

A few days ago I blogged about How to configure vimrc for PHP syntax highlighting (a nicely pre-configured vimrc to improve the daily text editing experience).

These enhancements significantly improve the overall PHP code editing with VIM. However I felt something was still missing, because I didn't have the power and functionality of a complete IDE like, for instance, the Eclipse IDE.

I was pretty sure that VIM had to have a way to be used in a fashion similar to a fully functional IDE, and looked around the net for VIM plugins that would add an IDE like coding interface to vim.

I then came across a vim plugin called VIM Project: Organize/Navigate projects of files (like IDE/buffer explorer).

The latest VIM Project version as of the time of writing is 1.4.1 and I've mirrored it here.

The installation of the VIM Project extension is pretty straightforward; to install it and start using it on your PC, issue the commands:

1. Install the project VIM add-on

debian:~$ wget https://www.pc-freak.net/files/project-1.4.1.tar.gz
debian:~$ mv project-1.4.1.tar.gz ~/.vim/
debian:~$ cd ~/.vim/
debian:~$ tar -zxvvf project-1.4.1.tar.gz

2. Load the plugin

Launch your vim editor and type :Project (with no space between : and P).
You will further see a screen like:

vim project entry screen

3. You will have to press C within the Project window to load a new project

Then you will have to type a directory from which to load the project's source files:

vim project enter file source directory screen

You will be prompted to type a project name, like in the screenshot below:

vim project load test project

4. Next you will have to type a CD (Current Dir) parameter
To see more about the CD parameter, consult the vim project documentation by typing :help project in the main vim pane.

The appearing screen will be something like:

vim project extension cd parameter screen

5. Thereafter you will have to type a file filter

The file filter is necessary; it instructs the vim project plugin to load all files with the specified extension within the vim project pane window.

You will see a screen like:


vim project plugin file filter screen

A short interval will follow in which all files matching the specified filter get loaded into the VIM project pane, and your Zend Framework, PHP or any other source files will be listed in a directory tree structure like in the picture shown below:

vim project successful loaded project screen

6. Saving loaded project hierarchy state

In order to save a state of a loaded project within the VIM project window pane you will have to type in vim, let's say:

:saveas .projects/someproject

Later on to load back the saved project state you will have to type in vim :r .projects/someproject

You will now have an almost fully functional development IDE on top of your plain vim text editor.

You can easily navigate the files loaded into the Project pane and select a file you would like to open. Whenever a source file is opened and you are working on it, switch between the Project file listing pane and the opened source code file by pressing CTRL+w twice (C-w C-w in vim notation).

To make your PHP web development with vim even more sophisticated, you can add the following lines to your ~/.vimrc file:

" run file with PHP CLI (CTRL-M)
:autocmd FileType php noremap <C-M> :w!<CR>:!/usr/bin/php %<CR>
" PHP parser check (CTRL-L)
:autocmd FileType php noremap <C-L> :!/usr/bin/php -l %<CR>

In the above vim configuration directives the " character starts a comment line and the :autocmd lines are the actual vim declarations.
The first :autocmd … declaration instructs vim to execute the currently opened PHP source file with the php CLI interpreter whenever CTRL+M (C-m) is pressed.

The second :autocmd … adds a shortcut to vim, so whenever the CTRL+L (C-l) key combination is pressed the VIM editor checks the currently edited source file for syntax errors.
This lets you very easily and regularly check whether your file's syntax is correct.
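For reference, the same syntax check that the CTRL+L mapping triggers can be run manually from the shell; test.php below is just a placeholder file name used for the example:

debian:~$ /usr/bin/php -l test.php
No syntax errors detected in test.php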

Well, these things were really helpful to me, so I hope they will be useful for you as well.
Cheers 🙂

Preparing your Linux to work with the Cloud providers – Installing aws, gcloud, az, oc, cf CLI Cloud access command interfaces

Wednesday, October 10th, 2018

howto Install-Cloud-access-tools-for-google-aws-azure-openshift-cloud-foundryCloud_computing-explained-on-linux.svg

If you're a sysadmin / developer whose boss requires a migration of stored data, database structures or web objects to Amazon Web Services / Google Cloud, or you happen to be a DevOps engineer, you will certainly need at minimum the Amazon AWS and Google Cloud clients installed, to handle daily routines and script the management of cloud resources without having to use the Web GUI interface.

Here is how to install the aws, gcloud, oc, az and cf next to your kubernetes client (kubectl) on your Linux Desktop.
 

1. Install Google Cloud gcloud (to manage Google Cloud platform resources and developer workflow)
 

google-cloud-logo

Here are a few commands to run to install the gcloud, gcloud alpha, gcloud beta, gsutil, and bq commands to manage your Google Cloud from the CLI.

a.) On Debian / Ubuntu / Mint or any other deb based distro

# Create environment variable for correct distribution
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"

 

# Add the Cloud SDK distribution URI as a package source
# echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

 

# Import the Google Cloud Platform public key
$ sudo curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

 

# Update the package list and install the Cloud SDK
$ sudo apt-get update && sudo apt-get install google-cloud-sdk


b) On CentOS, RHEL, Fedora Linux and other rpm based ones
 

$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

# yum install google-cloud-sdk

 

That's all; the command line client to talk to Google Cloud's API, gcloud, is now installed under
/usr/bin/gcloud
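A quick sanity check that the client really works, plus the first-time account setup, can be done with the two commands below (gcloud init interactively walks you through authentication and default project selection):

$ gcloud version
$ gcloud init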

Latest install instructions of Google Cloud SDK are here.


2. Install AWS Cloud command line interface tool for managing AWS (Amazon Web Services)
 

AmazonWebservices_Logo.svg

The AWS client depends on Python pip, so before you proceed you will have to install the python-pip deb package; on Debian / Ubuntu Linux use apt:

 

# apt-get install --yes python-pip

 

It is also possible to install the newest version of pip with get-pip.py, a tiny bootstrap script provided by the Python Packaging Authority:

 

# curl -O https://bootstrap.pypa.io/get-pip.py
# python get-pip.py --user

 

# pip install awscli --upgrade --user
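Because of the --user flag the aws binary lands under ~/.local/bin, so make sure that directory is in your PATH; then a minimal check and the initial credentials setup look like this:

$ export PATH=$HOME/.local/bin:$PATH
$ aws --version
$ aws configure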

 

3. Install Azure Cloud Console access CLI command interface
 

Microsoft_Azure_Cloud-Logo.svg

On Debian / Ubuntu or any other deb based distro:

$ AZ_REPO=$(lsb_release -cs)
$ echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
  sudo tee /etc/apt/sources.list.d/azure-cli.list

$ curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install apt-transport-https azure-cli

 

Finally, to check that the Azure CLI is properly installed, run a simple login with:

 

$ az login

 


On CentOS / RHEL / Fedora the equivalent steps are:

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
$ sudo yum install azure-cli

$ az login
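If the login succeeds, listing the subscriptions the account can see is a simple way to confirm the CLI can really talk to the Azure API (assuming the account has at least one subscription):

$ az account list --output table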


For the latest install instructions check Microsoft's Azure CLI documentation here

4. Install OpenShift OC CLI tool to access OpenShift Open Source Cloud

 

OpenShift-Redhat-cloud-platform

Even though OpenShift has its original Red Hat produced package binaries, if you're not on an RPM based distro it is probably
best to install the official latest version from the openshift GitHub repo.


As of the time of writing this article, this is done with:

 

# wget https://github.com/openshift/origin/releases/download/v1.5.1/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz
# tar -xvf openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz

 

# mv openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit oc-tool

 

# cd oc-tool
# echo 'export PATH=$HOME/oc-tool:$PATH' >> ~/.bashrc
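For the new PATH entry to take effect in the current shell, re-read ~/.bashrc and check the client version (a minimal sanity check, assuming the oc-tool directory ended up in your home directory as in the steps above):

# source ~/.bashrc
# oc version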

 

To test openshift, try to login to OpenShift cloud:

 

$ oc login
Server [https://localhost:8443]: https://128.XX.XX.XX:8443


Latest install instructions on OC here

5. Install Cloud Foundry cf CLI Cloud access tool

cloud-foundry-cloud-logo

a) On Debian / Ubuntu Linux based distributions, do run:

 

$ wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | sudo apt-key add -
$ echo "deb https://packages.cloudfoundry.org/debian stable main" | sudo tee /etc/apt/sources.list.d/cloudfoundry-cli.list
$ sudo apt-get update
$ sudo apt-get install cf-cli

 

b) On Red Hat Enterprise Linux / CentOS and Fedora

 

$ sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
$ sudo yum install cf-cli
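Regardless of the distro, a quick check that the binary is installed, plus a login against your platform's API endpoint; api.example.org below is just a placeholder, so replace it with the endpoint your Cloud Foundry provider gives you:

$ cf version
$ cf login -a https://api.example.org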


For the latest install instructions on the cf CLI check Cloud Foundry's install site

There are plenty of other Cloud providers, with their number growing exponentially, and most have their own custom CLI access tools, but as their use is not as common as the 5 mentioned above, I've omitted them. If you're interested in the complete list of Cloud Providers offering Cloud Services check here.

6. Install Ruby GEMs RHC tools collection

If you have to work with Redhat Cloud Storage / OpenShift you will perhaps also want to install the RHC (Red Hat Cloud) tools collection.

Assuming that the Linux system is running an up-to-date version of the Ruby programming language, do run:

 

 

root@jeremiah:~# gem install rhc
Fetching: net-ssh-5.0.2.gem (100%)
Successfully installed net-ssh-5.0.2
Fetching: net-ssh-gateway-2.0.0.gem (100%)
Successfully installed net-ssh-gateway-2.0.0
Fetching: net-ssh-multi-1.2.1.gem (100%)
Successfully installed net-ssh-multi-1.2.1
Fetching: minitar-0.7.gem (100%)
The `minitar` executable is no longer bundled with `minitar`. If you are
expecting this executable, make sure you also install `minitar-cli`.
Successfully installed minitar-0.7
Fetching: hashie-3.6.0.gem (100%)
Successfully installed hashie-3.6.0
Fetching: powerbar-1.0.18.gem (100%)
Successfully installed powerbar-1.0.18
Fetching: minitar-cli-0.7.gem (100%)
Successfully installed minitar-cli-0.7
Fetching: archive-tar-minitar-0.6.1.gem (100%)
'archive-tar-minitar' has been deprecated; just install 'minitar'.
Successfully installed archive-tar-minitar-0.6.1
Fetching: highline-1.6.21.gem (100%)
Successfully installed highline-1.6.21
Fetching: commander-4.2.1.gem (100%)
Successfully installed commander-4.2.1
Fetching: httpclient-2.6.0.1.gem (100%)
Successfully installed httpclient-2.6.0.1
Fetching: open4-1.3.4.gem (100%)
Successfully installed open4-1.3.4
Fetching: rhc-1.38.7.gem (100%)
===========================================================================

 

If this is your first time installing the RHC tools, please run 'rhc setup'

===========================================================================
Successfully installed rhc-1.38.7
Parsing documentation for net-ssh-5.0.2
Installing ri documentation for net-ssh-5.0.2
Parsing documentation for net-ssh-gateway-2.0.0
Installing ri documentation for net-ssh-gateway-2.0.0
Parsing documentation for net-ssh-multi-1.2.1
Installing ri documentation for net-ssh-multi-1.2.1
Parsing documentation for minitar-0.7
Installing ri documentation for minitar-0.7
Parsing documentation for hashie-3.6.0
Installing ri documentation for hashie-3.6.0
Parsing documentation for powerbar-1.0.18
Installing ri documentation for powerbar-1.0.18
Parsing documentation for minitar-cli-0.7
Installing ri documentation for minitar-cli-0.7
Parsing documentation for archive-tar-minitar-0.6.1
Installing ri documentation for archive-tar-minitar-0.6.1
Parsing documentation for highline-1.6.21
Installing ri documentation for highline-1.6.21
Parsing documentation for commander-4.2.1
Installing ri documentation for commander-4.2.1
Parsing documentation for httpclient-2.6.0.1
Installing ri documentation for httpclient-2.6.0.1
Parsing documentation for open4-1.3.4
Installing ri documentation for open4-1.3.4
Parsing documentation for rhc-1.38.7
Installing ri documentation for rhc-1.38.7
Done installing documentation for net-ssh, net-ssh-gateway, net-ssh-multi, minitar, hashie, powerbar, minitar-cli, archive-tar-minitar, highline, commander, httpclient, open4, rhc after 10 seconds
13 gems installed
root@jeremiah:~#

To start with rhc next do:
 

rhc setup
rhc app create my-app diy-0.1


and play with it to install software and create services on the Redhat cloud.

 

 

Closure

These are just a few of the numerous tools available and I definitely understand there is much more to be said on the topic.
If you can remember other interesting Cloud tools, or tips about stuff to do on a freshly installed Linux PC to make life easier as a Cloud / PaaS / SaaS / DevOps engineer, please drop a comment.

Installing the phpbb forum on Debian (Squeeze/Sid) Linux

Saturday, September 11th, 2010

howto-easily-install-phpbb-on-debian-gnu-linux

I've just installed the phpbb forum on a Debian Linux machine because we needed a good, quick-to-install communication medium in order to improve our internal communication in a student project in Strategic HR we're developing right now at Arnhem Business School.

Here are the exact steps I followed to have it properly installed:

1. Install the phpbb3 debian package
This was pretty straight forward:

debian:~# apt-get install phpbb3

At this point of the installation I faced a dpkg-reconfigure phpbb deb package configuration issue:
I was prompted to pass in the credentials for my MySQL password right after I selected MySQL as my preferred database backend.
I fed in my MySQL root password as well as my preferred forum database name, however the database installation failed because, somehow, the configuration procedure tried to connect to my MySQL database with the htcheck user.
I guess this has to be a bug in the package itself, or something from a previous installation misconfigured the way the Debian database backend configuration was operating.
My assumption is that the culprit was my previously installed htcheck package, or something I did right after installing the htcheck and htcheck-php packages.

After the package configuration failed, the package still had a status of properly installed when I reviewed it with dpkg.
I thought about trying to manually reconfigure it using the dpkg-reconfigure Debian command and gave it a try like this:

debian:~# dpkg-reconfigure phpbb3

This time, along with the other fields I had to fill in the ncurses interface, I was prompted for a username before the password prompt appeared.
Logically I tried to fill in root, as it's my MySQL user with global privileges.
However that didn't help at all and again the configuration tried to send the credentials with the user htcheck to my MySQL database server.
To deal with the situation I had to approach it the good old manual way.

2. Manually prepare / create the required phpbb forum database

To complete that, I connected to the MySQL server with the mysql client and created the proper database like so:

debian:~# mysql -u root -p
mysql>
CREATE database phpbb3forum;

3. Use phpmyadmin or the mysql client command line to create a new user for the phpbb forum

Here, since adding the user via phpmyadmin was way easier, I decided to go that route; using the mysql CLI is also an option, as sketched right below.
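For completeness, here is roughly what the mysql CLI equivalent would look like; the user name phpbb3 and the database phpbb3forum match the earlier steps, and 'your-password-here' is obviously a placeholder to replace:

debian:~# mysql -u root -p
mysql> CREATE USER 'phpbb3'@'localhost' IDENTIFIED BY 'your-password-here';
mysql> GRANT ALL PRIVILEGES ON phpbb3forum.* TO 'phpbb3'@'localhost';
mysql> FLUSH PRIVILEGES;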

From phpmyadmin it's pretty easy to add a new user and grant privileges to a certain database; to do so navigate to:

Privileges -> Add a new user

Now type your User name:, Host, Password, Re-type password; also for Host: you have to choose Local from the drop down menu.

Leave the Database for user field empty, as we have already created our desired database in step 2 of this article.

Now press the "Go" button and the user will get created.

Afterwards, choose the Privileges menu right at the bottom of the page once again and select, through the checkbox, the username you have just created; let's say the previously created user is phpbb3.

Go to Action (there is a picture with a man and a pencil on the right side of this button).

Scroll down to the part of the page saying Database-specific privileges and in the field Add privileges on the following database: fill in your previously created database name, in our case phpbb3forum,

and then press the "Go" button once again.
A page will appear where you will have to select the exact privileges you would like to grant on the specific selected database.
For simplicity just check all the checkboxes to grant as many privileges to your database user as you can.
Then again press the "Go" button, and there you go: you should now have a username and database ready to go with your new phpbb forum.

4. Create a VirtualHost if you would like to have the forum on a subdomain or a separate domain

If you decide to have the forum on a separate sub-domain or domain, as I did, you will have to add some kind of VirtualHost either to your Apache configuration /etc/apache2/apache2.conf or to the place where virtualhosts officially live on Debian Linux, /etc/apache2/sites-available.
I've personally created a new file, for instance /etc/apache2/sites-available/mysubdomain.mydomain.com

Here is an example content of the new Virtualhost:

<VirtualHost *>
ServerAdmin admin-email@domain.com
ServerName mysubdomain.domain.com

# Indexes + Directory Root.
DirectoryIndex index.php index.php5 index.htm index.html index.pl index.cgi index.phtml index.jsp index.py index.asp

DocumentRoot /usr/share/phpbb3/www/

# Logfiles
ErrorLog /var/log/apache2/yourdomain/error.log
CustomLog /var/log/apache2/yourdomain/access.log combined
# CustomLog /dev/null combined
<Directory /usr/share/phpbb3/www/>
Options FollowSymLinks MultiViews -Includes ExecCGI
AllowOverride All
Order allow,deny
allow from all
</Directory>
</VirtualHost>

In the above VirtualHost just change the values for ServerAdmin, ServerName, DocumentRoot, ErrorLog, CustomLog and the Directory declaration to adjust it to your situation. Also note the site has to be enabled, as shown right below.
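Since the file lives in sites-available, Debian's Apache will only pick it up once the site is enabled (symlinked into sites-enabled), which is normally done with a2ensite:

debian:~# a2ensite mysubdomain.mydomain.com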

5. Restart the Apache webserver for the new Virtualhost to take affect

debian:~# /etc/init.d/apache2 restart

Now accessing http://mysubdomain.domain.com should display the installed phpbb3 forum.
The default username and password you can use straight away for your forum are:

username: admin
password: admin

So far so good, by now you have the phpBB3 forum properly installed and running. However, if you try to register a new user in the forum, you will notice that it's impossible because of a terribly ugly message reading:

Sorry but this board is currently unavailable.

I spent a few minutes online scraping through the forums before I understood what I had to do to stop that annoying message from appearing and allow new users to register in the phpbb forum.

The solution came naturally and was a setting that had to be changed with the forum admin account. Thus, log in as admin and look at the bottom of the page; below the text reading Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group you will notice a link named Administration Control Panel.
Press it, and a whole bunch of menus will appear on the screen allowing you to do numerous things; however, what you will have to do is go to
Board Settings -> Disable Board

and change the radio button there to say No

That's all; now your forum is ready to go and your users can freely register. If the server where the forum is installed already has a running mail server, they will receive emails with registration data concerning their new registrations in your new phpbb forum.
Cheers and Enjoy your new shiny phpbb Forum 🙂

Install TeamViewer on latest Debian, Ubuntu, Fedora, CentOS Linux quick how to

Tuesday, October 31st, 2017

teamviewer-howto-install-on-gnu-linux-teamviewer-and-tux-penguin-logo

If you're a sysadmin who uses GNU / Linux as a desktop like me, you will certainly need to have TeamViewer installed and ready for use on your Linux desktop.

Even though TeamViewer is a proprietary application and I prefer not to use it, I'm forced to have it installed, because every now and then a friend or customer requires me to log in remotely to his Windows server and clean up the system either from spyware or viruses, or just deploy some new software.

Nowadays most people are running a 64-bit (amd64) built operating system, and the problem with TeamViewer on 64-bit Linux is that there is no actual full-featured 64-bit port of the application, only a 32-bit build; besides that, a big part of TeamViewer's components run using wine Windows emulation, hence making it work on Linux is sometimes not as trivial as we might desire.

Because TeamViewer is a 32-bit application, it has a number of dependency libraries that are 32-bit on Linux, the so called i386 built libraries (packages).

Hence, to make TeamViewer work on modern GNU / Linux operating systems such as Debian / Ubuntu / Mint Linux / Fedora / CentOS etc., it is necessary to have some i386 libraries and other 32-bit bits pre-installed, and only then can you have a working copy of TeamViewer on your Linux.

1. Installing i386 applications required for TeamViewer operation

– On Debian / Ubuntu / Kubuntu / Xubuntu Linux run below commands:

First we need to add the i386 architecture to the list of architectures dpkg supports:

 

# dpkg --add-architecture i386
# apt update
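To double-check that the 32-bit architecture was really added, dpkg can print the list of enabled foreign architectures; the expected output is shown for reference:

# dpkg --print-foreign-architectures
i386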

 

Then, on Debian and other deb based Linux distros, we need to install the following libraries:
 

# apt install libjpeg62-turbo:i386 wget gdebi-core
 

 


2. Download latest teamviewer version from TeamViewer website

– On Debian, Ubuntu and other deb based Linux distros.

Download latest teamviewer version and install it:

 

# wget https://download.teamviewer.com/download/teamviewer_i386.deb

 

 

 

On CentOS, Fedora, OpenSuSE and other RPM based distros:

Download the Teamviewer package and package signature using wget

 

# wget https://download.teamviewer.com/download/linux/signature/TeamViewer2017.asc
# wget https://download.teamviewer.com/download/teamviewer.i686.rpm


 

 

3. Install teamviewer with gdebi (a simple tool to install deb files and resolve their dependencies)

 

# gdebi teamviewer_i386.deb


Remote control and meeting solution.
 TeamViewer provides easy, fast and secure remote access and meeting solutions
 to Linux, Windows PCs, Apple PCs and various other platforms,
 including Android and iPhone.
 .
 TeamViewer is free for personal use.
 You can use TeamViewer completely free of charge to access your private
 computers or to help your friends with their computer problems.
 .
 To buy a license for commercial use, please visit http://www.teamviewer.com
 .
 This package contains Free Software components.
 For details, see /opt/teamviewer/doc/license_foss.txt
Do you want to install the software package? [y/N]:y

 

On Fedora, CentOS, SuSE RPM based ones:

 

# rpm --import TeamViewer2017.asc

 

 

# rpm -i teamviewer_12.0.xxxxx.i686.rpm

 

or, if you face some failed dependencies, you'd better use zypper (on SuSE), which will download any missing TeamViewer dependencies:

 

# zypper install teamviewer_12.0.xxxxx.i686.rpm

 

 

 

4. Start Teamviewer

 

 

teamviewer-running-on-linux-screenshot

 

linux:~$ teamviewer
Init…
CheckCPU: SSE2 support: yes
XRandRWait: No value set. Using default.
XRandRWait: Started by user.
Checking setup…
wine: configuration in '/home/hipo/.local/share/teamviewer12' has been updated.
Launching TeamViewer …
Launching TeamViewer GUI …

 

How to install and configure torbutton on Debian / Anonymizing Iceweasel, Firefox on Debian GNU/Linux

Thursday, August 5th, 2010

Tor Onion Logo

There is quite a buzz online recently about the breaches of personal privacy implied by simply browsing online.
A week ago I blogged on How to improve your web browser security for better personal identity.
Though there are probably plenty more things to be done to guarantee your anonymous identity online, the article failed to mention one very vital project related to anonymity: the Tor anonymity online project.
The project offers users the ability to be anonymous online through a complex, constantly expanding network of volunteers who voluntarily install Tor and grant access to the installed Tor server to be used as a relay / proxy from their computers.
A very thorough explanation of what Tor is can be read here.
Enabling Tor on your personal computer would at least guarantee that your browser's network traffic (requests) flows through random Tor servers located in different geographic locations around the world.
Usually the traffic to a destination host passes through 3 Tor relays; inside the Tor network the traffic is encrypted in layers, and only between the exit relay and the final destination does it travel unencrypted (unless the application itself encrypts it, e.g. with HTTPS).
This makes tracking you almost impossible if it's based on technologies like, for instance, Maxmind's GeoIP or Geonames' geographical database, because every now and then you'll appear to the end-point referrer web server as originating from a different Tor exit node IP address.

The Tor server is free software licensed under the GPL and this is also a good assurance, because everybody is able to have a look at the code, which is a further guarantee that the software doesn't include malicious ways for somebody in the middle to sniff your traffic.

The Tor project has even built a pre-bundled browser ready to be carried on a USB stick, so you can quickly start using the Tor anonymous network on any random computer anywhere.
The Tor browser page is available here; the Tor Browser Bundle for Windows is available here.
The Tor server is available for Windows, Mac OS X, Linux and BSD Unix.
Of course Tor is not perfect; it opens some other possible doors for attackers which are much less likely to occur if you don't use it, however in general you're better off with Tor than without it.

One serious reason for not using Tor might be that it's usually many times slower than normal browsing, since it routes traffic through different Tor network nodes.
So if you decide to go on and use it you'd better be patient and calm 🙂

Since I'm a Debian user and I really do value my privacy, I decided to start using Tor.
In order to start using Tor it's usually necessary to configure your browser to use the TorButton Firefox browser extension.

Nevertheless, on Debian GNU/Linux, if you go the straight way as explained on Tor's website (install the TorButton and configure it to work in cooperation with the polipo caching proxy), you will not be able to browse after simply enabling the Tor plugin.
If you try the above-mentioned approach, you're probably about to run into errors like:
"the proxy server is refusing connections"
,
Proxy error: 502 Disconnected operation and object not in cache
or
504 Connect to superquizgames.com:80 failed: SOCKS error: host unreachable
The following error occurred while trying to access http://yourwebsite.com/:
504 Connect to superquizgames.com:80 failed: SOCKS error: host unreachable

In order to properly install, configure and enable the TorButton on my Debian GNU/Linux I had to go through the following steps:

1. Install the polipo caching proxy

debian:~# apt-get install polipo

2. Download and overwrite default polipo configuration with the one from torproject.org

This is necessary in order to have polipo adapted to work with tor, so issue the following commands:

debian:~# cd /etc/polipo
debian:~# wget https://svn.torproject.org/svn/torbrowser/trunk/build-scripts/config/polipo.conf
debian:~# mv config config.bak
debian:~# mv polipo.conf config
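For reference, the important directives in that Tor-oriented polipo config are the ones that chain polipo to Tor's default SOCKS listener; they are quoted here from memory, so double-check them against the file you just downloaded:

socksParentProxy = "localhost:9050"
socksProxyType = socks5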

3. Restart polipo for the new config settings to take affect

debian:~# /etc/init.d/polipo restart

4. Install the iceweasel-torbutton browser extension

debian:~# apt-get install iceweasel-torbutton

The iceweasel-torbutton extension will also pull in the tor package, which is evidently required for the torbutton to operate.
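Before enabling the button, it might be worth confirming that the tor daemon is actually running and listening on its default SOCKS port 9050 (the port number is an assumption based on Tor's defaults):

debian:~# netstat -ntlp | grep 9050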
Now you should be ready to go; you can enable the use of Tor from the Tor button, which should appear in the bottom right corner of your browser.
It should look something similar to:

Tor Button screenshot in Iceweasel

Tor Enable/Disable Iceweasel browser Button

To test your Tor configuration you can use the Test Settings button, which is available straight from the TorButton's preferences.

From here on, it might be a good idea to play with the TorButton security settings and configure them according to your liking; bear in mind that you should have solid knowledge of how browsers work and of some basic Internet protocols before you start tampering with these options.
If you don't know what you're doing, you'd better stop and not tamper with the torbutton security options.
The only one that you will most probably want to untick is Disable plugins during Tor usage; disabling this option will allow flash video streaming to display properly, otherwise you won't be able to use Vbox etc.
Below you see a screenshot of the TorButton Security Settings dialog.

TorButton properties Dialog

To open up this dialog you need to navigate to the TorButton and choose Preferences with the right mouse button 🙂
Hope this article is informative to somebody out there.
User feedback is mostly welcome! Cheers 🙂