Posts Tagged 'linux'

Fix stale NFS mounts on a server with dmesg error log nfs: server nfs-server not responding, still trying

Saturday, March 16th, 2019

NFS_Filesystem-fix-staled-NFS-System-dmesg-error-nfs-server-not-responding-still-trying

On a server today I found a number of NFS mounts, defined through /etc/fstab, that were hanging;
 

nfs-server:~# df -hT


The command kept hanging and any attempt to access the mounted NFS directories was impossible.
The server with the hung Network File System runs SLES (SuSE Enterprise Linux 12 SP3). A short investigation in the kernel logs (dmesg) as well as /var/log/messages reveals the following errors:

 

nfs-server:~# dmesg
[3117414.856995] nfs: server nfs-server OK
[3117595.104058] nfs: server nfs-server not responding, still trying
[3117625.032864] nfs: server nfs-server OK
[3117805.280036] nfs: server nfs-server not responding, still trying
[3117835.209110] nfs: server nfs-server OK
[3118015.456045] nfs: server nfs-server not responding, still trying
[3118045.384930] nfs: server nfs-server OK
[3118225.568029] nfs: server nfs-server not responding, still trying
[3118255.560536] nfs: server nfs-server OK
[3118435.808035] nfs: server nfs-server not responding, still trying
[3118465.736463] nfs: server nfs-server OK
[3118645.984057] nfs: server nfs-server not responding, still trying
[3118675.912595] nfs: server nfs-server OK
[3118886.098614] nfs: server nfs-server OK
[3119066.336035] nfs: server nfs-server not responding, still trying
[3119096.274493] nfs: server nfs-server OK
[3119276.512033] nfs: server nfs-server not responding, still trying
[3119306.440455] nfs: server nfs-server OK
[3119486.688029] nfs: server nfs-server not responding, still trying
[3119516.616622] nfs: server nfs-server OK
[3119696.864032] nfs: server nfs-server not responding, still trying
[3119726.792650] nfs: server nfs-server OK
[3119907.040037] nfs: server nfs-server not responding, still trying
[3119936.968691] nfs: server nfs-server OK
[3120117.216053] nfs: server nfs-server not responding, still trying
[3120147.144476] nfs: server nfs-server OK
[3120328.352037] nfs: server nfs-server not responding, still trying
[3120567.496808] nfs: server nfs-server OK
[3121370.592040] nfs: server nfs-server not responding, still trying
[3121400.520779] nfs: server nfs-server OK
[3121400.520866] nfs: server nfs-server OK


It took me a short while to check the remote NetApp NFS storage filesystem and investigate the Virtual Machine that is running on top of the OpenXen Hypervisor.
The permissions of the exported NFS share were checked and were in good shape, the NFS share was re-exported, and on the Linux
mount host the following commands were run to remount the hung filesystems:

 

nfs-server:~# umount -f /mnt/nfs_share
nfs-server:~# umount -l /mnt/nfs_share
nfs-server:~# umount -lf /mnt/nfs_share1
nfs-server:~# umount -lf /mnt/nfs_share2
nfs-server:~# mount -t nfs -o remount /mnt/nfs_share


That fixed one of the hung mounts, but as I didn't want to manually remount each of the NFS filesystems, I remounted them all with:

nfs-server:~# mount -a -t nfs


This solved it, but the fix turned out not to be permanent, as after a while the issue started reoccurring. Some further investigation of the weird NFS hanging problem led me to a blog post where the same problem was described. The root cause pointed out there lies in the MTU parameter, which was quite high (MTU 9000); over the years this has proven to cause problems with NFS, especially with network router (switch) configurations that filter on MTU and pass only packets with lower MTU values. Combined with custom rsize / wsize NFS mount values in /etc/fstab, this can lead to these strange NFS hangs.
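
For illustration, such a custom NFS mount definition in /etc/fstab might look like the line below (export path, mount point and the rsize / wsize values are just example values, not the ones from the affected server):

nfs-storage:/export/nfs_share   /mnt/nfs_share   nfs   rw,hard,rsize=32768,wsize=32768,timeo=600   0  0
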

Below is a list of Maximum Transmission Unit (MTU) values per Media Transport, excerpted from Wikipedia at the time of writing this article.

http://pc-freak.net/images/Maximum-Transmission-Unit-for-Media-Transport-diagram-3.png

In my further research on the issue I came across this very interesting article which explains a lot about the "Large Internet" and Internet performance.

I used the tracepath command, which does basically the same as traceroute but can be run without the root user; it discovers hops (network routers) and shows the MTU along the path to the destination.

Below is a sample run:

nfs-server:~# tracepath bergon.net
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.6.1                                           0.909ms
 1:  192.168.6.1                                           0.966ms
 2:  192.168.222.1                                         0.859ms
 3:  6.192.104.109.bergon.net                              1.138ms reached
     Resume: pmtu 1500 hops 3 back 3

 

The optimal pmtu for this connection turns out to be 1500. traceroute in some cases might return hops with 'no reply' if a router has UDP packet filtering implemented on it.

The high MTU value for the Storage network connection interface on eth1 was evident with a simple:

 

 nfs-server:~# /sbin/ifconfig |grep -i eth -A 2
eth0      Link encap:Ethernet  HWaddr 00:16:3E:5C:65:74
          inet addr:100.127.108.56  Bcast:100.127.109.255  Mask:255.255.254.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 00:16:3E:5C:65:76
          inet addr:100.96.80.94  Bcast:100.96.83.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1


The fix was as simple as lowering the MTU value of the eth1 Ethernet interface to 1500, which is the value most network routers are configured to.

To apply the new MTU to the eth1 interface without restarting the SuSE SLES networking, I first used ifconfig once with:

 

 nfs-server:~# /sbin/ifconfig eth1 mtu 1500
 nfs-server:~# ip addr show
 …


To make the setting permanent on the next SuSE boot:

I had to set the MTU='1500' value in /etc/sysconfig/network/ifcfg-eth1 (an example snippet is shown after the output below) and verify with:

 

nfs-server:~# ip address show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 8c:89:a5:f2:e8:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global eth1
       valid_lft forever preferred_lft forever
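
For reference, a minimal /etc/sysconfig/network/ifcfg-eth1 on SLES with the lowered MTU could look roughly like this (the address matches the eth1 interface above, the other values are illustrative):

BOOTPROTO='static'
IPADDR='100.96.80.94/22'
STARTMODE='auto'
MTU='1500'
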

 


Then to remount the hung NFS filesystems I once again ran:
 

nfs-server:~# mount -a -t nfs


Many network routers keep the MTU as low as 1500 also because higher values cause IP packet fragmentation when using NFS over UDP, where IP packet fragmentation and packet
reassembly require a significant amount of CPU at both ends of the network connection.
Packet fragmentation also exposes network traffic to greater unreliability, since a complete RPC request must be retransmitted if a UDP packet fragment is dropped for any reason.
Any increase of RPC retransmissions, along with the possibility of increased timeouts, is the single worst impediment to performance for NFS over UDP.
This and much more is very well explained on the Optimizing NFS Performance page (a must-read for any sysadmin who plans to use NFS frequently).

Even though lowering the MTU (Maximum Transmission Unit) value did solve my problem, in some cases, especially on a modern local LAN with Jumbo Frames, allowing and increasing the MTU to 9000 bytes
might be a good idea, as this will increase the packet size and raise network performance; however, as always, on distant networks with many router hops keeping the MTU value as low as 1492 / 1500 is a good idea.

 

Automatic network restart and reboot Linux server script if ping to gateway times out, as a way to reduce connectivity downtimes

Monday, December 10th, 2018

automatic-server-network-restart-and-reboot-script-if-connection-to-server-gateway-inavailable-tux-penguing-ascii-art-bin-bash

Inability of the server to come back online automatically after an electricity / network outage

These days my home server is experiencing a lot of issues due to electricity power outages; construction dig operations to fix / change water pipes near my home are in action and perhaps the power cables got ruptured by the digger machine.
The effect of all this was that my server's network accessibility was affected, and as I had no network I couldn't access it remotely anymore. At a certain point the electricity was restored (and the UPS charge could keep the server up), however server accessibility did not restore until I asked a relative to restart it, or, in more complicated cases where a tech-acquainted guy has to help, Alexander (Alex), a close friend from school years (check his old site here – alex.pc-freak.net), helps a lot: to restart the machine physically, run quick restoration commands on a root TTY terminal or generally check whether the default router is reachable.

This kind of Pc-Freak.net downtime issue became too frequent over the last month (the machine was down about 5 times for 2 to 5 hours, which was too much), and weirdly enough it was not accessible from the internet even after the electricity network was restored; the only solution to that was a physical server restart (from the Power Button).

To decrease the number of cases in which relatives or friends have to physically go to the server and restart it after each network or electricity outage, I wrote a small script that checks accessibility towards the server's default network gateway with a few ICMP packets sent with the good old PING command
and triggers a network restart, and then a system reboot
(in case the network restart fails).

1. Create the reboot-if-nwork-is-down.sh script under /usr/sbin or another dir

Here is the script itself:

 

#!/bin/sh
# Script pings the DEF GW 10 times with 5 ICMP packets each and if unreachable
# triggers networking restart via /etc/init.d/networking restart
# Then does another 10 x 5 PINGs and if the ping command still returns errors,
# Reboots machine (at most 5 times in a row, then waits 30 minutes)
# This script is useful if you run a home router with Linux and you have
# electricity outages and the machine doesn't come up if not rebooted in that case

GATEWAY_HOST='192.168.0.1';

run_ping () {
for i in $(seq 1 10); do
    ping -c 5 $GATEWAY_HOST
done

}

reboot_f () {
if [ $? -eq 0 ]; then
        echo "$(date "+%Y-%m-%d %H:%M:%S") Ping to $GATEWAY_HOST OK" >> /var/log/reboot.log
    else
    /etc/init.d/networking restart
        echo "$(date "+%Y-%m-%d %H:%M:%S") Restarted Network Interfaces" >> /var/log/reboot.log
    # make sure the reboot counter file exists before comparing against it
    [ -f /tmp/rebooted.txt ] || echo 0 > /tmp/rebooted.txt
    for i in $(seq 1 10); do ping -c 5 $GATEWAY_HOST; done
    if [ $? -ne 0 ] && [ $(cat /tmp/rebooted.txt) -lt 5 ]; then
        echo "$(date "+%Y-%m-%d %H:%M:%S") Ping to $GATEWAY_HOST FAILED !!! REBOOTING." >> /var/log/reboot.log
        # increment the reboot counter (up to 5) before rebooting
        n=$(cat /tmp/rebooted.txt)
        echo $(( n + 1 )) > /tmp/rebooted.txt
        /sbin/reboot
    fi
    # if rebooted 5 times already, sleep 30 mins and reset the counter
    if [ $(cat /tmp/rebooted.txt) -eq 5 ]; then
    sleep 1800
        cat /dev/null > /tmp/rebooted.txt
    fi
fi

}
run_ping;
reboot_f;

You can download a copy of reboot-if-nwork-is-down.sh script here.
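
Before adding it to cron, make the script executable and give it a manual test run to see that it logs as expected:

chmod +x /usr/sbin/reboot-if-nwork-is-down.sh
/usr/sbin/reboot-if-nwork-is-down.sh
tail -n 5 /var/log/reboot.log
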

As you see in the script, successful runs as well as failures are logged on the server in /var/log/reboot.log with a respective timestamp.
Also a counter up to 5 is kept in /tmp/rebooted.txt, incremented on each and every script run that reboots; if the counter reaches 5,

a sleep of 30 minutes is executed and the counter is reset.
The counter check against 5 guarantees the server will not keep getting restarted if access to the gateway stays down for a long time, i.e. it prevents the system from being restarted like crazy all the time.
 

2. Create a cron job to run reboot-if-nwork-is-down.sh every 15 minutes or so 

I've set the script to re-run as a scheduled (root user) cron job every 15 minutes with the following job:

To add the script to the existing cron rules without rewriting my old cron jobs and without having to use crontab -u root -e (e.g. to do the cron job addition in non-interactive mode with a single bash one-liner), I had to run the following command:

 

{ crontab -l; echo "*/15 * * * * /usr/sbin/reboot-if-nwork-is-down.sh >/dev/null 2>&1"; } | crontab -


I know restarting a server to restore accessibility is a stupid practice, but for home use or small client servers on unguaranteed networks with cheap Uninterruptible Power Supply (UPS) devices it is useful.

Summary

Time will show how efficient such a "self-healing" script practice is.
I'm pretty sure that even in corporate businesses and large Public / Private / Hybrid Clouds where access to remotely mounted NFS / XFS / ZFS filesystems fails, a modification of the script could save you a lot of nerves and troubles and unhappy customers / managers screaming at you on the phone 🙂


I'll be interested to hear from others who have better ideas to restore (resurrect) access to an inaccessible Linux server after an outage.
 

Prevent an rsync cronjob from running multiple times on Linux

Wednesday, November 21st, 2018

prevent-rsync-rsync-to-run-multiple-times-via-cronjob-on-linux

Today I had a report of a server whose Load Average stays at the high level of 86. The machine runs on bare metal, rock solid hardware, and even with such high loads the kernel runs fine, but due to the I/O overhead the SANs read from a remote NetApp storage device started to be sluggish and hence needed to be reviewed, so I jumped into the server via the hop station (jump host).
 

1. Short investigation of the root cause of the high server load


After a short investigation, I found an rsync job set by someone as a cron job routinely run every 30 minutes; the old scheduled rsync seemed to be running multiple times on the server (about 50 processes of the same rsync file system synchronization were running) and, as expected, the storage was saddled with multiple Input / Output requests.

The root cron job was like that:
 

server:~# crontab -u root -l |grep -i rsync
/usr/bin/rsync -ax /var/www/htdocs/directory_to_synchronize/ /srv/www/synch_back/directory_to_synchrnize


A process list showed the following high number of running mirrored rsyncs:

 

server:~# ps axuwwf | grep -i rsync | wc -l
80


 

2. The Fix – Set rsync to run via cron only if it is not already running in the background


In order to fix it, I had to kill all currently running rsync processes (here luckily only the same single rsync job was running, but generally I was cautious to check that no other rsync jobs were running – otherwise I would have mistakenly killed some other ongoing rsync job …)

Then I set the following new cron job – a quick one-liner shell script that creates a lock file before rsync starts and removes it after rsync completes.
 

if [ ! -e /tmp/repo_dba_sync.lock ]; then trap 'rm -f /tmp/repo_dba_sync.lock' EXIT; touch /tmp/repo_dba_sync.lock; /usr/bin/rsync -ax /var/www/htdocs/directory_to_synchronize/ /srv/www/synch_back/directory_to_synchrnize; fi >/dev/null 2>&1


The cron job looked like so:

 

*/30 * * * * if [ ! -e /tmp/repo_dba_sync.lock ]; then trap 'rm -f /tmp/repo_dba_sync.lock' EXIT; touch /tmp/repo_dba_sync.lock; /usr/bin/rsync -ax /var/www/htdocs/directory_to_synchronize/ /srv/www/synch_back/directory_to_synchrnize; fi >/dev/null 2>&1

Just in case you're wondering:
a trap is used to ensure that the lock file is removed when the script exits for any reason.
This way the lock file will be removed even if the script exits before reaching its end.

An alternative and simpler way to do it is via:
 

pgrep rsync > /dev/null || rsync -ax /var/www/htdocs/directory_to_synchronize/ /srv/www/synch_back/directory_to_synchronize

 

Or if you don't want to use bash's
 

if [ condition ]; then … fi


conditional but still want to use a file lock, the flock command can be used like so:
 

flock -n lock_file -c "rsync …"
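
A complete cron entry built around flock could then look like the line below (the lock file path is just an example; -n makes flock give up immediately if another run still holds the lock):

*/30 * * * * flock -n /tmp/repo_dba_sync.lock -c "/usr/bin/rsync -ax /var/www/htdocs/directory_to_synchronize/ /srv/www/synch_back/directory_to_synchrnize" >/dev/null 2>&1
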

How to install custom Font files on Linux with font-viewer, fc-cache, font-manager – Install Church Slavonic fonts on GNU / Linux

Saturday, October 27th, 2018

install-custom-fonts-on-linux-easily-linux-libertine-alphabet-typography-font-u-shaped

If you're regularly using GIMP for image editing or LibreOffice for office stuff or any other program in which you add / edit fonts, then you will certainly come to a point of wondering how to manually add new .TTF (TrueType) or .AFM / .PFB fonts.
Using the apt-get install tool, multiple fonts can be found in the Debian / Ubuntu repos, but sometimes adding third-party fonts provided by some random graphics designer is a necessity.

For example, earlier I blogged on What is Church Slavonic and collected a large pack of Church Slavonic fonts which I installed at that time on a Windows 7 PC; the question is how these fonts, once downloaded, can be added / installed so that Xorg and the font-rendering programs on GNU / Linux are aware of the newly downloaded fonts and they can be used in various programs.

gnome-font-viewer-program-gnu-linux-screenshot

The easiest way to install a font in Linux is to double-click the new font you want to install, which runs the Font Viewer program in the GNOME GUI environment (when a font is clicked, gnome-font-viewer opens); however, it is a tedious task to install in that manner if you have to install some 100 or 200 new fonts by clicking each one.

To make the newly downloaded pack of fonts available on a user level, it is as simple as downloading the fonts and placing them in the $HOME fonts folder, e.g. in ~/.fonts (in some distributions placing the new fonts under ~/.local/share/fonts makes them available for use on the next X session login), as shown below.
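
For a user-level install that boils down to something like the following (assuming the downloaded fonts sit in ~/Desktop/fonts-folder, as in the system-wide example further down):

$ mkdir -p ~/.fonts
$ cp ~/Desktop/fonts-folder/*.ttf ~/.fonts/
$ fc-cache -f -v ~/.fonts
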

To make new fonts available system-wide (e.g. for all existing or logged-in Xorg users), it is as simple as copying all the new font files (TTF, PFM, PFB etc.) you'd like to add into /usr/local/share/fonts:
 

# cp -rpf ~/Desktop/fonts-folder/* /usr/local/share/fonts/


And run fc-cache to rescan and build new font cache files based on the copied fonts:

 

 fc-cache -f -v


To check whether the new fonts are present you can list all available fonts with:

 

fc-list

 

/usr/share/fonts/truetype/lato/Lato-Medium.ttf: Lato,Lato Medium:style=Medium,Regular
/usr/share/fonts/truetype/msttcorefonts/comicbd.ttf: Comic Sans MS:style=Bold,Negreta,tučné,fed,Fett,Έντονα,Negrita,Lihavoitu,Gras,Félkövér,Grassetto,
Vet,Halvfet,Pogrubiony,Negrito,Полужирный,Fet,Kalın,Krepko,Lodia
/usr/share/fonts/truetype/lato/Lato-SemiboldItalic.ttf: Lato,
Lato Semibold:style=Semibold Italic,Italic
/usr/local/share/fonts/TriKUcs.pfb: Triodion kUcs:style=Regular
/usr/share/fonts/truetype/dejavu/DejaVuSerif-Bold.ttf: DejaVu Serif:style=Bold
/usr/local/share/fonts/OglUcs8.ttf: Oglavie Ucs:style=Regular
/usr/share/fonts/truetype/noto/NotoSansThai-Regular.ttf: Noto Sans Thai:style=Regular
/usr/local/share/fonts/freefont-20080323/FreeSerifBold.ttf: FreeSerif:style=Bold,polkrepko
/usr/local/share/fonts/TITUSEN.TTF: Titus SyriacEstrangelo:style=Regular
/usr/local/share/fonts/feofanucs.ttf: Feofan Ucs:style=Regular
/usr/local/share/fonts/OstgDSoIEUcs8.ttf: Ostrog\-Dol ieUcs:style=SpacedOut
/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf: DejaVu Sans Mono:style=Book
/usr/share/fonts/truetype/noto/NotoSansCypriot-Regular.ttf: Noto Sans Cypriot:style=Regular
/usr/local/share/fonts/ZlatUcs.pfb: Zlatoust Ucs:style=Regular
..
.

 


To look for a certain font that is supposed to be installed, run:

 

fc-list|grep -i "Times New Roman"
/usr/share/fonts/truetype/msttcorefonts/Times_New_Roman.ttf: Times New Roman:style=Regular,Normal,obyčejné,Standard,Κανονικά,
Normaali,Normál,Normale,Standaard,Normalny,Обычный,Normálne,Navadno,thường,Arrunta

 

fc-list|grep -i "slavonic"
/usr/local/share/fonts/TITUSN__.TTF: Titus Slavonic:style=Normal

 



Another good tool for GNOME users is font-manager; if you don't have it already installed:

 

apt-get install font-manager


One of the cool things about it is that it can show you the licensing of each of the system-installed fonts and the full list of font character sets, and it can visualize different pixel font sizes in the so-called "waterfall" font view.

Ansible Quick Start Cheatsheet for Linux admins and DevOps engineers

Wednesday, October 24th, 2018

ansible-quick-start-cheetsheet-ansible-logo

Ansible is a widely used configuration management, deployment, and task execution system, nowadays used for mass service deployments on multiple servers and clustered environments like Kubernetes clusters (with multiple pod replicas), virtual swarms running XEN / IPKVM virtualization hosting multiple nodes, etc.

Ansible can be used to configure or deploy GNU / Linux tools and services such as Apache / Squid / Nginx / MySQL / PostgreSQL etc. It is pretty much like the Puppet (server / services lifecycle management) tool, except that it is less complicated to start with, which often makes it the choice as a tool for mass deployment (DevOps) automation.

Ansible is used for multi-node deployments and remote task execution on groups of servers; its big pro is that it does all its stuff over simple SSH to the remote nodes (servers) and does not require extra services or listening daemons as Puppet does. Combined with Docker containerization, it is used a lot for deploying inside Cloud environments such as Amazon AWS / Google Cloud Platform / SAP HANA / OpenStack etc.

Ansible-Architechture-What-Is-Ansible-Edureka

0. Installing ansible on Debian / Ubuntu Linux


Ansible is a Python script and because of that depends heavily on Python, so to make it run you will need to have a working Python installed on the local and remote servers.

Ansible is as easy to install as running the apt cmd:

 

# apt-get install --yes ansible
 

The following additional packages will be installed:
  ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
Suggested packages:
  sshpass python-jinja2-doc ipython python-netaddr-docs python-gssapi
Recommended packages:
  python-winrm
The following NEW packages will be installed:
  ansible ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
0 upgraded, 10 newly installed, 0 to remove and 1 not upgraded.
Need to get 3,413 kB of archives.
After this operation, 22.8 MB of additional disk space will be used.

apt-get install --yes sshpass

 

Installing Ansible on Fedora Linux is done with:

 

# dnf install -y ansible sshpass

 

On CentOS to install:
 

# yum install -y ansible sshpass

sshpass needs to be installed only if you plan to use ssh password prompt authentication with ansible.

Ansible is also installable via the python-pip tool; if you need to install a specific version of ansible you have to use pip instead, though the package is available in most Linux distros' repositories.

Ansible has a lot of pros and cons and there are multiple articles already written by people for and against it in favour of Chef or Puppet. I recently started learning Ansible, and the most important thing to know about it is that, though many things can be done directly using a simple command line, the tool is designed for remote installation of server services using specially prepared .yaml format configuration files. The power of Ansible comes from the use of Ansible Playbooks, which are yaml scripts that tell ansible how to do its activities step by step on the remote server. In this article, I'm giving a quick cheat sheet to start quickly with it.
 

1. Remote commands execution with Ansible
 

The first thing to do to start with it is to add the desired hostnames ansible will operate on; this can be done either globally (if you have a number of remote nodes to deploy stuff to periodically) by using /etc/ansible/hosts, or with a custom hosts file / inventory for each ansible custom script developed.

a. Ansible main config files

A common ansible /etc/ansible/hosts definition looks something like that:

 

# cat /etc/ansible/hosts
[mysqldb]
10.69.2.185
10.69.2.186
[master]
10.69.2.181
[slave]
10.69.2.187
[db-servers]
10.69.2.181
10.69.2.187
[squid]
10.69.2.184

The hosts to execute on can also be provided via the shell variable $ANSIBLE_HOSTS.

b) Are remote hosts reachable / execute commands on all remote hosts

To test whether your hosts are properly configured in /etc/ansible/hosts you can ping all defined hosts with:

 

ansible all -m ping


ansible-check-hosts-ping-command-screenshot

This makes ansible try to connect to the remote hosts; if you have properly configured SSH public key authorization, the command should return success statuses from every host.

 

ansible all -a "ifconfig -a"


If you don't have SSH keys configured you can also authenticate with a password prompt (assuming all hosts are configured with the same password) with:

 

ansible all -a "ip addr show" -u hipo --ask-pass


ansible-show-ips-ip-a-command-screenshot-linux

If you have configured groups of hosts via the hosts file you can also run certain commands on just a certain host group, like so:

 

ansible <host-group> -a <command>

It is a good idea to always check /etc/ansible/ansible.cfg, which is the system-global (main) ansible config file that is read.
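
For reference, a few of the commonly tuned options in ansible.cfg look roughly like this (the values below are illustrative, not taken from a particular setup):

[defaults]
inventory          = /etc/ansible/hosts
remote_user        = root
forks              = 10
host_key_checking  = False

[privilege_escalation]
become        = True
become_method = sudo
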

c) List defined host groups
 

ansible localhost -m debug -a 'var=groups.keys()'
ansible localhost -m debug -a 'var=groups'

d) Searching remote server variables

 

# Search remote server variables
ansible localhost -m setup -a 'filter=*ipv4*'

 

 

ansible localhost -m setup -a 'filter=ansible_domain'

 

 

ansible all -m setup -a 'filter=ansible_domain'

 

 

# uninstall package on RPM based distros
ansible centos -s -m yum -a "name=telnet state=absent"
# uninstall package on APT distro
ansible localhost -s -m apt -a "name=telnet state=absent"

 

 

2. Debugging – Listing information about remote hosts (facts) and state of a host

 

# All facts for one host
ansible <hostname> -m setup
# Only the ansible network interface facts for one host
ansible <hostname> -m setup -a 'filter=ansible_eth*'
# Only facter facts but for all hosts
ansible all -m setup -a 'filter=facter_*'


To Save outputted information per-host in separate files in lets say ~/ansible/host_facts

 

ansible all -m setup --tree ~/ansible/host_facts

 

3. Playing with Playbooks deployment scripts

 

a) Syntax Check of a playbook yaml

 

ansible-playbook <playbook.yml> --syntax-check


b) Get general info about a playbook, such as what a playbook would do on the remote hosts (tasks to run) and the hosts defined for a playbook (like the pinging above).

 

ansible-playbook <playbook.yml> --list-hosts
ansible-playbook <playbook.yml> --list-tasks


To get an idea of what a yaml playbook looks like, here is an example from the official ansible docs that deploys a simple Apache webserver on the remotely defined hosts.
 


- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service:
      name: httpd
      state: started
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted

To give it a quick try, save the file as webserver.yml and run it via the ansible-playbook command:
 

ansible-playbook -s playbooks/webserver.yml

 

The -s option instructs ansible to run the play on the remote server with super user (root) privileges.

The power of ansible is in its modules, which are constantly growing over time; a complete list of Ansible-supported modules is in its official documentation.

Ansible-running-playbook-Commands-Task-script-Successful-output-1024x536

There is a lot to say about playbooks; in brief, they have their own language elements like templates, tasks and handlers, and a playbook can have one or multiple plays inside (for instance instructions for the deployment of one or more services).

The downside of playbooks is that they're hard to write from scratch and edit, because yaml syntax is much stricter than a normal old-school sysadmin configuration file.
I got stuck with problems while modifying and writing .yaml files, and I should say the community in #ansible on irc.freenode.net was very helpful in helping me debug the obscure errors.

yamllint (the YAML linter tool) comes in handy at times when facing yaml syntax errors; to use it, install via apt:
 

# apt-get install --yes yamllint


a) Running ansible in "dry mode" – just show what ansible would do, but don't change anything
 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --check


b) Running playbook with different users and separate SSH keys

 

ansible-playbook playbooks/your_playbook.yml --user ansible-user
 
ansible -m ping hosts --private-key=~/.ssh/keys/custom_id_rsa -u centos

 

c) Running ansible playbook only for certain hostnames part of a bigger host group

 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2,host3"


d) Run Ansible on remote hosts in parallel

To run on 10 hosts in parallel:
 

# Run 10 hosts parallel
ansible-playbook <File.yaml> -f 10            


e) Passing variables to .yaml scripts using commandline

Ansible has the ability to pre-define variables in .yml playbooks. These variables can later be passed / overridden from the shell CLI; here is an example:

# Example of variable substitution passed from the command line; the vars in varsubst.yaml, if present, are defined / replaced
ansible-playbook playbooks/varsubst.yaml --extra-vars "myhosts=localhost gather=yes pkg=telnet"
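
The varsubst.yaml playbook itself isn't shown in this post; a minimal sketch that would honor those three extra vars (names assumed from the command above, with defaults) could look like:

- hosts: "{{ myhosts | default('localhost') }}"
  gather_facts: "{{ gather | default('no') }}"
  tasks:
    - name: ensure the requested package is present
      package:
        name: "{{ pkg | default('telnet') }}"
        state: present
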

 

4. Ansible Galaxy – a (Docker Hub-like) large repository of playbook (script) files

 

Ansible Galaxy has about 10000 active users who are contributing ansible automation playbooks in fields such as Development / Networking / Cloud / Monitoring / Database / Web / Security etc.

To install from ansible galaxy use ansible-galaxy

# install the geerlingguy mysql role from galaxy
ansible-galaxy install geerlingguy.mysql


The available packages you can use as a template for your purposes are not as numerous as with Puppet, as Ansible is younger and not corporate-supported like Puppet; anyhow, there are a lot and they cover most basic sysadmin needs for mass deployments. Besides, there are plenty of other unofficial yaml ansible scripts in various GitHub repos.

Preparing your Linux to work with the Cloud providers – Installing aws , gcloud, az, oc, cf CLI Cloud access command interfaces

Wednesday, October 10th, 2018

howto Install-Cloud-access-tools-for-google-aws-azure-openshift-cloud-foundryCloud_computing-explained-on-linux.svg

If you're a sysadmin / developer whose boss requires a migration of stored data, database structures or web objects to Amazon Web Services / Google Cloud, or you happen to be a DevOps Engineer, you will certainly need to have installed, as a minimum, the Amazon AWS and Google Cloud clients to do daily routines and script stuff when managing cloud resources without having to use the Web GUI interface.

Here is how to install the aws, gcloud, oc, az and cf next to your kubernetes client (kubectl) on your Linux Desktop.
 

1. Install Google Cloud gcloud (to manage Google Cloud Platform resources and developer workflow)
 

google-cloud-logo

Here are a few commands to run to install the gcloud, gcloud alpha, gcloud beta, gsutil, and bq commands to manage your Google Cloud from the CLI.

a.) On Debian / Ubuntu / Mint or any other deb based distro

# Create environment variable for correct distribution
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"

 

# Add the Cloud SDK distribution URI as a package source
# echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

 

# Import the Google Cloud Platform public key
$ sudo curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

 

# Update the package list and install the Cloud SDK
$ sudo apt-get update && sudo apt-get install google-cloud-sdk


b) On CentOS, RHEL, Fedora Linux and other rpm based ones
 

$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

# yum install google-cloud-sdk

 

That's all; now the text client to talk to Google Cloud's API, gcloud, is installed under
/usr/bin/gcloud.
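
To verify the install and authenticate against your Google account, the standard first-run steps are:

$ gcloud version
$ gcloud init
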

Latest install instructions of Google Cloud SDK are here.


2. Install AWS Cloud command line interface tool for managing AWS (Amazon Web Services)
 

AmazonWebservices_Logo.svg

The AWS client depends on Python PIP, so before you proceed you will have to install the python-pip deb package; if on Debian / Ubuntu Linux use apt:

 

# apt-get install --yes python-pip

 

It is also possible to install the newest version of PIP with the tiny bootstrap script get-pip.py:

 

# curl -O https://bootstrap.pypa.io/get-pip.py
# python get-pip.py --user

 

# pip install awscli --upgrade --user
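
Once installed (the --user install puts the binary under ~/.local/bin, so that directory should be in your PATH), a quick sanity check and credentials setup would be:

$ aws --version
$ aws configure
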

 

3. Install Azure Cloud Console access CLI command interface
 

Microsoft_Azure_Cloud-Logo.svg

On Debian / Ubuntu or any other deb based distro:

# AZ_REPO=$(lsb_release -cs)
# echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list

# curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install apt-transport-https azure-cli

 

Finally, to check that the Azure CLI is properly installed, run a simple login with:

 

$ az login

 


On RHEL / CentOS / Fedora the equivalent is:

$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
$ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
$ sudo yum install azure-cli

$ az login


For the latest install instructions check Microsoft's documentation here

4. Install OpenShift OC CLI tool to access OpenShift Open Source Cloud

 

OpenShift-Redhat-cloud-platform

Even though OpenShift has its original Red Hat-produced package binaries, if you're not on an RPM distro it is probably
best to install using the official latest version from the openshift GitHub repo.


As of the time of writing this article this is done with:

 

# wget https://github.com/openshift/origin/releases/download/v1.5.1/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz
# tar -xvf openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz

 

# mv openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit oc-tool

 

# cd oc-tool
# echo 'export PATH=$HOME/oc-tool:$PATH' >> ~/.bashrc

 

To test openshift, try to login to OpenShift cloud:

 

$ oc login
Server [https://localhost:8443]: https://128.XX.XX.XX:8443


Latest install instructions on OC here

5. Install Cloud Foundry cf CLI Cloud access tool

cloud-foundry-cloud-logo

a) On Debian / Ubuntu Linux based distributions, do run:

 

$ wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | sudo apt-key add -
$ echo "deb https://packages.cloudfoundry.org/debian stable main" | sudo tee /etc/apt/sources.list.d/cloudfoundry-cli.list
$ sudo apt-get update
$ sudo apt-get install cf-cli

 

b) On RHEL Enterprise Linux / CentOS and Fedoras

 

$ sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
$ sudo yum install cf-cli


For the latest install instructions on the cf cli check Cloud Foundry's install site

There are plenty of other Cloud providers, with the number growing exponentially, and most have their own custom CLI access tools, but as their use is not as common as the 5 mentioned above, I've omitted them. If you're interested in the complete list of Cloud Providers providing Cloud Services check here.

6. Install Ruby GEMs RHC tools collection

If you have to work with Redhat Cloud Storage / OpenShift you will perhaps also want to install the RHC (Redhat Collection) tools.

Assuming that the Linux system is running an up-to-date version of the Ruby programming language, run:

 

 

root@jeremiah:~# gem install rhc
Fetching: net-ssh-5.0.2.gem (100%)
Successfully installed net-ssh-5.0.2
Fetching: net-ssh-gateway-2.0.0.gem (100%)
Successfully installed net-ssh-gateway-2.0.0
Fetching: net-ssh-multi-1.2.1.gem (100%)
Successfully installed net-ssh-multi-1.2.1
Fetching: minitar-0.7.gem (100%)
The `minitar` executable is no longer bundled with `minitar`. If you are
expecting this executable, make sure you also install `minitar-cli`.
Successfully installed minitar-0.7
Fetching: hashie-3.6.0.gem (100%)
Successfully installed hashie-3.6.0
Fetching: powerbar-1.0.18.gem (100%)
Successfully installed powerbar-1.0.18
Fetching: minitar-cli-0.7.gem (100%)
Successfully installed minitar-cli-0.7
Fetching: archive-tar-minitar-0.6.1.gem (100%)
'archive-tar-minitar' has been deprecated; just install 'minitar'.
Successfully installed archive-tar-minitar-0.6.1
Fetching: highline-1.6.21.gem (100%)
Successfully installed highline-1.6.21
Fetching: commander-4.2.1.gem (100%)
Successfully installed commander-4.2.1
Fetching: httpclient-2.6.0.1.gem (100%)
Successfully installed httpclient-2.6.0.1
Fetching: open4-1.3.4.gem (100%)
Successfully installed open4-1.3.4
Fetching: rhc-1.38.7.gem (100%)
===========================================================================

 

If this is your first time installing the RHC tools, please run 'rhc setup'

===========================================================================
Successfully installed rhc-1.38.7
Parsing documentation for net-ssh-5.0.2
Installing ri documentation for net-ssh-5.0.2
Parsing documentation for net-ssh-gateway-2.0.0
Installing ri documentation for net-ssh-gateway-2.0.0
Parsing documentation for net-ssh-multi-1.2.1
Installing ri documentation for net-ssh-multi-1.2.1
Parsing documentation for minitar-0.7
Installing ri documentation for minitar-0.7
Parsing documentation for hashie-3.6.0
Installing ri documentation for hashie-3.6.0
Parsing documentation for powerbar-1.0.18
Installing ri documentation for powerbar-1.0.18
Parsing documentation for minitar-cli-0.7
Installing ri documentation for minitar-cli-0.7
Parsing documentation for archive-tar-minitar-0.6.1
Installing ri documentation for archive-tar-minitar-0.6.1
Parsing documentation for highline-1.6.21
Installing ri documentation for highline-1.6.21
Parsing documentation for commander-4.2.1
Installing ri documentation for commander-4.2.1
Parsing documentation for httpclient-2.6.0.1
Installing ri documentation for httpclient-2.6.0.1
Parsing documentation for open4-1.3.4
Installing ri documentation for open4-1.3.4
Parsing documentation for rhc-1.38.7
Installing ri documentation for rhc-1.38.7
Done installing documentation for net-ssh, net-ssh-gateway, net-ssh-multi, minitar, hashie, powerbar, minitar-cli, archive-tar-minitar, highline, commander, httpclient, open4, rhc after 10 seconds
13 gems installed
root@jeremiah:~#

To start with rhc next do:
 

rhc setup
rhc app create my-app diy-0.1


and play with it to install software and create services on the Redhat cloud.

 

 

Closure

These are just a few of the numerous tools available and I definitely understand there is much more to be said on the topic.
If you can remember other tools or interesting cloud starting-up tips about stuff to do on a freshly installed Linux PC to make life easier as a Cloud / PaaS / SaaS / DevOps engineer, please drop a comment.

Change Linux Wireless Access Point connection from text terminal with iwconfig

Monday, October 8th, 2018

wireless-change-wireless-network-to-connect-to-using-console

If you have configured a couple of wireless connections at home or work on your laptop, and each of the remote Wi-Fi access points is at a different distance (some APs are situated at closer range than others), and your Linux OS sometimes keeps connecting to the wrong AP by default, you'll perhaps want to change that behavior so you stay connected to the Wi-Fi AP that has the best Link Quality (is physically situated closest to your laptop's integrated wifi card).
Using a graphical tool such as GNOME Network Manager / Wicd Network Manager or KDE's Network Manager is a great and easy way to do it, but sometimes, if you do an upgrade of your GNU / Linux and the upgrade fails and your graphical environment GNOME / KDE / OpenBox / Window Maker or whatever Window Manager you use fails to start, it is super handy to use the text console (terminal) to connect to the right wifi in order to do a deb / rpm package rollback to revert your GUI environment or Xorg to the older working release.

Connecting to WPA or WEP protected APs on GNU / Linux at a low level is done with /sbin/iwlist, /sbin/iwconfig and wpa_supplicant
(the wpasupplicant package), plus network-manager if you're running an Xorg server.
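
For completeness, a bare-bones WPA2 connection with wpa_supplicant alone (outside NetworkManager) would look roughly like this; the SSID / passphrase below are placeholders and wlp3s0 is the interface from the examples further down:

# wpa_passphrase "Magdanoz" "your-passphrase" > /etc/wpa_supplicant/wpa_supplicant-wlp3s0.conf
# wpa_supplicant -B -i wlp3s0 -c /etc/wpa_supplicant/wpa_supplicant-wlp3s0.conf
# dhclient wlp3s0
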

 

/sbin/iwlist scan
 

 

wlp3s0    Scan completed :
          Cell 01 – Address: 10:FE:ED:43:CB:0E
                    Channel:6
                    Frequency:2.437 GHz (Channel 6)
                    Quality=64/70  Signal level=-46 dBm  
                    Encryption key:on
                    ESSID:"Magdanoz"
                    Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                              9 Mb/s; 12 Mb/s; 18 Mb/s
                    Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                    Mode:Master
                    Extra:tsf=00000032cff7c214
                    Extra: Last beacon: 144ms ago
                    IE: Unknown: 00084D616764616E6F7A
                    IE: Unknown: 010882848B960C121824
                    IE: Unknown: 030106
                    IE: Unknown: 0706555320010B1B
                    IE: Unknown: 2A0100
                    IE: IEEE 802.11i/WPA2 Version 1
                        Group Cipher : TKIP
                        Pairwise Ciphers (2) : CCMP TKIP
                        Authentication Suites (1) : PSK
                    IE: Unknown: 32043048606C

 

The iwlist command is used to get more detailed wireless info from a wireless interface; in a terminal this command shows you the wifi networks available to connect to and various info such as the type of wifi network, the wifi name, link quality, frequency (is it spreading the wifi signal at 2.4 GHz or 5 GHz) etc.

 

# ifconfig interface_name down

 

For example on my Thinkpad the wifi interface is wlp3s0; to check what yours is, do ifconfig -a, e.g.:

 

root@jeremiah:~# /sbin/ifconfig -a
enp0s25: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 00:21:cc:cc:b2:27  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xf3900000-f3920000  

 

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 350  bytes 28408 (27.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 350  bytes 28408 (27.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.103  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::6267:20ff:fe3c:20ec  prefixlen 64  scopeid 0x20<link>
        ether 60:67:20:3c:20:ec  txqueuelen 1000  (Ethernet)
        RX packets 299735  bytes 362561115 (345.7 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 278518  bytes 96996135 (92.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

Next use iwconfig; on Debian / Ubuntu Linux it is part of the wireless-tools deb package.

 

root@jeremiah:~# /sbin/iwconfig interface essid "Your-Access-Point-name"

 

To check whether you're connected to a wireless network you can do:

http://pc-freak.net/images/check-wireless-frequency-access-point-mac-and-wireless-name-iwconfig-linux

root@jeremiah:~# iwconfig
enp0s25   no wireless extensions.

 

lo        no wireless extensions.

wlp3s0    IEEE 802.11  ESSID:"Magdanoz"  
          Mode:Managed  Frequency:2.437 GHz  Access Point: 10:FE:ED:43:CB:0E   
          Bit Rate=150 Mb/s   Tx-Power=15 dBm   
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off
          Link Quality=61/70  Signal level=-49 dBm  
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:5  Invalid misc:1803   Missed beacon:0


N.B. ! To get a list of all your PC network interfaces you can use cmd:

 

root@jeremiah:/home/hipo# ls -al /sys/class/net/
total 0
drwxr-xr-x  2 root root 0 Oct  8 22:53 .
drwxr-xr-x 52 root root 0 Oct  8 22:53 ..
lrwxrwxrwx  1 root root 0 Oct  8 22:53 enp0s25 -> ../../devices/pci0000:00/0000:00:19.0/net/enp0s25
lrwxrwxrwx  1 root root 0 Oct  8 22:53 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx  1 root root 0 Oct  8 22:53 wlp3s0 -> ../../devices/pci0000:00/0000:00:1c.1/0000:03:00.0/net/wlp3s0

show-all-network-interfaces-with-netstat-linux

or use netstat like so:

root@jeremiah:/home/hipo# netstat -i | column -t
Kernel   Interface  table
Iface    MTU        RX-OK   RX-ERR  RX-DRP  RX-OVR  TX-OK   TX-ERR  TX-DRP  TX-OVR  Flg
enp0s25  1500       0       0       0       0       0       0       0       0       BMU
lo       65536      590     0       0       0       590     0       0       0       LRU
wlp3s0   1500       428112  0       1       0       423538  0       0       0       BMRU

 


To get only the wireless network card interface on Linux (e.g. to find out which of the above-listed interfaces is your wireless adapter's name), use the iw command (which shows devices and their configuration):

 

root@jeremiah:/home/hipo# iw dev
phy#0
    Interface wlp3s0
        ifindex 3
        wdev 0x1
        addr 60:67:20:3c:20:ec
        type managed
        channel 6 (2437 MHz), width: 40 MHz, center1: 2427 MHz
        txpower 15.00 dBm

 

linux-wireless-terminal-console-check-wireless-interfaces-command

  • If you need to get only the active wireless adapter device name assigned by the Linux kernel:

 

root@jeremiah:~# iw dev | awk '$1=="Interface"{print $2}'

 

To check the IP / netmask and broadcast address assigned by the connected Access Point, use ifconfig
with your laptop's wireless interface name.

show-extra-information-ip-netmask-broadcast-about-wireless-interface-linux

root@jeremiah:~# /sbin/ifconfig wlp3s0
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.103  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::6267:20ff:fe3c:20ec  prefixlen 64  scopeid 0x20<link>
        ether 60:67:20:3c:20:ec  txqueuelen 1000  (Ethernet)
        RX packets 319534  bytes 365527097 (348.5 MiB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 285464  bytes 99082701 (94.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


As you can see in the above examples, iwconfig can query and configure various settings of the wireless network interface.

It is really annoying that sometimes, if you have configured your Linux to connect to multiple access points, the wifi adapter might keep connecting to an access point that is more distant from you; because of that the bandwidth might be slower and that could impact your Internet connectivity. To fix that and get rid of any networks that are set to connect automatically but that you don't want, just delete the corresponding files (the wifi file name coincides with the wireless AP network name).
All Wi-Fi access points that your Linux is configured to connect to are stored inside /etc/NetworkManager/system-connections/

For example, to delete an auto-connection to a wireless router with the name NetGear, do:

 

root@jeremiah:~# rm -f /etc/NetworkManager/system-connections/NetGear

 

For a complete list of stored wifi networks that your PC might connect to (and authenticate to, if so configured) do:

 

root@jeremiah:~# ls -a /etc/NetworkManager/system-connections/
Magdanoz
NetGear

LinkSys
Cobra
NetIs
WirelessNet

 

After deleting the networks you don't want your computer to automatically connect to, make NetworkManager aware of that by restarting it with:
 

hipo@jeremiah:~# systemctl restart NetworkManager.service


or if you hate systemd like I do just use the good old init script to restart:

 

hipo@jeremiah:~# /etc/init.d/network-manager restart


To get some more information on the exact network you're connected to, you can run:

show-information-about-wireless-connection-on-gnu-linux

 

hipo@jeremiah:~# systemctl status NetworkManager.service
● NetworkManager.service – Network Manager
   Loaded: loaded (/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-10-08 22:35:09 EEST; 15s ago
     Docs: man:NetworkManager(8)
 Main PID: 13721 (NetworkManager)
    Tasks: 5 (limit: 4915)
   CGroup: /system.slice/NetworkManager.service
           ├─13721 /usr/sbin/NetworkManager --no-daemon
           └─13742 /sbin/dhclient -d -q -sf /usr/lib/NetworkManager/nm-dhcp-helper -pf /var/run/dhclient-wlp3s0.pid -lf /var/lib/NetworkManager/dhclie

 

Oct 08 22:35:15 jeremiah NetworkManager[13721]:   [1539027315.6657] dhcp4 (wlp3s0): state changed unknown -> bound
Oct 08 22:35:15 jeremiah dhclient[13742]: bound to 192.168.0.103 — renewal in 2951 seconds.
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6735] device (wlp3s0): state change: ip-config -> ip-check (reason 'none') [70 80
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6744] device (wlp3s0): state change: ip-check -> secondaries (reason 'none') [80 9
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6747] device (wlp3s0): state change: secondaries -> activated (reason 'none') [90
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6749] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6812] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6813] policy: set 'Magdanoz' (wlp3s0) as default for IPv4 routing and DNS
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6816] device (wlp3s0): Activation: successful, device activated.
Oct 08 22:35:15 jeremiah NetworkManager[13721]:
  [1539027315.6823] manager: startup complete

 

Linux: GNOME Flashback missing Desktop Icons fix – Hack to add desktop icons via gnome-shell in GNOME 3.28 onwards

Monday, September 24th, 2018

how-to-fix-workaround-gnome-3.30-missing-desktop-icons-on-linux

I just upgraded my notebook from Debian Stretch 9.5 Linux to Buster (the current Testing Debian release). All went fine except I got a lot of headaches, because it seems in Buster the GNOME Flashback 3.30 which I use has removed the support for Show Desktop Icons in Nautilus, due to the migration of Nautilus to a newer version (hopefully that will be temporary). The gnome-tweak-tool package now contains no gnome-tweak-tool binary; instead the equivalent tool is now called gnome-tweaks, and this tool no longer works under GNOME Flashback but only with GNOME Classic 3.30 and the regular GNOME 3.30 launcher available from gdm3 (the GNOME Display Manager).
 

1. Displaying Missing Desktop icons on GNOME version 3.30


The way to display Desktop icons in GNOME 3.28 onwards at the moment of writing this post, and the whole issue with the removed handling of Desktop icons in Nautilus, is explained well by Carlos Soriano, a gnome shell extension developer, in his blog post Desktop icons goes beta.

The good guy C. S. wrote his desktop-icons gnome shell extension, which is hosted on gitlab.gnome.org.
To use it you have to fetch the repo source code into the gnome-shell extensions directory and enable it:
 

hipo@linux:~$ cd ~/.local/share/gnome-shell/extensions
hipo@linux:~$ git clone https://gitlab.gnome.org/World/ShellExtensions/desktop-icons
hipo@linux:~$ mkdir 'desktop-icons@csoriano'
hipo@linux:~$ mv desktop-icons/* 'desktop-icons@csoriano'/
hipo@linux:~$ rm -rf desktop-icons/


Now you should use the gnome-tweaks tool to enable the newly added gnome-shell extension.

 

 

hipo@linux:~$ gnome-tweaks


gnome-tweak-on-debian-testing-linux-screenshot

Once enabled, your Desktop icons will appear as usual, as seen in the shot below. The downside of this solution, as seen in the screenshot, is that pictures don't get thumbnail previews generated … and icons, when dragged with the mouse, can move only within a selected square-like perimeter (when moved left / right / up / down). That "wooden" icon movement sucks a bit, but it is much better than no icons at all.

gnome-3.30-general-desktop-solved-missing-icons-desktop-screenshot-on-debian-linux

 

2. Displaying Missing Desktop icons in GNOME Flashback 3.30

 

I really love GNOME Flashback as it has been a good replacement for Linux MATE (which is the fork of GNOME 2 and not bad, but lacks the Metacity Window Manager and some of the eye candy that GNOME 3 has; besides that, even MATE had to be slightly hacked to make it look more like classical GNOME 2 – for more on that check my previous article Fixing Mate Adwaita Theme problems on Debian and Ubuntu).

At the moment when I tried to run gnome-tweaks under a GNOME Flashback session I got the following error:

 

hipo@jericho:~/.local/share/gnome-shell/extensions$ gnome-tweaks 
WARNING : Shell not installed or running
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/gtweak/app.py", line 30, in do_activate
    self.win = Window(self, model)
  File "/usr/lib/python3/dist-packages/gtweak/tweakview.py", line 38, in __init__
    self._model.load_tweaks(self)
  File "/usr/lib/python3/dist-packages/gtweak/tweakmodel.py", line 104, in load_tweaks
    mods = __import__("gtweak.tweaks", globals(), locals(), tweak_files, 0)
  File "/usr/lib/python3/dist-packages/gtweak/tweaks/tweak_group_general.py", line 14, in <module>
    _shell_not_ubuntu = _shell.mode != 'ubuntu'
AttributeError: 'NoneType' object has no attribute 'mode'

 

In a regular GNOME session gnome-tweaks works fine, and with the help of a GNOME Shell extension add-on it is possible to add the missing Desktop icons; however, the only working fix for GNOME Flashback 3.30 is to substitute nautilus (the default GNOME file manager) with NEMO (which is the official file manager of the Cinnamon Desktop Environment).
Thankfully this is done relatively easily and all I had to do was to use a little "hack", e.g. install nemo.

 

root@linux:~# apt-get install --yes -qq nemo

 

And add a new auto-launcher for gnome that launches nemo file manager instead of nautilus.

To add the auto-launcher in GNOME I had to add a file with following content:

 

[Desktop Entry]
Type=Application
Name=Nemo
Comment=Start Nemo desktop at log in
Exec=nemo-desktop
OnlyShowIn=GNOME;
AutostartCondition=GSettings org.nemo.desktop show-desktop-icons
X-GNOME-AutoRestart=true
NoDisplay=true

 

to ~/.config/autostart/

For those who don't know, GNOME has this handy way to set autostart programs by using .desktop extension files that have to be placed under $HOME/.config/autostart (where $HOME = the logged-in user's home directory).

The one liner to do so is:

 

echo '[Desktop Entry]
Type=Application
Name=Nemo
Comment=Start Nemo desktop at log in
Exec=nemo-desktop
OnlyShowIn=GNOME;
AutostartCondition=GSettings org.nemo.desktop show-desktop-icons
X-GNOME-AutoRestart=true
NoDisplay=true' >> ~/.config/autostart/nemo.desktop

 

Then I had to restart my GNOME FlashBack session (e.g. Log Out and Login with a new session) and the icons appeared.

classic-gnome-flashback-debian-gnu-linux-hipos-desktop-screenshot

The downside of this dirty workaround is that the desktop icons, even though showing up, can't be moved (rearranged) freely to any location on the desktop (they are pretty much static), and the even worse fact about this hack is that you can't easily copy / paste files from your Desktop into another desktop folder … 
 I know that's shitty, but at the moment there is no better solution and this is better than nothing at all.

 

P.S. I tried downgrading my Debian Testing to Stable Stretch Linux again with the idea of using the old GNOME 3.22 that the Stable Debian distro provides, but ended up with a lot of mess after experimenting with downgrading using /etc/apt/preferences file records and substitutions in /etc/apt/sources.list to include the stable .deb repository, plus the apt, dselect and aptitude package management tools. Officially, downgrades are not encouraged and supported by Debian, but I hoped I could relatively easily do it by manually fixing the broken dependencies after removing Debian packages manually, combined with short bash for loops like I did in the past; it seems this time I broke the system worse, so I could hardly return it back to normal operation even with a lot of manual hacking with apt-get and a few one-liner scripts. Thus I abandoned the possibility to downgrade Testing Debian to Stable as a fix. I even considered switching from the GNOME desktop environment to something lighter such as OpenBox / Cinnamon / XFCE and gave them one more try, but the results weren't nice. I reconsidered going back to the good old GNU Step Window Maker as a GNOME alternative, which in my opinion is still a great GUI environment for security crackers / sysadmins / hackers (programmers), and eventually perhaps I will switch back to using it, because GNOME is becoming more and more bloated over the years and I can hardly stand it … I mean, I did not expect GNOME to be developed in the shitty mobile-interface (Unity-like) way; I have been a loyal user of GNOME for so many years and have lived through all its mess over the years, and it's painful to see how the good and efficient GNOME 2 went down the bad, broken road of completely changing concepts and interface in GNOME 3.x.
 

3. Closure

GNOME Desktop icons have been with GNOME users for about 15+ years, so IMHO the missing ability to add them easily through gnome-tweak-tool or GNOME Control Center is an absurd stupidity which killed several hours of my time to solve, and the solution is far from good … I understand that in the future the GNOME developers want to make GNOME as modular as possible through GNOME Shell Extensions, however removing such an important functionality, one that has been present for ages in most mainstream operating systems such as M$ Windows / Mac OS, is insanity. Through my quick research online I found the Missing Desktop Icons issue is experienced by people on other Linux distros besides Debian; I saw complaints by Ubuntu / Fedora and Arch Linux users in forums and mailing lists.
What puzzles me is why such widely voiced complaints are not seriously considered by the GNOME developers, especially after all the problems with the transition from GNOME 2 -> GNOME 3, which already pushed a lot of GNOME users to move from GNOME to KDE / MATE (like in Linux Mint, whose MATE / Cinnamon desktops are based on GNOME). Such general issues definitely drive further enthusiasts away from GNU / Linux and do great harm to the Free Software community.
Hopefully the missing desktop icons hell will be solved in upcoming GNOME releases.

Virtualbox Shared folder set up on Linux between Host and Guest OS – Set up Virtualbox shared folder to Copy files from PC Host to Guest

Wednesday, September 12th, 2018

mount-shares-between-host-OS-and-guest-virtual-machine-howto-virtualbox-vbox-logo

How to set up a VirtualBox shared folder to copy files between the PC Host and a Guest virtualized OS?

Running a VirtualBox Host is an easy thing to set up across all Operating Systems. Once you have it, sooner or later you will need to copy files from the VM Host OS (that in my case is GNU / Linux) to the virtualized Guest operating system (again in my case that's another Linux ISO running inside the Virtual Machine).

Below are the steps to follow to use the VirtualBox Shared Folder functionality to copy files between VBox and your Desktop / server Linux install.

1. Install Virtualbox Guest Additions CD Image ISO

I've explained how to add the Guest Additions CD image thoroughly in my previous article Howto enable Copy / Paste in Virtualbox between Linux guest and Host OS.
Anyways, I'll repeat myself below for the sake of clarity:

To do so use Oracle VBox menus (on the booted virtualized OS VBox window):

 

Devices -> Insert Guest additions CD Image

 

Mount the ISO inside the Linux Virtual Machine:

root@debian:~# mount /media/cdrom1/
 

If the mount fails and there are no files inside the mount point, it might be because the virtualbox-dkms and virtualbox-guest-dkms packages are missing on the Host OS.

To install them (on Debian GNU / Linux), assuming that you're using the distro's default VirtualBox packages from /etc/apt/sources.list :
 

apt-get install --yes -qq virtualbox-dkms virtualbox-guest-dkms


and run:

 

root@debian:/media/cdrom1# cd /media/cdrom1; sh VBoxLinuxAdditions.run
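Assuming the Guest Additions built and installed fine, the guest kernel modules should now be loaded; a quick way to check from inside the VM is to list the loaded vbox modules (vboxguest and vboxsf are the ones needed for shared folders):

root@debian:~# lsmod | grep -i vbox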


2. Create directory for Shared Folder that will be used to access Host / OS files from the Guest Virtualized OS
 

root@debian:~# mkdir /mnt/shared_folder

 

3. Map from VBox program interface Shared folder settings and Mount /mnt/shared_folder location

virtualbox-virtual-machine-devices-shared-folders-shared-folder-settings-linux-screenshot

 

Devices -> Shared Folders -> Shared Folders Settings -> Transient Folders (click the small blue "add folder" button on the right)

 

From Transient Folders add whatever directory you want to be shared from your local notebook / PC to the VM.

virtualbox-devices-Shared-Folder-Add-Shared-Folder-add-share-linux-screenshot

In the Add Share dialog choose Read Only if you would like to mount the shared folder only for reading files, choose Make Permanent to make it a permanent shared folder (and not just for the current Virtual Machine session until it is powered off), and tick Auto-Mount if you want the shared_folder mapping to be mounted on every VM boot.

Once the shared_folder directory location is set up from the GUI menu, click OK and, in order for the settings to take effect, restart the VM Guest with Linux (use the halt command from a terminal) or Power Off the Machine via the VBox menus.
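As a side note, the same shared folder mapping can also be created from the host's command line with VBoxManage instead of the GUI. A sketch, where the VM name "Debian-VM" and the host path /home/hipo are just examples to adjust to your setup (for a VM that is currently running you would add the --transient switch):

host:~$ VBoxManage sharedfolder add "Debian-VM" --name shared_folder --hostpath /home/hipo --automount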

To mount, use a command like:

mount -t vboxsf name_of_folder_linked_from_vbox  /mnt/name_folder_guest_os/


mount-vboxsf-shared-folder-mnt-shared-linux-guest-screenshot

In my case I wanted to share home folder /home so the command I used is:

root@debian:~# mount -t vboxsf  shared_folder /mnt/shared_folder


If everything is fine your Host OS file content from /home will be visible (for read and write if you Mapped it so) 
under /mnt/shared_folder …

And as the Ninja Turtles used to heavily say: Cowabunga !!! 🙂
You have it mounted and ready for file share between Desktop -> Virtualized OS.

 

Bear in mind that the above mount command has to be run as root (superuser) to succeed.

You can now copy files between your Host OS (running the Virtual Machine) and the Guest OS (the virtualized OS) using the /mnt/shared_folder mount point without problems.

The example above covers sharing files between the VirtualBox-installed Linux host and the Guest (Desktop / server) OS; however, in many cases mounting your Host OS directory for the root user is not very practical and you might prefer to do the mount for a specific non-admin user. For example, I prefer to do the shared folder mount for my non-root username hipo.

Here is how to do above VM shared_folder mount for non-root user:

First you need to know the exact UID / GID (User ID / Group ID) of the user; you can get that with the id command:

 

hipo@linux:~$  id
uid=1000(hipo) gid=1000(hipo) groups=1000(hipo),24(cdrom),25(floppy),29(audio),30(dip),44(video),46(plugdev),108(netdev),114(bluetooth),115(lpadmin),119(scanner)

 

As you can see, the UID / GID in my case are 1000 / 1000.

hipo@linux:~$ sudo mount -t vboxsf -o rw,uid=1000,gid=1000 shared_folder /mnt/shared_folder

 

mount-virtual-box-shared_folder-with-non-administration-permissions-non-root-permissions-id-and-mount-command-screenshot-linux


4. Mounting configured shared_folder to automatically mount into the Guest OS Linux on every boot

a) Configuring shared_folder auto-mount using /etc/rc.local

If you need the shared_folder to automatically mount the next time you boot the virtual machine, the quickest way is to add the mount command to /etc/rc.local (on Debian 8 / Debian 9 and newer Ubuntu Linuxes rc.local is missing by default; to enable it to work like it did before, follow my previous article).
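A minimal /etc/rc.local sketch (assuming rc.local is enabled as described in the linked article; the uid / gid values and the mount point are examples to adjust to your own setup):

#!/bin/sh -e
# /etc/rc.local - mount the VirtualBox shared folder at boot
# (sketch: uid/gid 1000 and /mnt/shared_folder are examples)
mount -t vboxsf -o rw,uid=1000,gid=1000 shared_folder /mnt/shared_folder || true
exit 0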

b) Configuring auto-mount for shared_folder through /etc/fstab

The more professional way to auto-mount at emulated OS VM boot time is to add the vboxsf mount definition to /etc/fstab with your favourite text editor mcedit, nano, joe etc. … (for me that's vim).

Syntax of /etc/fstab is as follows:
 

<Device> <Mount Point> <Type> <Options> <Dump> <Pass>

root@linux:~# vim /etc/fstab

 

shared_folder /mnt/shared_folder                                vboxsf rw,uid=1000,gid=1000 0 0

Note that you will want to change the 1000 / 1000 uid / gid values to the ones of the non-admin user you would like to mount it for.

A quick way to add it to /etc/fstab with a shell one-liner is:
 

root@linux:~# echo 'shared_folder /mnt/shared_folder                                vboxsf rw,uid=1000,gid=1000 0  0' >> /etc/fstab
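One extra option worth considering here: if the vboxsf guest module is not yet available early during boot, a plain fstab entry may delay or even break the boot. A variant of the same one-liner with the standard nofail mount option lets the system continue booting even if the shared folder can't be mounted:

root@linux:~# echo 'shared_folder /mnt/shared_folder                                vboxsf rw,uid=1000,gid=1000,nofail 0  0' >> /etc/fstab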

An alternative way to give a user permissions for the vboxsf file system (without specifying the long -o uid=1000,gid=1000 options) is to simply add the username in question to the vboxsf group like so:

c) Adding non super user username to vboxsf group

root@linux:~# usermod -aG vboxsf hipo
root@linux:~# grep -i vboxsf /etc/group
vboxsf:x:999:hipo
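Keep in mind the new group membership takes effect only after the hipo user logs out and back in (or the VM is rebooted); you can re-check the assigned groups afterwards with:

root@linux:~# id hipo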

 

hipo@linux:~$ sudo mount -t vboxsf  shared_folder /mnt/shared_folder

 

without the extra arguments, and the options to pass in /etc/fstab (for the eventual requirement to auto-mount the shared_folder) become simpler, e.g.:

 

echo 'shared_folder /mnt/shared_folder                                vboxsf ' >> /etc/fstab

 

One note to make here is that once the user is added to the vboxsf group, the /etc/fstab auto-mount line is identical for the root user and for non-root users.

Then you can test that the /etc/fstab auto-mount configuration works by running:

d) Checking auto-mount is working

hipo@linux:~$ sudo mount -a
hipo@linux:~$ mount | grep -i vboxsf
shared_folder on /mnt/shared_folder type vboxsf (rw,nodev,relatime)


5. What if you end up with "mounting failed" errors? – What might be causing the mounting failed: Protocol error (a few things to check in order to solve it)


In case of troubles with the mount you might get an error like:

hipo@linux:~# mount -t vboxsf  share_folder /mnt/shared_folder

/sbin/mount.vboxsf: mounting failed with the error: Protocol error


This error might be caused because the Insert Guest Additions CD Image step was not properly carried out and the Additions were not installed using the ISO-provided VBoxLinuxAdditions.run shell script.
Another common reason for this error is a mistyped Folder Name given in Shared Folders -> Folder Path -> Add Share: for example I gave shared_folder as a Map name, but as you can see in the above mount -t vboxsf, I've typed share_folder instead of the correct shared_folder.
In some VBox releases this error was also caused by bugs in the Virtual Machine.
 

virtualbox-virtual-machine-shared-folder-transient-folder-add-folder-linux-VM-guest-linux

One useful tip is to be able to check whether a VirtualBox Virtual Machine has a configured shared_folder (handy if you're logging in to manage the machine on a remote server, no matter whether you have logged in with VNC / Teamviewer / Citrix etc. or via an SSH session).

To do so use VBoxControl, as of the time of writing usually located on most distributions under /usr/bin/VBoxControl:
 

 

hipo@linux:~# VBoxControl sharedfolder list -automount
Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 5.2.18
(C) 2008-2018 Oracle Corporation
All rights reserved.

 

Auto-mounted Shared Folder mappings (0):

No Shared Folders available.
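To list all shared folder mappings configured for the VM (not only the auto-mounted ones), simply drop the -automount switch:

hipo@linux:~$ VBoxControl sharedfolder list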

You can use the VBoxControl command to get, set and list a number of settings on the VBox VM; here is a useful example where you retrieve numerous VBox info values:

 

root@linux:~# VBoxControl guestproperty enumerate
Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 5.2.18
(C) 2008-2018 Oracle Corporation
All rights reserved.

 

Name: /VirtualBox/GuestInfo/OS/Product, value: Linux, timestamp: 1536681633430852000, flags: <NULL>
Name: /VirtualBox/GuestInfo/Net/0/V4/IP, value: 10.0.2.15, timestamp: 1536681633438717000, flags: <NULL>
Name: /VirtualBox/HostInfo/GUI/LanguageID, value: en_US, timestamp: 1536697521395621000, flags: RDONLYGUEST
Name: /VirtualBox/GuestInfo/Net/0/MAC, value: 08002762FA1C, timestamp: 1536681633442120000, flags: <NULL>
Name: /VirtualBox/GuestInfo/OS/ServicePack, value: <NULL>, timestamp: 1536681633431259000, flags: <NULL>
Name: /VirtualBox/HostInfo/VBoxVerExt, value: 5.2.18, timestamp: 1536681619002646000, flags: TRANSIENT, RDONLYGUEST
Name: /VirtualBox/GuestInfo/Net/0/V4/Netmask, value: 255.255.255.0, timestamp: 1536681633440157000, flags: <NULL>
Name: /VirtualBox/GuestInfo/OS/Version, value: #1 SMP Debian 4.9.110-3+deb9u2 (2018-08-13), timestamp: 1536681633431125000, flags: <NULL>
Name: /VirtualBox/GuestAdd/VersionExt, value: 5.2.18, timestamp: 1536681633431582000, flags: <NULL>
Name: /VirtualBox/GuestAdd/Revision, value: 124319, timestamp: 1536681633432515000, flags: <NULL>
Name: /VirtualBox/HostGuest/SysprepExec, value: <NULL>, timestamp: 1536681619002355000, flags: TRANSIENT, RDONLYGUEST
Name: /VirtualBox/GuestInfo/OS/LoggedInUsers, value: 1, timestamp: 1536681673447293000, flags: TRANSIENT, TRANSRESET
Name: /VirtualBox/GuestInfo/Net/0/Status, value: Up, timestamp: 1536681633443911000, flags: <NULL>
Name: /VirtualBox/GuestInfo/Net/0/Name, value: enp0s3, timestamp: 1536681633445302000, flags: <NULL>
Name: /VirtualBox/HostGuest/SysprepArgs, value: <NULL>, timestamp: 1536681619002387000, flags: TRANSIENT, RDONLYGUEST
Name: /VirtualBox/GuestAdd/Version, value: 5.2.18, timestamp: 1536681633431419000, flags: <NULL>
Name: /VirtualBox/HostInfo/VBoxRev, value: 124319, timestamp: 1536681619002668000, flags: TRANSIENT, RDONLYGUEST
Name: /VirtualBox/GuestInfo/Net/0/V4/Broadcast, value: 10.0.2.255, timestamp: 1536681633439531000, flags: <NULL>
Name: /VirtualBox/HostInfo/VBoxVer, value: 5.2.18, timestamp: 1536681619002613000, flags: TRANSIENT, RDONLYGUEST
Name: /VirtualBox/GuestInfo/OS/LoggedInUsersList, value: hipo, timestamp: 1536681673446498000, flags: TRANSIENT, TRANSRESET
Name: /VirtualBox/GuestInfo/Net/Count, value: 1, timestamp: 1536698949773993000, flags: <NULL>
Name: /VirtualBox/GuestInfo/OS/Release, value: 4.9.0-7-amd64, timestamp: 1536681633431001000, flags: <NULL>
Name: /VirtualBox/GuestInfo/OS/NoLoggedInUsers, value: false, timestamp: 1536681673447965000, flags: TRANSIENT, TRANSRESET
Name: /VirtualBox/GuestAdd/HostVerLastChecked, value: 5.2.18, timestamp: 1536681702832389000, flags: <NULL>

Hope you enjoyed ! Have phun! 🙂

Install and use personal Own Cloud on Debian Linux for better shared data security – OwnCloud a Free Software replacement for Google Drive

Thursday, August 23rd, 2018

owncloud-self-hosted-cloud-file-sharing-and-storage-service-for-gnu-linux-howto-install-on-debian

Basically I am against the use of any Cloud type of service, but nowadays Cloud usage is almost inevitable and most of the time you need some kind of service to store and access your data remotely from multiple devices, such as DropBox, Google Drive, iCloud etc., and using some kind of infrastructure to execute high-performance computing is just as inevitable, just as the paid Private Cloud services online are booming nowadays. So I decided to research and test what is available as free software in the field of Clouding (your data) 🙂

Undoubtedly, it is a really nice fact that there are Free Software / Open Source alternatives to run your own personal Cloud and store your data from multiple locations in a single point.

The most popular and leading Cloud collaboration service (which is Open Source but unfortunately not under GPLv2 / GPLv3 – e.g. not fully free software) is OwnCloud.

ownCloud is a flexible self-hosted PHP and Javascript based web application used for data synchronization and file sharing (its remote file access capabilities are realized by Sabre/DAV, an open source WebDAV server).
OwnCloud allows the end user to easily store / manage files, Calendars, Contacts and To-Do lists (with user and group administration via OpenID and LDAP), public URLs can be easily created, the users can interact with a browser-based ODF (Open Document Format) word processor, there is a Bookmarking and URL shortening service integrated, a Gallery, an RSS Feed reader and Document Viewer tools such as a PDF viewer etc., which makes it a great alternative to the popular Google Drive, iCloud, DropBox etc.

The main advantage of using a self-hosted Cloud is that your data is hosted and managed by you (on your server and your hard drives) and not by some God-knows-who third-party provider such as the above-mentioned.
In other words, by using OwnCloud you manage your own data and you don't share it on demand with Security Agencies such as the CIA, MI6, Mossad … (as it is very likely that most publicly offered Cloud storage services keep track of the data stored on them).

The other disadvantage of Cloud Computing is that the stored data is usually spread over multiple servers and you can never know for sure where your data is physically located, which in my opinion is way worse than the option with a Self-Hosted Cloud, where you know where your data resides and you can do whatever you want with it: keep it secret, delete it or share it on demand.

OwnCloud has clients for the most popular Mobile (Smart Phone) platforms – an Android client is available in the Google Play Store as well as one in Apple iTunes, besides the clients available for FreeBSD OS, the GNOME desktop integration package and Raspberry Pi.

For those who are looking for additional advanced features, an Enterprise version of OwnCloud is also available, aimed at business use and including software support.

Assuming you have a homebrew server or have hired a dedicated or VPS server (such as the ones we provide), installing OwnCloud on GNU / Linux is a relatively easy
task and it will take no more than 15 minutes to 2 hours of your life.
In this article I am going to give you specific instructions on how to install it on Debian GNU / Linux 9, but installing on RPM-based distros is a similar and straightforward process.
 

1. Install MySQL / MariaDB database server backend
 

By default OwnCloud uses SQLite as a backend data store, but as SQLite keeps its data in a file it quickly becomes slow and is, generally speaking, slower than a relational database such as MariaDB server (or the now almost obsolete MySQL Community server).
Hence in this article I will explain how to install OwnCloud with MariaDB as a backend.

If you don't have it installed already (e.g. it is a new dedicated server), install MariaDB with:
 

server:~# apt-get install --yes mariadb-server


Assuming you're installing on a brand new fresh Linux install, you will also want to start / enable the service and secure the installation:

 

server:~# systemctl start mariadb
server:~# systemctl enable mariadb
server:~# mysql_secure_installation


mysql_secure_installation – finalizes and secures the MariaDB installation and sets the root password.
 

2. Create the necessary database and user for OwnCloud on the database server
 

linux:~# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE owncloud CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'owncloud_passwd';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> \q
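To double-check that the newly created user can actually reach the owncloud database (it will prompt you for the owncloud_passwd set above; the table list will still be empty at this point):

server:~# mysql -u owncloud -p owncloud -e 'SHOW TABLES;'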

 

3. Install the necessary Apache + PHP deb packages
 

As of the time of writing this article, on Debian 9.0 the required packages for a working Apache + PHP install for OwnCloud are as follows:

 

server:~# apt-get install --yes apache2 mariadb-server libapache2-mod-php7.0 \
openssl php-imagick php7.0-common php7.0-curl php7.0-gd \
php7.0-imap php7.0-intl php7.0-json php7.0-ldap php7.0-mbstring \
php7.0-mcrypt php7.0-mysql php7.0-pgsql php-smbclient php-ssh2 \
php7.0-sqlite3 php7.0-xml php7.0-zip php-redis php-apcu

 

4. Install Redis to use as a memory cache for an accelerated / better performing ownCloud service


Redis is an in-memory key-value database similar to Memcached, so OwnCloud can use it to cache stored data files. To install the latest redis-server on Debian 9:
 

server:~# apt-get install --yes redis-server
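A quick sanity check that the Redis service is up and answering (redis-cli ships with the redis-server / redis-tools packages):

server:~# redis-cli ping
PONG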

5. Install ownCloud software packages on the server

Unfortunately, the default package repositories on Debian 9 do not provide owncloud server packages, only some owncloud-client packages; that's perhaps because the release cycle of the packages issued by owncloud.org does not match Debian's.

As of the time of writing this article, the latest available OwnCloud server version package for Debian is OC 10.

a) Add the necessary GPG keys

The repositories to use are provided by owncloud.org; to use them we first need to add the necessary GPG key so the binaries can be verified against a legit checksum.
 

server:~# wget -qO- https://download.owncloud.org/download/repositories/stable/Debian_9.0/Release.key | sudo apt-key add -

 

b) Add the owncloud.org repositories in a separate sources.list file

 

server:~# echo 'deb https://download.owncloud.org/download/repositories/stable/Debian_9.0/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list

 

c) Enable https transports for the apt install tool

 

server:~# apt-get --yes install apt-transport-https

 

d) Update the Debian apt cache list files and install the packages

 

server:~# apt-get update

 

server:~# apt-get install --yes owncloud-files

 

By default the owncloud file store location is /var/www/owncloud, but on many servers that location is not really appropriate because /var/www might be situated on a hard drive partition that is not big enough. If that's the case, just move the folder to another partition and create a symbolic link in /var/www/owncloud pointing to it, for example as sketched below.
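A sketch of how such a move could be done (here /srv/owncloud is just an example path on a bigger partition, adjust it to your layout):

server:~# systemctl stop apache2
server:~# mv /var/www/owncloud /srv/owncloud
server:~# ln -s /srv/owncloud /var/www/owncloud
server:~# systemctl start apache2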


6. Create the necessary Apache configuration to make your new self-hosted cloud accessible
 

a) Create Apache config file

 

server:~# vim /etc/apache2/sites-available/owncloud.conf

 

 

Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /var/www/owncloud
SetEnv HTTP_HOME /var/www/owncloud

</Directory>

b) Enable Mod_Dav (WebDAV) if it is not enabled yet

 

server:~# cd /etc/apache2/mods-enabled/
server:~# ln -sf ../mods-available/dav.load
server:~# ln -sf ../mods-available/dav_fs.conf
server:~# ln -sf ../mods-available/dav_fs.load
server:~# ln -sf ../mods-available/dav_lock.load

Note the symlinks have to be created inside /etc/apache2/mods-enabled/ (hence the cd beforehand); the same can be achieved with the Debian helper a2enmod dav dav_fs dav_lock.

c) Set proper permissions for /var/www/owncloud to make upload work properly

 

chown -R www-data: /var/www/owncloud/


d) Restart the Apache WebServer (to make the new configuration effective)

 

 

server:~# /etc/init.d/apache2 restart
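If after the restart the /owncloud alias is still not served, the configuration file created under sites-available most likely needs to be explicitly enabled first; a sketch using the stock Debian Apache helpers, plus a configuration syntax check:

server:~# a2ensite owncloud
server:~# apachectl configtest
server:~# systemctl reload apache2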


7. Finalize  OwnCloud Install
 

Access the OwnCloud Web Interface to finish the database creation and set the administrator password for the new Self-Hosted cloud:
 

http://Your_server_ip_address/owncloud/

By default the Web interface is accessible over unencrypted (insecure) http://. It is a recommended practice (if you don't already have an HTTPS SSL certificate installed for the IP or the domain) to install one – either a self-signed certificate or, even better, use the LetsEncrypt CertBot to easily create a valid SSL certificate for your domain for free, for example as sketched below.
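A minimal CertBot sketch on Debian 9 (cloud.example.com is just a placeholder for your own domain):

server:~# apt-get install --yes certbot python-certbot-apache
server:~# certbot --apache -d cloud.example.com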

 

installing-OwnCloud-Web-Config-User-Pass-interface-Owncloud-10-on-Debian-9-Linux-howto

Just fill in your desired user / pass and pass on the database user / password / db name (if required you can also set a different location for the data directory from the default one, /var/www/owncloud/data).

Click Finish Setup and That's all folks!

owncloud-server-web-ui-interface

OwnCloud is successfully installed on the server; you can now go and download a Mobile App or Desktop application for whatever OS you're using and start using it as a Dropbox replacement. At a certain moment you might also want to consult the official User Manual documentation, as you will probably need further information on how to manage your owncloud.
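A handy extra: much of the day-to-day administration can also be done from the command line via the occ tool bundled with ownCloud, for example to check the installation status (run from the owncloud web root as the web server user):

server:~# cd /var/www/owncloud
server:~# sudo -u www-data php occ status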

Enjoy !