Posts Tagged ‘make’

How to make a 27 inch monitor work at 2560×1440 with a VirtualBox Linux VM

Wednesday, October 4th, 2023

make-virtualbox-with-linux-work-on-2k-2560x1440-howto

I've bought a "second hand" refurbished EIZO FlexScan EV2760 monitor, an awesome monitor, from Kvant Serviz, a reseller of second-hand electronics located on the territory of the Bulgarian Academy of Sciences (BAN / BAS) and originally created by BAS people for the BAS people. I'm pretty happy with it for doing my daily job as a system administrator, especially as the monitor has seen very little screen time, only 256 hours of use (less than a year of full-time use), whereas EIZO guarantees their monitors for up to 5 full years of use.

Those who deal with graphics, such as designers and people doing art on computers, have known the EIZO brand of monitors for quite some time now, and as many of those people use Windows or Macintosh machines, these monitors have mainly been designed to work optimally at higher resolutions with Windows / Mac computers.
My work PC, a Dell Latitude 5510, has been running perfectly with the EIZO over its HDMI cable under Windows 10. However, as I'm using VirtualBox virtual machines with CentOS Linux, the VM did not automatically detect the highest resolution this monitor supports: 2K, 2560×1440 at 60 Hz, which is the best one to fit more things on the screen and hopefully also easier on the eyes. The EcoView feature should also be good for the eyes, as EcoView by EIZO tries to lower the monitor brightness according to the light in the room to minimize eye strain. The EcoView mode is, I guess, a bit like the famous Eye-Care feature of BenQ monitors.
I'm mentioning all these display specifics as I spent quite some time learning the very basics about monitors when my old 24 inch EIZO FlexScan 2436W monitor started to wear out with time. It does not support HDMI input, so I had to use a special cable adapter that converts the signal from HDMI to DVI (and I'm not sure how this really affects the eyes); besides, DVI quality is said to be a little worse than HDMI, as far as I have read on the topic online.

Well anyway, currently I'm a happy owner of the EIZO EV2760 monitor, which has the following specs and a full set of inputs:

  • 27" In-Plane Switching (IPS) Panel
  • DisplayPort | HDMI | DVI-D | 3.5mm Audio
  • 2560 x 1440 Native Resolution
  • 1000:1 Typical Contrast Ratio
     

I tried to make the monitor work with Linux, and my first assumption from what I had read was that I have to reinstall the Guest Additions tools in the VirtualBox VM by adding the Guest Additions CD via the VirtualBox GUI interface:

Devices -> Insert Guest Additions CD Image

virtualbox-resolutions-screenshot

But I got an error that the Guest Additions tools ISO is missing,
so I eventually resolved it by remounting and reinstalling the Guest Additions tools with the following set of commands:

[root@localhost test]# yum install perl gcc dkms kernel-devel kernel-headers make bzip2
[root@localhost test]# cd /mnt/cdrom/
[root@localhost cdrom]# ls
AUTORUN.INF  runasroot.sh                       VBoxSolarisAdditions.pkg
autorun.sh   TRANS.TBL                          VBoxWindowsAdditions-amd64.exe
cert         VBoxDarwinAdditions.pkg            VBoxWindowsAdditions.exe
NT3x         VBoxDarwinAdditionsUninstall.tool  VBoxWindowsAdditions-x86.exe
OS2          VBoxLinuxAdditions.run

 


[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Verifying archive integrity… All good.
Uncompressing VirtualBox 6.1.34 Guest Additions for Linux……..
VirtualBox Guest Additions installer
Removing installed version 6.1.34 of VirtualBox Guest Additions…
Copying additional installer modules …
Installing additional modules …
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules.  This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions:   /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel
3.10.0-1160.80.1.el7.x86_64.
ERROR: Can't map '//etc/selinux/targeted/policy/policy.31':  Invalid argument

ERROR: Unable to open policy //etc/selinux/targeted/policy/policy.31.
libsemanage.semanage_read_policydb: Error while reading kernel policy from /etc/selinux/targeted/active/policy.kern. (No such file or directory).
OSError: No such file or directory
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted

 

 

The solution to that was that a reinstall of the targeted SELinux security policy was necessary:

[root@localhost test]# yum reinstall selinux-policy-targeted


And of course re-run the Guest Additions installer afterwards:

[root@localhost cdrom]# ./VBoxLinuxAdditions.run 

Unfortunately that did not resolve it either. Even shutting the VM down and starting it again with the Video Memory of the emulated graphics hardware raised in Settings from 16 MB to 128 MB did not make the VirtualBox interface offer to set the resolution from
 

View -> Virtual Screen 1 (Resize to 1920×1200)

to anything higher than that.

After a bit of googling I found that some newer monitors do not seem to be detected properly by the xrandr command, and a few extra xrandr commands need to be run to make the 2K resolution 2560×1440 @ 60 Hz work under the Linux virtual machine.

These are the extra xrandr commands that make it happen:

# xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
# xrandr --addmode Virtual1 2560x1440_60.00
# xrandr --output Virtual1 --mode "2560x1440_60.00"
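
For reference, such modeline numbers do not have to be copied blindly from the Internet; on most Linux guests the cvt utility (assuming it is installed) can generate a VESA CVT modeline for any resolution and refresh rate, and its output can be fed straight to xrandr --newmode:

# print a CVT modeline for 2560x1440 at 60 Hz; copy everything after the word "Modeline"
cvt 2560 1440 60
# then create and assign the mode exactly as in the commands above:
# xrandr --newmode <name and timings taken from the cvt output>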

As these settings need to be re-applied the next time the virtual machine starts, it is a good idea to place the commands in a tiny shell script:

[test@localhost ~]$ cat xrandr-set-resolution-to-2560x1440.sh 
#!/bin/bash
xrandr --newmode "2560x1440_60.00" 311.83  2560 2744 3024 3488  1440 1441 1444 1490  -HSync +Vsync
xrandr --addmode Virtual1 2560x1440_60.00
xrandr --output Virtual1 --mode "2560x1440_60.00"


You can download the xrandr-set-resolution-to-2560x1440.sh script from here.
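
To avoid running the script by hand after every login, one possible approach (a sketch, assuming a desktop session that honours XDG autostart, e.g. GNOME on CentOS 7, and assuming the script lives in the test user's home directory) is to register it as an autostart application:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/xrandr-2560x1440.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Set 2560x1440 resolution
Exec=/home/test/xrandr-set-resolution-to-2560x1440.sh
X-GNOME-Autostart-enabled=true
EOF
chmod +x ~/xrandr-set-resolution-to-2560x1440.sh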

Once the commands are run, to make the new resolution take effect in VirtualBox, you can simply put the VM in full-screen mode via


View -> Full-Screen Mode (can be triggered from the keyboard by pressing Right CTRL + F together)

[test@localhost ~]$ xrandr --addmode Virtual1 2560x1440_60.00
[test@localhost ~]$ xrandr --output Virtual1 --mode "2560x1440_60.00"
[test@localhost ~]$ xrandr 
Screen 0: minimum 1 x 1, current 2560 x 1440, maximum 8192 x 8192
Virtual1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1920x1200     60.00 +  59.88  
   2560x1600     59.99  
   1920x1440     60.00  
   1856x1392     60.00  
   1792x1344     60.00  
   1600x1200     60.00  
   1680x1050     59.95  
   1400x1050     59.98  
   1280x1024     60.02  
   1440x900      59.89  
   1280x960      60.00  
   1360x768      60.02  
   1280x800      59.81  
   1152x864      75.00  
   1280x768      59.87  
   1024x768      60.00  
   800x600       60.32  
   640x480       59.94  
   2560x1440_60.00  60.00* 
Virtual2 disconnected (normal left inverted right x axis y axis)
Virtual3 disconnected (normal left inverted right x axis y axis)
Virtual4 disconnected (normal left inverted right x axis y axis)
Virtual5 disconnected (normal left inverted right x axis y axis)
Virtual6 disconnected (normal left inverted right x axis y axis)
Virtual7 disconnected (normal left inverted right x axis y axis)
Virtual8 disconnected (normal left inverted right x axis y axis)

Tadadadam! That's all folks, enjoy having your 27 inch monitor running at 2560×1440 @ 60 Hz 🙂
 

 

Saint martyr Angel of Lerina – a Bulgarian saint confessor and the Day of Saint Archangel Michael and of all Angels Archangels and Heavenly Powers

Wednesday, November 9th, 2022

saint_archangel_Michaill_Joanikij_papa_Vitanov_1820_Trjavna_Bulgaria

Saint Archangel Michael (Church of Saint  Archangel Michael Tryavna, Bulgaria) iconographer Yoanikij Papavitanov

On the 8th of November in the Bulgarian Orthodox Church we celebrate the day of remembrance of the gathering of Archangel Michael with all the Angels, Archangels, Cherubim and heavenly powers that have kept loyal to the Holy Trinity God – The Father, The Son, and The Holy Spirit.
The same archangels and powers, who could do what they want, were created in the beginning of time, after God, as helper spirits to God and man.

The same angels also support the whole universe with their deeds of love. They sustain the waters, make the wind blow, the clouds move and give rain, and the earth give its fruits; they possess and give wisdom, or transfer secret messages from God to man when sent.

Others protect all Christians and people from the evils of the fallen angels, who chose to disobey the True God Christ and follow the master of the evil spirits, whose place is in the burning Gehenna and whose time is running out.

They help women in birth pain (like my sister Stanimira, whose time to give birth is approaching), and they make the organism of man function properly; they give the physics that makes the stars shine in heaven and the planets and heavenly bodies move. Each and every place, country and Church has its own guardian angels. And they are of a great multitude; the Church fathers say a lot about the angels, yet much is still unknown and will be revealed on the Judgement day, when everyone will stand in front of God, see the realities of Heaven and Hell, and stumble in fear seeing the glory of the archangels and cherubim (burning out of love for God and man), made in the likeness of the Holy God.

The Orthodox Church sticks clearly to the teaching of Saint Dionysius the Areopagite (often called in theology Pseudo-Dionysius), who was one of the important apostles of Christ, an Athenian judge at the Areopagus Court in Athens, and lived in the first century. A convert to Christianity, he is venerated as a saint by multiple denominations.

 

The writings of Saint Dionysius the Areopagite hold great significance for the Orthodox Church. Four books of his have survived to the present day:

 

On the Celestial Hierarchy, On the Ecclesiastical Hierarchy, On the Names of God, On Mystical Theology

In addition, there are ten letters to various people.

The book On the Celestial Hierarchies was written actually in one of the countries of Western Europe, where Saint Dionysius was preaching. In it he speaks of the Christian teaching about the angelic world. The angelic (or Celestial-Heavenly) hierarchy comprises the nine angelic Ranks:

  • Seraphim
  • Cherubim
  • Thrones
  • Dominions
  • Powers
  • Authorities
  • Principalities
  • Archangels
  • Angels

 

The account of the Synaxis of the Bodiless Powers of Heaven is located under November 8.

saint-Agatangel-Bitolski-Bulgarian-saint-icon

The purpose of the divinely-established Angelic Hierarchy is the ascent towards godliness through purification, enlightenment and perfection. The highest ranks are bearers of divine light and divine life for the lower ranks. And not only are the sentient, bodiless angelic hosts included in the spiritual light-bearing hierarchy, but also the human race, created anew and sanctified in the Church of Christ.

There is much to be said about Angels and Archangels through the years from ancient times: they can heal and help, grant special powers to man, and many, many more things. There were innumerable heresies that over-deified the heavenly powers, especially the Gnostics, and that is a well-known fact. For those who want to read about angels and their hierarchy there is a lot to read and learn; angels have helped the saints in their hardships in the fight with evil, and there is really a lot more on this for those who want to learn further.

But what is less known is that here, in today's relatively small country of Bulgaria, we have a local saint, Angel of Lerina, who was born in a Bulgarian family and stems from a Bulgarian village. As his endeavour and confession of his love for Christ and the Church was enormous, he suffered martyrdom for Christ in the 18th century, during the times Bulgaria was enslaved by the Ottoman Turks. As there is not much written about Saint Angel Lerinski (of Lerina), I dedicate this small article to the glory of his memory. The article is also in memoriam of my great-great-great-grandfather, who was also named Angel, perhaps in glory of Saint Angel of Lerina.

The Life of Saint Angel of Lerina


Saint-Angel-Agathangel-of-Lerina-orthodox-icon

Saint New Martyr Angel of Lerina (Bitolski) – picture source Wikipedia

All the sources about the holy martyr Angel of Lerina that we have found cite the account of Saint Paisius of Hilendar as the main source for the life of the new martyr, called Angel or Agathangel. This is what St. Paisius writes about him in the "History of Slavonic Bulgaria":
           "In 1750, in Bitol, where the Turkish and Macedonian Pasha sits, the Turks tortured and beheaded a young man, handsome in face and stature, for the Christian faith. Many forced and tormented him to renounce Christ, but he wisely and courageously denounced their godless faith. The Bishop of Bitola recorded many of his answers, described his sufferings in Greek. And God showed a great sign over his powers. His name was Angel from the village of Lerin. This holy martyr Angel shone in our time in the Bulgarian land."
          The Bulgarian Orthodox Church honors the holy new martyr on November 8, the feast of the holy Archangels. Probably the veneration of the saint in our church dates from the time when he was martyred, because his martyrdom was described by Saint Paisius of Hilendar immediately after it happened, since Saint Paisius was his contemporary.

          Greek information about the new martyr Angel Lerinski appeared only in recent years.

In the electronic version of "Οι Νεομάρτυρες της Булгариас" (New Martyrs Bulgarian) Αρχιμανδρίτου του Οικουμενικού Θρόνου Θωμά Ανδρέου Ιεροκήρυκος Ιεράς Μητροπόλεως Ελευθερουπόλεως (Archimandrite of the Ecumenical See Thomas Andreu, Preacher of the Eleftheroupolis Holy Metropolis), Kavala, 2011, p.88, we read :

         "Another case of a new martyr of Greek origin is that of Angel (or Agatangel) from today's Florina (in Bulgarian Lerin). The 2009 calendar of the Holy Metropolis of Florin, Prespa and Eordei honors this new martyr, who was martyred in the monastery of Pelagonia (now Bitola, Macedonia) on February 17, 1727*. The book "History of Slavonic Bulgaria" by Paisiy Hilendarski talks about the martyrdom of the new martyr ("his name was Angel or Agatangel and he was from the village of Florina")

Saint-Agatangel_Bitolski-Greek-icon

…..
          Little is known about the new martyr. We know that he was born in 1732 in Florina, in the sanjak (prefecture) of Bitola (Monastery). When he grew up, he became a tall and handsome young man. At the age of 18, the Turks tried to convert him to Islam, but Angel – although very young – did not succumb to the temptations and then bravely accepted martyrdom. In his book Paisius Hilendarski mentions that: "In 1750 in the monastery… the Turks tortured and slaughtered a handsome young man because of his Christian faith… His name was Angel and he was from the village of Florina" His testimony in the monastery was attended by the local Greek metropolitan, who described his courage and the intelligent and logical answers he gave in court. Due to the fact that he condemned the Muslim faith with particular wisdom and courage, he was beheaded when he was only 18 years old in 1750. The Bulgarian Orthodox Church honors his memory on November 8, during the Feast of the Archangels…."

          Additional information about the holy new martyr Angel Lerinsky can be found on one of Florina's sites (http://agiospanteleimonas-florina.blogspot.com/2010/06/blog-post_8186.html.)


          The Metropolitan of Florini, Prespa and Eordaia, Theoclitus, addresses the citizens on the occasion of the decision to start the veneration of the holy martyr Agathangel of Florina (June, 2010):

          "With special feelings of joy, emotion, holy contentment and reverence, I turn to you, the blessed children of the Greek Macedonian land, to become participants in the great spiritual joy experienced by our local Church for the first official celebration of the memory of the holy new martyr Agathangel in the seat of our metropolis Florina. It is already known to all of you. that the holy new martyr Agathangel, martyred in the Pelagonian monastery, originated from Florina, is our fellow citizen. At an early age he left Florina and went to Vutelion of Byzantium, to the monastery, seeking better living conditions. There, exercising the profession of shoemaker, he soon distinguished himself by his honesty and his diligence. …. But what distinguished him from the young people of his time was the pure and firm faith he had in Christ and in His "orthodox church". He loved Christ more than anything else in his life. No other love could "steal" the love that Agathangel had in his heart for Christ, he loved Him simply, purely, with all his heart, with all his strength, he loved Him as his poor parents and his blessed ancestors loved Him.
         

Along with the love for Christ, the saint had love for his homeland, conquered Macedonia. Almost four hundred years of slavery count the long-suffering "Greek Macedonians".
The Turkish conquerors treated them with cruelty. Sometimes with flattery, sometimes with threats, sometimes with violence, they try to make them change their faith. To deny Christ. To renounce the Orthodox faith and become Muslims.
And whoever renounces his faith renounces his homeland.

          Agathangel's heart was troubled by the fact that several of his fellow Roman Christians did not withstand the temptations or the violence, denying Christ and the country. His brave heart rebelled. He could not bear the Orthodox faith to be dishonored. For this, when during the three-day Bairam, which is celebrated after Ramadan, the forced conversion of the Orthodox increased, this young boy, not yet twenty years old, went to Constantinople, where he received a Sultan's firman, which forbade the forced conversion in the area of ​​Pelagonia.

On his return to the Monastery, the saint was arrested by those outraged by the Turkish Sultan's decision, and after cruel torture, he was beheaded on February 17, 1727.

           In a meeting we held in the Holy Metropolis, in which, in addition to the Metropolitan, the Honorable Prefect of Florini Mr. Ioannis Voskopoulas, the Mayor of Florini Mr. Stefanos Papanastasiou, the President of TEDC of N. Florinis, Mr. Dimitrios Iliadis, and the President of the monasteries of N. Florinis, Mr. Theodoros Vosdou, decided to jointly hold events in honor of the holy new martyr Agathangel…”

* 1727 is mentioned as the year of the martyrdom of St. Angel Lerinski in some sources, and sometimes it is mentioned together with 1750, in the same source. This discrepancy in the years of the martyrdom leaves the doubt that different martyrs are being talked about.


For this reason, we cannot say for sure whether the reliquary with the relics – the holy head of the new martyr Agathangel in the Kykkos monastery in Cyprus, which is accompanied by the same description of his life but with the date February 17, 1727 indicated – refers to the same martyr of whom Saint Paisius of Hilendar speaks.

Create simple proxy http server with netcat ( nc ) based tiny shell script

Tuesday, January 26th, 2021

use-Netcat_proxy-picture

The need for a proxy server is inevitable nowadays, especially if you have servers located in paranoid security environments where virtually everything is passed through some kind of proxy. At work we have recently set up CentOS Linux release 7.9.2009 on an HP ProLiant DL360e Gen8 (host named rhel-testing).

HP DL360e machines are quite old nowadays, but since we have spare servers and we can refurbish them for use as a local internal testing Hypervisor, it is okay for us. The machine is attached to a rack connected to a secured Demilitarized Zone LAN (DMZ network), which is so heavily filtered that even simple access to the local company homebrew RPM repository is not possible from the machine.
Thus, to install and remove software on the machine we needed a way to make the yum repositories available, and it seems the only way was to use a proxy server (situated on another accessible server, which we use as a jump host to access the testing machine).

Since opening an additional firewall request was time-consuming nonsense and the machine is just for testing purposes, we had to come up with a solution where we can somehow access the local RPM repository storage server http://rpm-package-server-repo.com/, for which we have a separate /etc/yum.repos.d/custom-rpms.repo definition file created.

This is why we needed a simplistic way to run a proxy. As we did not have an easy way to install privoxy / squid / haproxy or an Apache webserver configured as a proxy (installing one of those relatively giant pieces of software means copying many RPM packages and manually satisfying dependencies), we looked for a simplistic way to run a proxy server on the jump-host machine, host A.

A note to make here is that the jump host that was about to serve as a proxy already had HTTP access towards the RPM repository http://rpm-package-server-repo.com and could normally fetch packages from it with curl or wget …

To create a simple proxy server out of nothing, I googled a bit, thinking that it should be possible either with Bash's TCP/IP capabilities or with some other small C tool compiled as a static binary, just to find out that the netcat swiss army knife wrapped in a tiny bash script is capable of doing the trick.

The jump-host machine that was about to be used as a proxy server for HTTP traffic did not have access to TCP port 8888 enabled (the port's firewall policies were prohibiting access to it). Since 8888 was the port targeted to run the proxy on, to allow TCP port 8888 accessibility from the testing RHEL machine towards the jump host, we first had to issue on the jump host:

[root@jump-host: ~ ]# firewall-cmd --permanent --zone=public --add-port=8888/tcp
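
Note that a rule added with --permanent is only written to the firewalld configuration and does not take effect on the running firewall until it is reloaded, so (assuming firewalld is the active firewall, as on a default CentOS 7) an extra reload step is normally needed:

[root@jump-host: ~ ]# firewall-cmd --reload
[root@jump-host: ~ ]# firewall-cmd --list-ports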

Once the script was placed under /root/tcp-proxy.sh on the jump host, we had to run it in a never-ending loop inside a GNU screen session to make sure it runs forever.

The original tcp-proxy.sh script used is:
 

#!/bin/sh -e

if [ $# != 3 ]
then
    echo "usage: $0 <src-port> <dst-host> <dst-port>"
    exit 0
fi

# temporary work directory for the named pipes, cleaned up on exit
TMP=`mktemp -d`
BACK=$TMP/pipe.back
SENT=$TMP/pipe.sent
RCVD=$TMP/pipe.rcvd
trap 'rm -rf "$TMP"' EXIT
mkfifo -m 0600 "$BACK" "$SENT" "$RCVD"
# dump the traffic passing in each direction to the terminal
sed 's/^/ => /' <"$SENT" &
sed 's/^/<=  /' <"$RCVD" &
# listen on the local port and pipe everything to/from the destination host
nc -l -p "$1" <"$BACK" | tee "$SENT" | nc "$2" "$3" | tee "$RCVD" >"$BACK"

 

The above tcp-proxy.sh script you can download here.

I tested the script once and it worked; the script syntax is:

 [root@jump-host: ~ ]#  sh tcp-proxy.sh
usage: tcp-proxy.sh <src-port> <dst-host> <dst-port>


To make it work for a one-time connection I ran it like so:

 

 [root@jump-host: ~ ]# sh tcp-proxy.sh 8888 rpm-package-server-repo.com 80

 

 

To make the script work all the time I had to use a small one-liner infinite bash loop, which goes like this:

[root@jump-host: ~ ]# while [ 1 ]; do sh tcp-proxy.sh 8888 rpm-package-server-repo.com 80; done
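
If you prefer not to keep an interactive terminal open, one way to achieve the same (a sketch, assuming GNU screen is installed on the jump host and the script sits in /root/tcp-proxy.sh) is to start the loop in a detached screen session:

[root@jump-host: ~ ]# screen -dmS tcp-proxy bash -c 'while true; do sh /root/tcp-proxy.sh 8888 rpm-package-server-repo.com 80; done'

# re-attach later to watch the proxied traffic
[root@jump-host: ~ ]# screen -r tcp-proxy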

On rhel-testing we had to configure yum and all other applications to use the proxy temporarily via:
 

[root@rhel-testing: ~ ]# export http_proxy=http://jump-host_machine_accessibleIP:8888


And then use the normal yum check-update && yum update to apply the latest RPM package security updates to the rhel-testing machine.

The nice thing about tcp-proxy.sh with netcat in an infinite loop is that you will see a binary copy of the traffic flowing through the script, which will make you feel like in those notorious Hackers movies ! 🙂

The downside is that sometimes some connections, RPM database updates or RPM downloads could get cancelled due to some kind of network issues.

To make the connection issues occurring with the improvised proxy server go away, we finally used a slightly modified version of the original netcat script, which reads like this:
 

#!/bin/sh -e

if [ $# != 3 ]
then
    echo "usage: $0 <src-port> <dst-host> <dst-port>"
    exit 0
fi

TMP=`mktemp -d`
BACK=$TMP/pipe.back
SENT=$TMP/pipe.sent
RCVD=$TMP/pipe.rcvd
trap 'rm -rf "$TMP"' EXIT
mkfifo -m 0600 "$BACK" "$SENT" "$RCVD"
sed 's/^/ => /' <"$SENT" &
sed 's/^/<=  /' <"$RCVD" &
nc --proxy-type http -l -p "$1" <"$BACK" | tee "$SENT" | nc "$2" "$3" | tee "$RCVD" >"$BACK"


The modified version tcp-proxy1.sh with the --proxy-type http argument passed to netcat you can download here.

With --proxy-type http, yum check-update works normally, just like with any fully functional http_proxy configured.

The next step was to make the configuration permanent: you can either add the export to /root/.bashrc or /etc/bashrc (if you need the setting to be system-wide for every user that logs in to the Linux system), or add it to /etc/environment:

[root@rhel-testing: ~ ]# echo "http_proxy=http://jump-host_machine_accessibleIP:8888/" >> /etc/environment


If you need the newly built netcat TCP proxy only for the yum package update tool, include the proxy only in /etc/yum.conf:

[root@rhel-testing: ~ ]# vi /etc/yum.conf
proxy=http://jump-host_machine_accessibleIP:8888/


That's all. Now you have a proxy out of nothing with just a simple netcat, enjoy!

Start Stop Restart Microsoft IIS Webserver from command line and GUI

Thursday, April 17th, 2014

start-stop-restart-microsoft-iis-howto-iis-server-logo
For a decommissioning project I recently had the task to stop Microsoft IIS on a Windows Server system.
If you have been into security for a while, you know well how many vulnerabilities the Microsoft IIS (Internet Information Services) webserver used to have. Nowadays things with IIS are better, but anyway it is better not to use it if possible …

No matter what the reason, if you need to make IIS stop serving web pages, here is how to do it via the command line:

At Windows Command Prompt, type:

net stop WAS

If the command returns an error message, then to stop it type:

net stop W3SVC

stop-microsoft-IIS-webservice
Just in case you have to start it again run:

net start W3SVC

start-restart-IIS-webserver-screenshot
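
Alternatively, on most Windows Server versions the whole set of IIS services can be stopped, started or restarted in one go with the iisreset utility from an elevated command prompt:

iisreset /stop
iisreset /start
iisreset /restart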

For those who prefer to do it from the GUI, launch the services.msc command from Windows Run:

> services.msc

services-msc-stop-microsoft-iis-webserver

In the list of services, look up
IIS Admin Service and HTTP SSL
a) (Click over it with right mouse button -> Properties)
b) Set Startup type to Manual
c) Click Stop Button

You're done, now IIS is stopped. To make sure it is stopped you can run from cmd.exe:

telnet localhost 80

When the webserver is not running, you should get 'Could not open connection to the host. on port 80: Connection failed', like shown in the screenshot above.

Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

Deny DHCP addresses by MAC ignore MAC to not be DHCPD leased on GNU / Linux howto

I have not blogged for a long time due to being on a few weeks of vacation and being at home with a small cute baby. However, as a hardcore and a bit dumb system administrator, I have spent some of my vacation working on bringing up www.pc-freak.net and the other websites hosted here as high-availability ones, living on 2 webservers running on a Master-to-Master MySQL replication backend database. This is all hosted on servers set to run as round-robin DNS hosts on 2 machines: one old Lenovo ThinkCentre Edge71 as well as a brand new real Lenovo server, a Lenovo ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure a good degree of Internet connectivity and to ensure the websites hosted on both machines are not going to die if one of the two configured fiber-optics Internet providers, Bergon.NET, has some issues, I've rented another Internet provider line from the VIVACOM Mobile Fiber Internet provider: a 1 Gigabit fiber-optics line.
Next to that, to guarantee that the Database, Webserver, Mail Server, Memcached and other running services do not hit downtimes due to an electricity power outage, two powerful FSP Fortron Uninterruptible Power Supplies (UPS) are connected to the servers, each of which can keep the machine and the connected switches and servers up for about 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the main node is set to hand out the IPs. As there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines, the IPs are being distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly, I had again a somewhat weird non-standard requirement: to give some of the computers within the network static IP addresses while the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and to add a filter for the MAC addresses of the machines configured with static IPs, to prevent the DHCP daemon from automatically reassigning IPs to those machines.

After a bit of googling and pondering I've done it for some of the machines. Therefore, to save others the effort of looking around for how to configure certain computers / servers network card (interface) MAC addresses on the LAN to use static IPs, and how to instruct the DHCP server to ignore any broadcast IP address lease requests destined to a set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy-paste efforts, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course I have changed the MAC addresses to avoid leaking data, but I guess the idea behind the MAC address ignore is quite clear.

The main configuration doing the trick to ignore certain MAC addresses reachable on the connected hardware switch is like so:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC technique is described on the isc.org distribution lists here, but it seems the documentation on how to deny / ignore DHCP addresses by MAC address on Linux is quite obscure and limited online.

As you can see in the above config, the time after which an IP is freed up and a new IP lease is done by the server is greatly increased: DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time to about 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;


As you see, to guarantee that name resolving always works as expected, I have configured the Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make: after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process.

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
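
If the configuration test passes, restart the running DHCP daemon so it picks up the changes; on a Debian 10 host with systemd the service is usually called isc-dhcp-server (adjust the name to your distribution / init system):

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server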
 

That's all folks. With this sample config the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone, enjoy, and in case you need colocation of a server or website hosting for a really cheap price on these newly set up high-availability machines described here, open an inquiry on https://web.www.pc-freak.net.

 

Rsync copy files with root privileges between servers with root superuser account disabled

Tuesday, December 3rd, 2019

 

rsync-copy-files-between-two-servers-with-root-privileges-with-root-superuser-account-disabled

Sometimes, on servers that follow high security standards in companies complying with PCI (Payment Card Data Security) standards, it is necessary to have very weird configurations in order to do trivial things, such as syncing files between servers with root privileges, in somewhat unusual ways. This is the case, for example, if due to security policies you have disabled root user logins via the SSH server and you still need to synchronize directories such as, let's say, /etc, /usr/local/etc/ or /var/ with root:root user and group ownership.

Root user logins in sshd are controlled by a variable in /etc/ssh/sshd_config that on most default Linux OS
installations is switched on, e.g.:

grep -i permitrootlogin /etc/ssh/sshd_config
PermitRootLogin yes


Many corporations use vulnerability scanners such as Qualys, which always include in their remote server scans of SSH port 22 a check that PermitRootLogin is turned off with:

 

PermitRootLogin no


In this article, I'll explain a scenario where we have synchronization between 2 or more servers, Server A / Server B (or whatever number of servers), that have already turned off this value but still need to
synchronize directories traditionally owned and writable only by the root superuser. Here are a few easy steps to achieve it.

 

1. Add rsyncuser to Source Server (Server A) and Destination (Server B)


a. Execute on Src Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files as root src_host' -d /home/rsyncuser -m rsyncuser

 

b. Execute on Dst Host:

 

groupadd rsyncuser
useradd -g rsyncuser -c 'Rsync user to sync files dst_host' -d /home/rsyncuser -m rsyncuser

 

2. Generate RSA SSH Key pair to be used for passwordless authentication


a. On Src Host
 

su - rsyncuser

ssh-keygen -t rsa -b 4096

 

b. Check the generated key pair under .ssh/ and make sure the directory content looks like this:

 

[rsyncuser@src-host .ssh]$ cd ~/.ssh/;  ls -1

id_rsa
id_rsa.pub
known_hosts


 

3. Copy id_rsa.pub to Destination host server under authorized_keys

 

scp ~/.ssh/id_rsa.pub  rsyncuser@dst-host:~/.ssh/authorized_keys

 

Next, fix the permissions of the authorized_keys file for rsyncuser, as anyone who has access to that file (i.e. any user account) on the system
could steal the key and use it to run rsync commands and overwrite files remotely, e.g. overwrite /etc/passwd and /etc/shadow with custom crafted credentials,
and hence hack you 🙂
 

Hence, on Destination Host Server B fix the permissions with:
 

su - rsyncuser; chmod 0600 ~/.ssh/authorized_keys
[rsyncuser@dst-host ~]$


An alternative way for the lazy sysadmins is to use the ssh-copy-id command

 

$ ssh-copy-id rsyncuser@192.168.0.180
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.0.180's password: 
 

 

For improved security, to restrict rsyncuser to be able to run only a specific command (such as one very specific script) instead of being able to run any command, it is good to use the little-known command= option
when creating the authorized_keys entry, as in the sketch below.
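
A minimal sketch of what such a restricted entry in ~rsyncuser/.ssh/authorized_keys could look like is shown below; the wrapper path /usr/local/bin/rsync-wrapper.sh is a hypothetical script of yours that inspects $SSH_ORIGINAL_COMMAND and only executes the rsync command you expect:

command="/usr/local/bin/rsync-wrapper.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza...rest-of-the-public-key... rsyncuser@src-host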

 

4. Test ssh passwordless authentication works correctly


For that, run a normal ssh from rsyncuser.

On Src Host

 

[rsyncuser@src-host ~]$ ssh rsyncuser@dst-host


Perhaps here is the moment to say that those who think enabling passwordless authentication is not secure enough and prefer to authorize rsyncuser via a password read from a secured file should take a look at my prior article on how to log in to a remote server with a password provided from the command line as a script argument / running the same commands on many servers.

5. Enable rsync in sudoers to be able to execute as root superuser (copy files as root)

 


For this step you will need to have sudo package installed on the Linux server.

Then, execute the following once logged in as root on the Destination Server (Server B):

 

[root@dst-host ~]# grep -q 'rsyncuser ALL' /etc/sudoers || echo 'rsyncuser ALL=NOPASSWD:/usr/bin/rsync' >> /etc/sudoers
 

 

Note that using rsync with ALL=NOPASSWD in /etc/sudoers could pose a high security risk for the system, as anyone authorized to run as rsyncuser is able to overwrite and
respectively nullify important files on Destination Host Server B and hence easily mess up the system; even shell script bugs could produce a mess. Thus perhaps a better solution to the problem
of copying files with root privileges while the root account is disabled is to rsync as a normal user to somewhere on Dst_host and use some kind of additional script running on Dst_host, let's say via a cron job, that
will gently copy the files on a selective basis.

Perhaps an even better solution would be, instead of granting ALL=NOPASSWD:/usr/bin/rsync in /etc/sudoers, to grant ALL=NOPASSWD:/usr/local/bin/some_copy_script.sh
that will get triggered once the files are copied with the regular rsyncuser account.

 

6. Test rsync passwordless authentication copy with superuser works


Do some simple copy; let's say copy the encrypted tunnel configuration files located under /etc/stunnel on Server A to /etc/stunnel on Server B.

The general command to test is like so:
 

rsync -aPz -e 'ssh' '--rsync-path=sudo rsync' /var/log rsyncuser@$dst_host:/root/tmp/


This will copy the /var/log files to /root/tmp; you will get success messages for the copy and the files will be in the destination folder if the transfer is successful.

 

On Src_Host run:

 

[rsyncuser@src-host ~]$ dst=FQDN-DST-HOST; user=rsyncuser; src_dir=/etc/stunnel; dst_dir=/root/tmp;  rsync -aP -e 'ssh' '--rsync-path=sudo rsync' $src_dir  $user@$dst:$dst_dir;

 

7. Copying files with root credentials via script


Copying a bunch of predefined files is best handled by some shell script; the most simple version of it could look something like this.
 

#!/bin/bash
# On server1 (src server) run something like this
# On server2 (dst server) add in /etc/sudoers:
# rsyncuser ALL=NOPASSWD:/usr/bin/rsync

user='rsyncuser';

dst_dir="/root/tmp";
dst_host='$dst_host';   # replace $dst_host with the real destination hostname / FQDN
src[1]="/etc/hosts.deny";
src[2]="/etc/sysctl.conf";
src[3]="/etc/samhainrc";
src[4]="/etc/pki/tls/";
src[5]="/usr/local/bin/";

# loop over the source paths and copy each of them to the destination dir
for i in "${src[@]}"; do
rsync -aPvz --delete --dry-run -e 'ssh' '--rsync-path=sudo rsync' "$i" $user@$dst_host:$dst_dir"$i";
done


In the above script, as you can see, we define a bunch of files to be copied in a bash array and then run a loop to take each of them and copy it to the destination dir.
A sample version of the script, rsync_with_superuser-while-root_account_prohibited.sh, is here.
 

Conclusion


Let's do a short overview of what we have done here. First we created rsyncuser on SRC Server A and DST Server B, set up the key pair on both and copied the keys to make passwordless login possible,
set up rsync to be able to write as root on Dst_Host, tested the whole setup, and sketched a small script that can be used as a backbone to develop something more complex
to sync backups or keep system configurations identical – for example if you suspect that some user might change a config by mistake, etc.
In short, the security downsides of using rsync with NOPASSWD via /etc/sudoers were pointed out, along with a few ideas that could be worked on if you target even higher
PCI standards.

 

Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019

proftpd-linux-logo

Earlier I've blogged about a problem I encountered that prevented Active mode FTP connections on CentOS.
As I'm working for a client, building a brand new dedicated server purchased from the Contabo dedicated host provider with a freshly installed Debian 10 GNU / Linux, I had to configure a new FTP server. For some time now I prefer to use ProFTPD instead of VSFTPD because in my opinion it is more lightweight and hence a better choice for small UNIX server setups. During this I once again encountered the same problem of ACTIVE FTP not working from the FTP server to the FTP client host machine. But before explaining the fix, I find it worthy to explain briefly what an ACTIVE / PASSIVE FTP connection is.

 

1. What is ACTIVE / PASSIVE FTP connection?
 

In active mode, the client specifies which client-side port the data channel has been opened on, and the server initiates the connection. In other words, the default FTP client communication, for historical reasons, is in ACTIVE MODE:
the client, once connected to the server, tells the server to open an extra port or ports locally, via which the overall FTP data transfer will occur. In the early days of networking, when the FTP protocol was developed, security was not such a big concern; networks usually did not have firewalls at all, and the FTP data transfer host machine was running just a single FTP server and nothing more. In those early days, when FTP was not even used over the Internet and FTP data transfers happened on local networks, this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever-increasing complexity of the Internet / networks and the ever-tightening firewalls due to viruses and worms trying to own and exploit networks, creating unnecessary bulk loads, this has changed …

active-passive-ftp-explained-diagram
 

2. Installing and configure ProFTPD server Public ServerName

I've installed the server with the common cmd:

 

apt --yes install proftpd

 

And the only thing changed in the default configuration file /etc/proftpd/proftpd.conf was
ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the multiple FTP vulnerability-scanning script kiddie crawlers from knowing the exact OS version of the server, so this was changed to:

 

ServerName "MyServerHostname"

 

Though this is the much-criticized security through obscurity practice, doing so is still a good habit.
 

3. Create iptable firewall rules to allow ACTIVE FTP mode


But anyway, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 from incoming source ports in the range 1024:65535 (to enable ACTIVE FTP), on the firewall level with iptables INPUT and OUTPUT chain rules, like this:

 

iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT


Once the firewall allows FTP Active / Passive connections and the FTP server is listening, check the iptables rules and the FTP listener to confirm everything is properly configured:
 

/sbin/iptables -L INPUT |grep ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN    

 

4. Loading the nf_nat_ftp module and net.netfilter.nf_conntrack_helper (for backward compatibility)


The next step of course was to add the necessary modules nf_nat_ftp and nf_conntrack_sane, which make FTP properly track and forward ports with the respective firewall states on any of the above source ports, which are usually allowed by firewalls. Note that the port range 1024:65535 might be too liberal for paranoid sysadmins; in many cases, if ports are not filtered and you are a security freak, you can use a smaller range such as 60000-65535.

 

Here is the place to say that sysadmins who haven't recently had the task of configuring a new (unencrypted) file transfer server (as today Secure FTP is almost always used for file transfers for the sake of security) might be puzzled to find out that the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP active connections work, is nowadays substituted by nf_nat_ftp and nf_conntrack_sane.

To have the 2 modules permanently loaded on next boot on Debian Linux, they have to be added to /etc/modules.

Here is how a sample /etc/modules that loads the modules on the next system boot looks:

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
softdog
nf_nat_ftp
nf_conntrack_sane
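
To load the two modules immediately, without waiting for the next reboot, they can simply be inserted with modprobe and verified with lsmod (assuming the stock Debian kernel, which ships both modules):

modprobe nf_nat_ftp
modprobe nf_conntrack_sane
lsmod | grep -E 'nf_nat_ftp|nf_conntrack_sane'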


Next to say is that in newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_sane behaviour has changed, so simply loading the modules will not work, and if you test it with some FTP client (I used gFTP / ncftp from my Linux desktop), you are about to get FTP 'No route to host' errors like:

 

Cannot create a data connection: No route to host

 

cannot-create-a-data-connection-no-route-to-host-linux-error-howto-fix


Sometimes, instead of the 'No route to host' error, the error the FTP client might return is:

 

227 entering passive mode FTP connect connection timed out error


To make the nf_nat_ftp module work on newer Linux kernels, you hence have to enable the backwards-compatibility kernel variable

 

 

/proc/sys/net/netfilter/nf_conntrack_helper

 

echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

 

To make it permanent, add the above echo command to /etc/rc.local, if you have enabled the legacy single-file boot script /etc/rc.local as I do on servers (for how to enable rc.local on newer Linux distributions check here),

or alternatively set it via sysctl:

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make the change permanent (i.e. loaded on next boot):

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf

 

5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD


Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default) and next to it add the following line:

 

PassivePorts 60000 65534

 

Just for information, if instead of ProFTPD you experience the error on PureFTPD, the configuration value to set in /etc/pure-ftpd.conf is:
 

PassivePortRange 30000 35000
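
Keep in mind that the chosen passive port range also has to be reachable through the firewall, and the FTP daemon has to be restarted for the PassivePorts / PassivePortRange change to take effect; a rough sketch with iptables and systemd (adjust the range and service name to your setup) is:

iptables -A INPUT -p tcp --dport 60000:65534 -m state --state NEW,ESTABLISHED -j ACCEPT
systemctl restart proftpd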


That's all folks. Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it; the FTP client should be able to talk to the remote server in ACTIVE FTP mode as expected (the automatic fallback to passive mode will not be triggered anymore), and you will no longer get strange errors and connection failures in FTP clients such as gFTP.

Cheers 🙂

How to make Samba smbfs / cifs mount share location with user / pass credentials authenticate via file stored credentials

Friday, July 19th, 2019

how-to-use-username-and-password-to-authenticate-to-samba-share-server-or-linux-share-server-linux-samba-logo
That's pretty trivial, and perhaps if you have had to manage a Samba server or CIFS on a Linux host you already know it, but for beginners it might be interesting.

So in this short article I will explain how to configure smbfs / cifs authentication from a Linux host A (client) to a Linux host B (server) running the smbd and nmbd Samba daemons (the smbfs / cifs share server), by using an external authentication file, either with the mount command or via /etc/fstab, so that a preconfigured Samba share mount authenticates automatically.

Before you start to do anything with Samba on the Linux host A client machine, you will need as a minimum to have cifs-utils or smbfs installed (assuming you're on Debian Linux; you can check with dpkg -l and, if missing, install it via):

 

 

apt-get install cifs-utils

 

Or on older systems or for smbfs support

 

apt-get install smbfs

 

The general smbfs share mount command without a specified external credentials file would look like so:

 

mount //mynetworksharename/ /shares/data -o username=myusername,password=mypassword


So how do we use an external auth file to prevent the Samba share users and passwords from ending up in the root user's shell history all the time?

To do so is pretty straightforward: all you need to do is create the username / password credential variables in a file called, let's say, .smbcredentials or .cifs under some directory, e.g. /root/.smbcredentials.

One note here: many people prefer to store the password file under /root for security reasons, as the root directory is usually readable only by the administrator, which prevents a non-privileged user from reading the username / password stored there in plain text.

.smbcredentials is described in the mount.cifs man page; here is what it says about the credentials file syntax understood by the mount / mount.cifs command:
 

 

credentials=filename
    specifies a file that contains a username and/or password. The format of the file is:

         username=value
         password=value


For CIFS (Common Internet File System), which is the newer implementation of the old Windows Share (SMB) protocol available in newer Windows XP / 7 / 10 machines, to do the cifs mount manually:
 

mount -v -t cifs //WINSHARESERVER/topsecretfiles /mnt/network/ -o credentials=/mnt/creds-file

or use 

 

mount.cifs //WINSSHARE/topsecretfiles /mnt/network/ -o credentials=/root/.creds-file

 

For the old smbfs protocol, for backward compatibility with older Windows 2000 or Windows XP server PCs configured to also access the Linux Samba mount:

mount -t smbfs //WINHARESERVER/topsecretfiles /mnt/network/ -o credentials=/mnt/.smbcredentials


Once you have the .smbcredentials file defined, be sure to also protect it with properly set permissions like 0600 (rw for owner only), readable only by the root user:

chmod 0600 /root/.smbcredentials

Note that in this example .smbcredentials is set to be a hidden file on purpose; being a hidden file makes it slightly less visible if an intruder breaks into the server (an example of security through obscurity).

 

Next, let's see how to mount the Windows Samba share permanently with a predefined user / pass server login.

For many non-secured Windows shares one can use an /etc/fstab line definition as simple as:
 

//server-share-name/sharename  /mnt/shares/sharename  cifs  guest,uid=1000,iocharset=utf8  0  0


For password-protected Windows share mounts however, the simplest way is to add an /etc/fstab line like so:

 

 

 

//servername/sharename  /mnt/shares/sharename  cifs  username=msusername,password=mspassword,iocharset=utf8,sec=ntlm  0  0


Note that sec=ntlm is optional; the remote Samba server or Windows share server version has to support this kind of authentication, and in some cases you can safely remove sec=ntlm, so use it only when you know what you're doing. iocharset is good to have so that Cyrillic (e.g. Russian / Bulgarian), Chinese, Indian and other exotic language encodings are supported and properly shown on the mounted share; it should be properly defined …

Good permissions for the credentials file would be:

chmod 600 ~/.smbcredentials

The external /root/.smbcredentials password file should look like so:

 

 

 

 

 

 

 

# cat /root/.smbcredentials

username=msusername
password=mssecretpassword

 

 

Finally, the /etc/fstab record using the .smbcredentials file should be like so:
 

//share-server-name/sharename /mnt/shares/windowsshare cifs credentials=/home/ubuntuusername/.smbcredentials,iocharset=utf8,sec=ntlm 0 0


Note: you should already have the mount point
/mnt/shares/windowsshare created on the mount client with:

mkdir -p /mnt/shares/windowsshare


To mount the /etc/fstab-defined filesystem right away (it will also be mounted automatically on the next server boot), do:

mount /mnt/shares/windowsshare


or mount / remount all filesystems present in /etc/fstab with the common:

mount -a


(but be careful here, as this might cause you trouble if other NFS or whatever filesystems are already mounted and being read by clients).

And of course the remote Samba share (mount location) should be reachable with the ping and traceroute commands, and the remote server ports 139, 445 etc. should be up, open and connectable from the client, for example with a quick check like the one below.
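
A quick way to verify basic reachability of the SMB ports from the client (assuming a netcat variant that supports the -z port-scan flag) is something like:

ping -c 3 share-server-name
nc -zv share-server-name 445
nc -zv share-server-name 139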

If you face some issues when trying to mount the remote share with mount -t smbfs / mount.cifs, you can use smbclient with a debug option to find out more about the connectivity / authentication issue, using the SMB share server IP address instead of the hostname and, let's say, a debug level of 3, like so:

 

 

 

 

smbclient -d3 -L //10.5.8.118/Files -A /root/.smbcredentials

[0] smbclient -d3 -L //10.2.3.111/Files -A /home/acteam/.smbcredentials
lp_load_ex: refreshing parameters
Initialising global parameters
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[global]"
WARNING: The "syslog" option is deprecated
added interface eth0 ip=10.2.3.127 bcast=10.2.3.255 netmask=255.255.255.0
Client started (version 4.3.11-Ubuntu).
Connecting to 10.2.3.111 at port 445
Doing spnego session setup (blob length=120)
got OID=1.3.6.1.4.1.311.2.2.30
got OID=1.2.840.48018.1.2.2
got OID=1.2.840.113554.1.2.2
got OID=1.2.840.113554.1.2.2.3
got OID=1.3.6.1.4.1.311.2.2.10
got principal=not_defined_in_RFC4178@please_ignore
GENSEC backend 'gssapi_spnego' registered
GENSEC backend 'gssapi_krb5' registered
GENSEC backend 'gssapi_krb5_sasl' registered
GENSEC backend 'spnego' registered
GENSEC backend 'schannel' registered
GENSEC backend 'naclrpc_as_system' registered
GENSEC backend 'sasl-EXTERNAL' registered
GENSEC backend 'ntlmssp' registered
GENSEC backend 'ntlmssp_resume_ccache' registered
GENSEC backend 'http_basic' registered
GENSEC backend 'http_ntlm' registered
GENSEC backend 'krb5' registered
GENSEC backend 'fake_gssapi_krb5' registered
Got challenge flags:
Got NTLMSSP neg_flags=0x62898215
NTLMSSP: Set final flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal – Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
NTLMSSP Sign/Seal – Initialising with flags:
Got NTLMSSP neg_flags=0x62088215
Domain=[TMGRID] OS=[Windows Server 2012 R2 Standard 9600] Server=[Windows Server 2012 R2 Standard 6.3]

 

        Sharename       Type      Comment
        ---------       ----      -------
        ADMIN$          Disk      Remote Admin
        C$              Disk      Default share
        Files           Disk
        IPC$            IPC       Remote IPC
        MappedDrive     Disk
Connecting to 10.2.3.111 at port 139
Connecting to 10.2.3.111 at port 139
Connection to 10.2.3.111 failed (Error NT_STATUS_RESOURCE_NAME_NOT_FOUND)
NetBIOS over TCP disabled — no workgroup available

 

Summing it up

Let's summarize a bit: here I described how to mount smbfs and cifs shares with the mount command, how to define the automatic mount on server boot via /etc/fstab, how to manually mount an /etc/fstab-defined mount, what the syntax of the .smbcredentials user / pass file should be, and also how to debug problems with Samba / Windows server share mounts using the smbclient command.
 

Howto create Linux Music Audio CD from MP3 files / Create playable WAV format Audio CD Albums from MP3s

Tuesday, July 16th, 2019

cdburning-audio-music-cd-from-mp3-on-linuxcomapct-disc-tux-linux-logo

Recently my Mother asked me to prepare a Music Audio CD for her from a popular musician, the accordionist Stefan Georgiev from Dobrudja, who plays unique Bulgarian folklore music.

Some of the older people who still remember the age of the CD (and who most likely were into the CD burning Copy / Piracy business so popular in the countries of the ex-USSR in the years 1995-2000) will recall that old CD Player devices were not able to play the MP3 file format due to missing codecs, as MP3 was a proprietary compression that couldn't be implemented on every device without paying the patent fee to the MP3 compression rights holder.

The revolutionary MP3 compression used to be the booming standard for transferring music data due to its high compression ratio, which made an ordinary 5-minute song about 5MB (10+ times smaller than a classic uncompressed WAV Audio file). The downside was the CPU load that MP3 decoding puts on the reading device, requiring the CD Player to have a more powerful CPU.

Hence, due to the high licensing cost and the requirement for a more powerful CPU in the Audio Player, many producers of Audio Players never introduced MP3 to their devices, and MP3 never became a standard for the Audio CD, which remained the standard for music listening inside almost every car out there.

Nowadays there is very rarely a need to create an Audio CD, as audio CDs seem to be almost dead (as I heard from a Richard Stallman lecture, in the USA nowadays there is only one shop in the whole country where you can still buy CD or DVD drives), and perhaps only in third world regions such as parts of Africa are Audio CDs still in circulation.

No matter that, as we have an old Stereo CD player in my village, and perhaps many others still have some old retired CD reading devices, being able to burn out a CD is a useful thing.

Thus, to make mother happy and as a learning exercise, I decided to prepare the CD for her on my Linux notebook.
Here I'll shortly describe the steps I took to make it happen, which hopefully will be useful for other people who need to convert and burn an Audio CD from an MP3 Album.

 

1. First I downloaded the Album in Mp3 format from Torrent tracker

My homeland Bulgaria and my specific birth place, the city of Dobrich, has been famous for its folklore: Galina Durmushlijska and Stefan Georgiev are just 2 of the many names, along with Оркестър Кристал (Orchestra Crystal) and a multitude of gifted singers. My mother has a sentiment for Stefan Georgiev, as she listened to this gifted accordionist at her uncle's wedding.

Thus in my case this was the album Стефан Георгиев: Хора и ръченици от Добруджа (Stefan Georgiev: Horos and Ruchenitsas from Dobrudja); the album's full song list is here. If you're interested to listen to the Album and enjoy the unique folklore from Dobrudja (Dobrich), my home city, Stefan Georgiev's album Hora and Rachenica Dances is available here.

 


Stefan_Georgiev-old-audio-Music-CD-Hora-i-Rychenici-ot-Dobrudja-Horos-and-Ruchenitsas-from-Dobrudja-CD_Cover
I've downloaded them from the famous Bulgarian torrent tracker zamunda.net in MP3 format.
Of course you need to have a CD / DVD read and write device on the PC, which nowadays is not present on most modern notebooks and PCs, but as a last resort you can buy some cheap external optical CD / DVD drive for 25 to 30$ from Amazon / Ebay etc.

 

2. You will need to install a couple of programs on the Linux host (if you don't have them already)


To be able to convert from MP3 to WAV from the command line you will need as a minimum the ffmpeg and normalize-audio packages, as well as some kind of command line burning tool like cdrskin or wodim, which is
the fork of the good old well-known cdrecord, so in case you're wondering what happened with cdrecord, just
use wodim instead.

Below is a good list of tools (assuming you have enough HDD space) to install:

 

root@jeremiah:/ # apt-get install --yes dvd+rw-tools cdw cdrdao audiotools growisofs cdlabelgen k3b brasero wodim ffmpeg lame normalize-audio libavcodec58

 

Note that some of the above packages I've installed just for other Write / Read operations on DVD drives, and you might not need them, but it is good to have them, as some day in the future you will perhaps need to write out a DVD or something.
Also, k3b here is specific to KDE; if you're a GNOME user you could use the native GNOME Desktop app brasero, or if you're on a more minimalistic Linux desktop due to hardware constraints, use XFCE's native xfburn program.
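
For example, if you only want one of the GUI burners rather than the whole list from above, installing just the one matching your desktop should be enough; a sketch, assuming an apt based distro:

# GNOME desktop
apt-get install --yes brasero

# XFCE desktop
apt-get install --yes xfburn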

If you're a console / terminal geek like me you will definitely enjoy using cdw
 

root@jeremiah:/ # apt-cache show cdw|grep -i description -A 1
Description-en: Tool for burning CD's – console version
 Ncurses-based frontend for wodim and genisoimage. It can handle audio and

Description-md5: 77dacb1e6c00dada63762b78b9a605d5
Homepage: http://cdw.sourceforge.net/

 

3. Selecting preferred CD / DVD / BD program to use to write out the CD from Linux console


cdw uses wodim (which is a successor of the good old console cdrecord command most of us used on Linux in the past to burn out new RedHat / Debian / other Linux OS distro versions for upgrade purposes on Desktop and Server machines).

To check whether your CD / DVD drive is detected and ready to burn on your old PC issue:

 

root@jeremiah:/# wodim -checkdrive
Device was not specified. Trying to find an appropriate drive…
Detected CD-R drive: /dev/cdrw
Using /dev/cdrom of unknown capabilities
Device type    : Removable CD-ROM
Version        : 5
Response Format: 2
Capabilities   :
Vendor_info    : 'HL-DT-ST'
Identification : 'DVDRAM GT50N    '
Revision       : 'LT20'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Using generic SCSI-3/mmc   CD-R/CD-RW driver (mmc_cdr).
Driver flags   : MMC-3 SWABAUDIO BURNFREE
Supported modes: TAO PACKET SAO SAO/R96P SAO/R96R RAW/R16 RAW/R96P RAW/R96R

You can also use xorriso (whose added value compared to other console CD burn tools is that it uses no external program for ISO9660 formatting, nor an external burn program for the CD, DVD or BD (Blu-ray) drive, but has its own libraries incorporated from the libburnia-project.org libs).
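
To quickly check what xorriso / libburn sees without any external helpers, something like the following should do; a minimal sketch, with /dev/sr0 being just the usual example device path:

# list the optical drives accessible to xorriso
xorriso -devices

# print the table of contents of the media currently in the drive
xorriso -outdev /dev/sr0 -toc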

The wodim -checkdrive output above is from my Thinkpad T420 notebook. If the old computer's CD drive is there and still functional, in most cases you should not have issues detecting it.

cdw's ncurses text based CD burner interface is super intuitive, as you can see from the screenshot below:

cdw-burn-cds-from-console-terminal-on-GNU-Linux-and-FreeBSD-old-PC-computer

CDW has many advanced abilities such as “blanking” a disk or ripping an audio CD on a selected folder. To overcome the possible problem of CDW not automatically detecting the disk you have inserted you can go to the “Configuration” menu, press F5 to enter the Hardware options and then on the first entry press enter and choose your device (by pressing enter again). Save the setting with F9.
 

4. Convert MP3 / MP4 files (or whatever format) to .WAV so they're ready to burn to CD


Collect all the .MP3 files you want on the CD album in a certain directory and use a small one-liner loop to convert the files to WAV with ffmpeg:
 

cd /disk/Music/Mp3s/Singer-Album-directory-with-MP3/

for i in *.mp3; do ffmpeg -i "$i" "${i%.mp3}.wav"; done


If you don't have ffmpeg installed but have mpg123, you can also do the MP3 to WAV conversion with the mpg123 cmd like so:

 

for i in *.mp3; do mpg123 -w "${i%.mp3}.wav" "$i"; done


Another alternative for conversion is to use the good old lame (used to create MP3 audio files, but also able to decode
MP3 back to WAV).

 

lame --decode somefile.mp3 somefile.wav


In the past there was a burn command line tool that was able to easily convert MP3s to WAV, but in up-to-date modern Linux releases it is no longer available, most likely due to licensing issues. For those on older Debian Linux 7 / 8 / 9 / Ubuntu 8 to 12.XX / old Fedoras etc., if the package is still available in your repositories you can install burn and use it (and not bother with shell loops):

apt-get install burn

or

yum install burn


Once you have it, to convert just run:

 

$ burn -A -a *.mp3
 

 

5. Fix file naming: remove empty spaces such as " " and substitute them with underscores, as some old CD Players are
unable to understand spaces in file names, with another short loop.

 

for f in *" "*; do mv -v "$f" "${f// /_}"; done

 

6. Normalize the produced .WAV audio files (bring the music volume to a common level)


In case you're wondering why normalizing the audio is needed, here is a short extract from the normalize-audio man page description to shed some light:

"normalize-audio  is  used  to  adjust  the volume of WAV or MP3 audio files to a standard volume level.  This is useful for things like creating mp3 mixes, where different recording levels on different albums can cause the volume to  vary  greatly from song to song."
 

cd /disk/Music/Mp3s/Singer-Album-directory-with-MP3/

normalize-audio -m *.wav
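
Side note: the -m option mixes all files towards one common average level; if you are normalizing a single album and want to keep the original relative loudness between the tracks, the batch mode of normalize-audio might be the better fit (a small sketch):

# batch mode: apply one common gain to all tracks, relative levels preserved
normalize-audio -b *.wav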

 

7. Burn the produced normalized Audio WAV files to the CD

 

wodim -v -fix -eject dev='/dev/sr0' -audio -pad *.wav
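
If you installed cdrskin instead of wodim in step 2, the burn should look much the same, as cdrskin understands the classic cdrecord style options; a sketch, assuming the same /dev/sr0 drive:

cdrskin -v dev='/dev/sr0' -audio -pad *.wav -eject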


Alternatively you can convert all your MP3 files to .WAV with anything, be it audacity
or another program, or even use
GNOME's CD burn tool brasero (if a GNOME user) or KDE's K3B, which in my opinion is
the best CD / DVD burning application for Linux.

Burning an Audio CD with K3b takes just a few clicks and is super easy, and k3b is even going to handle the MP3 to WAV file conversion itself. To burn audio with K3B just run it and click on 'New Audio CD Project'.

k3b-on-debian-gnu-linux-burn-audio-cd-screenshot

For those who want to learn a bit more about CD / DVD / Blu-ray burning on GNU / Linux, good readings are:
the Linux CD Burning Mini Howto, the Linux CD Writing Howto on ibiblio (though a bit obsolete), or Debian's official documentation on BurnCD.
 

8. What we learned here


Though the focus of this tutorial was how to create an Audio Music CD from MP3 on GNU / Linux, the same commands are available in most FreeBSD / NetBSD / OpenBSD ports trees, so you can use the same method to prepare an Audio Music CD on the *BSDs.
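
For example on FreeBSD the rough equivalent tool set could be pulled in with something like the below; a sketch only, as the exact port / package names are assumptions and may differ between the *BSD flavours:

# cdrecord from cdrtools then plays the role wodim plays above
pkg install ffmpeg cdrtools normalize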

In this article, we went through a few basic ways to prepare WAV files from MP3, normalize the newly created WAV files on Linux and get the files ready for creation of an Audio Music CD for the old mom's or grandma's player, or even just for fun to rewind some memories. For GUI users this is easily done with k3b, brasero or xfburn.

I've pointed you to cdw, a super useful text ncurses tool that makes CD burning from a plain text console (on servers) without a Xorg / Wayland GUI installed super easy. It was shortly reviewed what has changed over the last few years and why cdrecord was substituted with wodim. A few examples were given on how to handle the conversion through bash shell loops, and you were pointed to some extra reading resources to learn a bit more on the topic.
There are plenty of custom scripts around for doing the same CD burn / conversion tasks, so pointing me to any external Shell / Perl scripts is most welcome.

Hope this taught you something new. Enjoy! 🙂