Posts Tagged ‘linux kernel’

Recover lost / forgotten root password for CentOS 7 Linux / Boot CentOS 6 into Single User mode to reset admin pass

Friday, September 27th, 2024

centos-community-enterprise-operating-system-logo.

If you have an old CentOS 7 Virtual Machine that has been hanging around for a long time and you no longer remember the root password (or where you stored it), but the VM still holds some important data, you might need to recover the root password for your CentOS 7 Virtual Machine.

I recently had to resolve exactly that issue, and here are the few easy steps to take to recover the lost root password.

Assuming you have tried to boot the VM, the VM boots fine, and your few attempts to manually input some of your usual default passwords have failed, the next steps are:

1. Reboot the Virtual Machine to the GRUB boot menu

 

grub.png

The GRUB boot screen should appear and stay on screen for a few seconds.

2. Edit the boot loader kernel options (add rd.break enforcing=0)

 

How to reset root password on CentOS Linux - Clouvider

Press 'e' to edit the boot loader entry and modify the boot options passed to the Linux kernel.

In GRUB edit mode, append:

rd.break enforcing=0


to the end of the line starting with linux, after the existing parameter list, as shown in the picture.
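For orientation, the edited line might end up looking roughly like the sketch below (the kernel version and root device here are placeholders, yours will differ):

linux16 /vmlinuz-<kernel-version> root=/dev/mapper/centos-root ro rhgb quiet rd.break enforcing=0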

When done editing, press Ctrl-x (Control button x key simultaneously) to boot with changed parameters.

ALTERNATIVE WAY TO BOOT THE SYSTEM INTO ROOT WITHOUT PASSWORD PROMPT:

An alternative option to use instead of rd.break enforcing=0 is to substitute the rhgb quiet kernel options with init=/bin/bash

Edit CentOS Grub Boot Menu Entries rhgb quiet options shot

Modify kernel parameters pass init=/bin/bash to kernel to boot emergency mode centos linux

 

In case you wonder about the meaning of the 2 passed parameters:

rd.break breaks the boot process at the initramfs stage, while
enforcing=0 disables SELinux enforcement (SELinux is often enabled by default on CentOS).

3. Boot in CentOS emergency mode and reset the root password
 


Once the system boots up with the modified kernel options, the switch_root prompt will appear.
Since the emergency mode mounts the filesystem read-only under the /sysroot directory by default, in order to be able to
modify the root password hash stored inside the read-only mounted /sysroot/etc/shadow you need to remount the filesystem
in read-write mode.

To Remount the read-only file system /sysroot in write mode:

# mount -o remount,rw /sysroot

As /sysroot is not the root directory, to be able to use the standard passwd command you need to make /sysroot
the root folder of the booted Linux by chrooting into it.
 

  • Generate MD5 password manually (for Hardcore masochistic admins 🙂 )

If you're a hardcore Linux sysadmin, you can of course generate the new password hash yourself and directly modify /etc/shadow, copy-pasting the hash string.

If you want to manually generate the hash string, you can do it with one of the following tools, depending on the required hashing algorithm:

With openssl (-1 produces an MD5 hash, while in newer OpenSSL versions -5 / -6 produce SHA-256 / SHA-512):

# openssl passwd -6 -salt xyz  yourpass

With mkpasswd:

# mkpasswd --method=SHA-512 --stdin

With perl's crypt() (the salt prefix selects the algorithm: a plain two-character salt gives DES, $1$ gives MD5, $5$ SHA-256 and $6$ SHA-512):

# perl -e 'print crypt("YourPasswd", "\$6\$saltsalt\$"), "\n"'


Once the hash string is generated:

# vim  /etc/shadow


and replace the old hash for the root user with the newly generated string.
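For orientation, the root entry in /etc/shadow (or /sysroot/etc/shadow if you have not chrooted yet) should end up looking roughly like the line below; the hash shown is only a placeholder to illustrate the field layout, the second colon-separated field is where the generated hash goes:

root:$6$xyz$PASTE_GENERATED_HASH_HERE:19000:0:99999:7:::

It is also a good idea to copy the file aside first (cp /etc/shadow /etc/shadow.bak), so a typo in the hash field cannot lock you out completely.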

  • Change password with chroot (the easy common way)

remount read write the filesystem in emergency single user mode CentOS LINUX

# chroot /sysroot

That should drop you into another shell (bash-4.x).

 

Reset root user password in CentOS 7

# passwd
Changing password for user root.
New password:
Retype new password:

Next we have to sync the entire filesystem using the sync command. For novice sysadmins who have never heard of this command, here is a
short description:

The Linux sync command synchronizes cached data to permanent storage.
This data includes modified superblocks, modified inodes, delayed reads and writes, and others. sync uses several system calls:

sync()
syncfs()
fsync()
fdatasync()


For example, the sync command utilizes the sync() system call to write all buffered modifications to file data and metadata to an underlying storage device.
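As a side note, newer GNU coreutils versions of sync (8.24 and later) expose some of these system calls directly; a small sketch, assuming such a coreutils version is installed:

sync                  # flush everything via the sync() system call
sync -d /etc/shadow   # flush only that file's data (fdatasync())
sync -f /             # flush the whole filesystem containing / (syncfs())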

As a Linux systems administrator or developer, understanding the sync command is useful not only for situations like this one, but also after crashes or when a file system becomes corrupted.

# sync

# exec /sbin/init

After booting normally into CentOS, try out the new root password; the newly set administrator password should work.
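To recap, once booted with rd.break enforcing=0, the whole CentOS 7 recovery sequence boils down to just a few commands (a condensed sketch of the steps above; the exit leaves the chroot before the boot is continued):

# mount -o remount,rw /sysroot
# chroot /sysroot
# passwd root
# exit
# sync
# exec /sbin/init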


Resetting forgotten (lost) root password on CentOS 6

The process is essentially the same, except that in the GRUB boot menu editing step (pressing the e key), you take the line containing

rhgb quiet

and append a single 'S' at the end.

This 'S' character means 'boot CentOS into Single user mode':

rhgb quiet S

 

Go to single user mode on CentOS 6 Linux in boot loader S kernel setting

Then press the ENTER key, followed by the b key, to boot CentOS 6 into single user mode.
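Once CentOS 6 drops you into the single user mode root shell, resetting the password is straightforward (a short sketch; no chroot is needed here, since the real root filesystem is already mounted):

# passwd root
# sync
# reboot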
 

Apache Webserver: No space left on device: Couldn’t create accept lock /var/lock/apache2/accept.lock – Fix

Wednesday, April 8th, 2015

Apache-http-server-no-space-left-on-device-semaphores-quotes-hard-disk-space-resolve-fix-howto
If all of a sudden your Apache webserver crashes and refuses to start up when you try to restart it manually through its init script (/etc/init.d/apache2 on Debian Linux servers, /etc/init.d/httpd on RPM based ones), it is time to dig into the logs.

Checking php_error.log showed no errors related to loading PHP modules, however Apache's error.log showed the following errors:

[Wed Apr 08 14:20:14 2015] [error] [client 180.76.5.149] client denied by server configuration: /var/www/sploits/info/trojans_info/tr_data/y3190.html
[Wed Apr 08 14:20:39 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:20:39 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.15974) (5)
[Wed Apr 08 14:25:39 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:25:39 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16790) (5)
[Wed Apr 08 14:27:03 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:27:03 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16826) (5)
[Wed Apr 08 14:27:53 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:27:53 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.16852) (5)
[Wed Apr 08 14:30:48 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:30:48 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17710) (5)
[Wed Apr 08 14:31:21 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:31:21 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17727) (5)
[Wed Apr 08 14:32:40 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?
[Wed Apr 08 14:32:40 2015] [emerg] (28)No space left on device: Couldn't create accept lock (/var/lock/apache2/accept.lock.17780) (5)
[Wed Apr 08 14:38:32 2015] [warn] pid file /var/run/apache2.pid overwritten — Unclean shutdown of previous Apache run?

As you can read, the most likely reason behind the above errors preventing Apache from starting is that /var/run/apache2.pid cannot be properly written, either due to lack of disk space or due to a disk quota set for users, including the user ID Apache is running with.

The first thing I did, of course, was check how much free space is left on the server:

df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg00-rootvol       4.0G  1.7G  2.2G  44% /
udev                           7.8G  204K  7.8G   1% /dev
tmpfs                           24G     0   24G   0% /dev/shm
/dev/sda1                      486M   40M  422M   9% /boot
/dev/mapper/vg00-lv_crashdump 1008M   34M  924M   4% /crashdump
/dev/mapper/vg00-homevol       496M   26M  445M   6% /home
/dev/mapper/vg00-lv_opt         12G  1.4G  9.9G  13% /opt
/dev/mapper/vg00-tmpvol        2.0G   68M  1.9G   4% /tmp
/dev/mapper/vg00-varvol        7.9G  609M  6.9G   8% /var
/dev/mapper/vg00-crashvol      1.9G   35M  1.8G   2% /var/crash
/dev/mapper/vg00-auditvol      124M  5.6M  113M   5% /var/log/audit
/dev/mapper/vg00-webdienste     60G   12G   48G  19% /webservice

 

As visible from the above df command output, there is enough disk space on the HDD, so this is definitely not the issue.

Then I checked whether quotas are enabled on the Linux server; the repquota command shows there are none:

# repquota / /var
repquota: Mountpoint (or device) / not found or has no quota enabled.
repquota: Mountpoint (or device) /var not found or has no quota enabled.
repquota: Not all specified mountpoints are using quota.

 

So obviously the few remaining possible reasons for Apache failing to start when invoked via the init script are either leftover tainted semaphores, a server hardware RAM problem, or a dying hard disk with bad blocks.

So what are semaphores? Generally speaking, semaphores are an apparatus for conveying information by means of signals between applications (something like sockets). They're used for communication between the active processes of a certain application. In the case of Apache, they're used to communicate between the parent and child processes, hence if Apache can't properly write and coordinate these, it can't communicate with all of the processes it starts; the main HTTPD process can't properly spawn its children, which prevents the webserver from entering "started mode" and writing its PID file.

To check general information about system semaphore arrays there is the ipcs -s command, however my experience is that ipcs -a is more useful, because it lists all kinds of IPC objects, including the shared memory segments and semaphore arrays which are the most likely to cause you the problem.

ipcs -a

—— Shared Memory Segments ——–
key        shmid      owner      perms      bytes      nattch     status

—— Semaphore Arrays ——–
key        semid      owner      perms      nsems
0x00000000 22970368   www-data   600        1

—— Message Queues ——–
key        msqid      owner      perms      used-bytes   messages

As you can see, in my case there is a leftover semaphore array which had to be cleaned to make Apache2 able to start again.
 

To clean all leftover semaphore arrays preventing Apache from starting properly, use the one-liner bash loop below:
 

for i in `ipcs -s | awk '/www-data/ {print $2}'`; do (ipcrm -s $i); done


Note that the above for loop is specific to Debian; on CentOS / Fedora / RHEL and other Linux distributions the username owning the stuck semaphores will be apache or httpd.

Depending on the user the Apache webserver is running with, run the above loop like so:

For RPM based distros (CentOS / RHEL):

 

for i in `ipcs -s | awk '/apache/ {print $2}'`; do (ipcrm -s $i); done


For other distros such as Slackware or FreeBSD or any custom compiled Apache webserver:

for i in `ipcs -s | awk '/httpd/ {print $2}'`; do (ipcrm -s $i); done


If there are also leftover Shared Memory Segments, you can remove them with ipcrm as well, i.e.:

ipcrm -m 0x63637069


An alternative way to get rid of the leftover uncleaned semaphores is with xargs:
 

ipcs -s | grep nobody | awk '{ print $2 }' | xargs -n 1 ipcrm -s


Even though this fixed the issue, I wanted to understand whether my problems were really due to exceeding the semaphore limits. To check the number of semaphores allowed at the Linux kernel level, as well as a few other semaphore-related values, run the sysctl below:

sysctl -a | egrep 'kernel.sem|kernel.msgmni'
kernel.msgmni = 15904

kernel.sem = 250        32000   32      128


As you can see, the maximum number of semaphores is quite large, so in my case the failure because of leftover semaphores was most likely caused by some kind of cracker / automated bot scanner attack, by someone doing something malicious against the webserver, or simply by some Apache bug or an enormously high load the server faced.
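If you ever do find the semaphore limits themselves to be the bottleneck, they can be raised via sysctl; a hedged sketch (the four kernel.sem fields are SEMMSL, SEMMNS, SEMOPM and SEMMNI, and the values below are only an example, not a recommendation):

# sysctl -w kernel.sem="250 64000 32 256"
# echo 'kernel.sem = 250 64000 32 256' >> /etc/sysctl.conf
# sysctl -p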

Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

Wednesday, March 28th, 2012

nf_conntrack_table_full_dropping_packet
On many busy servers, you might encounter in /var/log/syslog or the dmesg kernel log messages like

nf_conntrack: table full, dropping packet

appearing repeatedly:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen OpenVZ / VPS (Virtual Private Servers)
2. ISPs – Internet Providers with heavy traffic NAT network routers
 

I. What is the meaning of nf_conntrack: table full dropping packet error message

In short, this message appears because the maximum value assigned to the nf_conntrack kernel table gets reached.
The common reason for that is heavy traffic passing through the server, or very often a DoS or DDoS (Distributed Denial of Service) attack. Sometimes encountering the error is the result of bad server planning (incorrect data about the traffic load expected by a company / companies) or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– Alternative way to check the current kernel values for nf_conntrack is through:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current number of active nf_conntrack connections

To check the connection tracking entries currently opened on the system:

linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742

The shown connections are assigned dynamically for each new successful TCP / IP NAT-ted connection. By the way, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers which are encountering nf_conntrack: table full, dropping packet error, you can see, when issuing lsmod, extra modules related to nf_conntrack are shown as loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables

 

II. Completely remove nf_conntrack support if it is not really necessary

It is a good practice to limit, or omit completely, the use of any iptables NAT rules, to prevent your kernel log from being flooded with these messages and, respectively, to stop your system from dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptable_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation use:

/sbin/rmmod iptable_nat
/sbin/rmmod ipt_MASQUERADE
/sbin/rmmod nf_nat
/sbin/rmmod nf_conntrack_ipv4
/sbin/rmmod nf_conntrack
/sbin/rmmod nf_defrag_ipv4

Once the modules are removed, be sure not to use any iptables -t nat .. rules. Even an attempt to list the NAT related rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.

By the way, the nf_conntrack: table full, dropping packet. message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or distro-specific Linux kernel customization.
 

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many servers that are not so heavily loaded with traffic, just raising net.netfilter.nf_conntrack_max=131072 to a high enough value will be enough to resolve the hassle.

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores the lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
The rule to calculate the right value to set is:
hashsize = nf_conntrack_max / 4
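For example, with nf_conntrack_max raised to 131072, the matching hashsize is 131072 / 4 = 32768. A small sketch that derives the value on the fly from the currently set maximum:

linux:~# conntrack_max=$(sysctl -n net.netfilter.nf_conntrack_max)
linux:~# echo $(( conntrack_max / 4 )) > /sys/module/nf_conntrack/parameters/hashsize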

– To permanently store the changes: a) put into /etc/sysctl.conf:

linux:~# echo 'net.netfilter.nf_conntrack_max = 131072' >> /etc/sysctl.conf
linux:~# /sbin/sysctl -p

b) put in /etc/rc.local (before the exit 0 line):

echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

Note: be careful with this variable; according to my experience, raising it to a too high value (especially on XEN patched kernels) can freeze the system.
Raising the value too high can also freeze a regular Linux server running on old hardware.

– For diagnosing nf_conntrack there is also the

/proc/sys/net/netfilter directory, stored in kernel memory. There you can find some dynamically updated values which give information about nf_conntrack operations in "real time":

linux:~# cd /proc/sys/net/netfilter
linux:/proc/sys/net/netfilter# ls -al nf_log/

total 0
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ./
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ../
-rw-r--r-- 1 root root 0 Mar 23 23:02 0
-rw-r--r-- 1 root root 0 Mar 23 23:02 1
-rw-r--r-- 1 root root 0 Mar 23 23:02 10
-rw-r--r-- 1 root root 0 Mar 23 23:02 11
-rw-r--r-- 1 root root 0 Mar 23 23:02 12
-rw-r--r-- 1 root root 0 Mar 23 23:02 2
-rw-r--r-- 1 root root 0 Mar 23 23:02 3
-rw-r--r-- 1 root root 0 Mar 23 23:02 4
-rw-r--r-- 1 root root 0 Mar 23 23:02 5
-rw-r--r-- 1 root root 0 Mar 23 23:02 6
-rw-r--r-- 1 root root 0 Mar 23 23:02 7
-rw-r--r-- 1 root root 0 Mar 23 23:02 8
-rw-r--r-- 1 root root 0 Mar 23 23:02 9

 

IV. Decreasing other nf_conntrack NAT time-out values to protect the server against DoS attacks

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, for large flows of traffic, even if you increase nf_conntrack_max, you can still shortly end up with an overflowing conntrack table, resulting in dropped server connections. To prevent this from happening, check and decrease the other nf_conntrack connection tracking timeout values:

linux:~# sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.ipv4.netfilter.ip_conntrack_generic_timeout = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent2 = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_udp_timeout = 30
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 180
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 30

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you see, is quite high – 600 secs (10 minutes).
A value like this means any non-responding NAT-ted connection can stay hanging around for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!).
If these values are not lowered, the server will be an easy target for anyone who wants to flood it with excessive connections; once this happens, the server will quickly reach even the raised net.nf_conntrack_max value and the initial connection dropping will occur again…

With all that said, to protect the server from malicious users situated behind the NAT plaguing you with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000:

linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_generic_timeout=120
linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf:

linux:~# echo 'net.ipv4.netfilter.ip_conntrack_generic_timeout = 120' >> /etc/sysctl.conf
linux:~# echo 'net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000' >> /etc/sysctl.conf
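For convenience, here is how the relevant part of /etc/sysctl.conf could look once all of the above tuning is applied (a sketch based on the example values used in this post; adjust them to your own traffic profile):

net.netfilter.nf_conntrack_max = 131072
net.ipv4.netfilter.ip_conntrack_generic_timeout = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000

Load it with sysctl -p and keep the hashsize echo line in /etc/rc.local as shown earlier.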

How to improve Linux kernel security with GrSecurity / Maximum Linux kernel security with GrSecurity

Tuesday, May 3rd, 2011

In short, I'll explain here what Grsecurity http://www.grsecurity.net/ is, for all those who have not used it yet, and what kind of capabilities it has concerning enhanced kernel security.

Grsecurity is a set of patches for the Linux kernel aimed at improving kernel security.

The typical application of GrSecurity is in the field of Linux systems which are administered through SSH/Shell, e.g. (remote hosts), though you can also configure grsecurity on a normal Linux desktop system if you want a super secured Linux desktop ;).

GrSecurity is heavily used to protect server systems which require multiple users to have access to the shell.

On systems where multiple-user access is required, it's a well known fact that malicious users, crackers or dumb script kiddies can get administrator (root) privileges with some freshly popped-up 0-day root kernel exploit.
If you're an administrator of a system (let's say a web hosting server) with multiple users having shell access, it's also common that some of the users run exploits aimed at taking over a certain daemon service.
On other occasions you have users trying to DoS the server with some 0-day Denial of Service exploit.
In all these cases, having a kernel patched with GrSecurity is priceless.

Installing a grsecurity patched kernel is an easy task on Debian and Ubuntu and is explained in one of my previous articles.
This article aims to explain, in short, some configuration options for a GrSecurity tightened kernel, for when one has to compile a new kernel from source.

I will skip the details on how to compile the kernel and simply show you some screenshots with GrSecurity configuration options which work well and need to be set up before the make command is issued to compile the new kernel.

After preparing the kernel source for compilation and issuing:

linux:/usr/src/kernel-source$ make menuconfig

You will have to select options like the ones you see in the pictures below:

[nggallery id=”8″]

After completing and saving your kernel config file, continue as usual with an ordinary kernel compilation, e.g.:

linux:/usr/src/kernel-source$ make
linux:/usr/src/kernel-source$ make modules
linux:/usr/src/kernel-source$ su root
linux:/usr/src/kernel-source# make modules_install
linux:/usr/src/kernel-source# make install
linux:/usr/src/kernel-source# mkinitrd -o initrd.img-2.6.xx 2.6.xx

Also make sure the boot loader (GRUB or LILO) is properly configured to load the newly compiled and installed kernel.
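Just for orientation, a typical GRUB legacy /boot/grub/menu.lst stanza for the new kernel would look something like the sketch below (the kernel version, root partition and disk are placeholders here, adjust them to your setup):

title  Linux 2.6.xx-grsec
root   (hd0,0)
kernel /boot/vmlinuz-2.6.xx root=/dev/sda1 ro
initrd /boot/initrd.img-2.6.xx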

After a system reboot, if all is fine, you should be able to boot the newly compiled grsecurity tightened kernel. But be careful and make sure you have a backup solution before you reboot; don't blame me if your new grsecurity patched kernel fails to boot! You're on your own, boy 😉
This article was written thanks to static and is originally based on his article in Bulgarian. If you're Bulgarian, you might also check out static's blog.

How to disable IPv6 on Debian / Ubuntu / CentOS and RHEL Linux

Friday, December 9th, 2011

I have a few servers which have the IPv6 protocol automatically enabled (IPv6 gets automatically enabled on Debian), as is the case on most of the latest Linux distributions nowadays.

There are 2 reasons to disable the IPv6 network protocol on Linux if it is not used:

1. Security (it's a well known security practice to disable anything not used on a server).
Besides that, IPv6 has been known for a few critical security vulnerabilities which have historically affected the Linux kernel.
2. Performance (sometimes disabling IPv6 can have a positive impact on IPv4, especially on heavy traffic network servers).
I've read people claiming that disabling IPv6 improves DNS performance, however since these are just claims and I did not check it personally, I cannot positively confirm this.

Disabling IPv6 on all GNU / Linux systems can be achieved by changing the kernel sysctl setting net.ipv6.conf.all.disable_ipv6. By default net.ipv6.conf.all.disable_ipv6 equals 0, which means IPv6 is enabled, hence to disable IPv6 I issued:

server:~# sysctl -w net.ipv6.conf.all.disable_ipv6=1

To set it permanently on system boot I put the setting also in /etc/sysctl.conf :

server:~# echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf

The method described above should work on most Linux kernels with version > 2.6.27; in particular it should work 100% on recent versions of Fedora, CentOS, Debian and Ubuntu.

To disable the IPv6 protocol on Debian Lenny it is necessary to blacklist the ipv6 module in /etc/modprobe.d/blacklist by issuing:

echo 'blacklist ipv6' >> /etc/modprobe.d/blacklist

On Fedora / CentOS there is another, universal "Redhat" way to disable IPv6.

On them, disabling IPv6 is done by editing /etc/sysconfig/network and adding:

NETWORKING_IPV6=no
IPV6INIT=no

I would be happy to hear how other people have disabled IPv6, since on earlier (and various per-distro) Linuxes the way to disable IPv6 is probably different.
 

Also, to stop the IPv6 iptables service (ip6tables) on CentOS / Fedora and RHEL, issue:

# service ip6tables stop

# chkconfig ip6tables off
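To double check that IPv6 is really off, you can verify the sysctl value and make sure no inet6 addresses remain assigned (a quick sanity check; with the Debian Lenny blacklist method the /proc/sys/net/ipv6 path will simply not exist at all):

server:~# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
server:~# ip addr show | grep inet6

If the last command prints nothing, no IPv6 addresses are configured on the host.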

Preventing packages on Debian and Ubuntu Linux from updating on system updates

Wednesday, March 6th, 2013

On Debian based GNU / Linux distros, there are some critical packages which it is good to prevent from updating during the common apt-get update && apt-get upgrade routine, which is an almost daily part of a Debian sysadmin's life. Examples of packages which are good candidates to mark as not-to-upgrade are the Linux kernel, the Java virtual machine, the Adobe Flash plugin, etc.

Putting a package on hold, so that it is omitted from system package updates (the Adobe Flash plugin, for example), is done with:

debian:~# echo adobe-flashplugin hold | dpkg --set-selections
debian:~# echo adobe-flash-properties-gtk hold | dpkg --set-selections
debian:~# echo flashplugin-nonfree-extrasound hold | dpkg --set-selections

To put the deb package updates on hold for the kernel:

 

debian:~# echo linux-image-generic hold | dpkg --set-selections
debian:~# echo linux-image-3.2.0-38-generic hold | dpkg --set-selections

Do the same for as many packages as seem to break on updates. When you later explicitly want to remove the hold(s), run:

debian:~# echo adobe-flashplugin install | dpkg --set-selections
....
debian:~# echo linux-image-generic install | dpkg --set-selections
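To review which packages are currently pinned on hold, dpkg can list the selections (a quick check):

debian:~# dpkg --get-selections | grep hold
adobe-flashplugin                hold
linux-image-generic              hold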

 

 

 

Creator of Linux kernel Linus Torvalds with a biblical name, Pope Linus second Pope of Rome

Thursday, February 16th, 2012

Western Roman Catholic Pope Linus picture, Pope after Saint apostle Peter

Linus's name is encountered once in the Scriptures (the Holy Bible), in the Second Epistle to Timothy, part of the New Testament:

I really like the King James English version of the Bible; here is the text extracted from there, mentioning Linus's name:

2 Timothy 4:21
Doe thy diligence to come before winter. Eubulus greeteth thee,
and Pudens, and Linus, and Claudia, and all the brethren. (From KJV 1611 Translation)

Here is a modernized version of the same verse taken from the New American Standard Bible Version (1995):

2 Timothy 4:21
Make every effort to come before winter. Eubulus greets you,
also Pudens and Linus and Claudia and all the brethren.

- New American Standard Version (1995)

Another curious fact, maybe even unknown to Linus Torvalds himself, is that Saint Linus used to be the first bishop of Rome after the Apostles.
This makes Saint Linus the second Roman Catholic Pope after Saint Peter in the early Western Church. There are some early sources which say Pope Clement I was the second pope of Rome, however these sources are probably erroneous, since some very important early written sources, like the Apostolic Constitutions, state that Linus was the first bishop of Rome and was ordained by St. Paul. The same documents say Pope Linus was succeeded by Pope Clement, who was ordained by Saint Peter.

The paste below is taken directly from BibleGateway.com, confirming that Pope Linus was the second Roman Catholic Pope:

Linus
(a net), a Christian at Rome, known to St. Paul and to
Timothy, (2 Timothy 4:21) who was the first bishop of Rome after the apostles. (A.D. 64.)

Something Pope Linus is known for is having issued a church decree that women should cover their heads in church. This ancient church tradition is still observed, more or less, in the Orthodox Church. Not much is known about how the Saint Pope ruled the early Western Church, but since the Western and Eastern Church were in communion in those early days, the nowadays Roman Catholic saint Linus is probably a saint in the Eastern Orthodox Church as well.
According to some unprovable written sources Pope Linus later suffered martyrdom and was buried in Vatican Hill next to saint apostle Peter.

St. Linus, according to Church tradition, passed away in the 1st century A.D.

I've merged a picture of how saint Linus used to look with one of the pictures of Linus Torvalds. It's rather funny they actually look alike 😉 🙂 🙂

Saint Linus and Linus Torvalds creator of GNU Linux kernel

The creator of the GNU/Linux kernel, Linus Torvalds, might not be a saint in the Christian sense, but his deed is definitely saintly, as he initiated the creation of the Linux kernel and decided to share its source and publish it under the GPL (General Public License).
The phenomenon of the GNU / Linux free operating system existing today, and its specific type of development, is definitely a miracle. The general philosophy of sharing your software with your neighbour is also very close to the Christian philosophy of sharing. Actually, many of the ideas of the free software and "open source" movements resemble purely Christian ideas.

The software sharing philosophy has become a reality thanks to Richard Stallman and his GNU Project; however, the existence of GNU / Linux as a complete operating system became reality thanks to Linus Torvalds' kernel efforts, known under the code name Linux. Talking about names, maybe not many know that the Linux kernel used to have a different name in the early stage of its development; its first code name was Freax.

Boost local network performance (Increase network throughput) by enabling Jumbo Frames on GNU / Linux

Saturday, March 10th, 2012

Jumbo Frames boost local network performance in GNU / Linux

So what are Jumbo Frames? And why, when and how can they increase the network throughput on Linux?

Jumbo Frames are Ethernet frames with more than 1500 bytes of payload; they can carry up to 9000 bytes of payload. Many Gigabit switches and network cards support them.
Jumbo frames are a networking standard on many educational networks like AARNET. Unfortunately, most commercial ISPs don't support them, and therefore enabling Jumbo frames will rarely increase bandwidth throughput for transfers over the internet.
Hopefully, in the years to come, with the constant increase of bandwidth and the betterment of connectivity, jumbo frame transfers will be supported by most ISPs as well.
Where Jumbo frame support really shines is in small local home networks and company / corporation office intranets.

Thus enabling Jumbo Frames is absolutely essential for "local" ethernet networks where large file transfers occur frequently. Such networks are ones where video or audio is often streamed in high (e.g. HD) quality from servers running file sharing services like Samba, local FTP sites, webservers etc.

Another advantage of enabling jumbo frames is a reduction of general server overhead and a decrease in CPU load / (CPU usage) when transferring large or enormously sized files. Therefore, having jumbo frames enabled on office network routers with GNU / Linux or any other *nix OS is vital.

Jumbo Frames traffic has been supported in the GNU / Linux kernel since version 2.6.17+; in the earlier 2.4.x it was possible through external third party kernel patches.

1. Manually increase MTU to 9000 with ifconfig to enable Jumbo frames

debian:~# /sbin/ifconfig eth0 mtu 9000

The default MTU on most (if not all) GNU / Linux systems is 1500. To check the currently set MTU with ifconfig:

linux:~# /sbin/ifconfig eth0|grep -i mtu
UP BROADCAST MULTICAST MTU:1500 Metric:1

To take advantage of Jumbo Frames, all that has to be done is to increase the default Maximum Transmission Unit from 1500 to 9000.

For those who don't know, the MTU is the largest physical packet size that can be transferred over the network, measured by default in bytes. If information has to be transferred over the network which exceeds the, let's say, 1500 byte MTU, it will be chopped up and transferred in several packets, each of 1500 bytes.

MTUs differ between different network topologies. Just for info, here are the main MTUs for the main network types existing today:
 

  • 16 MBit/Sec Token Ring – default MTU (17914)
  • 4 Mbits/Sec Token Ring – default MTU (4464)
  • FDDI – default MTU (4352)
  • Ethernet – def MTU (1500)
  • IEEE 802.3/802.2 standard – def MTU (1492)
  • X.25 (dial up etc.) – def MTU (576)
  • Jumbo Frames – def max MTU (9000)

Setting the MTU packet frames to 9000 to enable Jumbo Frames is done with:

linux:~# /sbin/ifconfig eth0 mtu 9000

If the command returns nothing, this most likely means the server can now communicate on eth0 with an MTU of 9000, and therefore network throughput will be better. Otherwise, if the network card or its driver is not a gigabit one, the command will return the error:

SIOCSIFMTU: Invalid argument
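On more recent Linux installs, where ifconfig is being phased out in favour of the iproute2 tools, the very same one-off change can be made with the ip command (assuming eth0 is the gigabit interface):

linux:~# /sbin/ip link set dev eth0 mtu 9000
linux:~# /sbin/ip link show eth0 | grep -i mtu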

2. Enabling Jumbo Frames on Debian / Ubuntu etc. "the Debian way"

a.) Jumbo Frames on ethernet interfaces with a statically assigned IP address

A one-time raise of the MTU to 9000 can again be done manually with ifconfig:

debian:~# /sbin/ifconfig eth0 mtu 9000

To make it persistent, edit /etc/network/interfaces; for each of the interfaces on which you would like to set Jumbo Frames there should be a record similar to:

iface eth0 inet static
address 192.168.0.5
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

For each of the interfaces (eth1, eth2 etc.), add a similar chunk to the one above, changing the IPs, gateway and netmask.

If the server has two gigabit cards (eth0, eth1) supporting Jumbo frames, add to /etc/network/interfaces :

iface eth0 inet static
address 192.168.0.5
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

iface eth1 inet static
address 192.168.0.6
network 192.168.0.0
gateway 192.168.0.254
netmask 255.255.255.0
mtu 9000

b.) Jumbo Frames on ethernet interfaces with dynamic IP obtained via DHCP

Again in /etc/network/interfaces put:

auto eth0
iface eth0 inet dhcp
post-up /sbin/ifconfig eth0 mtu 9000

3. Setting Jumbo Frames on Fedora / CentOS / RHEL "the Redhat way"

Enabling jumbo frames on all Gigabit lan interfaces (eth0, eth1, eth2 …) in Fedora / CentOS / RHEL is done through files:
 

  • /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/sysconfig/network-scripts/ifcfg-eth1

etc. …
append in each one at the end of the respective config:

MTU=9000

[root@fedora ~]# echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth


A quick way to set the Maximum Transmission Unit to 9000 for all network interfaces on Redhat based distros is by executing the following loop:

[root@centos ~]# for i in /etc/sysconfig/network-scripts/ifcfg-eth*; do echo 'MTU=9000' >> $i; done

P.S.: Be sure that all your interfaces support MTU=9000; otherwise, the attempt to raise the MTU will return the SIOCSIFMTU: Invalid argument error.
The above loop is to be used only in case you have a group of identical machines with LAN cards supporting Gigabit networks and loaded kernel drivers supporting an MTU of up to 9000.

Some Intel and Realtek Gigabit cards support only a maximum MTU of 7000, 7500 etc., so if you own a card like this, check what the maximum MTU the card supports is and set that in the LAN device configuration.
If increasing the MTU is done on remote server through SSH connection, be extremely cautious as restarting the network might leave your server inaccessible.

To check if each of the server interfaces are "Gigabit ready":

[root@centos ~]# /sbin/ethtool eth0|grep -i 1000BaseT
1000baseT/Half 1000baseT/Full
1000baseT/Half 1000baseT/Full

If you're 100% sure there will be no troubles with enabling MTU > 1500, initiate a network reload:

[root@centos ~]# /etc/init.d/network restart
...

4. Enable Jumbo Frames on Slackware Linux

To list the ethernet devices and check they are Gigabit ones issue:

bash-4.1# lspci | grep [Ee]ther
0c:00.0 Ethernet controller: D-Link System Inc Gigabit Ethernet Adapter (rev 11)
0c:01.0 Ethernet controller: D-Link System Inc Gigabit Ethernet Adapter (rev 11)

Setting up jumbo frames on Slackware Linux can be done in two ways: the Slackware way and the "universal" Linux way.

a.) the Slackware way

On Slackware Linux, all kind of network configurations are done in /etc/rc.d/rc.inet1.conf

Usual config for eth0 and eth1 interfaces looks like so:

# Config information for eth0:
IPADDR[0]="10.10.0.1"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
# Config information for eth1:
IPADDR[1]="10.1.1.1"
NETMASK[1]="255.255.255.0"
USE_DHCP[1]=""
DHCP_HOSTNAME[1]=""

To raise the MTU to 9000, the variables MTU[0]="9000" and MTU[1]="9000" have to be included after each interface's config block, e.g.:

# Config information for eth0:
IPADDR[0]="172.16.1.1"
NETMASK[0]="255.255.255.0"
USE_DHCP[0]=""
DHCP_HOSTNAME[0]=""
MTU[0]="9000"
# Config information for eth1:
IPADDR[1]="10.1.1.1"
NETMASK[1]="255.255.255.0"
USE_DHCP[1]=""
DHCP_HOSTNAME[1]=""
MTU[1]="9000"

bash-4.1# /etc/rc.d/rc.inet1 restart
...

b.) The "Universal" Linux way

This way works on most, if not all, Linux distributions.
Insert in /etc/rc.local:

/sbin/ifconfig eth0 mtu 9000 up
/sbin/ifconfig eth1 mtu 9000 up

5. Check if Jumbo Frames are properly enabled

There are at least two ways to display the MTU settings for eths.

a.) Grepping the MTU from the ifconfig output

linux:~# /sbin/ifconfig eth0|grep -i mtu
UP BROADCAST MULTICAST MTU:9000 Metric:1
linux:~# /sbin/ifconfig eth1|grep -i mtu
UP BROADCAST MULTICAST MTU:9000 Metric:1

b.) Using the ip command from the iproute2 package to get the MTU

linux:~# ip route get 192.168.2.134
local 192.168.2.134 dev lo src 192.168.2.134
cache mtu 9000 advmss 1460 hoplimit 64

linux:~# ip route show dev wlan0
192.168.2.0/24 proto kernel scope link src 192.168.2.134
default via 192.168.2.1

You see the MTU is now set to 9000, so the two server LANs are now able to communicate with increased network throughput.
Enjoy the accelerated network transfers 😉

 

The end of the work week :)

Friday, February 1st, 2008

One more week passed without serious server problems. Yesterday, after an upgrade to Debian 4.0rc2 with

apt-get dist-upgrade and a reboot, the pc-freak box became unbootable.

I wasn't able to fix it until today, because the machine's box seemed unable to read CDs well. The problem was that after the Linux kernel boot process started, the boot-up was interrupted with a message saying
/sbin/init is missing

and I was dropped to a busybox shell without being able to read anything from my filesystem. Thankfully nomen came to Dobrich for the weekend, and today he brought me his cdrom-drive, with which I booted the Debian.

Using Debian's linux rescue mode I mounted the partition to check what was wrong. I suspected something was terribly wrong with lilo's configuration.

Looking closely at it, I saw that the lilo config file was set up to load an initrd for the older kernel. Changing the line to the new initrd in /etc/lilo.conf and re-running lilo (/sbin/lilo)

fixed the mess and pc-freak booted successfully! 🙂

Yesterday I had to do something kinky. A client requested access to the MySQL service on one of the company servers; the problem was that the client didn't have a static IP, so I didn't have a good way to put it into the current firewall.

Every time the ADSL they use got restarted, a new, absolutely random IP from all of the BTC IP ranges was assigned.

The solution was to make a port redirect from a non-standard MySQL port (XXXXX) to the standard 3306 service, and to tell the firewall not to check the IPs coming in on the non-standard port (XXXXX) against the 3306 service firewall rules.

Thanks to the help of a guy in irc.freenode.net #iptables, jengelh, I figured out the solution.

To complete the requested task, all packets coming in on port (XXXXX) had to be marked using the iptables mangle table, and a rule added to ACCEPT all marked packets.

The rules looked like this

/sbin/iptables -t mangle -A PREROUTING -p tcp --dport XXXXX -j MARK --set-mark 123456
/sbin/iptables -t nat -A PREROUTING -d EXTERNAL_IP -i eth0 -p tcp --dport XXXXX -j DNAT --to-destination EXTERNAL_IP:3306

/sbin/iptables -t filter -A INPUT -p tcp --dport 3306 -m mark --mark 123456 -j ACCEPT

Something I wondered about a bit was whether /proc/sys/net/ipv4/ip_forward has to be enabled in order for the above redirect to work; in case you're wondering too, it doesn't 🙂

The working week was sort of quiet: no serious problems with servers at work and no serious problems at school (although I see me and my colleagues becoming less and less serious about studying). My grandparents decided to make me a gift and give me money to buy a laptop, and I'm pretty happy about this 🙂 All that is left is to choose a good machine with hardware supported both by FreeBSD and Linux.
