Posts Tagged ‘hostname’

How to change hostname in Ubuntu, Debian and Redhat based Linux machine

Thursday, September 27th, 2018

change-hostname-on-any-linux-distributions-universal-way-to-change-linux-hostname-howto

The hostname is set at install time by the respective installer (set-up scripts), whether on a bare-metal server or a virtual machine.

Historically, changing the hostname in most GNU / Linux distributions (Debian / Ubuntu / Fedora / CentOS etc.) was as easy as:

1. Getting the current hostname setting with the hostname command:
 

hipo@jeremiah:~$ hostname --fqdn
jeremiah


2. Logging in to the remote machine via ssh:
 

 

 

ssh user@whetever-host.com


3. Editing /etc/hosts and substituting the old hostname with the new desired one:
 

 

vim /etc/hosts
127.0.0.1   localhost
127.0.0.1   jeremiah

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

4. Run 

/etc/init.d/hostname.sh start


5. Run command
 

hostname your-new-desired-hostname


then log out and log back in to the host to make the new hostname active for the SSH session.

Since around 2015 a new way to change the hostname was introduced (Ubuntu 13.04 onwards, Fedora 21, Debian 8 / 9). Setting a new hostname again comes down to editing
/etc/hosts

and running the command:
 

hostnamectl set-hostname your-new-desired-hostname
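To verify the change has taken effect, hostnamectl can also be run with no arguments as a quick check (the exact output fields vary a bit between distributions):

hostnamectl
hostname --fqdn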

 

 

On Red Hat based Linux distributions and Red Hat Enterprise Linux, to change the hostname you will also need to edit:
 

vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME="domain.com"
GATEWAY="192.168.0.1"
GATEWAYDEV="eth0"
FORWARD_IPV4="yes"


Another universal way to change the hostname on any Linux distribution is to use the sysctl command like so:
 

sysctl kernel.hostname

sysctl kernel.hostname=your-desired-hostname
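Keep in mind the sysctl change is runtime only and will not survive a reboot by itself. To make it permanent on Debian / Ubuntu the new name still has to go into /etc/hostname as well, e.g. (a minimal sketch, the hostname is just a placeholder):

echo your-desired-hostname > /etc/hostname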

 

Check linux install date / How do I find out how long a Linux server OS was installed?

Wednesday, March 30th, 2016

linux-check-install-date-howto-commands-on-debian-and-fedora-tux_the_linux_penguin_by_hello

To find out the Linux install date there is no single solution that works across all Linux distribution types and versions, but there are some common ways to get the Linux OS install age.
Perhaps the most popular way to get the OS installation date and time is to check when the root filesystem ( / ) was created; this can be done with the tune2fs command:

 

server:~# tune2fs -l /dev/sda1 | grep 'Filesystem created:'
Filesystem created:       Thu Sep  6 21:44:22 2012

 

Another rough way is to list the oldest entry in the root directory by change time:

server:~# ls -alct /|tail -1|awk '{print $6, $7, $8}'
sep 6 2012

 

The root user's home directory is also created at install time, so its timestamp is another hint:
 

 

server:~# ls -alct /root

 

Files written out by the installer, such as /etc/hostname, carry the install date too:

root@server:~# ls -lAhF /etc/hostname
-rw-r--r-- 1 root root 8 sep  6  2012 /etc/hostname

 

For Debian / Ubuntu and other deb based distributions the /var/log/installer directory is created during the OS install, so on Debian the best way to check the Linux OS creation date is with:
 

root@server:~# ls -ld /var/log/installer
drwxr-xr-x 3 root root 4096 sep  6  2012 /var/log/installer/
root@server:~# ls -ld /lost+found
drwx—— 2 root root 16384 sep  6  2012 /lost+found/

 

On Red Hat / Fedora / CentOS and other Red Hat based Linuxes, you can use:

 

rpm -qi basesystem | grep "Install Date"

 

basesystem is the package containing basic Linux binaries, many of which should never change; however, in some cases (e.g. after security updates) the package might change, so it is also good to check the root filesystem creation time and compare whether the two dates match.
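A quick side-by-side check of the two dates might look like this (a rough sketch; adjust /dev/sda1 to your actual root filesystem device):

rpm -q basesystem --qf '%{INSTALLTIME:date}\n'
tune2fs -l /dev/sda1 | grep 'Filesystem created:'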

How to mount NFS network filesystem to remote server via /etc/fstab on Linux

Friday, January 29th, 2016

mount-nfs-in-linux-via--etc-fstab-howto-mount-remote-partitions-from-application-server-to-storage-server
If you have a server topology as part of a project where 3 servers (A, B, C) are needed to deliver a service (one running an application server such as JBoss / Tomcat / Apache, a second acting just as a Storage server holding dozens of LVM-ed SSD hard drives, and an Oracle database backend providing data for the project), and server A (the application server) needs access to server B (the Storage "monster"), one common solution is to use an NFS (Network FileSystem) mount.
NFS is already considered a bit of an obsolete technology, as it is generally regarded as insecure; however, if an SSHFS mount is not required due to an initial design decision, or because both servers A and B sit in a seriously firewalled (DMZ) dedicated network, then NFS should be a good choice.
Of course, using an NFS mount should always be a carefully weighed Environment Architect decision; a remote NFS mount implies that both servers are connected via a high-speed gigabit network, i.e. the network performance is calculated to be sufficient for the application A <-> network storage B two-way communication not to cause delays for the systems' end users.

To test whether the NFS server B mount is possible on the application server A, type something like:

 

mount -t nfs -o soft,timeo=900,retrans=3,vers=3,proto=tcp remotenfsserver-host:/home/nfs-mount-data /mnt/nfs-mount-point


If the mount works fine, to make it permanent on application server host A (so it survives a server reboot), add the following to the end of /etc/fstab:

1.2.3.4:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2


If the NFS server has a hostname you can also type the hostname instead of the sample IP 1.2.3.4 from the above example; this is however not recommended, as it might cause mount failures in case of DNS or domain problems.
If you still want to mount by hostname (e.g. in case the storage server IP changes regularly because it is auto-assigned by a DHCP server):

server-hostA:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2

In the above example you need to have both the /application/local-application-dir-to-mount directory (the directory where the remote NFS folder will be mounted on server A) and the /application/remote-application-dir-to-mount directory in place.
Also, on the Storage server B you have to have a running NFS server, with firewall accessibility from server A working.
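For completeness, a minimal sketch of what the export on the Storage server B side could look like (the exported path and the allowed network below are assumptions; adjust them to your environment):

# /etc/exports on the Storage server B
/home/nfs-mount-data 192.168.0.0/24(rw,sync,no_subtree_check)

# reload the exports table
exportfs -ra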

The timeo=600 is defined in order to raise the timeout for remote NFS accessibility (timeo is given in tenths of a second, so 600 means 60 seconds) and thus to avoid mount failures if there is a short network outage between server A and server B. The rsize and wsize values should be fine-tuned according to the kind of files that are being read from the remote NFS server and the network speed between the two; the values in the example are dictated by the environment architecture (i.e. they reflect the type of files being transferred between the 2 hosts), the remote NFS server version and the Linux kernel versions. These settings are for the Linux kernel 2.6.18.x branch, which as of the time of writing this article is obsolete, so if you want to use these settings check your kernel and NFS versions, google and experiment.

Anyways, if you're not sure about wsize and rsize, it is perfectly safe to simply omit these two values if you're not familiar with them, as in the sketch below.
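For example, a simplified /etc/fstab entry with the tuning values left out (same placeholder hostname and paths as above) would be:

server-hostA:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,hard,intr 1 2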

To finally check the NFS mount is fine, grep for it in the mount output:

 

# mount|grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
server-hostA:/application/remote-application-dir-to-mount on /application/remote-application-dir-to-mount type nfs (rw,bg,nolock,nfsvers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr,addr=1.2.3.4)


That's all enjoy 🙂

 

 

Redirect www to non www with .htaccess Apache rewrite rule

Thursday, July 2nd, 2015

https://www.pc-freak.net/images/redirect_domain_name_without_changing_url_apache_rewrite_rule_preventing_host_in_ip_mod_rewrite_
Sometimes it happens that some websites are indexed in Search Engines (Google, Yandex, Yahoo, Bing, Ask Jeeves etc.) with www.website-name.com and you want to get rid of the www in the hostname, in favour of just the bare hostname, via an Apache .htaccess redirect. I know redirecting www to non-www might seem a bit weird, as usually people want to redirect their domain without www to point to www, but there is a good reason for that weirdness: if you're a Christian and you dislike the fact that WWW can be read as Waw Waw Waw or Vav Vav Vav letters in Hebrew, which represents 666, the mark of the beast prophesied in the last book of the Holy Bible (Revelation), written by Saint John; the book is also often called the Apocalypse.

Using Apache mod_rewrite via .htaccess is a good way to do the redirect, especially if you're on shared hosting where you don't have direct access to edit the Apache VirtualHost httpd.conf file, but only have access to your user's home public_html directory via, let's say, FTP or SFTP.

To achieve the www to non-www domain URL redirect, just edit .htaccess with the available hosting editor (in case shell SSH access is available) or the web interface, or download the .htaccess via FTP / SFTP, modify it and upload it back to the server.

You need to include the following mod_rewrite RewriteCond rules in .htaccess (preferably somewhere near the beginning of the file):
 

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.Your-Website\.org [NC]
RewriteRule ^(.*)$ http://Your-Website.org/$1 [L,R=301]


As .htaccess is read dynamically by Apache's mod_rewrite module, no Apache webserver restart is required and you should see the effect immediately, provided the webhosting doesn't add some caching with mod_cache and there is no cache expiry setting preventing the new .htaccess from being properly read by the webserver.
Also, in case of trouble, make sure the newly uploaded .htaccess file is properly readable, e.g. has permissions such as 755. In case it doesn't work out immediately, make sure to clean up your browser cache and assure your browser is not configured to use some caching proxy host (be it visible or transparent).
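A quick way to verify the redirect from the command line (assuming curl is installed) is to request the www URL with headers only and check that the response is a 301 with a Location header pointing to the non-www domain:

curl -I http://www.Your-Website.org/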
Besides the fact that this works and the Search Engines will hopefully stop indexing your site with WWW. in front of the domain name, there is a downside: using .htaccess instead of including the rules straight into Apache's VirtualHost configuration causes a bit of degraded performance and adds some milliseconds of slowness to serving requests to your domain. Thus, if you're on your own dedicated server and have access to the Apache configuration, implement the www to non-www hostname redirect directly in the VirtualHost, as explained in my prior article here.

 

‘host-name’ is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Sunday, May 20th, 2012

mysql-logo-host-name-blocked-because-of-many-connection-errors
The MySQL server on my home machine was suddenly down as I tried to check my blog and other sites today; the error I saw while trying to open this blog, as well as the other hosted sites using MySQL, was:

Error establishing a database connection

The topology where this error occurred is simple; I have two hosts:

1. Apache version 2.0.64, compiled with support for external PHP script interpretation via libphp – this host runs FreeBSD

2. A Debian GNU / Linux squeeze running MySQL server version 5.1.61

The Apache host is assigned a local IP address 192.168.0.1 and the SQL server is running on a host with IP 192.168.0.2

To diagnose the error I logged in to 192.168.0.2 and, weirdly, the mysql-server appeared to be running just fine:
 

debian:~# ps ax |grep -i mysql
31781 pts/0 S 0:00 /bin/sh /usr/bin/mysqld_safe
31940 pts/0 Sl 12:08 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
31941 pts/0 S 0:00 logger -t mysqld -p daemon.error
32292 pts/0 S+ 0:00 grep -i mysql

Moreover, I could connect to the SQL server locally with mysql -u root -p and it seemed to run fine. The error Error establishing a database connection meant that either something was messed up with the database, or MySQL port 3306 on 192.168.0.2 was not properly accessible.

My first guess was that something was wrong due to some firewall rules, so I tried to connect from 192.168.0.1 to 192.168.0.2 with telnet:
 

freebsd# telnet 192.168.0.2 3306
Trying 192.168.0.2…
Connected to jericho.
Escape character is '^]'.
Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

Right after the telnet was initiated as I show in the above output the connection was immediately closed with the error:

Host 'webserver' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
Connection closed by foreign host.

In the error, 'webserver' is the hostname set on my Apache machine. The error clearly states that the problems with the 'webserver' Apache host being unable to connect to the SQL database are due to 'many connection errors', and a fix is suggested with mysqladmin flush-hosts.

To temporarily solve the error and restore normal connectivity between the Apache and SQL servers, I had to issue on the SQL host:

mysqladmin -u root -p flush-hosts
Enter password:

Though this temporary fix restored accessibility to the databases, and hence the website errors were resolved, it doesn't guarantee that I wouldn't end up in the same situation in the future, therefore I looked for a permanent fix to the issue once and for all.

The permanent fix consists in changing the default value set for max_connect_errors in /etc/mysql/my.cnf, which by default is not too high. Therefore, to raise the variable value, I added in my.cnf, in the [mysqld] config section:

debian:~# vim /etc/mysql/my.cnf
...
max_connect_errors=4294967295

and afterwards restarted MySQL:

debian:~# /etc/init.d/mysql restart
Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld.
Checking for corrupt, not cleanly closed and upgrade needing tables..

To make sure the assigned max_connect_errors=4294967295 is never reached due to Apache-to-SQL connection errors, I've also added a cronjob:

debian:~# crontab -u root -e
00 03 * * * mysqladmin flush-hosts

In the cron entry I have omitted the mysqladmin -u root -p (user/pass) input options, because for convenience I have already stored the mysql root password in /root/.my.cnf

Here is what /root/.my.cnf looks like:

debian:~# cat /root/.my.cnf
[client]
user=root
password=a_secret_sql_password

Now hopefully this will permanently solve the SQL 'failure to accept connections' due to too many connection errors in the future.

Weblogic – How to change / remove IP/hostname quick and dirty howto

Wednesday, March 11th, 2015

Oracle-Weblogic-Server-logo-how-to-change-ip-hostname-weblogic-quick-and-dirty-howto

This is just a quick & dirty doc on how to change / remove the IP / hostname on an Oracle WebLogic Application server.

– In the logs the error message will be something like:

 

<Oct 21, 2013 1:06:51 AM SGT> <Warning> <Security> <BEA-090504> <Certificate chain received from cluster2.yourdomain.com – 192.168.1.41 failed hostname verification check. Certificate contained cluster1.yourdomain.com but check expected cluster2.yourdomain.com>

 

 

Solution:

On web console – change/remove IP/hostname

 

As root / admin superuser:

 

– Stop the WebLogic Webserver

As this is RHEL Linux, to stop WLS use the standard init script service stop command:

 

service wls stop

 

– As the application user, create a directory where the new key will be created:

 

mkdir /home/uwls11pp/tmp_key
cd /home/uwls11pp/tmp_key

 

– Make a backup of the current JKS (Keystore file):

 

cp /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks_11032015

 

– Execute the environment setup script:

 

/WLS/app/oracle/wls1036/wlserver_10.3/server/bin/setWLSEnv.sh

 

– Copy & paste the output from the script above and export the variables:

 

export CLASSPATH;
export PATH;

 

– Check old certificate in keystore

 

/WLS/app/oracle/jdk1.7.0_25/bin/keytool -list -v -keystore /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks  -storepass DemoIdentityKeyStorePassPhrase

 

– Delete the old certificate (alias demoidentity) from the WebLogic keystore JKS file:

 

/WLS/app/oracle/jdk1.7.0_25/bin/keytool -delete -alias demoidentity -keystore /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase

 

– Check whether the proper Java version is used:

 

java -version

 

– Get hostname from hosts file

 

cat /etc/hosts

 

#Replace weblogic1 with your FQDN (Fully Qualified Domain Name) – this step will create a new certificate with the new hostname

 

java utils.CertGen -cn weblogic1 -keyfilepass DemoIdentityPassPhrase -certfile newcert -keyfile newkey

 

#Import certificate to “official” keystore

 

java utils.ImportPrivateKey -keystore /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -keyfile newkey.pem -keyfilepass DemoIdentityPassPhrase -certfile newcert.pem -alias demoidentity

 

#Recheck once again if correct certificate is in use

 

/WLS/app/oracle/jdk1.7.0_25/bin/keytool -list -v -keystore /WLS/app/oracle/wls1036/wlserver_10.3/server/lib/DemoIdentity.jks  -storepass DemoIdentityKeyStorePassPhrase


– Finally, as the root user, start the WebLogic server again:

 

 

service wls start

Fix to (OOPS!!! there seems to be some problem while tweeting. Please try again.) – Solution: Why WordPress Tweet-Old-Post can’t authorize and Auto post to Twitter

Wednesday, July 31st, 2013

oops there seems to be some problem with tweeting please try again wordpress tweet old post

I've been happily using Tweet-Old-Post to auto tweet my old blog posts on Twitter to drive some extra traffic to this blog, and it used to work well until recently. Suddenly it mysteriously stopped working! Until this very day I didn't have the time to investigate what is happening and why Tweet Old Post fails to auto post to Twitter with the below miserable error:

Tweet-old-post twitter authorized post wordpress screenshot

 

OOPS!!! there seems to be some problem while tweeting. Please try again.


Today I had some free time at work and was wondering what to do, so I decided to try some closer examination. I read plenty of posts online from people complaining about the same problems on both the current latest WordPress 3.5.1 and older releases of WordPress. Some claimed these errors are because of WordPress version incompatibility, others said it is due to the fact that some other plugins (like FD FeedBurner) create conflicts with Tweet-Old-Post. I use the FD Feedburner plugin myself, so I tried disabling it for a while to see if this fixes it, with no luck.

Some other suggested solutions were to check whether
Settings -> General -> (Blog hostname)
is properly configured.
Some even suggested manually "hacking" the plugin code, changing stuff in top-admin.php, claiming the reason for the issues is rooted in some looping mod_rewrite redirect rules.

As a logical step to solve it, I moreover tried the good old Windows philosophy (restart it and it will magically work again).
Thus from WordPress main menu
Tweet Old Post -> (clicked on) Reset Settings

Tweet-old-post update tweet old post options tweet now reset settings buttons wordpress screenshot
to nullify any custom settings that might have been messing it up.
Though the reset worked fine, trying to do a test tweet with the Tweet Now (button) failed once again with the shitty error msg:

OOPS!!! there seems to be some problem while tweeting. Please try again.

As a next logical step I tried to enable Tweet-Old-Post logging by ticking on
Enable Log (Saves log in log folder)
In the log file log.txt (located in my case in /var/www/blog/wp-content/plugins/tweet-old-post/log.txt) I noticed the following error messages:

 1375196010 ..CURLOPT_FOLLOWLOCATION is ON
  1375196010 ..CURL returned a status code of 200
  1375196011 do get request returned status code of 400 for url – http://api.twitter.com/1.1/users/show.json?id=126915073

Obviously something was wrong with the curl PHP usage; however, as I was lazy and not a great PHP programmer, I decided not to take the time to further debug the PHP curl functions, but instead to try some kind of alternative Post-To-Twitter plugin.
It turned out there are at least two more WP plugins that are auto posting to twitter:
  • tweetily-tweet-wordpress-posts-automatically
  • evergreen-post-tweeter
     

I tried to manually download and install both of them with wget in wp-content/plugins and set proper apache-readable permissions, i.e. (chown -R www-data:www-data /var/www/blog/wp-content/plugins/tweetily-tweet-wordpress-posts-automatically; chown -R www-data:www-data /var/www/blog/wp-content/plugins/evergreen-post-tweeter).

Tweet-Old-Post sign in with twitter wordpress screenshot

Further on, I tried to enable them one by one and then tried authorizing auto tweeting to the Twitter app; both failed to authorize auto posting to Twitter, just like Tweet-Old-Post 4.0.7. As using another plugin was not a solution, I tried going in another direction and followed some people's suggestion to downgrade Tweet-Old-Post and try with an older version again. I used the following link to try old Tweet-Old-Post versions

The old version didn't work either, so finally I felt totally stuck .. unable to fix it for a while, and then the light bulb went on: I had the brilliant idea to check the curl settings in php.ini (/etc/php5/apache2/php.ini). I looked in the config for anything related to curl, until I found what was causing it!!!! A security setting in php.ini disabling curl use.

Below is a paste from php.ini with the line causing the whole 'OOPS!!! there seems to be some problem while tweeting. Please try again' error:

disable_functions =exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,eval

I had included the above disabled functions to strengthen security and prevent crackers from downloading scripts if a security breach happens.

Henceforth, to solve it I simply removed curl_exec and curl_multi_exec from disable_functions, so after the change the machine's PHP functions disabled for security reasons looked like so:

disable_functions =exec,passthru,shell_exec,system,proc_open,popen,parse_ini_file,show_source,eval
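To quickly confirm what disable_functions Apache's PHP will see, the value can be printed straight from that same php.ini with the PHP CLI (a small check, assuming the php command-line binary is installed):

php -c /etc/php5/apache2/php.ini -r 'echo ini_get("disable_functions"), PHP_EOL;'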

To make the new php.ini settings load, I finally did the usual Apache restart:

pcfreak:~# /etc/init.d/apache2 restart
...

Well, that's all; now the error is solved. I hope my little article will shed some light on the problem and will help thousands of users to get back the joy of a working Tweet-Old-Post 😉
 

How to configure Exim to relay mail to remote SMTP server on Debian and Ubuntu

Wednesday, August 24th, 2011

I’m required to do some mail relaying on a Debian Linux host which should use a remote mail server to relay its mails.
Until so far I’ve had not much experience with exim as I prefer using qmail, whever a mail server is needed. However since now only a relaying was necessery and exim is the default installed MTA on Debian, I’ve decided to use exim to take care of the SMTP mail relaying.
After a bit of reading it turned out configuring exim to relay via a remote SMTP server is more than easy!

All I had to do is run the command:

debian-relay:~# dpkg-reconfigure exim4-config

Next, in the ncurses interface that appears:

Debian Exim relay smtp config screenshot

I had to choose the option:

mail sent by smarthost; no local mail

Next a dialog appears asking for:
System mail name:
Therein it's necessary to type in the hostname of the remote SMTP server to be used for the mail relay.
Next dialog asks for:
IP-addresses to listen on for incoming SMTP connections:
and I left it at 127.0.0.1; however, if exim is supposed to be visible from the external network, one might decide to put the real IP address there.

Pressing OK leads to the next dialog:
 Other destinations for which mail is accepted: 
I decided to leave this blank as I don’t want to accept mail for any destinations.
Next pane reads:
Visible domain name for local users:
I typed in my SMTP relay server, e.g.:
smtp.myrelaymail.com

Further comes:
IP address or host name of the outgoing smarthost:
There once again I typed my mail relay host smtp.relaymail.com

The next config screen is:
Keep number of DNS-queries minimal (Dial-on-Demand)?
On any modern Linux host the default answer of No is fine.
The following prompt asked if I wanted to:
Split configuration into small files?
I decided not to tamper with it and chose No.
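For reference, the dpkg-reconfigure choices end up in /etc/exim4/update-exim4.conf.conf; a rough sketch of the smarthost-related values after such a run (the hostnames are just the illustrative ones used above) could be:

dc_eximconfig_configtype='smarthost'
dc_local_interfaces='127.0.0.1'
dc_smarthost='smtp.myrelaymail.com'
dc_use_split_config='false'

The file can also be edited by hand and the runtime configuration regenerated afterwards with update-exim4.conf, followed by an exim4 restart.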
Afterwards mail relaying works like a charm thx God 😉

How to change hostname permanently on Debian and Ubuntu Linux

Thursday, March 14th, 2013

Change hostname on Debian and Ubuntu Linux terminal hostname screenshot

I had to configure a newly purchased dedicated server from UK2. New servers come shipped with some randomly assigned node hostname like server42803. This is pretty annoying and has to be changed, especially if your company has a server naming policy in some format like company-s1#, company-s2#, company-sN#.

Changing the hostname via the hosts definition file /etc/hosts, i.e. assigning the host's IP address to the new hostname, is not enough to change the hostname shown in the shell on SSH user login.

To display the full hostname on Debian and Ubuntu, I had to type:

server42803:~# hostname
server42803.uk2net.com

To permanently change the server hostname to, let's say, company-s5:

server42803:~# sed -i 's#server42803.uk2net.com#company-s5#' /etc/hostname

To change for current logged in SSH session:

server42803:~# hostname company-s5
company-s5:~#

Finally, because the old hostname has already been read by sshd, you also have to restart sshd for the new hostname to be visible on user SSH login:

company-s5:~# /etc/init.d/ssh restart
...

As well as run the script:

company-s5:~# /etc/init.d/hostname.sh

Mission change host accomplished, Enjoy 🙂

How to get full host and IP address of last month logged in users on GNU / Linux

Friday, December 21st, 2012

This post might be a bit trivial for Linux gurus, but hopefully it is helpful for novice Linux users. I bet all Linux users know and use the very common last command.

The last cmd provides information on users logged in over the last month, and also shows whether there are users logged in at the moment of execution. It has plenty of options and is quite useful. The problem I often have with it, since I never got into the habit of using it with arguments other than the classic and often used:

last | less

back when I was learning Linux, is that when run like this I can't see the full hostname of users who logged in (or are currently logged in) from remote hosts whose hostnames are longer than 16 characters.

To show you what I mean, here is a chunk of  last | less output taken from my home router www.pc-freak.net.

# last|less
root     pts/1        ip156-108-174-82 Fri Dec 21 13:20   still logged in  
root     pts/0        ip156-108-174-82 Fri Dec 21 13:18   still logged in  
hipo     pts/0        ip156-108-174-82 Thu Dec 20 23:14 - 23:50  (00:36)   
root     pts/0        g45066.upc-g.che Thu Dec 20 22:31 - 22:42  (00:11)   
root     pts/0        g45066.upc-g.che Thu Dec 20 21:56 - 21:56  (00:00)   
play     pts/2        vexploit.net.s1. Thu Dec 20 17:30 - 17:31  (00:00)   
play     pts/2        vexploit.net.s1. Thu Dec 20 17:29 - 17:30  (00:00)   
play     pts/1        vexploit.net.s1. Thu Dec 20 17:27 - 17:29  (00:01)   
play     pts/1        vexploit.net.s1. Thu Dec 20 17:23 - 17:27  (00:03)   
play     pts/1        vexploit.net.s1. Thu Dec 20 17:21 - 17:23  (00:02)   

root     pts/0        ip156-108-174-82 Thu Dec 20 13:42 - 19:39  (05:56)   
reboot   system boot  2.6.32-5-amd64   Thu Dec 20 11:29 - 13:57 (1+02:27)  
root     pts/0        e59234.upc-e.che Wed Dec 19 20:53 - 23:24  (02:31)   

As you can see, the hostname in the last cmd output is sliced, so one cannot see the full hostname. This is quite inconvenient, especially if you have on your system some users who logged in from suspicious hostnames, like the user play, which is a user I've opened for people to be able to play the Cool Linux ASCII (text) Games installed on my system. Normally I would skip worrying about the vexploit.net.s1….. user; however, I noticed that one of the ascii games, similar to nethack and called hunt, was left hanging on the system, putting a load of about 50% on the CPU, was run by the play user, and according to the logs the last login with the play username came from a hostname containing "vexploit.net".

This looked to me very much like a script kiddie attempt to root my system, so I killed the hanging hunt, huntd and HUNT processes and decided to investigate the case.

I wanted to do a whois on the host, but since the host was showing up incomplete in last | less, I needed a way to get the full hostname. The first idea I got was to get the info from the binary file /var/log/wtmp, which stores the hostname records for all logged in users:

# strings /var/log/wtmp | grep -i vexploit | uniq
vexploit.net.s1.fti.net

To get, in a somewhat raw format, all the hostnames and IPs (where an IP did not have a PTR record assigned):

strings /var/log/wtmp|grep -i 'ts/' -A 1|less

Another way to get the full host info is to check in /var/log/auth.log – this is the Debian Linux file storing ssh user login info; in Fedora and CentOS the file is /var/log/secure.

# grep -i vexploit auth.log
Dec 20 17:30:22 pcfreak sshd[13073]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=vexploit.net.s1.fti.net  user=play

Finally, I decided to also check the last man page and see whether last is capable of showing the full hostnames or IPs of previously logged in hosts. It turns out last already has an argument for that, so my above suggested methods turned out to be needless overcomplexity. To show the full hostname of all hosts logged in on Linux over the last month:
 

# last -a |less

root     pts/2        Fri Dec 21 14:04   still logged in    ip156-108-174-82.adsl2.static.versatel.nl
root     pts/1        Fri Dec 21 13:20   still logged in    ip156-108-174-82.adsl2.static.versatel.nl
root     pts/0        Fri Dec 21 13:18   still logged in    ip156-108-174-82.adsl2.static.versatel.nl
hipo     pts/0        Thu Dec 20 23:14 - 23:50  (00:36)     ip156-108-174-82.adsl2.static.versatel.nl
root     pts/0        Thu Dec 20 22:31 - 22:42  (00:11)     g45066.upc-g.chello.nl
root     pts/0        Thu Dec 20 21:56 - 21:56  (00:00)     g45066.upc-g.chello.nl
play     pts/2        Thu Dec 20 17:30 - 17:31  (00:00)     vexploit.net.s1.fti.net
play     pts/2        Thu Dec 20 17:29 - 17:30  (00:00)     vexploit.net.s1.fti.net
play     pts/1        Thu Dec 20 17:27 - 17:29  (00:01)     vexploit.net.s1.fti.net
play     pts/1        Thu Dec 20 17:23 - 17:27  (00:03)     vexploit.net.s1.fti.net
play     pts/1        Thu Dec 20 17:21 - 17:23  (00:02)     vexploit.net.s1.fti.net
root     pts/0        Thu Dec 20 13:42 - 19:39  (05:56)     ip156-108-174-82.adsl2.static.versatel.nl
reboot   system boot  Thu Dec 20 11:29 - 14:58 (1+03:28)    2.6.32-5-amd64
root     pts/0        Wed Dec 19 20:53 - 23:24  (02:31)     e59234.upc-e.chello.nl

Listing only the remote host IPs of all logged in users is done with last's "-i" argument:

# last -i
root     pts/2        82.174.108.156   Fri Dec 21 14:04   still logged in  
root     pts/1        82.174.108.156   Fri Dec 21 13:20   still logged in  
root     pts/0        82.174.108.156   Fri Dec 21 13:18   still logged in  
hipo     pts/0        82.174.108.156   Thu Dec 20 23:14 - 23:50  (00:36)   
root     pts/0        80.57.45.66      Thu Dec 20 22:31 - 22:42  (00:11)   
root     pts/0        80.57.45.66      Thu Dec 20 21:56 - 21:56  (00:00)   
play     pts/2        193.252.149.203  Thu Dec 20 17:30 - 17:31  (00:00)   
play     pts/2        193.252.149.203  Thu Dec 20 17:29 - 17:30  (00:00)   
play     pts/1        193.252.149.203  Thu Dec 20 17:27 - 17:29  (00:01)   
play     pts/1        193.252.149.203  Thu Dec 20 17:23 - 17:27  (00:03)   
play     pts/1        193.252.149.203  Thu Dec 20 17:21 - 17:23  (00:02)   
root     pts/0        82.174.108.156   Thu Dec 20 13:42 - 19:39  (05:56)   
reboot   system boot  0.0.0.0          Thu Dec 20 11:29 - 15:01 (1+03:31)  

One note to make here is that at the beginning of every month the /var/log/wtmp file storing the user login records gets rotated (on Debian this is done by logrotate), so last only shows logins going roughly one month back.

Though the other 2 suggested methods are not necessary, as the same information is provided by last's arguments, they are surely a must-do routine when checking a system which you suspect could have been intruded (hacked). Checking the content of both /var/log/wtmp and /var/log/auth.log / /var/log/auth.log.1 and comparing whether the records on user logins match is a good way to check that your login logs are not forged. It is not a 100% guarantee however, since sometimes attacker scripts wipe out their records from both files. Out of security interest, some time ago I've written a small script to clean a logged in user record from /var/log/wtmp and /var/log/auth.log – log_cleaner.sh – the script has to be run as a superuser to have write access to /var/log/wtmp and /var/log/auth.log. It is good to mention, for those who don't know, that last reads and displays its records from the /var/log/wtmp file, thus altering records in this file will alter the login info last displays.
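A rough way to eyeball whether the two sources agree (Debian default paths assumed) is to list the recent entries from both and compare:

# last reads /var/log/wtmp by default; -f can also point it at a rotated copy
last -a -f /var/log/wtmp | head -20
# successful SSH logins as recorded by sshd
grep 'Accepted' /var/log/auth.log | tail -20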

Thank God, in my case, after examining these files as well as the superusers in /etc/passwd, there were no signs of any successful breach.