Posts Tagged ‘location’

How to check if network ethernet cards are active on Linux server / detect the physical connected state of a network cable / connector?

Tuesday, November 1st, 2022

linux-check-connectivity-interface-software-implementation-of-multi-queue-support-in-Linux-using-commodity-Ethernet-NICs

Let's say you are administering a Linux server and a switch upgrade is planned: the traffic of the Ethernet interfaces connected via a Gigabit network to a Cisco / Juniper EX Series / HPE Aruba or Arista platform network switch has to be temporarily moved out while the switch is migrated to a newer model or newer software.

Usually, if you don't have control over the network switch (if you're employed in a large corporation), that migration will be handled by a colleague from the network team in a pre-scheduled time slot and usually in a coordinated meeting, once the cabling has been physically moved by a person in the Computer Room of the respective data center (DC).

Then the correct commands are executed on the network switch to remap the new cable to point to the right port for the Linux server, where the old switch was, and the setup has to be double-verified by the network teammate.

Once this is done, either by a colleague, or by yourself if you're in a smaller company and work as a one-man-army sysadmin, the next step is to verify that the Ethernet LAN cards on the Linux server, let's say 6 or 8 LAN cards, are still connected and active on the preset active LAN / VLANs.

On Linux this is pretty simple and there are many ways to do it with external tools like ethtool, if you're lucky and your server doesn't have to follow paranoid security rules or be a minimalistic machine with 100% PCI high security standards compliance.

To check connectivity for all your Ethernet interfaces you can simply run a one-liner shell script like so:

[root@linux-server ~]# for i in $(ip a s | grep -E '^[0-9]+:' | awk -F': ' '{ print $2 }' | grep -v '^lo$'); do ethtool $i | grep -E 'Settings for|Link detected'; done
Settings for eth0:
        Link detected: yes
Settings for eth1:
        Link detected: yes
Settings for eth2:
        Link detected: yes

So far so good, but what if your RHEL / CentOS / Debian server doesn't have ethtool installed and you're not allowed to install it? How can you then check whether the network cable connector indicates link activity on the connected Ethernet LAN cards?

[root@linux-server ~]# for f in $(ls -1 /sys/class/net/); do echo "Eth iface: $f"; cat /sys/class/net/$f/operstate; done
Eth iface: eth0
up
Eth iface: eth1
up
Eth iface: eth2
up
Eth iface: lo
unknown

If operstate returns something different, like state unknown, e.g.:

[root@linux-server ~]# cd /sys/class/net/
[root@linux-server net]# grep "" eth2/operstate
unknown
[root@linux-server net]#

[root@linux-server net]# grep "" eth{0,1,2,3}/operstate  
eth0/operstate:unknown
eth1/operstate:unknown
eth2/operstate:unknown
eth3/operstate:unknown

Then you need to check the carrier file:

[root@linux-server net]# grep "" eth{0,1,2,3}/carrier
eth0/carrier:1
eth1/carrier:1
eth2/carrier:1
eth3/carrier:1

It returns either 0 or 1.

The number 1 in the above output means that the network cable is physically connected to your network card's port, meaning your network switch migration is a success.
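If you have many interfaces to verify, the carrier check can be wrapped in a small loop; below is a minimal sketch, assuming the usual /sys/class/net layout and skipping the loopback interface:

[root@linux-server net]# for f in $(ls -1 /sys/class/net/ | grep -v '^lo$'); do \
if [ "$(cat /sys/class/net/$f/carrier 2>/dev/null)" = "1" ]; then \
echo "$f: cable connected"; else echo "$f: no carrier / interface down"; fi; \
done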

Method 2: Next, we will test a second network interface eth1:

[root@linux-server net]# cat /sys/class/net/eth1/carrier
cat: /sys/class/net/eth1/carrier: Invalid argument

This output most likely means that the eth1 network interface is in a powered-down (administratively down) state.
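If eth1 is indeed only administratively down, bringing the link up first (assuming you are allowed to do so and that eth1 is the right interface) makes the carrier file readable again, and it will then return 0 or 1 depending on whether a cable is plugged in:

[root@linux-server net]# ip link set eth1 up
[root@linux-server net]# cat /sys/class/net/eth1/carrier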

So what have we learned?

We have learned how to check the state of the network cable connected to a Linux Ethernet device through an external switch that is being migrated, without the use of any external tools like ethtool.

How to check Windows server installed Tomcat and Java version

Tuesday, August 19th, 2014

how-to-check-get-java_and-tomcat-version-on-windows-java-and-tomcat-logo
I'm filling in a TOP (Turn to Production) form for a project where my part as a Web and Middleware Engineer included the installation of Tomcat 7 and Java 1.7 on Windows Server 2008 R2 Standard. The TOP is a standard Excel sheet document required by many large companies, filled in together with the Project Manager before the server is launched into production mode.

Therefore I needed to find out the previously installed Tomcat and Java versions; here is how:

1. Go to the Tomcat install directory and run (double-click) Tomcat7w

As Tomcat is installed in a custom location under D:\webdienste, in this case I had to run:

D:\webdienste\application-jsp\tomcat\application-name\current\bin\Tomcat7w.exe

I ran it using the command line (cmd.exe); however, you can run it via Windows Explorer if you're lazy about typing.
You will get a pop-up window like in the screenshot below:

In this case the installed Tomcat version was 7.0.55.

If you need to check the version on an older Tomcat application server install, you can instead run Tomcat6w – if it is Tomcat version 6 – or Tomcat5w – for Tomcat ver. 5.
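Another quick way to get the Tomcat version without the GUI pop-up is the version.bat script that ships in Tomcat's bin directory; it prints a "Server version: Apache Tomcat/7.0.xx" line along with OS and JVM details. The path below is just an illustration, adjust it to your actual install location and make sure JAVA_HOME / JRE_HOME is set:

C:\> cd /d C:\Tomcat7\bin
C:\Tomcat7\bin> version.bat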

In order to check the Java version, the quickest way is via the command line; again run cmd.exe from:

Start -> Run -> cmd.exe

how-to-see-find-get-check-tomcat-version-on-windows-server-install-screenshot

Then cd to wherever the Java VM is installed; the usual location where it gets installed for Java 1.7 on Windows is:

C:\Program Files\Java\jre7\bin

Java 8's common location is:

C:\Program Files\Java\jre8\bin
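To quickly see which Java versions are present on the box at all, simply listing the Java directory is often enough (illustrative, the sub-directory names depend on what was installed):

C:\> dir "C:\Program Files\Java"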

The Java installer usually adds the Java bin directory to the default Windows PATH definitions during install, hence to find out exactly where java is installed on the Windows server, type in cmd:

where java
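The output is the full path (or paths) to the java.exe resolved via %PATH%, something along these lines (illustrative):

C:\Users\Georgi> where java
C:\Program Files\Java\jre7\bin\java.exe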

Then, to check the exact installed Java version on the Windows host, invoke the java (JRE) CLI with the -version parameter:

java -version
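The output looks similar to the below (the exact update and build numbers here are only an illustration):

C:\Users\Georgi> java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)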

how-to-check-get-java-version-info-on-windows-server-screenshot

If you're too lazy to type in commands, you can also check the Java version in Windows from the GUI by using the:

Java Control Panel

To launch it, in the:

Start -> (Search Programs and Files)

field type:

Java Conf and click on Java Control Panel

Then click on

General (tab) -> About

java-control-panel-gui-about-version-windows-server-screenshot

 

Windows equivalent of Linux which, whereis command – Windows WHERE command

Friday, June 6th, 2014

windows-find-commands-full-location-where-which-linux-commands-equivalent-in-windows-where-screenshot
In Linux there are the which and whereis commands showing you the location of binaries included in $PATH:

# which lsof
/usr/bin/lsof

# whereis lsof
lsof: /usr/bin/lsof /usr/share/man/man8/lsof.8.gz

So the question arises: what is the Windows equivalent of the Linux which / whereis commands?

In older Windows Home / Server editions – e.g. Windows XP, 2000, 2003 – there is no standard tool installed to show you the location of %PATH%-defined executables. However, it is possible to add the WHERE command binary by installing the Resource Kit tools for administrative tasks.

windows-resource-kit-tools-install-where-linux-command-equivalent-in-windows

 

In Windows Vista / 7 / 8 (and presumably in future Windows releases), the WHERE command is available by default:

C:\Users\Georgi>WHERE SQLPLUS
D:\webdienste\application\oracle\11.2.0\client_1\BIN\sqlplus.exe
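WHERE can also search recursively below a given directory with its /R switch, which is handy when the executable is not in %PATH% at all (the directory and file name below are just an example):

C:\Users\Georgi>WHERE /R C:\Windows notepad.exe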

Cheers! 🙂

Archive Outlook mail in Outlook 2010 to free space in your mailbox

Thursday, May 15th, 2014

outlook-archive-old-mail-to-prevent-out-of-space-problems-outlook-logo
If you're working in a middle-sized or big IT company or corporation like IBM or HP, you're already sucked into the Outlook "mail whirlwind of the corporate world" and daily flooded with tons of corporate spam emails full of fuzzy business terms that sound like they were taken from a Corporate Bullshit Generator.

Many corporations, probably for historic reasons, still provide employees with small mailboxes of half a gigabyte or a gigabyte, and even in those with bigger user mailboxes, like Hewlett Packard, this is usually no more than 2 gigabytes.

This creates a lot of issues in the long term, because mail communication in Inbox, Sent Items, Drafts, Conversation History, Junk Email and Outbox usually grows quickly, and within a year or a year and a half the available mail space fills up and you stop receiving email communication from customers. This is usually not too big a problem if your mailbox gets filled while you're in the office (during office hours). However, it is quite unpleasant and makes a very bad impression on customers when you're on a few weeks' summer holiday with no access to your mailbox, your mailbox's free space depletes, you don't get any mail from the customer, and all the while the customer keeps receiving bouncing messages saying your "INBOX" is full, disrupting your personal or company image.

To prevent this worst case scenario it is always a good idea to archive old mail communication (items) to free up space in your Outlook 2010 mailbox.
Old archived Outlook mail is exported (saved) in the .PST Outlook data file format. The exported mail content and contacts can later easily be attached from those .pst files back into Outlook, leaving you the possibility to still access your old archived mail while keeping the content on your hard drive instead of on the Outlook Exchange mail server (freeing up space in your Inbox).

Here is how to archive your Outlook mail Calendar and contacts:

Archive-outlook-mail-in-microsoft-outlook-2010-free-space-in-your-mailbox

1. Click on the "File" tab on the top horizontal bar.

2. Select "Cleanup Tools" from the options.

3. Click on the "Archive this folder and all subfolders" option.

4. Select what to archive (e.g. Inbox, Drafts, Sent Items, Calendar whatever …)

5. Choose archive items older than (this is quite self-explanatory)

6. Select the location of your archive file (make sure you place the .PST file into a directory you will not forget later)

That's all, now the old mails are freed up from the Outlook Exchange server. Now make sure you create regular backups of the old-archived-mail.pst file you just created; it is a very good idea to upload this file to an encrypted file system on a USB stick, or use something like TrueCrypt to encrypt the file and store it on an external hard drive, if you don't already have a complete corporate backup solution backing up all your laptop content.

Later, attaching or detaching the exported .PST file in Outlook is done from:

File -> Open -> Open Outlook Data File

outlook-open-backupped-pst-datafile-archive-importing-to-outlook-2010


Once the .PST file is opened and attached, the archived old mail folder will appear in the left Inbox pane.

 

outlook-archived-mail-pannel-screenshot-windows-7
You can change the archive's name to something meaningful, like I changed it to Archives-2013, by right clicking on it (Data File Properties -> Advanced).

Richard Stallman explaining Why IPads and Cell Phones are bad for freedom

Wednesday, July 11th, 2012

It is a public secret that mobile phones, which do us a lot of good and generally make our daily lives way easier, are also a big enemy of our natural freedom. Life has become such that it is almost impossible to do any business or even simple daily jobs without using a mobile phone. Practically nobody today has wilfully rejected using a mobile phone, almost no one except a few outsiders like Richard Stallman and probably a few other security freaks.

I was shocked to find out that the Father of Free Software, Richard Matthew Stallman, well known in hackerdom as RMS, does not own and has never used a mobile. The concerns he points out are very much logical and rightful. Owning a mobile is a great hole in personal privacy (mobile phones can easily be sniffed by mobile operators), and anyone carrying a mobile can be tracked down to within 2 to 5 meters of his exact location, based on the mobile phone cells to which the phone is connected.

Many people are not actually aware of the severity of the issue of constant tracking of people everywhere through these so-called "goodies". Many mobile operators already run software which builds location-behaviour patterns for every user of their mobile network. In other words, as we are used to carrying and using the mobile everywhere, an automated program is creating a movement map for each number assigned in the mobile operator's network. The gathered data about our location habits can then easily be used as an indicator to predict our future behaviour and buying habits (how many times we go to the supermarket, how many times we go to the cinema, what kind of interests we hold, etc.).
This, combined with Google account monitoring, could possibly create a system similar to the one in the old Big Brother movies, where all people's goods and even attitudes or desires are monitored, influenced and controlled…

The future implications of this constant "personal surveillance and tracking device", as Stallman likes to call it, are very dangerous for our freedom.

I tried to live without a mobile phone, just like Stallman, for some months, and to tell you the truth the world around seems completely different when you decide not to use them. The time I lived without a mobile clearly showed me we have come to the point where we can no longer live without GSM. We have fallen into the trap of depending on the little "talk box" for absolutely everything, obviously sacrificing privacy and freedom for convenience.
Mobiles are just one side of the coin, as the non-free software which rules the software market and the use of computers poses another threat and takes away many fundamental freedoms we used to have in the less technological world.

Apple, as a vendor of software and hardware, also denies and breaks our freedom very badly, as the company tracks everyone who owns anything created by Apple that is connected to the internet. Besides that, non-free software producers can change the user's software at the press of a button, giving them the opportunity to decide what is good and bad for us, leaving us in the state of helpless, dependent users.

The topic of the little-by-little technological enslavement we are going through nowadays, and of the freedoms being denied while companies convince us that we become more free with each next mumbo-jumbo gadget or by owning the latest smartphone, is very big and complex but unfortunately under-discussed in society. I don't understand why; is it due to the low technical skills of mass users, or is it a "don't care what happens in the future" attitude? Either way, the number of people openly discussing or protesting against the technologization taking away our freedom is almost zero…

Here is the video I found on YouTube in which Stallman is asked a few questions on iPads ("iBads") and mobile phone use. I believe his short explanation summarizes the problem quite well.

I just wonder, after you check the video, would you still accept an iPad as a birthday gift? 🙂
Do you still think cell phones are "good", freedom-safe and reliable?

How to check the IP address of Skype (user / Contacts) on GNU / Linux with netstat and whois

Thursday, May 3rd, 2012

netstat check skype contact IP info with netstat Linux xterm Debian Linux

Before I explain how the netstat and whois commands can be used to check information about a remote Skype user – e.g. one to whom a Skype message is sent or from whom one is received – I will describe in a few words, at an abstract level, how the Skype P2P protocol is designed.
Many hardcore hackers certainly know how Skype operates, so if this is the case just skip the boring few lines of explanation on how the Skype protocol works.

In short, Skype transfers its message data, as most people know, in Peer-to-Peer "mode" (P2P); P2P is unique in that it doesn't require a central server to transfer data from one peer to another. The most classical use of P2P networks in the free software realm is BitTorrent.

Skype's way of connecting one peer client to another peer client is done via so-called "transport points". To make a P2P connection Skype goes through a number of intermediate destinations. These transport points (peers) are actually other users logged in to Skype, and the data between point A and point B is transferred via these other logged-in users in encrypted form. If a Skype message has to be transferred from Peer A (point A) to Peer B (point B), or the other way around, the data flows in a way similar to:

 A -> D -> F -> B

or

B -> F -> D -> A

(where D and F are simply other people running skype on their PCs).
Hence, the chat communication from person A to person B in Skype always passes through at least a few other IP addresses, owned by Skype users who happen to be located somewhere between the real geographic location of A (the Skype peer sender) and B (the Skype peer receiver).

The exact way Skype communicates is way more complex; these basics however should be enough for most people to grasp the basic Skype protocol concept…

In order to find the IP address of a certain Skype contact, one needs to check all ESTABLISHED connections belonging to the skype process with netstat in the kernel network (connection) stack.

netstat displays a few IPs when the established Skype connections are grepped:

noah:~# netstat -tupan|grep -i skype | grep -i established| grep -v '0.0.0.0'
tcp 0 0 192.168.2.134:59677 212.72.192.8:58401 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:57896 87.120.255.10:57063 ESTABLISHED 3606/skype

Now, as a few IPs are displayed, one needs to find out exactly which of the listed ESTABLISHED IPs belongs to the Skype contact from whom the messages in question are received or to whom they are sent.

The first IP address:port pair on each line (192.168.2.134:…) is the local IP address of my host running the Skype client. The second one is the IP address of the remote Skype host (Skype Name) to which messages are transferred (at the exact time the netstat command was run).

The easiest way to find out exactly which of all the listed IPs is the IP address of the remote person is to send multiple messages in a short time interval (let's say 10 messages in 10 seconds to the remote Skype contact).

It is a hard task to write 10 messages in 10 seconds and simultaneously run netstat 10 times in a separate terminal. Therefore, instead of testing your reflexes, it is good practice to run a tiny loop that sleeps for 1 second between executions of the prior netstat command.

To do so open a new terminal window and type:

noah:~# for i in $(seq 1 10); do \
sleep 1; echo '-------'; \
netstat -tupan|grep -i skype | grep -i established| grep -v '0.0.0.0'; \
done

-------
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
-------
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
-------
tcp 0 0 192.168.2.134:49096 213.199.179.161:40029 ESTABLISHED 3606/skype
tcp 0 0 192.168.2.134:55119 87.126.71.94:26309 ESTABLISHED 3606/skype
...

You see that on the first netstat execution in the sequence there is only 1 IP address to which a Skype connection is established; once I sent some new messages to my remote Skype friend, another IP immediately appeared. This other IP is actually the IP of the person to whom I'm sending the "probe" Skype messages.
Hence, most likely the Skype chat at hand is with a person who has the newly appeared IP address 213.199.179.161.

Later, to get exact information on who owns 213.199.179.161, including administrative contact info and the address of the ISP or person owning the IP, do a RIPE whois:

noah:~# whois 213.199.179.161
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf

% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '87.126.0.0 - 87.126.127.255'
inetnum: 87.126.0.0 - 87.126.127.255
netname: BTC-BROADBAND-NET-2
descr: BTC Broadband Service
country: BG
admin-c: LG700-RIPE
tech-c: LG700-RIPE
tech-c: SS4127-RIPE
status: ASSIGNED PA
mnt-by: BT95-ADM
mnt-domains: BT95-ADM
mnt-lower: BT95-ADM
source: RIPE # Filtered

person: Lyubomir Georgiev
.....

Note that this method of finding out the IP of the remote Skype Name with whom a chat is going on is not always precise.

If, for instance, you tend to chat with many people simultaneously in Skype, finding the exact IPs of each of the multiple Skype contacts will be a very hard, not to say impossible, task.
Also, when using netstat to capture the IP of a Skype Name you're chatting with, there might be plenty of "false positive" IPs.
For instance, Skype might show a remote Skype contact's IP correctly, but this still might not be the IP from which the remote Skype user is chatting, as the remote Skype side might not have a uniquely assigned internet IP address but might use his internet connection over NAT or from behind a DMZ.

The remote Skype user might also be hard or impossible to track if the Skype client is run over a Tor proxy for the sake of anonymity.
Though it can't be taken for granted that the IP address obtained with the netstat + whois method is 100% correct, in most cases it is enough to give (at least approximate) info on the country and city of origin of the person you're skyping with.
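As a shortcut, if the geoip-bin package (the geoiplookup tool) happens to be installed on the Linux host, it gives a rough one-line country estimate for the captured IP straight away, without digging through the whole whois record:

noah:~# geoiplookup 213.199.179.161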
 

Auto restart Apache on High server load (bash shell script) – Fixing Apache server temporal overload issues

Saturday, March 24th, 2012

auto-restart-apache-on-high-load-bash-shell-script-fixing-apache-temporal-overload-issues

I've written a tiny script to check the server load and restart Apache if the server encounters an extremely high load average, for instance more than 25. Below is an example of a server reaching a very high load average:

server:~# uptime
13:46:59 up 2 days, 18:54, 1 user, load average: 58.09, 59.08, 60.05

Sometimes a high load average is not a problem, as the server might have very powerful hardware. High load numbers are not always an indicator of serious problems. A 16-CPU dual-core (2.18 GHz) machine with 16GB of RAM could probably work normally with a high load average like in the example. Anyhow, as most servers are not so powerful, such a high load average makes the machine barely able to do its routine job.

In my specific case, one of our Debian Linux servers periodically reaches very high load numbers. When this happens the Apache webserver is often incapable of serving its incoming requests and starts lagging for clients. The only workaround is to stop the Apache server for a couple of seconds (10 or 20 seconds) and then start it again once the load average has dropped below 3.

If this temporary fix is not applied in time, the server load increases exponentially until all the server services (ssh, ftp… whatever) stop responding normally to requests and the server completely hangs…

Often these server overloads occur at night, when I'm not logged in on the server, and one such unexpected overload makes the server unreachable for hours.
To get around the sudden periodic high load average increase, I've written a tiny bash script to monitor the server load average and initiate an Apache stop and start with a few seconds delay in between.

#!/bin/sh
# script to check server for extremely high load and restart Apache if the condition is matched
# integer part of the 1 minute load average
check=$(cat /proc/loadavg | sed 's/\./ /' | awk '{print $1}')
# define max load average above which the script is triggered
max_load='25'
# log file
high_load_log='/var/log/apache_high_load_restart.log';
# location of index.php to overwrite with temporary message
index_php_loc='/home/site/www/index.php';
# location of Apache init script
apache_init='/etc/init.d/apache2';
#
site_maintenance_msg="Site Maintenance in progress - We will be back online in a minute";
if [ "$check" -gt "$max_load" ]; then
# put up the maintenance page and give the server some time to calm down
cp -rpf $index_php_loc $index_php_loc.bak_ap
echo "$site_maintenance_msg" > $index_php_loc
sleep 15;
# re-read the load average to make sure this is not just a short load spike
check=$(cat /proc/loadavg | sed 's/\./ /' | awk '{print $1}')
if [ "$check" -gt "$max_load" ]; then
$apache_init stop
sleep 5;
$apache_init restart
echo "$(date) : Apache Restart due to excessive load | $check |" >> $high_load_log;
fi
# restore the original index.php
cp -rpf $index_php_loc.bak_ap $index_php_loc
fi

The idea of the script is partially based on a forum thread – Auto Restart Apache on High Load – http://www.webhostingtalk.com/showthread.php?t=971304

Here is a link to my restart_apache_on_high_load.sh script.

The script is written so that it makes two "if" condition check-ups, to make sure there really is a constantly high load average and not just a temporary load jump of a few seconds. Once the first if is matched, the script first tries to reduce the server load by overwriting the index.php (or index.html) of the website with one stating the server is undergoing maintenance operations.
Temporarily disabling the index page often reduces the load within seconds, so the second if case does not trigger at all. Sometimes, however, this first "if" condition cannot decrease the load enough and the server load stays too high; then the script's second if comes into play, completely stopping Apache via the Apache init script, waiting a few seconds and launching the Apache server again.

The script also logs the load average encountered while the server was overloaded and the Apache webserver was restarted, so later I can check at what time the server overload occurred.
To make the script run periodically, I've scheduled it to launch every 5 minutes as a cron job with the following cron entry:

# restart Apache if load is higher than 25
*/5 * * * * /usr/sbin/restart_apache_on_high_load.sh >/dev/null 2>&1

I also have another system running FreeBSD 7_2, which has the same server overload problems as the Linux host.
Copying the auto restart Apache on high load script to FreeBSD didn't work out of the box, so I rewrote a little chunk of the script to make it run on the FreeBSD host. Hence, if you would like to auto restart Apache or any other service on a FreeBSD server, get my script /usr/sbin/restart_apache_on_high_load_freebsd.sh and set it on cron on your BSD.
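The main difference on FreeBSD is that there is no /proc/loadavg mounted by default, so the load average has to be taken from the kernel via sysctl instead. A minimal sketch of how the load check line could look on the FreeBSD host (the rest of the script logic stays the same):

# FreeBSD: 'sysctl -n vm.loadavg' prints something like "{ 0.15 0.10 0.05 }"
# take the 1 minute value (2nd field) and strip the decimal part
check=$(sysctl -n vm.loadavg | awk '{print int($2)}')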

This script is just a temporary workaround; it is obvious that the frequency of the high overloads will rise with time and we will need to buy new server hardware to solve the issue permanently. Anyway, until this happens the script does a great job 🙂

I'm aware there is also an alternative way to auto restart the Apache webserver on high server load by using the monit utility for monitoring services on a Unix system. However, as I didn't want to bother running extra services in the background, I decided to rather use the script presented above.

Interesting to know is that an Apache module, mod_overload, exists which can be used for checking the load average. Using this module, once the load average is over a certain number Apache can stop serving the current requests in its preforked processes. I've never tested it myself, so I don't know how usable it is. As of the time of writing it is at an early stage, version 0.2.2.
If someone has tried it and is happy with it on busy hosting servers, please share with me whether it is stable enough.

Why and How facebook profits because of you? – People, The real facebook investors

Tuesday, March 6th, 2012

Facebook people real facebook investors, facebook profits because of you / facebook greedy money logo

Facebook is usually praised and very seldom criticized. I've already seen, on a couple of occasions on TV news during earthquakes or other calamities, how Facebook was said to help the rescue teams etc. We constantly hear how Facebook has helped people share their location in disastrous situations or helped people organize a protest against a harmful company activity. While this might be true, the harm it does is quite big as well. A primary harm it does is to the economy as we know it. As people are busy filling Mark Zuckerberg's and the Facebook investors' pockets, they rarely think about how Facebook actually gets its money.

Let me explain:

Basically Facebook makes money out of its constantly growing social network data content. This would not have been possible without the 800 000 000+ people who constantly post updates on Facebook, create groups, post pictures, add likes, comment and post links to other Facebook pages. If Facebook did not have all these volunteers (Facebook users) to post this bunch of mostly junky information, Facebook Inc. would not have a penny. Therefore what makes Facebook grow is the people themselves, who willingly choose to be part of this money making machine. One would think that with a regular company the investors are the owners of the company shares. This classical business model is not Facebook's model; there it is rather different, as the real investors in Facebook are not the capital shareholders but the regular social network user base – this means you and me!

For all those who still don't get what I'm talking about, I will briefly explain.
Everyone who has a basic idea of how internet advertising works is aware that the primary origin of Facebook's profit today is the skyscraper ad field in the side pane with its ever changing advertisements.

Various advertisers constantly pay Facebook big money for displaying those stupid advertisements. As many people view and click the advertisements, Facebook makes billions out of its advertisers.

So far so good: Facebook generates its profits out of people's free time and deliberately shared information, you would say, and you might argue with me that Facebook steals people's time / money. This would have been true if you don't put into the picture, for contrast, a regular blogger who makes his daily living out of blogging.
What a regular blogger does is frequent blogging on various topics of his interest. Various bloggers blog under various titles, but most of them have a few major topics which they follow.

The more articles a blogger collects and the higher the uniqueness of this information, the bigger the probability that the blog will have a good user base and the more interesting content it will have for search engine robots like the Google Bot crawlers or the Yahoo Bot, etc.
With all that said, the higher the probability that this blog will draw more traffic from web searches. As the blogger's content increases with time, when it reaches 10000 or more unique articles (pages), it can consequently be used as an advertising place. A blog of 10000 pages could earn a person a few hundred euros (200, 300 EUR) per month.

Well, the business scheme behind Facebook is exactly the same, except that they store and physically own the data of the people registered in Facebook. The user posts content on his Facebook wall, makes pages or does various activities which generate pages, the content gets indexed in Google, and with time the overall Facebook website content grows. New users join Facebook with the increased popularity of the website, and the website grows exponentially like an atomic chain reaction.

Because of this steady content growth, it becomes an interesting place not only for advertisers but for all kinds of people that use the internet.
And there you have the monetization: Facebook makes billions of dollars every second because of you. This is the shocking truth: they get their money because people click or view advertisements on each other's profiles. So there you are, YOU make the few people who develop Facebook and the original investors richer and richer every day, while you make yourself poorer and poorer by investing your personal time in Facebook instead of using it to work on something that could potentially generate you some dividends in the short or long run.

Actually a social network is nothing more than a multi-user blogging platform, but some marketing person came up with this marketing hype word "social network".
The social network buzzword is in my view just another big marketing "white lie"! Correct me if I'm wrong, but what in fact is a "social network"? I don't see Facebook as either social or a network. I don't know about you, but I have never made a long lasting friendship or relationship using Facebook so far. I think the Facebook creator Zuckerberg made Facebook with a viral mindset. He intended it to be like a social virus, and so far he has succeeded pretty well. I just wait and am eager to see who will create the anti-virus for Zuckerberg's (Facebook) people-time-eating virus.
 

How to exclude files on copy (cp) on GNU / Linux / Linux copy and exclude files and directories (cp -r) exclusion

Saturday, March 3rd, 2012

I recently had to make a copy of the /usr/local/nginx directory to /usr/local/nginx-bak, in order to have a working copy of nginx, just in case my nginx update to a new version from source messed something up.

I did not check the size of /usr/local/nginx, so I just ran the usual:

nginx:~# cp -rpf /usr/local/nginx /usr/local/nginx-bak
...

Execution took more than 20 seconds, so I checked the size and figured out /usr/local/nginx/logs had grown to 120 gigabytes.

I didn't want to put extra load on the production server by copying so many gigabytes, so I asked myself whether excluding directories is possible with the normal Linux copy (cp) command. I checked the cp manual, e.g. man cp, but there is no argument like --exclude or anything similar.

Even though an exclude feature is not implemented in the cp command, there are a couple of ways to copy a directory while excluding subdirectories or files on GNU / Linux.

Here are the 3 major ones:

1. Copy directory recursively and exclude sub-directories or files with GNU tar

Maybe the quickest way to copy and exclude directories is through a little 'hack' with GNU tar:

nginx:~# mkdir /usr/local/nginx-new
nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# tar cvf - . --exclude='./logs/*' \
| (cd /usr/local/nginx-new; tar -xvf - )

Copying that way however is slow; in my case it fits me perfectly, but for copying large chunks of data it is better not to use a pipe and instead use a regular tar operation + mv:

# cd /source_directory
# tar cvf test.tar --exclude='dir_to_exclude/*' --exclude='dir_to_exclude1/*' .
# mv test.tar /destination_directory
# cd /destination_directory
# tar xvf test.tar

2. Copy folder recursively excluding some directories with rsync

People who have experience with rsync already know how invaluable this tool is. Rsync can be used as a complete substitute for cp:

# rsync -av --exclude='path1/to/exclude' --exclude='path2/to/exclude' source destination

This example, can also be used as a solution to my copy nginx and exclude logs directory casus like so:

nginx:~# rsync -av --exclude='/logs/' /usr/local/nginx/ /usr/local/nginx-new

As you can see for yourself, this is way more readable than the tar approach; however, it will not work on servers where rsync is not installed, and it is unusable if you have to do the operation as a regular user on such systems. For that case the GNU tar hack is surely more 'portable' across systems.
rsync also has a Windows version, and therefore the same methodology should work on MS Windows and be good for batch scripting.
I've not tested it myself yet, as I've never used rsync on Windows; if someone has tried it and it works, please drop me a short message in the comments.
3. Copy directory and exclude sub directories and files with find

find in collaboration with cp can also be used to exclude certain directories while copying. Actually this method is better than the GNU tar hack and surely more efficient. For machines where rsync is not installed it is just a perfect way to copy files from location to location while excluding some directories. Here is an example use of find and cp for the above nginx case:

nginx:~# mkdir /usr/local/nginx-bak
nginx:~# cd /usr/local/nginx
nginx:/usr/local/nginx# find . -mindepth 1 -maxdepth 1 -type d \( ! -name logs \) -print -exec cp -rpf '{}' /usr/local/nginx-bak \;

This will find all the top-level directories inside /usr/local/nginx except logs with the find command, print them on the screen, then execute a recursive copy of each found directory into /usr/local/nginx-bak.

This example will work fine in the nginx case because /usr/local/nginx does not contain any files, only sub-directories. In other cases, where the directory does contain some files besides the sub-directories, the files have to be copied as well, e.g.:

# for i in $(ls -l | egrep -v '^d' | awk '{print $9}'); do \
cp -rpf "$i" /destination/directory/; \
done

This will copy the files from the source directory (for instance /usr/local/nginx/my_file.txt, /usr/local/nginx/my_file1.txt etc.) which don't belong to a subdirectory.

The cmd expression:

# ls -l | egrep -v '^d'

lists only the files while excluding all the directories (the awk '{print $9}' part just extracts the file name field from the ls -l output), and in the for loop each of the files is copied to /destination/directory.
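As a side note, the same 'copy only the plain files' step can be done in one go with find, which avoids parsing the ls output altogether (a sketch, using the same example destination directory):

# find . -maxdepth 1 -type f -exec cp -pf '{}' /destination/directory/ \;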

If someone has better ideas, please share with me 🙂