Archive for October, 2012

Debian Linux: how to remove Xorg, GNOME / KDE, GDM and other graphical environment packages from a server

Wednesday, October 17th, 2012

Let's say you install a package by mistake and apt pulls in as dependencies a whole bunch of Xorg, GDM and GNOME 2 / 3 (desktop environment) packages, along with a multitude of other meta-packages such as, let's say, xinit, nautilus, totem, gedit, remmina etc.:
Mistakenly installing a graphical environment happens quite often (at least in my experience as an admin it has happened many, many times). Often the GUI gets installed by mistake on an already well configured production server, serving thousands of HTTP, SQL and Mail requests daily.
Having a GDM login started on the server takes some of the CPU time and also widens the possibilities for a security breach of the server, so as always, if something is not used it is better to wipe it off …

Here are some apt-get remove commands which will (COMPLETELY) remove the X server (Xorg), the Graphical Login Manager (GDM), the GNOME desktop environment and their surrounding packages:


# apt-get remove xorg
# apt-get remove nautilus-data nautilus-sendto libnautilus-extension1
# apt-get remove desktop-base
# apt-get remove python-gnomedesktop
# apt-get remove gdm3
# apt-get remove totem seahorse remmina gedit-common gconf2 epiphany-browser gconf-defaults-service xauth
# apt-get remove epiphany-browser-data evolution-webcal gconf2
# apt-get remove x11-common
# apt-get autoremove --purge gnome*

Something worth mentioning here is that in Debian (and its deb-based Linux derivatives, including Ubuntu) there are the so-called meta-packages. For those who don't know what a meta-package is: it is a package linked to a group of packages – essentially a pre-selected set of packages ready to be installed / removed with apt, aptitude or the rest of the "intelligent" package management utils available for Debian.
Once a meta-package is installed, all linked package dependencies – be it binaries or libraries, as well as the proper configurations – are downloaded and installed.
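
A quick way to see what a given meta-package would pull in is to ask apt-cache for its dependencies; below is a small example using gnome-core (just an illustrative pick on my side – substitute whichever meta-package you are curious about):


# apt-cache show gnome-core | grep -i ^Depends
# apt-cache depends gnome-core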

A very useful thing, hence, is listing all installable meta-packages; to list all available meta-packages in Debian Linux use:


# apt-cache search metapackage
....

.....
......

As of the time of writing this post there are 276 apt-installable meta-packages available on Debian Squeeze 6.0.5 Linux:


# apt-cache search metapackage|wc -l
276

Another, more general way to see the basic types of installable meta-packages is via tasksel (tasksel is run and used during the initial Debian Installer from the install CD).
tasksel offers a handful of meta-packages; actually tasksel is very handy for sysadmins who install new servers :). Here is the list of meta-packages available through it:


# tasksel --list-tasks
i web-server Web server
u print-server Print server
i dns-server DNS server
u file-server File server
u mail-server Mail server
u database-server SQL database
i ssh-server SSH server
u laptop Laptop
u manual manual package selection
u desktop Graphical desktop environment

It is also possible to view the sub-packages contained within each of the tasksel meta-packages, e.g.:


# tasksel --task-packages desktop
twm
eject
openoffice.org
xserver-xorg-video-all
cups-client
openoffice.org-help-en-us
hp-ppd
avahi-daemon
system-config-printer
openoffice.org-thesaurus-en-us
cpufrequtils
myspell-en-us
xdg-utils
pm-utils
cups
cups-bsd
xorg
iceweasel
xserver-xorg-input-all
hplip
desktop-base
alsa-base
libnss-mdns
browser-plugin-gnash
xterm
anacron
alsa-utils
cups-driver-gutenprint
foo2zjs
hpijs
gimp
menu
kerneloops
openoffice.org-gcj
libgl1-mesa-dri
foomatic-db-engine

Actually, using tasksel is a much more "intelligent" way to remove GNOME, GDM and Xorg from a server. It will completely wipe out everything previously installed for running a Desktop Environment on the host.
To remove the desktop environment with tasksel:


# tasksel remove desktop

An ncurses progress bar will appear, displaying all removed packages …
In my case, while trying to figure out which packages I needed to remove, ImageMagick along with a few other packages got removed as dependencies, so I had to install them again with:


apt-get install --yes imagemagick libice6 php5-imagick libxvmc1 \
libzbar0 libxt6 libsm6 libxres1 libxtst6 x-ttcidfont-conf libxxf86dga1

For people who need to remove the KDE desktop environment from a host that is to be used as a server, check out the KDE meta-packages:


apt-cache search metapackage|grep -i kde

You can remove all KDE related meta-packs within a bash loop, like so:


for i in $(apt-cache search metapackage|grep -i kde|awk '{ print $1 }'); do \
apt-get remove $i; done

It is also usually a good idea, once all packages are removed, to remove the rc (Remove Candidate) deb packages too – if you don't know what rc is, I suggest you read my previous post here.

Removing all rc‘s from system can be done with:


# for i in $(dpkg -l | grep -i '^rc' | awk '{ print $2 }'); do \
dpkg --purge $i; \
done

Though I tested this, if you follow my tutorial be careful: something might break and some essential package or lib for (your custom) services might get removed. Watch carefully what is offered for uninstall and only approve it if you're 1000% sure; please don't hold me responsible if apt removes something which breaks your production server 🙂

Debian Linux: dump and migrate identical packages with dpkg from server 1 to server 2 / A common sysadmin dpkg package dump mistake

Wednesday, October 17th, 2012

Debian dump and migrate dpkg common mistake, copy migrating deb identical packages between Linux hosts

Over the last years it has happened to me multiple times to migrate identical Debian installations (with identical services) – hosts running an identical Debian version with identical installed packages and configs – in order to move old (hardware) servers to newer (hardware) hosts. For simplicity I will call the first system the "copy from host" and the second the "copy to host". Moving the exact set of installed packages between the "copy from host" and "copy to host" systems can probably be done in many ways, but I personally prefer a single method – using dpkg to dump the list of all deb packages on the system into a file, move this file to the "copy to host" and there use a tiny bash for loop (cycle) + dpkg to install all the listed packages. The last time I did this was just 2 days ago, while I was "resurrecting" the Pc-Freak machine using my l337 h4x0r zk!1lZ and the same good old well tested logic 🙂

I used the following to dump all packages:


# dpkg -l | awk '{ print $2 }' >> /root/packages_list.txt

This, though, dumps all deb package names – along with all currently installed ones it also dumps the names of debs which at some point in time existed on the system, were removed, but whose package configs were kept on the system (in other words a tiny part of the package is left installed, just in case one needs to install and use the package again sometime in, let's say, the near future).

This keeping of package configs and skeleton files in Debian is called, in "dpkg language", rc (Remove Candidate – the package is removed but its config files remain). While doing operations the dpkg package manager marks different packages with different flags; the rc flag is set once the package is apt-get remove-d or dpkg -r packagename is run over a package.

For those unfamiliar with Debian's dpkg package system flags, check out man dpkg. Just to give an example of rc, here are a few packages marked as rc:


# dpkg -l |grep -i ^rc|head -n 3
rc acidrip 0.14-0.3 ripping and encoding DVD tool using mplayer and mencoder
rc airsnort 0.2.7e-2 WLAN sniffer
rc airstrike 0.99+1.0pre6a-4 2d dogfight game in the tradition of 'Biplanes' and 'BIP'

The reason why these packages are still "remembered" by dpkg is that they were not purged after removal – i.e. (dpkg --purge whatever-packagename) was not issued over them.

With this said, it is a common mistake of mine, while making a dump of all packages, to also dump into the list the names of packages marked as rc, e.g.:


# dpkg -l | awk '{ print $2 }' >> /root/packages_list.txt

Later I often install every package inside /root/packages_list.txt, as for example pointed out in my previous article Debian Linux Squeeze 32 bit i386 to amd64 hell, just to later find out that numerous packages (daemons) which existed only as rc leftovers on the old "copy from host" end up installed and started by dpkg (config scripts) on the 2nd "copy to host" …

Thus, to prevent this, I recommend people always think well before doing something (something I often fail to do).

Thus it is much better to dump only packages having the ii dpkg flag (installed and configured).
Here is an example of a few packages which have the ii dpkg package flag:


# dpkg -l | grep -i '^ii' | tail -n 3
ii zip 3.0-3 Archiver for .zip files
ii zlib1g 1:1.2.3.4.dfsg-3 compression library - runtime
ii zlib1g-dev 1:1.2.3.4.dfsg-3 compression library - development

Probably other people, just like me, made the same mistake of dumping every package name ever present on the system, and later ended up in the same situation, where they had to remove packages and stop services from running on system boot …

Thus the "correct" way to dump only the installed and configured debs (the ones having the ii system flag) is:


# dpkg -l | grep -i '^ii' | awk '{ print $2 }' >> /root/only_installed_deb_packages_list.txt

Then the rest of the package copy from the "copy from host" (machine 1) to the "copy to host" (machine 2) is done by uploading /root/only_installed_deb_packages_list.txt to the 2nd host with ftp, sftp, scp or whatever transfer protocol, and running on the "copy to host":


for i in $(cat /root/only_installed_deb_packages_list.txt); do
apt-get install --reinstall $i; done

Generally this will make the programs present on the "copy from host" also present on the "copy to host".
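
As a side note, dpkg also ships a pair of options made for exactly this kind of transfer; a minimal sketch of that alternative approach (assuming the same two hosts) would be:


copy-from-host# dpkg --get-selections > /root/pkg_selections.txt
(transfer /root/pkg_selections.txt to the 2nd host with scp / sftp / ftp)
copy-to-host# dpkg --set-selections < /root/pkg_selections.txt
copy-to-host# apt-get dselect-upgrade

The nice part is that --get-selections records the desired state of each package (install / deinstall), so packages existing only as rc leftovers should not get pulled in again on the second host.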
Enjoy 🙂

Pc-Freak 2 days Downtime / Debian Linux Squeeze 32 bit i386 to amd64 hell / Expression of my great Thanks to Alex and my Sister

Tuesday, October 16th, 2012

Debian upgrade Squeeze Linux from 32 to 64 bit problems – don't try to do it unless you have physical access !!!

Recently, for reasons UNKNOWN to ME, the new Pc-Freak computer hardware crashed 2 times over the last 2 weeks; this was completely unexpected, especially after the huge hardware upgrade of the system. Currently the system is equipped with 8GB of memory and a nice Dual Core Intel CPU running at a speed of 6 GHZ, however for completely unknown reasons it continued to experience outages and mysterious hang-ups …

So far I haven't had the time to post a few documentary pictures of the PC hardware on which this blog and the rest of the sites and shell access are running, so I will use this post to do that as well:

Below I include a picture, for the sake of History preservation 🙂, of the old Pc-Freak hardware running on an IBM ThinkCentre (1GB Memory, 3GHz Intel CPU and 80 GB HDD):

IBM Desktop ThinkCentre old pc-freak hardware server PC

The old FreeBSD powered Pc-Freak IBM ThinkCentre

Here are 2 photos of the new hardware host, running on a Lenovo ThinkCentre Edge:

New Pc-Freak host hardware lenovo ThinkEdge Photo
New Pc-Freak host hardware Lenovo ThinkEdge Camera Photo
My guess was those unusual "freezes" were caused by momentary overloads of the WebServer or the MySQL db.
Actually, Debian Squeeze was "stupidly" installed as a 32 bit Linux (by me). I did that stupidity just a few weeks ago, when I moved all the data content (SQL, Apache config, Qmail accounts, Shell accounts etc. etc.) from the old Pc-Freak computer to the newly purchased one.

After finding out I had improperly installed (being in a hurry) a 32 bit system, I had upgraded only the 32 bit system kernel – which doesn't handle more than 4GB well – to an amd64 one supporting up to 64GB of memory; if interested, I have previously blogged on this here.
Thanks to my dear friend Alexander (who in this case should have a title similar to Alexander the Great – for he did great and did not let me down, being there in such a difficult moment for me and spending his personal time helping me bring up Pc-Freak.Net). To find out a bit more about Alex you might check his personal home page, hosted on www.pc-freak.net too, here 🙂
I don't exaggerate, really Alex did a lot for me, and this is maybe the 10th time I've disturbed him over the last 2 years, so I owe him a lot! Alex – I really owe you a lot bro – thanks for your great efforts; thanks for going to my home 3 times in just two days, thanks for burning Rescue CDs, staying up until 2 A.M. and really thanks for all!!

Just to mention again, to let me in via Secure Shell, Alex burned and booted for me a Debian Linux Rescue Live CD downloaded from the link here.

This time I messed up my tiny little home hosted server very, very badly!!! Those of you who read my blog or have SSH accounts on Pc-Freak.NET should already have figured out that Pc-Freak.net was down for about 2 days (48 HOURS!!!!).

The exact “official” downtime period was:

Saturday, OCTOBER 13!!! (from around 16:00 o'clock – I'm not a fatalist but this 13th was really a harsh date) until Monday, the 15-th of Oct (14:00h) …

I’m completely in charge and responsible for the 2 days down time, and honestly I had one of my worst life days, so far. The whole SHIT story occurred after I attempted to do a 32 bit (i386) to AMD64 (64 bit) system packages deb binary upgrade; host is installed to run Debian Squeeze 6.0.5 ….; Note to make here is Officially according to documentation package binary upgrades from 32 bit to 64 arch Debian Linux are not possible!. Official debian.org documentation recommended for 32 bit to 64 packs update (back up all system existent data) and do a clean CD install / re-install, over the old installed 32 bit version. However ignoring the official documentation, being unwise and stubborn, I decided to try to anyways upgrading using those Dutch person guide … !!!

I’ve literally followed above Dutch guy, steps and instead of succeeding 64 bit update, after few of the steps outlined in his article the node completely (libc – library to which all libraries are linked) broke up. Then trying to fix those amd64 libc, I tried re-installing coreutils package part of base-files – basis libs and bins deb;
I’ve followed few tutorials (found on the next instructing on the 32bit to 64 bit upgrade), combined chunks from them, reloaded libc in a live system !!! (DON’T TRY THAT EVER!); then by mistake during update deleted coreutils package!!!, leaving myself without even essential command tools like /bin/ls , /bin/cp etc. etc. ….. And finally very much (in my fashion) to make the mess complete I decided to restart the system in those state without /bin/ls and all essential /bins ….
Instead of making things better I made the system completely un-bootable 🙁

Well, to conclude it, here I am once again, stupid enough not to follow the System Administrator's Golden Rule of Thumb:

IF SOMETHING WORKS DON'T TOUCH IT !!!!!!!!! EVER !!!! Because of my stubbornness I screwed it all up so badly.
I should really draw some moral from this event, as similar stories happened to me a long time ago on a few Fedora Linux hosts running production Web servers, and I went through all this upgrade nightmare but apparently learned nothing from it. My personal moral of the story is: I NEVER LEARN FROM MY MISTAKES!!! PFFF …

I haven’t had days like this in which I was totally down, for a very long time, really I fell in severe desperation and even depressed, after un-abling to access in any way Pc-Freak.NET, I even thought it will be un-fixable forever and I will loose all data on the host and this deeply saddened me.
Here is good time to Give thanks to Svetlana (Sveta) (A lovely kind, very beautiful Belarusian lady 🙂 who supported me and Sali and his wife Mimi (Meleha) who encouraged and lived up my hardly bearable tempper when angry or/and sad :)). Lastly I have to thank a lot to Happy (Indian Lady whose whose my dear indian brother Jose met me with in Skype earlier. Happy encouraged me in many times of trouble in Skype, giving me wise advices not to take all so serious and be more confied, also most importantly Happy helped me with her prayers …. Probably many others to which I complained about situation helped with their prayers too – Thanks to to God and to all and let God return them blessing according to their good prayers for me !

Some people who know me well might know the Pc-Freak.Net Linux host has great sentimental value for me, and even though it doesn't host too many websites (only 38 sites, not so important ones), it is still very bad to know that your "work input", which you worked on in your spare time over the last 3 years (including my BLOG – blogging almost every day for the last 3 yrs, the public shell SSH access for my Friends, the custom Qmail Mail server / POP3 and IMAP services / SQL data etc.), might be lost forever – or, in a more positive scenario, could be down for a huge period of time, like a few months, until I go home and fix it physically on a phys terminal …

All this downtime mess occurred due to my own inability to properly estimate the update risks (obviously showing how bad I am at risk management …). The whole "downtime story" only proved to me that I have a lot to learn in life and should worry less about things …
It also showed me how much of an "idol" one can make out of some object of daily work, as www.pc-freak.net has become to me. The good thing is I at least realize my blog has, with time, become like an idol to me, as I'm mostly busy with it, and in a way worrying too much about it puts me in the position of "worshipping an idol" – and each Christian knows pretty well that God tells us: "Do not have other gods besides me".

I suppose this whole mess was allowed to happen by God's Great Mercy to show me how weak my faith is, and how often I put my personal interest on top of the really important things. The whole situation taught me, once again, that I easily fall in spirit and despair; I hope it is a lesson given to me that I will learn from, and next time I will be more solid in a critical situation …

Here are some of my thoughts on the downtime, as I felt obliged to express them too:

The whole problem's severity (in my mind) would not be so bad if only I had some kind of physical access to the system terminal. However, as I'm currently in Arnhem, Holland, 6500 kilometers away from the server (hosted in Dobrich, Bulgaria), and don't have access to an IPKVM or any kind of web management to act on the physical keyboard input, my only option was to ask Alex to go to my home and act as pro tech support – which, though I repeat myself, I will say again: he did great.
What made this whole downtime mess even worse in my distorted view of the situation is the fact that I don't know people who are Linux GURUs who could deal with the situation and fix the host without me being physically there, so this made me worry even more …

I’m relatively poor person and I couldn’t easily afford to buy a flight ticket back to Bulgaria which in best case as I checked today in WizzAir.com’s website would costs me about 90EUR (at best – just one way flight ticket ) to Sofia and then more 17 euro for bus ticket from Sofia to Dobrich; Meaning whole repair costs would be no less than 250 EUR with prince included train ticket expenses to Eindhoven.);

Therefore, obviously, traveling back to fix it on the physical console was not an option.
Another option I considered (as advised by Sveta) was hiring some pro sysadmin to fix the host – here I should say it is almost impossible to find a person in Dobrich who has the Linux knowledge to fix the system; moreover, Linux system administrators are so expensive these days. Most pro sysadmins will not bother to fix a host unless paid an hourly fee of at least 40 / 50 EUR. Obviously, therefore, hiring a professional UNIX system administrator to solve my system issues would have cost approximately the same as my own travel expenses had I gone physically to the computer, spent the same 5 hours fixing it and lost at least 2 or 3 more days traveling back to Holland …
It is also good to mention that I've done a lot of custom things on the system, which an externally hired person would hardly be able to deal with without my further interference, and even if I had hired someone to fix it I would have spent at least 50 euro on phone bills to explain the specifics …

As I was in deep shit, in this post I should also thank (in the first place) MY DEAR SISTER Stanimira !!! My sis was smart enough to call my dear friend Alexander (Alex), who as always didn't fail me – for a 3rd time BIG THANKS ALEX! – spending time and having the desire to help me at these critical times. I instructed him, as a first step, to try booting the unbootable Linux box with the usual bootable Debian Squeeze Install CD …
So far so good, but unfortunately the problem with this bootable CD is that the Debian Setup (Install) CD does not come equipped with SSHD (an SSH Server) by default, and hence I couldn't just get in via the Internet;
I searched through the net for a way to make the default Debian Install CD1 (.iso) recovery CD have openssh-server enabled, but couldn't find anyone explaining how. If there is some way and someone reading this post knows it, please drop a comment …

As some might know, the Debian Setup CD runs busybox as its base environment; the system tools provided there, when choosing to boot the Recovery Console, are good mostly for installing or re-installing Debian, but don't include any way to do remote system recovery over an SSH connection.

Further on, I instructed Alex to bring up the network interface on the system with ifconfig (the interface name below – eth0 – is the usual first NIC; adjust if yours differs), using the cmds:


# /sbin/ifconfig eth0 MY_IP netmask 255.255.255.240
# /sbin/route add default gw MY_GATEWAY_IP;

BTW, I have previously blogged on how to bring up network interfaces with ifconfig here.
Though the LAN interface was up after that and I could ping the host ($ ping www.pc-freak.net), this was of not much use, as I couldn't log in, nor could I somehow access the system in a chroot.
I thoroughly explained to Alex how to fix the badly broken (mounted) system that could not even be chroot-ed into …
In order to access the system via SSH, after a bit of research I asked Alex to download and boot from the CD drive a Debian Linux based AMD64 Rescue CD, available here …

Using this rescue CD, much better than the default Debian Install CD1, thank God, Alex was able to bring up a working sshd server.

To let me access the rescue CD, Alex changed the root pass to a trivial one with the usual:


# passwd root
....

Then finally I logged in to the host via ssh. Since chroot-ing into the mounted /dev/sda1 in /tmp/aaa was impossible due to a missing working /bin/bash – here just try to imagine how messed up this system was!!! – I asked Alex to copy over the basic system files from the Rescue CD with the cp copy command into /tmp/aaa/. The commands I asked him to execute, to override some of the old messed up Linux files, were:


# cp -rpf /lib/* /tmp/aaa/lib
# cp -rpf /usr/lib/* /tmp/aaa/usr/lib
# cp -rpf /lib32/* /tmp/aaa/lib32
# cp -rpf /bin/* /tmp/aaa/bin
# cp -rpf /usr/lib64/* /tmp/aaa/usr/lib64
# cp -rpf /sbin/* /tmp/aaa/sbin
# cp -rpf /usr/sbin/* /tmp/aaa/usr/sbin

After this, at least chroot /tmp/aaa worked!! Thank God!

I also asked Alex to try debootstrap, to install the base Debian system files inside the broken /tmp/aaa, but this didn't make things better (so I'm not sure if debootstrap helped or made things worse). The exact debootstrap command tried on the host was:


# debootstrap --arch amd64 squeeze /tmp/aaa http://ftp.us.debian.org/debian

This command, as explained in the Debian Wiki Debootstrap section, is supposed to download and override the base Linux system with working base bins and libs.

After I logged in over ssh, I’ve entered chroot-ing and following instructions of 2 of my previous articles:

1. How to do proper chroot and recover broken Ubuntu using mount and chrooting

2. How to mount /proc and /dev and in chroot on Linux – for fail system recovery

Next on, after logging in via ssh, I chroot-ed to the mounted system:


# mount /dev/sda1 /mnt/aaa
# chroot /mnt/aaa

Inside the chroot-ed environment, I tried running an ssh server listening on a separate port, 2208, with the command:


# /usr/sbin/sshd -p 2208

sshd did not start up but spat out the error: PRNG is not seeded; after reading a bit online I found others experiencing the PRNG is not seeded err in a thread here.

The PRNG is not seeded error is caused by a missing /dev/urandom inside the chroot-ed environment:


# ls -al /dev/urandom
ls: cannot access /dev/urandom: No such file or directory

To solve it, one has to create /dev/urandom with the mknod command:


# mknod /dev/urandom c 1 9

….
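
As a side note, a more general way to get a complete /dev (and /proc) inside a chroot – which would likely have avoided the mknod step altogether – is to bind-mount them from the rescue environment before chroot-ing; a small sketch, assuming the broken root is mounted on /mnt/aaa:


# mount --bind /dev /mnt/aaa/dev
# mount --bind /proc /mnt/aaa/proc
# chroot /mnt/aaa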

Something else worth mentioning is a very helpful post found on noah.org, explaining a few basic things on apt, aptitude and dpkg, which helped me through the whole set of severe failed-dependency apt-get issues experienced inside the chroot.

Inside the chroot, I tried using a few of the usual apt-get cmds to solve the multiple broken package inter-dependencies that kept appearing. I tried:


# apt-get update
....
# apt-get --yes upgrade
# apt-get -f install

Even before that, the apt package itself was broken, so I instructed Alex to download one for me from a web link. By mistake I gave him a Debian Etch apt version instead of Debian Squeeze. So, using once again dpkg -i apt* after downloading the latest stable apt deb binaries from debian.org, I had to re-install apt-get …

Besides that, Alex copied a bunch of libraries straight from my notebook running amd64 Debian Squeeze and had to place all these transferred binaries in /mnt/aaa/{lib,usr/lib} in order to provide the missing libraries needed for proper apt-get operation.

As it seemed pretty much impossible to fix the broken dependencies with apt-get, I next tried fixing the failed inter-dependencies using the other automated dependency resolver tool (written in the Perl language), aptitude. I tried to solve the situation with it by issuing:


# aptitude update
# aptitude safe-upgrade
# aptitude safe-upgrade --full-resolver

None of the above aptitude command options helped in any way, so
I decided to try the old but gold approach of combining common logic with a bit of shell scripting 🙂
Here is my custom invented approach 🙂 :

1. Inside the chroot, make a dump of all installed deb package names into a file
2. Outside the chroot, ssh-ing again straight into the RescueCD shell, use the RescueCD's apt-get to only download all amd64 binaries corresponding to the dumped package names
3. Move all the downloaded apt-get binaries from /var/cache/apt/archives to /mnt/aaa/var/cache/apt/archives
4. Inside the chroot, cd to /var/cache/apt/archives/ and use a bash for loop to install each package with dpkg -i

Inside the chroot-ed environment (chroot /tmp/aaa), use dpkg to dump the list of all previously installed i386 packages on the broken system:


# dpkg -l|awk '{ print $2 }' >> /mnt/aaa/root/all_deb_packages_list.txt

Thereupon, the first 5 lines at the beginning of the file – 2 empty lines plus the 3 header lines with the content:


Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
Err?=(none)/Reinst-required
Name

– should be deleted.
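
Instead of hand-editing the dump, the dpkg -l header lines can also be skipped at dump time – real package status lines begin with a lowercase status flag, while the header lines do not – so a hedged one-liner variant of the same dump would be:


# dpkg -l | awk '/^[a-z]/ { print $2 }' > /mnt/aaa/root/all_deb_packages_list.txt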

Onwards, outside of the chroot-ed env, I downloaded all the deb packages corresponding to the previously dumped ones in all_deb_packages_list.txt:


# mkdir /tmp/apt
# cd /tmp/apt
# for i in $(cat /mnt/aaa/root/all_deb_packages_list.txt); do \
apt-get --download-only install -yy $i; \
done
....

In a while, after 30 / 40 minutes, all the amd64 .deb packages were downloaded into the RescueCD's /var/cache/apt/archives/.
/var/cache/apt/archives/ on LiveCDs is stored in system memory; thankfully I have 8 Gigabytes of memory on the host, so memory was more than enough to store all the packs 😉
Once the above loop completed, I copied all the debs to /mnt/aaa/var/cache/apt/archives/, i.e.:


# cp -vrpf /var/cache/apt/archives/*.deb /mnt/aaa/var/cache/apt/archives/

Then, back on the chroot-ed broken system (in another ssh session: chroot /mnt/aaa), I ran another shell loop aiming to install each copied deb package (the commands below should be run after chroot-ing):


# cd /var/cache/apt/archives
# for i in *.deb; do \
dpkg -i $i
done
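
A side note: instead of installing the debs one at a time (where single dpkg -i calls can easily fail because of ordering), the whole directory can also be handed to dpkg in one invocation and any leftover breakage cleaned up with apt afterwards – roughly:


# dpkg -i /var/cache/apt/archives/*.deb
# apt-get -f install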

I had a Qmail server installed on the system which was previously linked against the old 32 bit libs, so in my case it was also necessary to rebuild the qmail install, as well as ucspi-tcp and ucspi-ssl, after rebooting into the finally working amd64 libs system (after reboot and a proper boot!):

a) to re-compile the qmail base binaries, I had to issue:


# qmailctl stop
# cd /usr/src/qmail
# make clean
# make man
# make setup check

b) to re-compile ucspi-tcp and ucspi-ssl:


# rm -rf /packages/ucspi-ssl-0.70.2/
# mkdir /packages
# chmod 1755 /packages
# cd /tmp
# tar -zxvf /downloads/ucspi-ssl-0.70.2.tar.gz
....
# mv /tmp/host/superscript.com/net/ucspi-ssl-0.70.2/ /packages
# cd /packages/ucspi-ssl-0.70.2/
# rm -rf /tmp/host/
# sed -i 's/local\///' src/conf-tcpbin
# sed -i 's/usr\/local/etc/' src/conf-cadir
# sed -i 's/usr\/local\/ssl\/pem/etc\/ssl/' src/conf-dhfile
# openssl dhparam -check -text -5 1024 -out /etc/ssl/dh1024.pem

Then I had to temporarily stop the daemontools service by commenting out its line in /etc/inittab:


# SV:123456:respawn:/usr/bin/svscanboot


and then make init re-read its configuration:


# init q

Afterwards, remove the comment to restore the line:


SV:123456:respawn:/usr/bin/svscanboot

and subsequently install ucspi-{tcp,ssl}:


# cd /packages/ucspi-ssl-0.70.2/
# package/compile
# package/rts
# package/install

c) Rebuild Courier-Imap and CourierImapSSL

As I have custom compiled Courier-IMAP and Courier-IMAP-SSL, it was necessary to rebuild Courier-IMAP following the steps explained earlier in this article.

I also have DjbDNS running on the system as a local caching DNS server, so I had to re-install djbdns as well, re-compiling it from source.

Finally, after a restart, the system booted OKAY!! Thank God!!!!!! 🙂
Further on, to check whether the booted system runs a 64 bit architecture, dpkg can be used.
To check if the system architecture is now 64 bit there is the command dpkg-architecture, as I learned from a superuser.com forums thread here:


root@pcfreak:~# dpkg-architecture -qDEB_HOST_ARCH
amd64

One more thing which helped me a lot during the whole system recovery was the main Debian deb HTTP repository, ftp.us.debian.org/debian/pool/ – I downloaded the apt (amd64 Squeeze) version and a few other packages from there.
Hope this article helps someone who ends up doing a 32 to 64 bit Debian arch upgrade. Enjoy 🙂

How to: Improve Adobe Flash Player Video speed on Debian / Ubuntu Linux

Friday, October 12th, 2012

How to improve Adobe Flash Player Video, accelerate Flash video speed on Ubuntu, Xubuntu, Debian, Fedora - ArchLinux

I have recently installed Xubuntu for a friend with old computer hardware. The computer is used just for basic Internet access – web browsing (Firefox, Opera) and Skype. All runs smoothly, but sometimes the videos in Youtube are lagging. Hence I looked for a way to make the Adobe Flash Player run smoother on this (Ubuntu 12.04 based) Linux.

After a bit of searching for anything written on the topic of optimizing Flash Player / Flash video speed on Linux, I stumbled across one flash variable which, if used, could improve video speed; the variable is OverrideGPUValidation and should be turned on in Flash Player with:


OverrideGPUValidation=true

The Flash Player configuration settings on Linux can be set either globally by using:

  • /etc/adobe/mms.cfg – (system-wide configuration file, set Flash player policy for all existing users)

or locally for individual users through:

  • ~/.adobe/mms.cfg – (user-local configuration file affecting only /home/sampleuser/ flash player settings)

For a Desktop Linux host used as a home desktop station it is quite rare for the host to be used by more than one single user, so if that's the case with you it is not worth setting OverrideGPUValidation=true via /etc/adobe/mms.cfg.

Well, anyways, if you need to set the Flash Player setting globally you will have to create /etc/adobe (unless it was already created by the flash player deb package install):


root@xubuntu:~# mkdir /etc/adobe
root@xubuntu:~# echo 'OverrideGPUValidation=true' >> /etc/adobe/mms.cfg

The local user (hidden) directory ~/.adobe is created automatically the first time Flash Player is used in a browser, just as usual with the rest of the Linux programs. Inside, a few directories used by the flash player are created, but mms.cfg is not.
Hence, for a local user, to enable OverrideGPUValidation=true type in a terminal:


user@xubuntu:~$ echo "OverrideGPUValidation=true" >> ~/.adobe/mms.cfg
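
If the echo above complains because ~/.adobe does not exist yet (i.e. Flash Player has never run for that user), the directory can simply be created first; afterwards a quick grep confirms the setting is in place – a small sanity check:


user@xubuntu:~$ mkdir -p ~/.adobe
user@xubuntu:~$ echo 'OverrideGPUValidation=true' >> ~/.adobe/mms.cfg
user@xubuntu:~$ grep OverrideGPUValidation ~/.adobe/mms.cfg
OverrideGPUValidation=true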

The option does accelerate the Flash videos a bit, but don't expect huge speed-ups. Normally, using this option, on some hosts up to 10 – 20 / 30% of (overall) video playing speed could be gained. On some hosts it is possible the variable does not have a significant impact at all.

The option should work equally on any Linux host, not only Debian based ones, as it is a Flash Player setting; it is, however, only tested with the latest Flash Player Linux version, which as of the time of writing this post is v. 11.2.202.243.
I don't know if the same option will work on earlier Flash Player versions, so it is up to you to test it. I will be glad to hear from people who tested the value and can report a speed improvement, along with the video adapter and general hardware configuration on which OverrideGPUValidation=true sped up Flash Player.
Hope this tip helps someone.

Easy access to all Management Tools on Microsoft Windows 7 and Windows Vista with “GodMode” / A little known hidden Windows feature

Thursday, October 11th, 2012

Windows 7 Vista God Mode little known windows hidden feature, GodMode Microsoft Windows 7 and MS Vista, built in microsoft windows hack shortcut

Are you looking for a quick way to find some kind of security issue with a particular Windows 7 install? Are you an IT geek, or does your regular daily IT job include fixing a lot of broken Windows 7 and Vista computers?

I’m sure you will be puzzled to hear about Microsoft embedded default “hack” (feature / shortcut) GodMode. I’m not how exactly “GodMode” should be called is it a security leak? a nice pro admin required feature? or a bad joke with M$ users M$ developers did …

Now you probably still wonder what GodMode is: it is an easy-to-create shortcut directory which, when created, automatically links to all Administrator and Settings services on a Windows OS Desktop PC or Notebook.
GodMode is like a link from all administrator programs to all management tools, accessible via one specially created (New) Directory. It is a sort of shortcut to everything one accesses from the Start Menu, Control Panel and all the admin PC interfaces.
GodMode is super handy if you hate using the messed up and hard to memorize tools in Control Panel …

Here is a list of all the utilities you can reach by creating the "GodMode" shortcut (New Folder):

Action Center, Administrative Tools, AutoPlay, Backup and Restore, BitLocker Drive Encryption,
Color Management, Credential Manager, Date and Time, Default Programs, Desktop Gadgets, Device Manager,
Devices and Printers, Display, Ease of Access Center, Folder Options, Fonts, Getting Started, HomeGroup,
Indexing Options, Internet Options, Keyboard, Mouse, Network and Sharing Center, Notification Area Icons,
Performance Information and Tools, Personalization, Phone and Modem, Power Options, Programs and Features,
Recovery, Region and Language, RemoteApp and Desktop Connections, Sound, Speech Recognition, Sync Center,
System, Taskbar and Start Menu, Troubleshooting, User Accounts, Windows CardSpace, Windows Defender,
Windows Firewall, Windows Update

GodMode is really AWESOME! Its THE DREAM OF THE WINDOWS SYSTEM ADMINISTRATOR COME TRUE 🙂

To get access to all these handy utilities linked under one shortcut, all you have to do is create a New Folder (on the Windows Desktop or anywhere on the hard drive) with the following string (name) as the New Folder name:


GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

You will get, on your Desktop (or wherever the New Directory is created), a shortcut to all the management tools available on the system.
Below are two screenshots of "GodMode" :):

Windows 6 Vista Godmode hack link to all admin tools feature

GodMode is a handy shortcut for tech savvy people, and for anyone running a PC repair shop it will certainly be a precious one to know of.
Using it one can save a lot of time, screen pondering and needless clicks.

The GodMode "trick" should work on all M$ Windows 7 / Vista editions (Pro, Home, Business, Laziness) or whatever. From a security standpoint it is a big joke, as it gives a possible malicious user a very easy way to review system configs and look for things to break.
I wonder why Microsoft developers called it GodMode at all? It is just a shortcut and it doesn't grant you administrator (god-like) system access. Well anyways, Enjoy 😉

Fixing weird problems with missing File Explorer on Polaroid MPCS700 tablet with Android 2.2 (Linux kernel 2.6.32)

Wednesday, October 10th, 2012

Polaroid MPCs700 Tablet Android version 2.2 with 2.6.32 linux kernel picture

I’ve been given Android Polaroid Tablet which had only 5 clickable icons on main (touchscreen) display:

  • Settings
  • Wi-Fi Settings
  • Youtube
  • Browser
  • App market

The standard shipped-in applications for listening to music, watching videos, managing photos and browsing files (My Photo, My Video, My Music, File Browser) were missing.

The main problem was that the File Browser shipped by default with Android was missing, and if a USB Stick / Flash Drive was plugged into the USB port, a message appeared along with a sound indicating the USB Flash Drive was detected, but there was no way to access the USB Flash Drive data …

The tablet was bought second hand and is not my own, and it appeared like someone had messed with it, trying to replace the default Android Linux kernel with some "hacked" (custom compiled?) one.

The exact Android version installed on it is 2.2; I checked by navigating to:


Settings -> About Device

Android 2.2 Polaroid Settings About Device Screenshot pic

Onwards, to fix the messed up Android I had to reset the device to its factory settings by navigating to:


Settings -> Privacy -> Factory data reset

Android 2.2 Polaroid Settings Privacy Reset to Factory Defaults Screenshot

This formats the device, removes all installed programs and restores the kernel to its original version, so after a few minutes of waiting all worked like a charm 🙂

The normal programs for viewing pics, listening to music and the File Explorer are all back in place. Even both VFAT (FAT32) and NTFS formatted USB drive file systems worked normally with the device. Before that I was puzzled, because I suspected the USB Drive was not detected because the kernel did not support NTFS and I needed to install something. I was wrong – just this Factory data reset was needed, and the NTFS USB support bundled by default works as usual 🙂

Enable the "write" command between logged in users on Debian GNU / Linux

Tuesday, October 9th, 2012

A default Debian GNU / Linux install does not permit messaging between ssh logged in users. Messages are disabled like this for security reasons, as if they were on by default it would be quite easy to flood someone's terminal with messages using a little loop like, for instance:


while [ 1 ]; do
echo "You're flooded" | write username
done

Hence, smartly, write between all users is switched off by default, i.e. mesg n.

For those unfamiliar with mesg, I suggest you check man mesg – one of the shortest UNIX manuals ever written 🙂

The mesg manual's heading description is:


mesg - control write access to your terminal

The options mesg can accept are either yes or no (y / n).
To check whether write messaging is turned on for the currently logged in user, from any logged in user's shell use:


# mesg
mesg is n

While mesg is set to no by default, if you try to message a random logged in system user you will get a message like:


$ write testuser
write: write: you have write permission turned off.

It is actually quite handy to have messages switched on, especially if you have a Linux host whose user accounts belong to friends of yours and it is not very likely mesg will be used for bad purposes.

To change the default mesg n to mesg y you need to edit /etc/bash.bashrc (in case all users are configured to use bash), or even better, to set mesg y for all existing users, add a new line at the top or at the end of the /etc/profile file:


echo 'mesg y' >> /etc/profile

On the next login via ssh or a physical tty, messaging will be on. To check, re-login and type:


$ mesg
is y
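
With messaging enabled, sending a message is then as simple as the following (a hedged illustration – testuser and pts/1 are just placeholders for the user and terminal shown by the who command):


$ write testuser pts/1
Hi, are you around?
(press Ctrl-D to finish and send the message)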

One note to make here is that even if you set messaging to yes for all users via /etc/profile, still for some reason the root user's messaging stays set to no.


root@debian:~# mesg
is n

I have no clue why this happens, but if you need to enable mesg for root as well, add mesg y to /root/.bashrc.

Well that’s all, I hope this helps someone 🙂 Cheers.

How to install Awstats Apache weblog statistics on Debian Squeeze GNU Linux

Monday, October 8th, 2012

I like using Webalizer to keep an eye on my access.log over the web, however since the information it shows is a bit chaotic and much less than what Awstats statistics provide, I decided to install awstats. I haven't installed awstats for a long time, so I have no exact memory of how I previously did it, and hence ran a quick search to see if there is information on specifics concerning Debian Squeeze. I did not find any specific article and therefore decided to write this short one to document how an awstats install is done on Debian Squeeze Linux.

1. Installing awstats deb package

There is already a deb package, so there is no need to hunt for specific Perl CPAN modules and manually fulfil dependencies.

Installation is as straightforward as for any other deb package:


debian:~# apt-get install --yes awstats
...

2. Change basic awstats.conf configuration to make it work properly

First thing to do immediately after install is to set the primary SiteDomain= for which Awstats will process site statistics.

For that, at the beginning (first line) of /etc/awstats/awstats.conf add the directive:


SiteDomain="www.your-domain-name.com"

Substitute www.your-domain-name.com with whatever your primary domain is.

Next, in /etc/awstats/awstats.conf change the value of DNSLookup. By default DNSLookup is 1, which means an attempt is made to resolve each IP in /var/log/apache2/access.log via a separate DNS request; most IP addresses that have queried the Apache webserver, however, miss a proper PTR DNS record, and hence the resolve attempts fail after 10 to 20 seconds.
The overall result of this is that processing /var/log/apache2/access.log takes hours in case access.log is >100MB or so. This processing slowness is due to the failed DNS requests. Besides that, it sends hundreds of useless queries to DNS servers, which take up bandwidth for nothing …

To prevent this I disabled DNSLookup immediately after install by substituting


DNSLookup=1

with:


DNSLookup=0

Another thing is that by default Awstats is set to use LogFormat=4. As you can read in awstats.conf (comments section), 4 stands for:


# 4 - Apache or Squid native common log format (NCSA common/CLF log format)

However, in Debian Linux the default Apache2 config is done in a way that Apache keeps logged requests formatted in the combined (not in the common) log format.

Here are the LogFormat directives extracted from /etc/apache2/apache2.conf:


LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

With that said, in awstats.conf, to match the (combined) logging Apache is set to, change LogFormat to 1:


LogFormat=1

3. Manually generate AWStats access.log statistics for the first time

You will have to run, as superuser, the following cmd:


debian:~# /usr/lib/cgi-bin/awstats.pl -config=/etc/awstats/awstats.conf
Create/Update database for config "/etc/awstats/awstats.conf" by AWStats version 6.95 (build 1.943)
From data in log file "/var/log/apache2/access.log"...
Phase 1 : First bypass old records, searching new record...
Searching new records from beginning of log file...
Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Flush history file on disk (unique url reach flush limit of 5000)
Jumped lines in file: 0
Parsed lines in file: 602983
Found 8 dropped records,
Found 5 corrupted records,
Found 0 old records,
Found 602970 new qualified records.

4. Access awstats statistics in Web Browser

Once the command execution completes, open in Epiphany, Firefox or whatever browser you like the URL:


http://www.your-domain-name.com/cgi-bin/awstats.pl?config

If all is okay you should see some numbers for Unique Visitors, like in the browser screenshot below:

Screenshot Awstats example Statistics for www.pc-freak.net in Epiphany

5. Set ScriptAlias for easier awstats access path and set directory permissions

In /etc/apache2/apache2.conf or in the VirtualHost file, let's say /etc/apache2/sites-enabled/your-domain-name.com, place the following configs:


Alias /awstats-icon/ /usr/share/awstats/icon/
ScriptAlias /awstats/ /usr/lib/cgi-bin/

<Directory /usr/lib/cgi-bin/>
Options None
AllowOverride None
Order allow,deny
Allow from all
</Directory>

For the new configs to take effect, as usual, Apache should be restarted:


debian:~# /etc/init.d/apache2 restart
....

From now on Awstats can be accessed via the much easier to remember access URL:


http://your-domain-name.com/awstats/awstats.pl

6. Protecting Awstats statistics with Apache HTACCESS password

It is a must to protect the awstats statistics with a password via .htaccess and htpasswd.

a.) Use htpasswd to generate user/pass:


linux:~# htpasswd -c /etc/apache2/awstats.passwd admin
New password:
Re-type new password:
Adding password for user admin

b.) Create /usr/lib/cgi-bin/.htaccess with following content:


linux:~# vim /usr/lib/cgi-bin/.htaccess

AuthType Basic
AuthUserFile /etc/apache2/awstats.passwd
AuthGroupFile /dev/null
AuthName "Please Enter Password to access AWstat"
Require valid-user
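
One gotcha to keep in mind (an assumption based on the ScriptAlias config from step 5): if the <Directory /usr/lib/cgi-bin/> block is set to AllowOverride None, Apache will silently ignore this .htaccess file, so AuthConfig overrides have to be allowed for that directory, roughly like:


<Directory /usr/lib/cgi-bin/>
AllowOverride AuthConfig
</Directory>

Alternatively, the Auth* directives can be placed straight inside that <Directory> block in the Apache config instead of in a .htaccess file.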

7. Set awstats to generate statistics via a cron job:

The awstats binary deb automatically installs a cron job in /etc/cron.d/awstats:


linux:~# cat /etc/cron.d/awstats
*/10 * * * * www-data [ -x /usr/share/awstats/tools/update.sh ] && /usr/share/awstats/tools/update.sh
# Generate static reports:
10 03 * * * www-data [ -x /usr/share/awstats/tools/buildstatic.sh ] && /usr/share/awstats/tools/buildstatic.sh

I prefer not to use it, but to use a custom root cron job instead. To stop /etc/cron.d/awstats from executing I move it to /root:


mv /etc/cron.d/awstats /root

Then I set a new root user cron job to process the Apache access.log. The reason I use the root user's crontab instead of Apache's www-data is that for the www-data user /var/log/apache2/access.log is unreadable …


linux:~# crontab -u root -e
8,18,27,38,48,59 * * * * [ -x /usr/lib/cgi-bin/awstats.pl -a -f /etc/awstats/awstats.conf -a -r /var/log/apache2/access.log ] && /usr/lib/cgi-bin/awstats.pl -config=awstats -update >/dev/null

This cron job makes the awstats web statistics refresh several times an hour, at minutes 8,18,27,38,48 and 59.

That’s it. Enjoy 🙂

How to speed up qmail qmail-smtpd response time on QmailRocks Thibs based install

Saturday, October 6th, 2012

I have recently installed Qmail following the Updated Debian QmailRocks Thibs Install

Qmail is configured just as Thibs points out – the server runs the qmail-smtpd run script together with DJB's Daemontools.

I’ve figured out today connecting to the newly install Qmail host with telnet, using:


qmail:~# telnet mail.qmailhost.com 25
Trying 83.228.93.76...
Connected to mail.qmailhost.com.
Escape character is '^]'.
220 This is Mail mail.qmailhost.com ESMTP

takes a few seconds of delay until my configured qmail greeting shows up. This is not a deadly problem, but the delay itself might have a negative influence and make the host look like a spammer host to someone, hence I took some time to find a way to reduce this SMTP port connection delay.

The mail server responds on port 25 using qmail-smtpd, so it was logical that the delay was caused somewhere in /service/qmail-smtpd/run (which actually links to /var/qmail/supervise/qmail-smtpd/run).

I did a quick review of /var/qmail/supervise/qmail-smtpd/run and found two lines that possibly create unnecessary delay, because on each and every port 25 connection request from a remote SMTP server /usr/bin/head and /usr/bin/which are executed.
Here are the two lines in /service/qmail-smtpd/run I refer to:


LOCAL=`head -1 $VQ/control/me`

(located on line 34)

and


TRUE=`which true`

located on line 116

The script was smartly written, as it is meant to run on multiple Linux distributions. However, since the QmailRocks Thibs guide and my particular case only need to run on Debian Linux, I think this is a total waste of system CPU time.

Therefore I substituted the above two lines with:


LOCAL="/var/qmail/control/me"


TRUE="/bin/true"

I checked that in /var/qmail/control/me I have only my primary mail server host defined, because otherwise this change could cause random mail server errors:


qmail:~# wc -l /var/qmail/control/me
1 /var/qmail/control/me
qmail:~# cat /var/qmail/control/me
mail.qmailhost.com
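
A hedged side note here: if the run script later uses $LOCAL as the local hostname (for example passing it to tcpserver), assigning it the file path instead of the file's content changes its value; an alternative that keeps the real hostname but still avoids forking head on every connection is the shell builtin read:


read -r LOCAL < /var/qmail/control/me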

An updated version of the /service/qmail-smtpd/run script can be downloaded from here.

If you don’t want to temper manually edit the script the quickest way is to overwrite old script with changed one, i.e.:


qmail:~# cd /var/qmail/supervise/qmail-smtpd
qmail:/var/qmail/supervise/qmail-smtpd# wget -q https://www.pc-freak.net/files/qmail-smtpd-daemontools-run
qmail:/var/qmail/supervise/qmail-smtpd# mv qmail-smtpd-daemontools-run run

To test the connection time delay afterwards, you can use:


# time (echo HELO localhost | telnet mail.qmailhost.com 25)
Trying 83.228.93.76...
Connected to mail.qmailhost.com.
Escape character is '^]'.
Connection closed by foreign host.
real 0m0.070s
user 0m0.004s
sys 0m0.000s

Well, there is still a connection delay – it is not as quick as smtp.gmail.com, but now the connection response delay is better. For the sake of comparison, here is the same test with Google's SMTP:


$ time (echo HELO localhost | telnet smtp.gmail.com 25)
Trying 173.194.70.108...
Connected to gmail-smtp-msa.l.google.com.
Escape character is '^]'.
Connection closed by foreign host.
real 0m0.017s
user 0m0.004s
sys 0m0.000s

BTW, a bit of time delay can sometimes have a positive impact against spammers, as it can reduce a bit the number of spammer mail servers connecting to the host. So I'm not sure if being 4 times slower in connection than Gmail is necessarily bad 🙂

How to disable Debian GNU / Linux (Squeeze) Apache 2 version reporting to improve security – Hide Apache server version

Friday, October 5th, 2012

Debian GNU / Linux's Apache default behavior is to report the Apache server name, version and the Linux distribution.
This is shown as a minor security leak by many Security Scanner (audit) programs like Nessus. It reveals vital information which could later help a malicious attacker pick an exploit, if the version number in question is vulnerable.

The quickest way to check whether Apache version and distro info reporting is disabled is with telnet:

hipo@noah:~/Desktop$ telnet www.pc-freak.net 80
Trying 83.228.93.76…
Connected to www.pc-freak.net.
Escape character is '^]'.
HEAD / HTTP/1.0

HTTP/1.1 200 OK
Date: Fri, 05 Oct 2012 10:48:36 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.3-7+squeeze14
Vary: Accept-Encoding
Connection: close
Content-Type: text/html

Disabling this distro and version number reporting on Debian is done by changing, in the file /etc/apache2/conf.d/security, the directives:


ServerTokens OS
ServerSignature On

to


ServerSignature Off
ServerTokens ProductOnly

An important note to make here: if you add ServerSignature Off and ServerTokens Prod straight into the general Debian config /etc/apache2/apache2.conf, but do not change the settings for those vars in /etc/apache2/conf.d/security, the settings from /etc/apache2/conf.d/security will overwrite the ServerSignature / ServerTokens settings set in /etc/apache2/apache2.conf.

I tried this myself (forgetting about /etc/apache2/conf.d/security), adding both variables straight into apache2.conf. After an Apache restart the Apache version number and type of distribution continued to be returned by the WebServer.
I thought something specific had changed in Debian Squeeze – Apache/2.2.16 – so that these two variables were probably not working, so I did some quick research online, seeing other people also complaining of being unable to disable the Apache ver and Linux distro reporting and looking for the reason why. Well, anyways, if you also happen to wonder why ServerSignature Off and ServerTokens ProductOnly do not take effect, keep in mind it is due to the settings being overwritten via /etc/apache2/conf.d/security; change the values there, restart Apache, and you're done 🙂

To make 100% sure Apache is no longer returning the exact version number and the host's installed distro type, use telnet again:

hipo@noah:~/Desktop$ telnet www.pc-freak.net 80
Trying 83.228.93.76…
Connected to www.pc-freak.net.
Escape character is '^]'.
HEAD / HTTP/1.0

Connection closed by foreign host.
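
A hedged alternative to the telnet check – the same header test can be done in one line with curl (assuming curl is installed on the machine you test from); with the directives above, the Server header should come back as just "Apache", without version and distro info:


hipo@noah:~/Desktop$ curl -sI http://www.pc-freak.net/ | grep -i '^Server:'
Server: Apache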