sysadmin - 29.4.2005 - 3.8.2005

Django, Apache and FCGI

In Django, lighttpd and FCGI, second take I described a method for running Django with FCGI behind a lighttpd installation. I ran the Django FCGIs as standalone servers so that you can run them under different users than the webserver. This document gives you the information needed to do the same with Apache 1.3.

Update: I maintain my descriptions now in my trac system. See the Apache+FCGI description for Django.

Update: I changed from using unix sockets to tcp sockets in this description. The reason is that unix sockets need write access from both processes - webserver and FCGI server - and that's sometimes a bit hard to set up right. tcp sockets are only a tad slower but much easier to set up.
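Whichever socket type you pick, it helps to verify that the FCGI backend actually accepts connections before wiring up the webserver. A minimal sketch in stdlib Python (the helper name is my own invention, and a throwaway listener stands in for the real backend):

```python
import socket

def backend_reachable(host, port, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# demo: a throwaway listener stands in for the FCGI backend
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # port 0 = pick any free port
srv.listen(1)
port = srv.getsockname()[1]
print(backend_reachable("127.0.0.1", port))   # True while it listens
srv.close()
print(backend_reachable("127.0.0.1", port))   # False once it's gone
```

Run this against the host/port you configured below before blaming the webserver config.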

First the main question some might ask: why Apache 1.3? The answer is simple: many people still run Apache 1.3 as their main server and can't easily upgrade to Apache 2.0 - for example, if they run large codebases on mod_perl or mod_python, they will run into trouble migrating, because Apache 2.0 requires mod_perl2 or mod_python2 and both are not fully compatible with older versions. And even though lighttpd is a fantastic webserver, if you already run Apache 1.3 there might just be no need for another webserver.

So what do you need - besides the Python and Django stuff - for Apache 1.3 with FastCGI? Just the mod_rewrite and mod_fastcgi modules installed, that's all. Both should come with your system's distribution. You will still need all the Python stuff I listed in the lighttpd article.

mod_fastcgi is a bit quirky in its installation; I had to play around with it a bit. There are a few pitfalls I can think of:

  • the specification of the socket can't be an absolute path but must be a path relative to the FastCgiIpcDir
  • the specification of the FCGI itself (even though it's purely virtual) must be in a fully qualified form with respect to the document root you want to use. If you use a relative path, it will be relative to the document root of the default virtual host - and that's most surely not the document root you will use if you want to set up a virtual host with the FCGI.
  • the FCGI itself can't be defined within a virtual host - it must be defined in the main server config. That's where the relative addressing problem comes into play.
  • the socket file must be both readable and writable by the FCGI user and the Apache user. Usually you do this by making the socket file group-writable and changing its group to one of which both the FCGI user and the Apache user are members.
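The group-writable part of the last pitfall can be checked mechanically. A small sketch in stdlib Python (a temp file stands in for the real socket file; changing the group itself would additionally need os.chown with root rights or group membership):

```python
import os, stat, tempfile

# sketch of the permission fix: the socket file (a plain temp file
# here) gets mode 660, i.e. read/write for owner and group only;
# the group change itself (os.chown) is left out - it needs privileges
path = os.path.join(tempfile.mkdtemp(), "admin.socket")
open(path, "w").close()
os.chmod(path, 0o660)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))   # 0o660
```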

Now here is the config snippet you have to add to your httpd.conf. I use the same directories as in the lighttpd sample; you will most surely have to adapt them to your situation.


 FastCgiExternalServer /home/gb/work/myproject/public_html/admin.fcgi -host 127.0.0.1:8000
 FastCgiExternalServer /home/gb/work/myproject/public_html/main.fcgi -host 127.0.0.1:8001

 <VirtualHost *>
 ServerAdmin gb@bofh.ms
 ServerName www.example.com
 ErrorLog /home/gb/work/myproject/logs/django-error.log
 CustomLog /home/gb/work/myproject/logs/django-access.log combined
 DocumentRoot /home/gb/work/myproject/public_html
 RewriteEngine On
 RewriteRule ^(/admin/.*)$ /admin.fcgi$1 [L]
 RewriteRule ^(/main/.*)$ /main.fcgi$1 [L]
 </VirtualHost>

You have to allow the webserver write access to the logs directory, so you might want to use a different location for them - possibly in `/var/log/apache/` or wherever your Apache puts its logs. The FastCgiExternalServer directives must be outside the virtual host definitions, but must point to files within the virtual host's document root. Those files needn't (and probably shouldn't) exist in the filesystem - they are purely virtual. The given setup mirrors the setup I did for the lighttpd scenario.

Now restart your Apache, start your django-fcgi.py processes and you should be able to access your Django application. Remember to copy the admin media files over to the document root, otherwise your admin will look very ugly.

django-fcgi.py --settings=myproject.settings.admin --host=127.0.0.1 --port=8000 --daemon
django-fcgi.py --settings=myproject.settings.main --host=127.0.0.1 --port=8001 --daemon


Have fun.

The equivalent of Apple FileVault under Linux: automatically mount a dm-crypt encrypted home with pam_mount. Very useful for laptops, but also for administrators' workstations (because of the many security-relevant files that accumulate in the home directory).

Whoever wants to deal with larger Erlang software and try out a Jabber server might find ejabberd interesting - a Jabber server that uses Erlang's nice features to offer, for example, simple clustering and good data distribution.

And another Linux-on-Mac story. This time an iBook and Gentoo. Quite useful for a small and affordable Linux box for on the go.

The Linux on an Apple Powerbook HOWTO provides exactly what I would need if I wanted to switch my 12" Powerbook to Linux - the author even uses exactly my model. And no, I don't want to switch yet.

(Un)trusted platform Apple?

Since it's currently fashionable to declare that one will switch if Apple adopts TPA - or whatever it might be called in the future: first wait and see. See what Apple does, and how - there are always rumors beforehand.

If TPA actually does get included: Linux can be a usable system too, even if the interfaces are quite rough (although current XFCE versions don't look that bad). And if there is no more PPC in Apple hardware and you would put Linux on it anyway: you can just as well buy your notebook from IBM. They have nice devices that also work very well under Linux.

And last but not least: just because new Apple hardware is different doesn't change the hardware you already bought - and, as is typical for Apple, that usually lasts a few years longer. Under Linux, some Macs even run faster than under OS X.

Zerospan seems to be a P2P software with encryption and Bonjour (ex-Rendezvous, ex-Zeroconf) integration. I don't quite get it, as the download contains no documentation and the wiki with the documentation is currently broken, so I'll just blogmark it to check out later.

Novell will go for SCO's throat

And their considerations on the legal situation would - if they were to hold up in court - really deliver a significant blow to SCO.

The whole SCO-Linux movie is quite exciting, but quite honestly: the slow stretches between the action scenes drag on a bit.

On Dealing with Security

Under ISS takes action against publication of Cisco vulnerability talk you can find a description of how Cisco and ISS envision security: massive interference with the freedom of expression of a speaker at the Black Hat conference. Okay, he was a former ISS employee and probably used information he shouldn't have published - but it's exactly this ridiculous secrecy that undermines security. Attackers will gain this knowledge sooner or later: if security vulnerabilities exist, they will be found. If someone reports them publicly, at least you can defend yourself and take countermeasures. If the publication is suppressed, the end user is ultimately the victim - with no chance to protect themselves, not even, in an emergency, by switching to another router manufacturer.

So it is indeed the case: neither ISS nor Cisco makes a good impression in public. On the contrary, their censorship attempts are just one more argument for deciding against Cisco in future product decisions - because you obviously can't trust their security statements.

Sysadmins Day

Lisa9 shows how to properly pay tribute to a sysadmin - even beginner-friendly, with illustrated instructions.

Linux-VServer is a kernel patch and a set of utilities that enable running a series of virtual Linux boxes on a base machine, with resources strongly isolated from each other. Chroot on steroids, most comparable to BSD jails. Interesting for hosting projects where virtual root servers are required. It's even included in current Debian.

Tor Network Status provides an overview of exit nodes in the Tor network with traffic information, allowed ports, and IP data. Nice. (found via the Rabenhorst)

Django, lighttpd and FCGI, second take

In my first take at this stuff I gave a sample of how to run Django projects behind lighttpd with simple FCGI scripts integrated with the server. I will elaborate a bit on this, with a way to combine lighttpd and Django that gives much more flexibility in distributing Django applications over machines. This is especially important if you expect high loads on your servers. Of course you should make use of the Django caching middleware, but there are times when even that is not enough and the only solution is to throw more hardware at the problem.

Update: I maintain my descriptions now in my trac system. See the lighty+FCGI description for Django.

Caveat: since Django is very new software, I don't have production experience with it. So this is more from a theoretical standpoint, incorporating knowledge I gained from running production systems for several larger portals. In the end it doesn't matter much what your software is - it only matters how you can distribute it over your server farm.

To follow this documentation, you will need the following packages and files installed on your system:

  • [Django][2] itself - currently fetched from SVN. Follow the setup instructions or use python setup.py install .
  • [Flup][3] - a package of different ways to run WSGI applications. I use the threaded WSGIServer in this documentation.
  • [lighttpd][4] itself of course. You need to compile at least the fastcgi, the rewrite and the accesslog module, usually they are compiled with the system.
  • [Eunuchs][5] - only needed if you are using Python 2.3, because Flup uses socketpair in the preforked servers and that is only available starting with Python 2.4
  • [django-fcgi.py][6] - my FCGI server script, might some day be part of the Django distribution, but for now just fetch it here. Put this script somewhere in your $PATH, for example /usr/local/bin and make it executable.
  • If the above doesn't work for any reason (maybe your system doesn't support socketpair and so can't use the preforked server), you can fetch [django-fcgi-threaded.py][7] - an alternative that uses the threading server with all its problems. I use it, for example, on Mac OS X for development.

Before we start, let's talk a bit about server architecture, Python and heavy load. The still-preferred installation of Django is behind Apache2 with mod_python2. mod_python2 is a quite powerful extension to Apache that integrates a full Python interpreter (or even several interpreters with separate namespaces) into the Apache process. This allows Python to control many aspects of the server. But it has a drawback: if the only use is to pass requests from users on to the application, it's quite overkill: every Apache process or thread will carry a full Python interpreter with stack, heap and all loaded modules. Apache processes get a bit fat that way.

Another drawback: Apache is one of the most flexible servers out there, but it's a resource hog compared to small servers like lighttpd. And - due to the architecture of Apache modules - mod_python will run the full application in the security context of the webserver. Two things you usually don't want in production environments.

So a natural approach is to use lighter HTTP servers and put your application behind those - using the HTTP server itself only for media serving, and using FastCGI to pass requests from the user on to your application. Sometimes you put that small HTTP server behind an Apache front that only uses mod_proxy (either directly or via mod_rewrite) to proxy requests to your application's webserver - and believe it or not, this is actually a lot faster than serving the application with Apache directly!

The second pitfall is Python itself. Python has a quite nice threading library, so it would seem ideal to build your application as a threaded server - threads use much fewer resources than processes. But this will bite you, because of one special feature of Python: the GIL, the dreaded global interpreter lock. This isn't much of an issue if your application is 100% Python - the GIL only kicks in when internal functions or C extensions are used. Too bad that almost all DBAPI libraries use at least some database client code in a C extension - you start a SQL command and threading is disabled until the call returns. No multiple queries running ...
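The serialization effect is easy to see with a toy benchmark: pure-Python CPU work gains essentially nothing from a second thread, because the GIL lets only one thread execute bytecode at a time. A sketch (timings will vary per machine, so treat the numbers as illustrative):

```python
import threading, time

def cpu_work(n):
    # pure-Python CPU-bound loop; a thread running this holds the GIL
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 300_000
t0 = time.perf_counter()
sequential = [cpu_work(N), cpu_work(N)]
seq_time = time.perf_counter() - t0

results = []
t0 = time.perf_counter()
threads = [threading.Thread(target=lambda: results.append(cpu_work(N)))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
thr_time = time.perf_counter() - t0

print(results == sequential)   # True - same answers either way
# on a multi-core box thr_time is still roughly seq_time, not half of it
print("sequential %.2fs, threaded %.2fs" % (seq_time, thr_time))
```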

So the better option is to use some forking server, because that way the GIL won't kick in. This allows a forking server to make efficient use of multiple processors in your machine - and so be much faster in the long run, despite the overhead of processes vs. threads.

For this documentation I take a three-layer approach to distributing the software: the front will be your trusted Apache, just proxying everything out to your project-specific lighttpd. The lighttpd will have access to your project's document root and will pass special requests on to your FCGI server. The FCGI server itself can run on a different machine, if that's needed for load distribution. It will use a preforked server because of the threading problem in Python, and so will be able to make use of multiprocessor machines.

I won't talk much about the first layer, because you can easily set that up yourself. Just proxy stuff out to the machine where your lighttpd is running (in my case usually the Apache runs on different machines than the applications). Look it up in the mod_proxy documentation, usually it's just ProxyPass and ProxyPassReverse.

The second layer is more interesting. lighttpd is a bit weird in the configuration of FCGI stuff - you need FCGI scripts in the filesystem and need to hook those up to your FCGI server process. The FCGI scripts actually don't need to contain any content - they just need to be in the file system.

So we start with your Django project directory. Just put a directory public_html in there. That's the place where you put your media files, for example the admin media directory. This directory will be the document root for your project server. Be sure to put only files in there that don't contain private data - private data like configs and modules had better stay in places not accessible by the webserver. Next, set up a lighttpd config file. You will only use the rewrite and fastcgi modules. No need to keep an access log; that one will be written by your first layer, the Apache server. In my case the project is in /home/gb/work/myproject - you will need to change that to your own situation. Store the following content as /home/gb/work/myproject/lighttpd.conf


 server.modules = ( "mod_rewrite", "mod_fastcgi" )
 server.document-root = "/home/gb/work/myproject/public_html"
 server.indexfiles = ( "index.html", "index.htm" )
 server.port = 8000
 server.bind = "127.0.0.1"
 server.errorlog = "/home/gb/work/myproject/error.log"

 fastcgi.server = (
     "/main.fcgi" => (
         "main" => (
             "socket" => "/home/gb/work/myproject/main.socket"
         )
     ),
     "/admin.fcgi" => (
         "admin" => (
             "socket" => "/home/gb/work/myproject/admin.socket"
         )
     )
 )

 url.rewrite = (
     "^(/admin/.*)$" => "/admin.fcgi$1",
     "^(/polls/.*)$" => "/main.fcgi$1"
 )

mimetype.assign = (
".pdf" => "application/pdf",
".sig" => "application/pgp-signature",
".spl" => "application/futuresplash",
".class" => "application/octet-stream",
".ps" => "application/postscript",
".torrent" => "application/x-bittorrent",
".dvi" => "application/x-dvi",
".gz" => "application/x-gzip",
".pac" => "application/x-ns-proxy-autoconfig",
".swf" => "application/x-shockwave-flash",
".tar.gz" => "application/x-tgz",
".tgz" => "application/x-tgz",
".tar" => "application/x-tar",
".zip" => "application/zip",
".mp3" => "audio/mpeg",
".m3u" => "audio/x-mpegurl",
".wma" => "audio/x-ms-wma",
".wax" => "audio/x-ms-wax",
".ogg" => "audio/x-wav",
".wav" => "audio/x-wav",
".gif" => "image/gif",
".jpg" => "image/jpeg",
".jpeg" => "image/jpeg",
".png" => "image/png",
".xbm" => "image/x-xbitmap",
".xpm" => "image/x-xpixmap",
".xwd" => "image/x-xwindowdump",
".css" => "text/css",
".html" => "text/html",
".htm" => "text/html",
".js" => "text/javascript",
".asc" => "text/plain",
".c" => "text/plain",
".conf" => "text/plain",
".text" => "text/plain",
".txt" => "text/plain",
".dtd" => "text/xml",
".xml" => "text/xml",
".mpeg" => "video/mpeg",
".mpg" => "video/mpeg",
".mov" => "video/quicktime",
".qt" => "video/quicktime",
".avi" => "video/x-msvideo",
".asf" => "video/x-ms-asf",
".asx" => "video/x-ms-asf",
".wmv" => "video/x-ms-wmv"
 )

I bind the lighttpd only to the localhost interface because in my test setting the lighttpd runs on the same host as the Apache server. In multi-server settings you will bind to the public interface of your lighttpd servers, of course. The FCGI scripts communicate via sockets in this setting, because in this test setting I only use one server for everything. If your machines were distributed, you would use the "host" and "port" settings instead of the "socket" setting to connect to FCGI servers on different machines. And you would add multiple entries to the "main" section to distribute the application load over several machines. Look up the available options in the lighttpd documentation.

I set up two FCGI servers - one for the admin settings and one for the main settings. All application requests will be routed through the main FCGI and all admin requests to the admin server. That's done with the two rewrite rules - you will need to add a rewrite rule for every application you are using.

Since lighttpd needs the FCGI scripts to exist to pass the PATH_INFO along to the FastCGI, you will need to touch the following files: /home/gb/work/myproject/public_html/admin.fcgi and /home/gb/work/myproject/public_html/main.fcgi
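Creating the stubs can of course be scripted. A tiny sketch of the `touch` step in Python, demonstrated in a scratch directory standing in for public_html:

```python
import os, tempfile

def touch(path):
    # lighttpd only checks that the file exists; its content is never read
    open(path, "a").close()

# demo in a scratch directory standing in for public_html
docroot = tempfile.mkdtemp()
for name in ("admin.fcgi", "main.fcgi"):
    touch(os.path.join(docroot, name))
print(sorted(os.listdir(docroot)))   # ['admin.fcgi', 'main.fcgi']
```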

They don't need to contain any code, they just need to exist in the directory. Starting with lighttpd 1.3.16 (at the time of this writing only in svn) you will be able to run without the stub .fcgi files - you just add "check-local" => "disable" to the two FCGI settings, and the local files are no longer needed. So if you want to extend this config file, you just have to keep some very basic rules in mind:

  • every settings file needs its own .fcgi handler
  • every .fcgi needs to be touched in the filesystem - this requirement might go away in a future version of lighttpd, but for now it is needed
  • load distribution is done on .fcgi level - add multiple servers or sockets to distribute the load over several FCGI servers
  • every application needs a rewrite rule that connects the application with the .fcgi handler
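The four rules above can be mechanized. A hypothetical helper (the function name and its output format are my own; it is not part of Django or lighttpd) that emits fastcgi.server and url.rewrite entries for a mapping of handler name to applications might look like this:

```python
def lighttpd_snippets(project_dir, handlers):
    """Emit fastcgi.server and url.rewrite entries for a mapping of
    handler name -> list of applications routed through it."""
    fcgi, rewrites = [], []
    for name, apps in handlers.items():
        # one .fcgi handler with its own socket per settings file
        fcgi.append('"/%s.fcgi" => ( "%s" => '
                    '( "socket" => "%s/%s.socket" ) )'
                    % (name, name, project_dir, name))
        # one rewrite rule per application, tied to its handler
        for app in apps:
            rewrites.append('"^(/%s/.*)$" => "/%s.fcgi$1"' % (app, name))
    return fcgi, rewrites

fcgi, rewrites = lighttpd_snippets("/home/gb/work/myproject",
                                   {"main": ["polls"], "admin": ["admin"]})
print(rewrites[0])   # "^(/polls/.*)$" => "/main.fcgi$1"
```

Paste the generated lines into the respective sections of lighttpd.conf; the point is just that handlers and rewrite rules always come in matched pairs.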

Now we have to start the FCGI servers. That's actually quite simple, just use the provided django-fcgi.py script as follows:


 django-fcgi.py --settings=myproject.work.main \
     --socket=/home/gb/work/myproject/main.socket \
     --minspare=5 --maxspare=10 --maxchildren=100 \
     --daemon

 django-fcgi.py --settings=myproject.work.admin \
     --socket=/home/gb/work/myproject/admin.socket \
     --maxspare=2 --daemon

Those two commands will start two FCGI server processes that use the given sockets to communicate. The admin server will only use two processes - the admin server usually isn't the one getting the many hits; that's the main server. So the main server gets higher-than-default settings for spare processes and maximum child processes. Of course this is just an example - tune it to your needs.

The last step is to start your lighttpd with your configuration file: lighttpd -f /home/gb/work/myproject/lighttpd.conf

That's it. If you now access either the lighttpd directly at http://localhost:8000/polls/ or through your front Apache, you should see your application output. At least if everything went right and I didn't make too many errors.

There are days when my computer hates me

For example, when I play with Flup and want to use a forked server instead of the threaded one. And I realize that the latter requires the socketpair function, which unfortunately is only available from Python 2.4 on; Python 2.4 is available on Debian Sarge, but there is no Psycopg for Python 2.4 in Sarge - which in turn is a prerequisite for Django with PostgreSQL, which is why I am dealing with FastCGI in the first place. Installing Psycopg yourself is no fun either, as you not only need the PostgreSQL headers that are normally installed, but also a few internal headers - so in principle a build tree. And then you also need the egenix-mx-base headers, which you can only get for Python 2.3, so you would have to install that yourself as well. Backports from the next Debian version don't work either, as they are just switching to PostgreSQL 8.0, Sarge is still on 7.4, and I didn't want to upgrade the whole system right away. And so you go in circles and feel a bit cheated by all the dependencies and version conflicts.

And what do you do as a workaround, since the threaded server unfortunately only produces segfaults in Psycopg? You take the threaded server, forbid it to thread, and start it via lighttpd's spawn-fcgi, or directly from lighttpd. But that's somehow stupid again, because then there are always 3 threads per FCGI server, two of which just sit in the process list doing nothing. And all this just because mod_python2 (which is needed for the preferred Django setup) requires Apache2, which for me would mean mod_perl2, which is incompatible with the old mod_perl - which is why a whole bunch of my sites wouldn't work anymore if I switched to Apache2. Which I don't want to do anyway, because Apache2 with mod_python is damn slow. And once again I feel cheated. I really should have looked for a more meaningful job.

If you didn't understand anything: doesn't matter, it's technology, it's not important, I just wanted to say that.

Running Django with FCGI and lighttpd

This documentation is intended for a wider audience than just .de, hence the whole thing in New Westphalian English. Sorry.

Update: I maintain the current descriptions in my trac system now. See the FCGI+lighty description for Django.

There are different ways to run Django on your machine. One way is only for development: use the django-admin.py runserver command as documented in the tutorial. The builtin server isn't good for production use, though. The other option is running it with mod_python; this is currently the preferred method to run Django. This posting is here to document a third way: running Django behind lighttpd with FCGI.

First you need to install the needed packages. Fetch them from their respective download address and install them or use preinstalled packages if your system provides those. You will need the following stuff:

  • [Django][2] itself - currently fetched from SVN. Follow the setup instructions or use python setup.py install .
  • [Flup][3] - a package of different ways to run WSGI applications. I use the threaded WSGIServer in this documentation.
  • [lighttpd][4] itself of course. You need to compile at least the fastcgi, the rewrite and the accesslog module, usually they are compiled with the system.

After installing lighttpd you need to create a lighttpd config file. The config file given here is tailored to my own paths - you will need to change them to your own situation. This config file starts a server on port 8000 on localhost - just like the runserver command would. But this one is a production-quality server with multiple FCGI processes spawned and very fast media delivery.


 # lighttpd configuration file
 #
 ############ Options you really have to take care of ####################

server.modules = ( "mod_rewrite", "mod_fastcgi", "mod_accesslog" )

server.document-root = "/home/gb/public_html/"
 server.indexfiles = ( "index.html", "index.htm", "default.htm" )

 # these settings attach the server to the same ip and port as runserver would use
 server.port = 8000
 server.bind = "127.0.0.1"

server.errorlog = "/home/gb/log/lighttpd-error.log"
 accesslog.filename = "/home/gb/log/lighttpd-access.log"

 fastcgi.server = (
     "/myproject-admin.fcgi" => (
         "admin" => (
             "socket" => "/tmp/myproject-admin.socket",
             "bin-path" => "/home/gb/public_html/myproject-admin.fcgi",
             "min-procs" => 1,
             "max-procs" => 1
         )
     ),
     "/myproject.fcgi" => (
         "polls" => (
             "socket" => "/tmp/myproject.socket",
             "bin-path" => "/home/gb/public_html/myproject.fcgi"
         )
     )
 )

 url.rewrite = (
     "^(/admin/.*)$" => "/myproject-admin.fcgi$1",
     "^(/polls/.*)$" => "/myproject.fcgi$1"
 )

This config file will start only one FCGI handler for your admin stuff and the default number of handlers (each one multithreaded!) for your own site. You can fine-tune these settings with the usual lighttpd FCGI settings, even make use of external FCGI spawning and offloading of FCGI processes to a distributed FCGI cluster! Admin media files need to go into your lighttpd document root.

The config works by translating all standard URLs to be handled by the FCGI script for each settings file - to add more applications to the system you would only duplicate the rewrite rule for the /polls/ line and change that to choices or whatever your module is named. The next step would be to create the .fcgi scripts. Here are the two I am using:


 #!/bin/sh
 # this is myproject.fcgi - put it into your docroot

export DJANGO_SETTINGS_MODULE=myproject.settings.main

/home/gb/bin/django-fcgi.py

 #!/bin/sh
 # this is myproject-admin.fcgi - put it into your docroot

export DJANGO_SETTINGS_MODULE=myproject.settings.admin

/home/gb/bin/django-fcgi.py

These two files only make use of a django-fcgi.py script. This is not part of the Django distribution (not yet - maybe they will incorporate it) and it's source is given here:


 #!/usr/bin/python2.3

def main():
 from flup.server.fcgi import WSGIServer
 from django.core.handlers.wsgi import WSGIHandler
 WSGIServer(WSGIHandler()).run()

if __name__ == '__main__':
    main()

As you can see it's rather simple. It uses the threaded WSGIServer from the fcgi module, but you could just as easily use the forked server - though since lighttpd already does the preforking, I don't think there is much use in forking at the FCGI level. This script should be somewhere in your path, or just reference it with a fully qualified path as I do.

Now you have all the parts together. I put my lighttpd config into /home/gb/etc/lighttpd.conf , the .fcgi scripts into /home/gb/public_html and the django-fcgi.py into /home/gb/bin . Then I can start the whole mess with /usr/local/sbin/lighttpd -f etc/lighttpd.conf . This starts the server, preforks all FCGI handlers and detaches from the tty to become a proper daemon. The nice thing: this will not run under some special system account but under your normal user account, so your own file restrictions apply. lighttpd+FCGI is quite powerful and should give you a very nice and very fast option for running Django applications. Problems:

  • under heavy load some FCGI processes segfault. I first suspected the fcgi library, but after a bit of fiddling (core debugging) I found out it's actually the psycopg on my system that segfaults. So you might have more luck (unless you run Debian Sarge, too)

  • Performance behind a front Apache isn't what I would have expected. A lighttpd with front Apache and 5 backend FCGI processes only achieves 36 requests per second on my machine, while django-admin.py runserver achieves 45 requests per second! (still faster than mod_python via Apache2: only 27 requests per second)

Updates:

  • the separation of the two FCGI scripts didn't work right at first. Now I don't match only on the .fcgi extension but on the script name; that way /admin/ really uses the myproject-admin.fcgi and /polls/ really uses the myproject.fcgi.

  • I have [another document online][6] that goes into more details with regard to load distribution
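As a footnote to the django-fcgi.py script above: the WSGI contract it relies on - an application called as app(environ, start_response) - can be exercised without any server at all, using nothing but the stdlib. A sketch with a stand-in application (running the real WSGIHandler would need a Django install and settings):

```python
from wsgiref.util import setup_testing_defaults

# stand-in WSGI application with the same calling convention as
# django.core.handlers.wsgi.WSGIHandler instances
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from WSGI\n"]

environ = {}
setup_testing_defaults(environ)   # fills in the keys a real server would set

status_holder = {}
def start_response(status, headers):
    status_holder["status"] = status

body = b"".join(app(environ, start_response))
print(status_holder["status"])    # 200 OK
print(body.decode().strip())      # hello from WSGI
```

This is exactly the interface flup's WSGIServer drives, which is why swapping the threaded server for a forked one doesn't touch the application at all.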

Apache mod_auth_tkt is a framework for single sign-on in Apache-based solutions across technology boundaries (CGI, mod_perl and whatever else exists). I should take a look at it; it could be interesting for me.

SCO trips over its own feet

At least that's how it seems when there is an email about No 'smoking gun' in Linux code.

The e-mail, which was sent to SCO Group CEO Darl McBride by a senior vice president at the company, forwards an e-mail from a SCO engineer. In the Aug. 13, 2002, e-mail, engineer Michael Davidson said "At the end, we had found absolutely nothing ie (sic) no evidence of any copyright infringement whatsoever."

The email has been known for some time but has only now been published - previously it was under seal as part of the court records. Quite embarrassing for SCO, as the sad details gradually come to light. Especially embarrassing: SCO argues with the same consultant who apparently found nothing here, but previously claimed there was identical code. Somehow SCO should get its argumentation in order soon, otherwise the whole lying and extortion won't hold up in the long run ...

Hardly with clean means

The transfer of the .net registry to VeriSign hardly seems to have gone through by clean means, considering how ICANN bows to VeriSign:

VeriSign can raise the prices of .net addresses at will starting January 1, 2007. Additionally, the Internet Corporation for Assigned Names and Numbers (ICANN) secured them an automatic extension of the term after six years.

Anyone who still believes that no money changed hands, I'd be happy to sell them a washing machine with a rubber band drive ...

Microsoft Loves Spyware

In any case, Microsoft's anti-spyware tool now classifies these programs differently:

According to this, since the update at the end of March, the program recommends ignoring various Claria products classified as moderately dangerous, as well as those from the spyware mills WhenU and 180solutions.

Sorry, but background programs that display news are fundamentally unacceptable, and I don't care in the slightest about the velvet-glove arguments the manufacturers of this junk come up with.

Sorry, but a manufacturer of operating system software that does not suggest uninstalling such trash in an anti-spyware check is simply not credible.

macminicolo Mac Mini colocation - set up your own Mac Mini in a data center. Is there something like this in Germany?

Plash: the Principle of Least Authority shell

Interesting concept: Plash is a shell that interposes a library underneath programs, through which all filesystem accesses are routed. This lets you control which operations a program is actually allowed to perform. This time it is not about protecting against user activities, but about protecting the user against activities of the program. Especially when installing programs you don't know, you can sometimes catch trojans - Plash helps here by explicitly enabling only the areas of the disk the program actually needs.

For this purpose, all filesystem accesses are internally routed via its own mini-server - the actual program is executed under a freshly allocated user in its own chroot jail, so it has no chance to do anything outside that is not explicitly allowed.

Very interesting concept, especially for system administrators. Unfortunately (as expected) it does not work with grsecurity - of course, grsecurity is meant to prevent exactly some of the tricks Plash uses. In this case, it fails on the requirement of an executable stack.

Boot KNOPPIX from a USB Memory Stick - maybe an alternative to SPB-Linux, especially with the c't Knoppix variant?

SPB-Linux is a very small Linux that can be booted from a USB flash drive and enhanced with various extensions (X, Mozilla, XFCE Desktop). It should also be relatively easy to extend with various system administration tools.

Spyce is a Python web framework with damn good performance: a simple page with a template behind it delivers over 90 hits per second on my machine (Spyce integrated into Apache via mod_python, memory cache). Take that, PHP!

Sometimes DarwinPorts Drives Me to Despair

For example, when I want to install ghc (a Haskell compiler), it first wants to install Perl 5.8. As if I didn't already have a perfectly usable Perl 5.8.6 on disk under Tiger - no, DarwinPorts wants its own version. And then, depending on the path setting, either the Apple Perl or the DarwinPorts one is active. Quite stupid - I think DarwinPorts should have pseudo-packages that point to the versions Apple pre-installs.

This causes problems especially when I also install packages manually, because then whichever Perl is first in the path gets used - and with DarwinPorts active, that is its Perl. That is absolutely not the desired effect - after all, that Perl only got installed because the ghc port has a build dependency on it. I don't want to use the DarwinPorts Perl at all ...

For the same reason, I find all the Python and Ruby modules in DarwinPorts unusable: they automatically pull in a fresh installation of Python or Ruby instead of using the pre-installed version. Remarkably stupid ...

As a result, you can only use DarwinPorts on an OS X box for well-isolated tools - which is a bit of a shame, because the idea and the implementation itself are pretty great. It just takes too little account of what is already installed.

By the way, I installed ghc simply via the binary package from haskell.org. It says it is for 10.3, but it also works on 10.4 - at least for what I do with it. And it saves me from building all that stuff myself.

SSL-VPN with Browser Control

A colleague found a pretty brilliant tool: SSL Explorer, a small https server that, together with a Java applet in the browser, implements a VPN. Specifically: when the applet starts (which must be confirmed, since it requires additional capabilities), tunnel connections are established over https, and various applications are then run over these connections. For example, you can open a VNC connection to an internal server with a click on a link, browse the local Windows network via web forms, transfer files, or reach Linux servers behind the firewall via SSH. And the whole thing works with any Java-capable web browser - I tested it with Safari, for example, and it works flawlessly. No additional client software to install at all. Ideal for roaming users who don't always have their own machine with them.

Oh, and the whole thing is also under the GPL.
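At its core, each of those per-application tunnels is just a bidirectional byte relay between a local listening port and the real service. A minimal sketch in Python (plain TCP here instead of https, single client, and all function names are my own - this is the relay idea, not SSL Explorer's implementation):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes in one direction until the source side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass  # peer already gone

def serve_tunnel(listen_port, target_host, target_port, ready=None):
    # Accept a single client and relay its traffic to the target service.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(1)
    if ready is not None:
        ready.set()  # signal that we are accepting connections
    client, _ = listener.accept()
    upstream = socket.create_connection((target_host, target_port))
    t = threading.Thread(target=pipe, args=(client, upstream))
    t.start()
    pipe(upstream, client)  # relay replies back to the client
    t.join()
    client.close()
    upstream.close()
    listener.close()
```

The application (VNC client, SSH, SMB browser) then simply talks to 127.0.0.1 and never knows a tunnel is involved - which is exactly why it works without extra client software.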

Hardened-PHP project

No idea how good it really is, but the Hardened-PHP project already sounds quite promising. Given how widespread PHP is for web applications, it is a central point of entry into servers. I should put this on my to-do list.

Whiners and Open Source

In an open letter, IT decision-makers demand more focus on the areas important to them:

In an open letter to "the" Open Source Community, IT decision-makers from various fields have urged it to orient itself more towards the actual needs of users from the corporate sector.

I always find it fascinating with what audacity some people make demands on voluntary work, only to then use it for their own purposes. Some demand the abolition of the GPL because the conditions don't suit them, the next demand focus on the desktop because they want an alternative to Microsoft, others demand more focus on high-performance servers because SUN machines with Solaris or IBM servers with AIX are too expensive for them.

Strangely enough, I only ever hear these demands in open letters - it would be much more sensible to simply support the corresponding project with money and manpower. But that would mean effort of their own, which is precisely what they want to avoid. The demands for better support and better documentation fit the same pattern - both are things the companies could easily set up themselves. But apparently they're above that.

Sorry, but to me, such open letters to Open Source developers always sound like whiny little children who absolutely want an ice cream.

Sorry, folks, but that's not how it works. A large part of the Open Source Community still consists of hackers and enthusiastic amateurs and tinkerers. That often produces a lot of crap and occasionally brilliant solutions. And it produces only what people feel like building - if writing documentation bores someone, they will not spend their free time on it.

You have an itch? Scratch it. Yourself.

Shit hits Fan

The recently published Sharp Internet Explorer Exploit should make it clear to Microsoft that their stance on the recent IE hole was a bit too naive. They should have released a patch instead of just an advisory. Ideally, a patch that removes Internet Explorer completely.

Microsoft never learns

Error in Internet Explorer with uncertain consequences:

According to Bernhard MĂĽller from SEC Consult, Microsoft can also reproduce the crashes but does not see any risk that foreign code could be executed. Therefore, Microsoft intends to make the handling of COM objects more robust in the future, but will not release a security update.

This is about a hard crash - right down in machine code. Anyone with even a rudimentary understanding of such things knows that this is a potential gateway for malware - craft the right data for the crash and you may have a direct path into the system. But Microsoft sees no danger ...

Anyone want another laugh ...

Study Shows Windows Beats Linux on Security - this time, Microsoft bought the desired results from the company Wipro. Just as absurd as previous attempts in the same direction. Contains such gems as:

“We already know how to secure a Windows-based solution and keep it running smoothly,” says Stephen Shaffer, the airline’s director of software systems. “With Linux, we had to rely on consultants to tell us if our system was secure. With Windows, we can depend on Microsoft to inform us of and provide any necessary updates.”

Sorry, but seriously: if my IT manager tells me he relies on Microsoft for the security of his systems, that would be a reason for me to fire the guy as quickly as possible.

WordPress 1.5.1.3

WordPress 1.5.1.3 includes an important security fix. So at least take the xmlrpc.php from the release.

CardSystems Exposes 40 Million Identities

Bruce Schneier with some thoughts and possible demands regarding the recent security debacle at a large American credit card processor. Apparently, the data should not have been on their systems at all - given the high demands the credit card companies place on processors (at least on paper), CardSystems should actually be dropped by Mastercard and Visa.

Microsoft's Omnipotence Fantasies

Microsoft will enforce Sender ID:

Now Microsoft apparently wants to enforce the system on its own, because soon all emails to Hotmail users that do not come with Sender ID will be visibly marked for Hotmail users and thus labeled as potential spam.

Great. Really grand strategy. The working group was dissolved because no agreement could be reached - Microsoft never cleared up the patent situation around Sender ID - and now Microsoft simply wants to push it through on its own.

But I think Microsoft is cutting into its own flesh here: there have long been significantly better webmail services that are also far better citizens of the network community. Hotmail lost the importance it had before the sale to Microsoft long ago. My prognosis is therefore that this step will not impress very many people. The victims are the Hotmail users and possibly their correspondents - who are stuck with an inferior mail service anyway ...

OXlook - Open-XChange connects to Outlook - blogged for the company. Don't ask ...

Another Biased Study from Microsoft

Study: Windows security updates more cost-effective than open source - nothing new, just another Microsoft-funded and therefore pre-determined study with no value. The only interesting part of these studies is the name of the company conducting them - you can add it to the corruption list and remember it in case you ever need to back up a claim with falsified and biased studies ...

Otherwise? Well, the standard errors, of course. First of all, no real evidence, just an unspecified list of companies that were asked what they think (rather than collecting hard facts). And of course, equating Red Hat with Linux - which is sheer nonsense in itself.

From personal experience with both systems, I can say that our Debian GNU/Linux boxes are much easier to keep up to date and therefore much cheaper to patch than the Windows boxes - and that although both use their integrated update mechanisms over the network (for our Windows systems there are even internal staging and update servers). But nobody would ask me for such a study - I wouldn't fit the Microsoft-funded picture ...

Tunnelblick - GUI for OpenVPN on the Mac

Tunnelblick is a graphical user interface for OpenVPN on the Mac. The great thing: the latest installers come with OpenVPN included. So if you have OpenVPN running as infrastructure and also need to integrate Macs – it's now easier than ever before. And considering the fact that OpenVPN is one of the nicest open source VPN solutions, it's worth taking a look even if you're still considering which VPN solution to go with.

ICANN as an agent for VeriSign's monopoly claims

At Heise: .net Registry: And the winner is ... VeriSign! Yes: exactly the company that made itself so popular with the wildcard A record on .com and .net; the company that has repeatedly distinguished itself by ignoring agreements and forging ahead before ICANN or other central bodies had even created a basis for it (for example with internationalized domains), causing problems again and again; the company that has no interest in a more democratic regulation of the Internet and is on a monopoly course anyway - exactly that company gets the contract from ICANN. No surprise: the competitors were not American companies, and how ICANN feels about non-American initiatives (and about greater involvement of Internet users) could be seen in the dismantling of the regionally elected representatives.

VS-Confidential/NfD and Outlook?

According to Heise: cryptovision secures Bundeswehr emails - one reason given was that their plugin works with both Outlook and Notes. Hello? They want to encrypt confidential and restricted material with a crypto plugin, but then use Outlook? They might as well skip the encryption - the next worm will send the contents of the inbox around the world anyway ...

Devil's grin

System upgrade on simon.bofh.ms

Since I need to upgrade a Debian 3.0 to 3.1 somewhere to gain some experience for the company, I'm simply using my own server. So things may get a bit messy here for a while, or something may blow up around your ears. You have been warned.

System Upgrade simon.bofh.ms Part 2

Ok, the system upgrade is basically done. The only loss so far is the mailing list system - mainly because I simply have no interest in running it anymore. It was in fact fully updated; I threw it out because I don't want to do anything with it - there was only one list on it. Beyond that, mostly old junk has been thrown out.

However, after two system upgrades I have to say that I'm not really enthusiastic about this one - it already shows the problem of the extremely long release cycle. The first upgrade went through quite smoothly - but that machine was already running Sarge, just an old version from Testing rather than the current Stable. That upgrade caused no problems.

The second upgrade, however, was simon.bofh.ms - a machine that was still largely on Stable, with a whole range of backports (self-made and from the net). The latter is of course the real problem: because the release cycles are so long, you often have to install packages yourself. The Debian upgrade mechanism is supposed to handle this anyway. In practice, though, backported packages often reference intermediate states - testing packages that still contained bugs, or special cases that were never accounted for. As a result, a whole series of package upgrades got very tricky, and I wouldn't want to put a normal user through that.

The highlight of all the problems was the PostgreSQL upgrade, which completed cleanly but then would not start due to an obsolete option in the config. The messages were so cryptic that even I could not immediately see what was wrong - only digging through the logs and scripts confirmed that the upgrade itself was clean and really only the startup had failed.

Still, I have to say that the upgrade of a machine with program versions partly up to three years old went surprisingly well: 99% of the packages updated completely problem-free - even things like my rather exotic Exim4 installation (a self-made backport with special features) went through quite smoothly. Manual fixes were necessary, but I had caused the need for them myself. Apache and the whole PHP mess ran completely problem-free, and the MySQL database came up immediately. One should also note that the entire upgrade - suboptimal as I called it - took only 1:45 hours, and most of that was waiting for packages to unpack ...

Well, in the next few days it will show what else has broken and which of the scripts no longer run that I have overlooked so far.

Debian GNU/Linux 3.1 released - wow. It took quite a while

PGP Corporation disrupts PGP Freeware Mirror

Found at rabenhorst: PGP Corporation disrupts PGP Freeware Mirror. I always find it disgusting to see what has become of the old PGP project, which has now turned into a commercial mess. PGP was once the pioneer in making usable cryptography available to ordinary citizens - and during the PGP 2 era it was indeed openly available (up to version 2.3 under the GPL). For exactly this reason I made the PGP ports to DOS back then. And now the PGP Corporation lashes out and takes action against free mirrors of the freeware versions. A good example of why it's better not to invest energy in projects that belong to companies, but rather in software that is free as in free speech ...

Therefore: use gnupg. The code is also better—I still remember with horror the pseudo-object-oriented code in PGP 5, fixing that stuff was not really entertaining.

By the way, the changelog for the DOS version of PGP 5 (scroll down) was my first weblog, so to speak—and that started as early as October 97. Should I now challenge Dave Winer?

Our computers belong to us - still

As rabenhorst (whose site, by the way, sends my Safari to the happy hunting grounds) links: Intel has built DRM techniques into the new dual-core processors that Microsoft, for example, can build upon in the operating system. Then Microsoft - on behalf of the entertainment industry or for its own benefit - determines which software and which data may be used on the system. Private copies are then no longer even up for discussion, and let's wait and see when Microsoft classifies Open Source as untrustworthy and blocks it.

Performance of the Tiger

I was asked yesterday if I notice any difference in performance: yes and no. Yes, because all the display stuff is noticeably faster - especially browsers get their content displayed much quicker. There is a significant improvement here.

No, because the nice - and genuinely useful - features like Spotlight and FileVault (which weren't available in Jaguar) also eat some of the system performance. Especially disk-intensive operations in my home directory are affected. On the other hand, the features really are useful, so I'm happy to pay the performance price.

So overall, the display is faster and the rest is not slower. Considering that I'm two major releases ahead on the same hardware as before (867 MHz 12" PowerBook with 640 MB memory), this is a good result. A leap across two Windows versions would certainly require a hardware upgrade to remain enjoyable.

Paypal sends phishing emails

Paypal sends phishing emails - they just don't get it. I'm annoyed by eBay's, PayPal's, and the whole pack of banks' quite stupid attitude towards phishing anyway - why don't they finally use signed emails? The whole issue would be very easy to resolve: an email from eBay that is not signed with the correct key: into the trash.

But the fact that PayPal is so stupid as to send phishing emails itself - or emails that look just like the usual phishing attacks - is really quite stupid.
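The "not signed, into the trash" rule is trivial to mechanize once signatures exist. A toy sketch with a shared-secret HMAC - real mail signing would of course use S/MIME, PGP, or DKIM with public keys; the secret and function names here are purely illustrative:

```python
import hashlib
import hmac

# Hypothetical shared secret between the sender and my mail filter.
SECRET = b"shared-key-between-paypal-and-my-filter"

def sign(body: bytes) -> str:
    """Signature the legitimate sender would attach to the mail body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def accept(body: bytes, signature: str) -> bool:
    """Filter decision: constant-time check; unsigned or mis-signed mail is rejected."""
    return hmac.compare_digest(sign(body), signature)
```

A phishing mail can imitate the layout perfectly, but it can never carry a valid signature - so the filter decision stops being a judgment call.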

Key theft on Hyperthreading systems - cool. I mean, sure, shit, it's a security hole. But it's really cool: using Hyperthreading and cache timing to steal data from the second pseudo-processor right under its nose - somebody had to think of that first.

Bill Gates Brain Fart

Bill Gates: The iPod doesn't stand a chance. The internet is unimportant. Nobody needs Java. 640 KB is enough for every user. Windows is the safest operating system. The PowerPC chip is unimportant. Users want Bob, the social interface. Unix doesn't matter.

The man has a real problem

RBL operators are either sociopaths or incompetent

Or both. Sorry, but there's no other way to categorize something like this. If providers now filter against rfc-ignorant.org, emails may be bounced or shunted into the spam folder - just because the operator of rfc-ignorant.org doesn't like DeNIC's whois. The mail RFCs, by the way, contain no indication (let alone a mandatory requirement) that a whois service must exist for a domain. So much for the technical competence of the operator of this idiotic list ...

It's bad enough that as a mail admin you have to deal with spam, trojans, viruses and similar nonsense - and the gigantic mountains of traffic that result. More and more often you also have to deal with completely brainless block list operators and similarly stupid mail admins who implement these block lists (and possibly even bounce emails because of the listing!).

And when you point this nonsense out to them, the standard line is: "RBL filtering has almost eliminated all my spam". Great. The fact that the email medium is more damaged by such incompetent fools than by the spam itself is of no concern to them. Let's just break everything, every idiot can be a mail admin today. It's disgusting.

(Found via fh).
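For reference, this is all an RBL lookup technically is: the connecting client's IP has its octets reversed and prepended to the list's DNS zone, and if that name resolves to an A record, the IP is "listed". A sketch of the name construction (the zone is just an example placeholder):

```python
def dnsbl_query_name(ip: str, zone: str = "rbl.example.org") -> str:
    """Build the DNS name a mail server would resolve to check an IP against a blocklist."""
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not an IPv4 address: {ip!r}")
    # Octets are reversed, DNS-style: 192.0.2.99 -> 99.2.0.192.<zone>
    return ".".join(reversed(octets)) + "." + zone
```

Note that the listing itself does nothing - whether a hit bounces mail, scores it, or is ignored is entirely the receiving admin's policy choice, which is exactly where the damage described above gets done.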

Apple users in parliaments complain about discrimination - I can well imagine the nonsense the responsible IT people come up with. Of course, for network security you clearly rely on Microsoft products ...