A very interesting service: it provides (dummy) login credentials for services that require registration, for example all those newspaper archives that need logins. If you don't want to enter your data, you simply use BugMeNot. With the bookmarklet, the whole thing is nice and easy. And you can also use it to link to pages on such heavily restricted services.
Especially practical for those annoying demographic-data-collecting and we-spam-your-registration-to-death services.
Is that ethically correct? Sorry, folks, but is the constant asking for shoe size, hair color, and penis length (hey, the penis-enlargement providers have to get their addresses from somewhere) by the various archive services ethically correct?
Here's the original article.
PyBlosxom (Blosxom in Python) has been available as a 1.0 release since May 25th - still new enough to be worth writing about now.
Here you can find the original article.
Everything around human-computer interfaces and their origins: the mouse, graphical user interfaces, windowing systems. Currently it mainly documents the 1968 presentation in which Doug Engelbart showed, in a 90-minute multimedia demo, how to work with a networked computer with a window-oriented interface, hyperlinking, and other features. It's quite fascinating what ideas and partially realized implementations already existed so early on. Here you can find the original article.
Are the toll rates hard-wired into the devices? What kind of idiots designed all this crap?

I hope it was just some official without a clue speaking again and the thing isn't actually built that stupidly. On the other hand, we're talking about Toll Collect here...
At tagesschau.de - Die Nachrichten der ARD you can find the original article.
One of the reasons why I don't like greylisting. In short, greylisting works like this: when a server connects to another server to deliver mail, a triple is formed from the sending host, the sender address, and the recipient address, and the receiving server checks whether this combination is already known. If not, the combination is recorded and the current mail is rejected with a temporary error. The theory is that real mail servers retry delivery while spambots and virus distributors typically do not. So far, so good. Problems with this approach:
- not every mail server responds correctly to temporary rejections. Example: Yahoo. And that's far from the only server that reacts this way.
- even with temporary rejections, bounces often occur, which then cause mailing list hosts to unsubscribe you from lists.
- a spammer only needs to attempt to send the spam twice in quick succession and the spam gets through. This is minimal effort for spambots; whether the user then gets one or two copies of the spam, they will get them.
- greylisting only works if you have control over all MX servers for your own domain, otherwise spam simply comes in through the other mail servers on which greylisting is not running.
- if all MXes use greylisting, delivery of legitimate mail is slowed down, since sending servers normally try the other MXes after a temporary rejection and then fail there as well. Depending on configuration, the mail then automatically ends up in slower queues or longer retry intervals on the sending server (because three delivery attempts at three MXes have already failed).
- Whitelisting (which is mentioned as a solution for some of these problems) is itself a problem: spam from servers on the whitelist is not detected. But it is precisely some of the large distribution servers that have to be whitelisted, because they have exactly the problems mentioned above (Yahoo is not only the source of many mailing lists, but also of a lot of spam).
- Problems with greylisting are typically only noticed indirectly: since it is a largely transparent process, you can really only infer that something is wrong from the reactions of others.
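The basic mechanism described above can be sketched in a few lines (a minimal in-memory sketch with hypothetical names; real implementations persist the triples and add expiry times, retry windows, and whitelists):

```python
# Minimal greylisting sketch. The set of known triples lives in memory here;
# a real implementation would store it in a database with expiry handling.
seen = set()

def check(client_ip, sender, recipient):
    """Return an SMTP-style decision for one delivery attempt."""
    triple = (client_ip, sender, recipient)
    if triple not in seen:
        seen.add(triple)
        # first contact: temporary rejection, hoping only real MTAs retry
        return "450 4.7.1 greylisted, try again later"
    # the triple is known, so this is a retry: accept
    return "250 ok"
```

Note that nothing in this check distinguishes a patient mail server from a spambot that simply retries: any second attempt with the same triple is accepted, which is exactly the weakness described above.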
All in all, greylisting only has a temporary advantage: because it is not yet widespread, spambots currently don't account for it. But accounting for it is trivial and would happen automatically with wider adoption. Thus greylisting is doomed to become ineffective if it spreads further.
Of course, many of the problems can be fixed. But ultimately, this is just as much an attempt to plug the holes in a sieve with paper as using rule-based spam filters against spam. Statistical spam filtering (Bayesian filter) is still the best available solution.
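As an illustration of the statistical approach, here is a toy Graham-style combination of per-word spam probabilities (the words and numbers are made up for the example; a real filter learns them from a corpus of classified mail):

```python
import math

# Made-up per-word probabilities P(spam | word); a real Bayesian filter
# derives these by counting word occurrences in known spam and known ham.
spam_prob = {"viagra": 0.99, "free": 0.90, "meeting": 0.01, "report": 0.05}

def spam_score(words):
    """Combine per-word probabilities into one overall spam probability,
    assuming word occurrences are independent (the 'naive' in naive Bayes)."""
    probs = [spam_prob.get(w, 0.4) for w in words]  # 0.4 for unknown words
    p_spam = math.prod(probs)
    p_ham = math.prod(1 - p for p in probs)
    return p_spam / (p_spam + p_ham)
```

Unlike static rules, the probability table adapts as the filter is trained on new mail, which is why this approach ages better than rule-based filtering.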
Here's the original article.