sysadmin - 19.5.2004 - 11.7.2004

SubrosaSoft.com - Product Information - And another repartitioning tool like Partition Magic

Agreement on Spam under the Auspices of the ITU?

And now the ambitions surface: the ITU, being the only standards organization that brings practically all governments and private entities together at one table, would supposedly be very well suited for this. Sorry, Mr. Hill, that's wrong. "Private" at the ITU means large corporations. But quite amusing: at the protocol level, a solution would basically be needed that lies somewhere between the X.400 standard, which failed in the IP world, and SMTP, Hill said. Ouch. No. Nobody wants anything that even remotely resembles X.400 or heads in that direction. X.400 is one of the stillbirths of the ITU's design-by-committee philosophy. A pile of garbage. Mountains of paper. Far too complicated.

At heise online news there's the original article.

UN: Spam problem solvable in two years

Oh man, the people at Netzeitung don't have much of a clue, do they? Calling the ITU the UN is beyond silly. The ITU is indeed a UN organization - but it is primarily driven by companies, especially large telecommunications enterprises. Above all, the ITU is one thing: the arch-enemy of the IETF.

Because the ITU thinks it's responsible for all communication systems and believes it should have a say in the Internet as well. But the IETF is the standards body there (or rather, not a standards body, but just an RFC manager). Standards in the IETF are created in a completely different way than in the ITU.

The ITU defines standards in committees. Access is regulated and burdened with hefty fees. Private individuals have no chance of getting into the ITU - that only works through national institutions or large companies. What becomes a standard is drafted in closed working groups - and based on what the participants want. As a rule, an ITU standard ends up as a collection of all requirements. The standard itself is often only available for a fee, reference implementations rarely exist before standardization, and implementations in general are usually proprietary and cost money.

The IETF, on the other hand, only manages the organizational part - the actual RFCs are created in open working groups. Anyone who wants to can participate. RFCs must - if they want to be on the standards track - demonstrate two independent but interoperable implementations that must be accessible to everyone. Existing and free code defines what becomes a standard.

ITU statements on the Internet topic are often simply attempts to gain influence in an area where the ITU plays no role, even though communication technology is increasingly oriented in that direction. You only need to think about Internet telephony to see what kind of problem this could cause for the ITU - which currently controls almost the entire telephony sector.

But precisely because of these very different working methods, almost nobody actually wants the ITU as a relevant organization in the Internet sector. An Internet in which standards were defined by national representatives and large companies would not be where the Internet is today - that is owed to the hands-on, pragmatic approach of the IETF.

So please don't sneak the ITU into titles as the UN. It's not the UN that wants something, but the ITU - and what it wants is only indirectly related to our problems. What it wants is influence and control.

At NETZEITUNG.DE Internet you'll find the original article.

iPods a security risk, warns complete idiot

Exciting. Gartner is warning about iPods because they can bypass firewalls and virus scans on mail servers - users can transport data on them. Sure, the same applies to all other mobile hard drives, USB memory sticks, floppy disks and whatever else, but of course they have to explicitly warn about the iPod.

Do the Gartner guys just want to rip off a few iPods from scared enterprises, or are they really that stupid?

The original article is at Engadget - here.

Because yesterday was about online oldies...

I searched groups.google.com for the first news postings that mention the provider association where I got my start (as a user and founding member). We founded it (OUT e.V. - domain westfalen.de) in 1993. You can read about this in the association history. However, we initially only had UUCP because we couldn't get an IP connection - the University of Münster refused to connect us, contrary to the DFN's guidelines. From 1995 onwards we had a connection through a local provider. And on May 1, 1995, the first posting was archived in which one of our users mentioned his homepage. Unfortunately http://archive.org/ doesn't go back far enough - those first homepages would be quite interesting to see... Here's the original article.

MUTE: Simple, Anonymous File Sharing - File sharing based on "chaotic" routing

BSI provides government desktop for download

Nice. Especially that they're based on Debian.

At heise online news there's the original article.

Glibc-based Debian GNU/kFreeBSD - Debian on the FreeBSD Kernel

Security hole in iptables in Linux kernel 2.6

Disgusting. Ok, not relevant for all configurations, but still disgusting. And once again proof that C is a stupid language - at best a glorified assembler.

At heise online news there's the original article.

Bergen Linux User Group

RFC 1149 - TCP over Avian Carrier - was actually implemented once. There are pictures of the event and a short description. Ping times of over 6000 seconds. Wow.

Here's the original article.

LinuxTag: Global File System under GPL

Finally. Maybe there's finally a usable cluster filesystem solution for Linux. We've needed that for a while now ...

At heise online news there's the original article.

News: Microsoft distances itself from AdTI study

Even Microsoft doesn't want to support this nonsense - and they're usually not too shy about spreading lies about Open Source.

Here's the original article.

Severe Security Vulnerability in Linux Kernel 2.4 and 2.6

Aw, man ...

(Fortunately I only have trustworthy users on my servers)

At heise online news there's the original article.

Back to the Future

DSL is acting up again

Catastrophe! World Collapse! Anarchy!

My DSL went down. Horrible. I had to dust off my old modem skills, call colleagues for dial-up numbers and perform a few other mental gymnastics (like remembering the dial-up procedures at http://www.westfalen.de/) just to get an ISDN internet connection running again. Now the World Wide Web crawls along here at a rural 64 kbit/s via X.75 ... Welcome to the stone age.

New image gallery plugin - needs testers - WordPress plugin for images in posts

PyWork - Web framework based on Apache, mod_python, XSLT and ZPT

PS/2 and MCA History - History of IBM PS/2 Systems

Rebuttal to Ken Brown

Tanenbaum is cool

Here you can find the original article.

iSync is Rubbish

I just wanted to do a sync after several days, and iSync suddenly wants to copy all my appointments from the organizer to my Mac - without me having made a single change to any of these appointments on the organizer. That's absolutely ridiculous. And the message mentioning 132 new objects doesn't even tell you which objects those actually are or which device they're coming from - at first I suspected the sync was coming from my phone.

Why can't Apple get synchronization right and why do they pester us with this pathetic pile of junk?

Sure, it's nice that you can synchronize all kinds of devices with iSync and even sync multiple devices at once. But that's completely useless if synchronization regularly duplicates and scrambles your data.

QDB: Quote #330261

Oh man ...

Here's the original article.

The Spinning Cube of Potential Doom

Very interesting: a graphical visualization of security events that makes various port scan techniques visually recognizable.

Here's the original article.

E-MailRelay -- SMTP proxy and store-and-forward MTA - general purpose SMTP proxy with its own spool handling and the ability to integrate external filters

mtaproxy.py - Tarpit ("Teergrube") utility for SpamBayes - tarpit with integrated SpamBayes

RoughingIT - pyblosxom 1.0 Release

PyBlosxom (Blosxom in Python) has been available as a 1.0 release since May 25th - still new enough that it's worth writing about now.

Here you can find the original article.

Stopping spam with the Anti-Spam-SMTP-Proxy (ASSP) - SMTP proxy with Bayesian filtering, this one is without honeypot

Greylisting with Exim and PostgreSQL

Greylisting is a technique to reduce spam by temporarily rejecting emails from unknown senders. The mail server is then expected to retry sending the email after a short delay. Since most spam is sent by machines that don't retry, this is an effective way to filter out a large portion of spam.

I've implemented greylisting for my mail server using Exim and PostgreSQL. Here's how it works:

How it works

When an email arrives, Exim checks if the combination of sender, recipient, and sending server has been seen before. If not, the email is temporarily rejected with a "try again later" response. If the combination has been seen before and enough time has passed, the email is accepted and the database is updated.
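
To illustrate the logic, here's a minimal in-memory sketch - not my actual implementation; the 5-minute window and all names are made up:

    # Minimal greylisting sketch: defer a (sender, recipient, host) triple
    # until it is old enough, then let it pass.
    import time

    GREYLIST_DELAY = 300   # seconds a new triple has to wait (illustrative)
    seen = {}              # triple -> timestamp of first delivery attempt

    def may_pass(sender, recipient, host):
        """True = accept the mail, False = answer with a 4xx temporary error."""
        triple = (sender, recipient, host)
        first = seen.setdefault(triple, time.time())  # record unknown triples
        return time.time() - first >= GREYLIST_DELAY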

Implementation

The implementation uses a PostgreSQL database to store the greylisting information. A simple table stores the sender, recipient, sending server, and timestamp of the last attempt.
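
A table along these lines would do - an illustrative sketch, not the exact schema I use:

    -- Greylisting triples with timestamps (column names are illustrative)
    CREATE TABLE greylist (
        sender     TEXT      NOT NULL,
        recipient  TEXT      NOT NULL,
        relay_ip   INET      NOT NULL,
        first_seen TIMESTAMP NOT NULL DEFAULT now(),
        last_seen  TIMESTAMP NOT NULL DEFAULT now(),
        PRIMARY KEY (sender, recipient, relay_ip)
    );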

Configuration

To enable greylisting in Exim, you need to add an ACL rule that queries the database and decides whether to accept or reject the email. The rule should be placed in the DATA ACL.
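
A hedged sketch of what such an ACL entry can look like. It assumes a PostgreSQL function greylist() that records the triple and returns 1 while the mail should still be deferred - the function and its name are assumptions, not a drop-in config:

    # In acl_smtp_data: defer while the stored procedure says the triple
    # is too young. greylist() is a hypothetical PL/pgSQL helper.
    # (Requires pgsql_servers to be set in the main configuration.)
    defer
      message   = greylisted - please try again later
      condition = ${lookup pgsql{SELECT greylist( \
                    '${quote_pgsql:$sender_address}', \
                    '${quote_pgsql:$recipients}', \
                    '${quote_pgsql:$sender_host_address}')}}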

Results

Since I enabled greylisting on my mail server, the amount of spam has decreased significantly. Most spam senders never retry, so their mail is never delivered. Legitimate emails still get through, just with a slight delay for the first message from a new sender.

Tollef Fog Heen : Yahoo Breaking SMTP Standards

One of the reasons why I don't like greylisting. In short, greylisting works like this: when a server connects to another server to deliver mail, a triple is formed from the sending host, the recipient address, and the sender address, and the receiving server checks whether this combination is already known. If not, the combination is recorded and the current mail is refused with a temporary rejection. The theory is that real mail servers retry delivery, while spambots and virus distributors typically do not. So far, so good. The problems with this approach:

  • not every mail server responds correctly to temporary rejections. Example: Yahoo. And that's far from the only server that reacts this way.
  • even with temporary rejections, bounces often occur, which then cause mailing list hosts to unsubscribe you from lists.
  • a spammer only needs to attempt to deliver the spam twice in quick succession and it gets through. This is minimal effort for spambots; whether the user then gets one copy of the spam or two, they will get it.
  • greylisting only works if you have control over all MX servers for your own domain, otherwise spam simply comes in through the other mail servers on which greylisting is not running.
  • if all MXes use greylisting, delivery attempts of legitimate mail are slowed down, since these normally try the other MXes on temporary rejections and then also fail there. Depending on configuration, you then automatically end up in slower queues or longer waiting times on that server (because three delivery attempts have already failed at three MXes).
  • Whitelisting (which is mentioned as a solution for some problems) is itself a problem: spam from servers on the whitelist is not detected. But precisely some of the large distribution servers have to be added to the whitelist because they have exactly the problems mentioned (Yahoo is not only a source for many mailing lists, but also for a lot of spam).
  • Problems with greylisting are typically only noticed indirectly — since it is a largely transparent process and you can really only conclude that there are problems with greylisting from reactions by others.

All in all, greylisting only has an advantage temporarily: because it is rarely widespread, it is currently not taken into account by spambots. But taking it into account is trivial and would automatically happen with wider adoption. Thus greylisting is doomed to become ineffective if it spreads further.

Of course, many of the problems can be fixed. But ultimately, this is just as much an attempt to plug the holes in a sieve with paper as using rule-based spam filters against spam. Statistical spam filtering (Bayesian filter) is still the best available solution.
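
For comparison, the core of the statistical approach in a few lines - a toy sketch of Graham-style combining, assuming per-word spam probabilities that come from previously trained mail corpora:

    # Combine the spam probabilities of a message's most significant
    # words into one score (naive Bayesian combining a la Paul Graham).
    def spam_score(word_probs):
        prod_spam = prod_ham = 1.0
        for p in word_probs:
            prod_spam *= p
            prod_ham *= (1.0 - p)
        return prod_spam / (prod_spam + prod_ham)

    # Words seen mostly in spam push the score towards 1.0:
    print(spam_score([0.99, 0.90, 0.20]))  # ~0.996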

Here's the original article.

Gallery :: your photos on your website - Interesting software for photo albums on the web

Photo Organizer - Feature-rich web photo album with a rather stylish default look

SCO vs. Linux: Mission impossible

First SCO stands up and says there are millions of stolen lines of code. And that they can name them. Then they demand sources. They get them. They search through them for ages and find nothing. Hello? Why do they even have to search if the locations are supposedly known? And why don't they notice that JFS for Linux is based on the OS/2 JFS? That's even stated in the documentation - if they're searching the sources, why don't they read them at the same time? But probably that's exactly the problem: if you don't read text, you can search through it forever without ever finding anything.

At heise online news there's the original article.

Symantec Chief: Windows is not less secure than Linux

Sure, quite clearly. Windows is simply the easier target to hit, which is why it's not inherently less secure than Linux. And of course the security problems are due to attachment clickers - it's just funny that considerably more server attacks are possible against Windows, none of which have anything to do with attachments. And all this despite the fact that on servers, Linux and Apache are clearly the train rolling through the whole city, while IIS - alongside IE and Outlook, the security hole par excellence - only runs in the seedier suburbs ...

At heise online news there's the original article.

EditThisPagePHP - Edit pages online in PHP - Alternative for situations where a real CMS is too large and a wiki or weblog is too rigid in structure

SCO vs. Linux: Investor Baystar exits

Final beginning of the preliminary end?

At heise online news there's the original article.

Vellum: a weblogging system in Python - Nice little weblogging system in Python

drbs - Distributed Replicated Blob Server - Server system modeled after Google File System

paramiko: ssh2 protocol for python - SSH2 protocol implementation in Python

PYSH: A Python Shell - Shell that uses Python as a shell language

SoftPear - PC/Mac Interoperability

Wow - now they've got a recompiler for machine code in there too. That sounds increasingly interesting - a recompiler is the most important step for usable performance for such systems.

Here's the original article.

Sun Insists that Red Hat Linux is Proprietary

Just to show that the IT world has more crazy people than just the SCO boss. The Sun boss's loss of touch with reality is also quite remarkable.

Here's the original article.

The Contiki Operating System - System for computers with limited memory

About Debian ...

Well. What do I expect from a distribution? And why do I use Debian in particular - and have for years? It's probably these different expectations that explain why I'm so satisfied with Debian.

A distribution must realize the base system for me - this must be stable (which is why I almost always use Debian Stable), but should be easily expandable (which is why I use backports from Unstable or Testing at selected points).

The distribution must make updating the base system simple - a base system consists of a bunch of components, each of which can have vulnerabilities. I have no desire to deal with these potential holes - that's the job of the distribution. Debian makes this almost trivial through apt. I want to be able to see what an upgrade means - so I can decide whether to do it or not. Debian provides the tools for this (e.g., automatic display of changelogs and critical bugs before installing a package).

The distribution must allow me, at defined points and with simple means, to break out of the normal distribution. Every binary distribution has the same problem: package maintainers decide how programs should be configured. This often works well - occasionally it goes extremely wrong. Therefore, a binary distribution must allow me to compile the packages myself if necessary. With Debian, the build structure for packages is very simple. Adapting packages, backporting packages from Unstable or Testing (to get newer versions than in Stable - see the sketch below), and creating your own packages is easy. I'm not forced into the Stable corset - but I can still stay on Stable for the base system to take advantage of Debian's good security infrastructure. The fact that it's additionally trivial to distribute your own packages to many machines by setting up your own package repository and including it alongside the standard repositories is not just nice to have - it's essential with a sufficiently large number of machines.

A distribution must have functioning package dependencies and actually use them. Consistently. I have no desire to start a program and then get strange error messages just because some libraries or other tools are missing. Sure, other distributions also have dependencies - but sometimes they're optional or only used very shallowly. Debian is consistent and goes very deep - everything is built on dependencies. This means you can be relatively sure that dependencies are met when you install a package normally. If not, that's a clear bug and can be reported via the bug tracking system - and will be fixed. Dependencies are not nice to have, they're essential. Period. Of course, a distribution must also allow breaking out of the dependency corset. Debian has several nice utilities for this that let you resolve dependencies by hand - e.g., pseudo-packages that simply declare a particular package as installed; the software in question may well have been installed manually.

A distribution must know what config files are. That means it must under no circumstances trample on my config files. If a distribution overwrites my configs on update and I get comments like "make backups of them first", the distribution is out. Sorry, but I have absolutely no tolerance for that. A distribution may only change a configuration under clearly defined circumstances. And no, I have little sympathy for Debian's debconf either - if a package upgrade shreds my configs, it rains bug reports. Config files belong to me, not to the distribution. Period.

A distribution should damn well not try to solve all the world's problems. And it especially should not try to be smarter than the original authors of a package. If a program has its own structure of config files, then it should at least optionally be usable without problems under the distribution. And that also means the distribution doesn't trample on it just because it thinks it has a better tool for the job. Besides, all configuration tools stink to high heaven.
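
As an aside, a minimal sketch of what such a backport from Unstable looks like in practice - assuming deb-src lines for unstable in sources.list; the package name is a placeholder:

    # Fetch the source of a newer version and rebuild it on Stable:
    apt-get build-dep somepackage          # install the build dependencies
    apt-get source somepackage/unstable    # fetch the unstable source package
    cd somepackage-*/
    dpkg-buildpackage -us -uc              # build unsigned binary packages
    dpkg -i ../somepackage_*.deb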

What I'm not particularly keen on: always having the very latest packages. Sorry folks, but that's the stupid update-itis that's spreading from the Windows world. Always having to have the latest thing. Such nonsense. Apache 1.3 does its job well; you don't even need the latest 1.3 - as long as security patches have been backported. And that's exactly what Debian does. Security patches for Stable don't silently update to a new version with new, unknown problems. Instead, the patch is - wherever possible - backported to the old version and made available that way. Security updates should only in absolutely exceptional cases require configuration changes from the admin or alter system components - that leads to potential problems. I want a smoothly running system before and after the update!

I'm also not particularly keen on nice graphical or text-based configuration or administration tools. Sorry, but the ideal tool for these purposes is called vim and the perfect data format is text files. And yes, I can't particularly stand debconf - fortunately you can simply work around it where it's annoying - and Debian keeps its hands off the existing standard configurations, even if a package normally uses debconf. If not, that's a bug.

But I do expect a certain transparency from a distribution in what it does. I don't like one-man shows that you can't see into - where one person, or perhaps a few, autocratically decides what's good or right. I want to be able to look into everything - because the process of building the distribution can also have bugs that are essential for me. Therefore, I'm also not keen on a company building a distribution. Sorry, but sooner or later come the nice profit-maximization strategies à la Red Hat Enterprise or comparable Suse approaches. If a distribution changes the standard mailer, I want to see the discussion about why it was changed - with the pro and con arguments. I want to be able to understand why something develops the way it does. I want to be warned in advance. Of course, I'm not interested in this for every package - but for the essential ones that matter to me, I want this information. Transparency is important - it starts with transparent bug tracking and ends with a transparent project structure. If I had no interest in transparency, I could just as well install Solaris. Or Windows.

What I have no problem with: a learning curve in using the system. System administration is a job. A job requires learning. Anyone not willing to learn should stay away from the job. Arguments like "I first have to understand how the system works" don't count. There is plenty of documentation and there are good books on Debian as a starting point. Read. Learn. That's just part of it. No colorful tools and no grandiose promises from manufacturers about the easiest-to-install Linux distribution help either - it's all bullshit. When push comes to shove, you have to master the system from the kernel down to the dotfile. And you have to learn that anyway, no matter what the system is called.

Learning a distribution and how it works is an investment for years. Therefore, I don't want to see my investment go down the drain just because the system was suddenly rebuilt because it appeals to the manufacturer or because it's cooler or because it sells better or because another buzzword is fulfilled. Distributions need evolution, not revolution.

Debian is not the perfect Linux distribution - no such thing exists. But Debian is damn close.

At Die wunderbare Welt von Isotopp you can find the original article.

Ingres Database Becomes Open Source

Another tuned dinosaur returns to its ancestral world.

At heise online news you can find the original article.

Little Snitch - Reverse Firewall for Mac OS X - take a look when I have time

The Worst of All Susens

Every time I read upgrade stories like this, I wonder what the actual advantage of Suse over Debian is supposed to be. What good is a distribution that looks nice and colorful during installation but can't be upgraded properly? And don't tell me this is an isolated case with Suse 9.1 - I've read similar horror stories about pretty much every Suse upgrade.

At Die wunderbare Welt von Isotopp you can find the original article.

Rubicode - RCDefaultApp

Very handy: setting the various default handlers for various file types, URL types, MIME types, etc. Exactly the panel that Apple left out of System Preferences...

The original article is here.

WordPress 1.2

The final release is out. However, trackbacks still don't work quite right - at least not when the target is a topic at TopicExchange.

At WordPress WordBlog there's the original article.

WordPress Tinkering

Since I'm currently playing around with blog utilities and CMSes, my current WordPress installation has already gotten some content and layout improvements. Of all the alternatives for small sites, I still like it the best. For Drupal (my current favorite for larger sites), I might also find a use case.

Do I have too many domains and sites? Oh well.

Update: since I'm now running this blog with WordPress and the other one had become outdated, I simply shut it down. One less site to maintain...

drupal.org

I'm currently playing around with Drupal a bit. First impression: wow! Extremely powerful, extremely many features. Though possibly too many features. But what I like right away is the very clean interface with a quite logical menu structure, and how all extensions automatically hook into these menus.

I also like the solution with templates and themes: themes can be divided into templates or stylesheets. This allows you to change the general system, but also to just choose variants of a system. The default theme is table-based, but there's another CSS-based one to choose from. I can't really say yet how XHTML compatibility looks. Also good is the support for MySQL and PostgreSQL - I normally prefer the latter.

You can make weblogs with it, as well as static articles, entire books, stories with discussion forums similar to Slashdot or Kuro5hin, and much more. However, what stands out right away is that the tools in the individual content areas are somewhat sparse - tools that specifically target weblogs often seem more complete. Specifically, things like Trackback, Pingback, update pings and the like have to be installed afterwards or at least reconfigured - by default it only pings drupal.org itself for the distributed login mechanism. Even such elementary things as simple categories for blog entries require some searching (more complex categories - even hierarchical ones - do exist, but elsewhere). RSS feeds are created automatically, but on some pages (for example the homepage) they first have to be linked (in user blogs the link is automatic, though). Otherwise they are only included as alternate links, not necessarily visible to users. Overall, the whole system clearly aims at designing and building entire websites with entire groups of users.

The distributed login mechanism, however, is really cool: users from participating systems can log into other participating systems with user@host, and the login is automatically passed through to the home system. Login with always the same password, but with distributed authorization. Very nice! Overall, a lot of value is placed on user management - it almost has Zope dimensions with its permission groups and the ability to create symbolic permission groups for individual activities.

Less cool is the sparse metadata. There's actually hardly any metadata on content: author, date, status - but that's more or less it (besides title and text, of course, which are self-evident). Content organization is also left to the user - though there are helper tools that make creating navigation easier. However, many metadata topics (such as categories) can apparently be solved using taxonomies - these are groupings of content. The description of this is somewhat unintuitive, and the topic is quite complex. Taxonomies are groupings of keywords on a topic. So I don't assign posts to categories; rather, I assign keywords to posts and then organize the keywords into categories. While this provides mountains of metadata, it's far more complex than the normal blog categories you're used to.
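
To make that indirection concrete, a toy sketch of the data model as I understand it - all names are made up, this is not Drupal's actual schema:

    # Drupal-style taxonomy (hypothetical names, not the real schema):
    # posts carry keywords ("terms"), and the terms - not the posts -
    # are organized into categories ("vocabularies").
    post_terms = {
        "greylisting-article": ["exim", "postgresql", "spam"],
    }
    vocabularies = {
        "mail": ["exim", "spam"],
        "databases": ["postgresql"],
    }

    # A post appears in every category that shares one of its terms:
    def categories_of(post):
        terms = set(post_terms[post])
        return [v for v, ts in vocabularies.items() if terms & set(ts)]

    print(categories_of("greylisting-article"))  # ['mail', 'databases']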

Great again are all the content status and content versioning functionalities. All changes are logged. All changes to content are versioned. You can go back to older content and thus, for example, fix errors (or remove garbage from rogue users).

The whole system is extensible, but I suspect (haven't checked it yet, but given the range of functionality it's a likely guess) that creating plugins and filters is more involved than with small solutions like WordPress. But that's in the nature of things.

Another potential disadvantage is the unavailability of a ready-made German translation. While there are other sites working with Drupal in German, apparently no one releases the complete translation tables for download - at least I haven't found anything, neither at drupal.org itself nor on Google.

Where would I classify Drupal? Clearly in the CMS category - that's where systems like Typo3, Mambo Open Source, Plone and similar systems shine. However, it beats discussion-oriented CMSs like Scoop or Squishdot by a mile - as well as simple blog CMSs. For a simple blog system it's clearly overkill. For a complete site it seems very usable.

Here's the original article.