sysadmin - 7.10.2003 - 6.1.2004

Mambo Open Source

Mambo Open Source advertises itself with bold claims:

  • Mambo Open Source is the finest open source Web Content Management System available today.
  • The easiest to use CMS there is.

Does it live up to these self-imposed expectations? I installed it and tested it a bit. Here are my first impressions:

  • Installation via the web installer is very easy - however, you won't find any instructions on the homepage about how to do it. You have to switch to the documentation server for that.
  • Mambo supports various languages. The basic package only contains English though. If you want additional languages, you search the homepage in vain for hints or links. Only Google unearths the Mamboportal, where you can finally find what you're looking for.
  • There is plenty of documentation for Mambo. It's worth taking a look at beforehand. Unfortunately, there isn't a single link to the documentation from the homepage - it's on a different server.
  • The standard layout is surprisingly good for a standard template. However, it has some nasty display errors with Mozilla (text wrapping issues, graphics overlapping). Especially from an open source project, I would expect it to at least properly support the open source browser par excellence (Mozilla)!
  • Mambo has WYSIWYG HTML editing fields. Unfortunately, in a variant that only works with IE. I'd like someone to explain to me why a Mozilla-compatible variant isn't used instead (especially since they exist)!
  • Search engine-friendly links (i.e., those without query strings) are optionally supported, which is good. However, they're not enabled by default, and modules often emit normal links instead, which naturally won't be converted. The links are hardcoded rather than generated through a central function, so the feature only works if the template or module author remembers to add the function calls themselves. However, this feature is still very new, so it will probably spread.
  • Mambo strictly requires MySQL. I always find it a shame when software locks itself into a single database. For PHP there is PEAR for database access, so you don't have to restrict yourself like that.
  • Mambo is incompatible with PHP's Safe Mode! Completely incompatible: if Safe Mode is active, you can't install components. Error messages point to failures in mkdir.
  • Mambo basically needs write permissions on its entire tree. So you either have to chown everything to www-data or make it world-writable. Combined with the missing Safe Mode support, this means secure operation is only possible in a chroot jail!
  • Additional modules (I looked at, for example, yopsFM and Mamblog) are of widely varying quality, especially regarding layout integration and consideration of search engine-friendly links.
  • The administration interface sometimes uses confusing terminology - what is a module, what is a component?
  • Mambo is a plug-and-play-possible package. After installation, you have a finished system with a finished layout to play with. This can be a decisive advantage!
  • There are lots of ready-made modules that you can pull in when building a new site.

Generally speaking: Mambo is an absolutely impressive tool, there's no question about that. Of all the PHP-based CMSes I've looked at so far, it's definitely the most convincing (Typo3, for example, was much more confusing and harder to get into - and I have CMS experience with Zope and various custom developments). So this isn't about dissing Mambo: anyone looking for a powerful CMS, already using PHP+MySQL as a base and comfortable with it, and willing to read the documentation, will definitely be well served by Mambo.

What is absolutely bullshit, however, is the hype that the Mambo programmers are creating around their product. It's only as simple as they claim if you just edit content and don't do anything more advanced with it. Otherwise, you'll spend time at Mambo too digging through the documentation (at least it exists!) and, if necessary, reading source code. To be fair, the same can be said of Plone: there's a lot of hype there that unfortunately also misses reality. Put your code where your mouth is!

When you look at Mambo's source code, that's where the crux of the matter lies: a PHP guru will certainly be able to dig through the PHP sources. But modules with thousands of lines of source code aren't everyone's cup of tea - searching for bugs and features becomes quite laborious. I don't see a major advantage of Mambo over Zope or Plone here.

Mambo is not small! You always have to keep that in mind - getting into Mambo is similar to getting into Zope: mountains of source code, but well-structured extensibility. A clear gain compared to Zope is Mambo's plug-and-play approach; the first working page comes together much faster. In principle, you shouldn't really compare Mambo with Zope, but rather with Plone, since Mambo - like Plone - already offers completely finished content tools.

Another advantage is Mambo's rather modest server requirements: Linux+Apache+MySQL+PHP = LAMP. You can get that on any street corner. Zope hosting is harder to find (or you get a root server, then you won't have any problems with Zope or Plone).

Here is the original article.

MamboOS Documentation : Home Page - Documentation server for Mambo Open Source

Mamboportal.com - Mambo Open Source CMS Portal - Mambo Open Source Modules and Languages

MOS - Homepage of Mambo - Open Source CMS

mt-daapd - Home Page

Nice: an MP3 streaming server that is Rendezvous-compatible and cooperates with iTunes. You can use it to build a central jukebox that can feed multiple computers. Ideal for training rooms and computer pools at universities.

Here's the original article.

Webgres - Web interface for PostgreSQL

CVS Module for Apache - An Apache module that serves files from a CVS repository and performs a checkout when needed

mailman-discard home - Batch processing of Mailman administrative requests

THE BASTARD OPERATOR FROM HELL OFFICIAL ARCHIVE - The role model of sysadmins

LinuxWorld | Linux's other file sharing software

Nothing particularly special has happened: Red Hat has bought another company, this time Sistina. Sistina is interesting because they have been driving the commercial development of GFS - a cluster filesystem for Linux. OpenGFS has existed for a while, but GFS has more features and above all can work with more base technologies (e.g. via network block devices or iSCSI).

Now another company, Proserve, writes that their product MatrixServer is much better, that Sistina would need two more years to bring their product up to that level, and of course that their own product is better suited for critical services. Oddly enough, their product is, naturally, commercial software.

Where's the logical flaw? Quite simple: OpenGFS already exists and is maintained by more people than just those from Sistina. The features of GFS that were previously reserved for the commercial version will find their way into the free version, provided they are useful. OpenGFS will continue to develop, not necessarily GFS - Proserve has picked the wrong opponent. Proserve will have to think carefully about what to do - mere noise won't be enough on its own. It may well be that their product is better - but the question is whether it still will be in a year, or in two years. Open source develops on the basis of needs, not on the basis of marketing features - and development can happen damn fast.

Of course, there can be a disaster like with Mozilla or OpenOffice, where almost only the original developers from the companies work on the projects, and free development only proceeds very hesitantly (Mozilla is slowly getting better, but who knows OpenOffice hackers?). But given the need for cluster filesystems without a single point of failure, I don't think that's the case here.

Here's the original article.

Linux Magazine - ssh

An older article in Linux Magazine about the use of ssh. It contains a whole series of tips and further links. Among other things, it also points to nosh (Source and Debian Binary), a shell that very restrictively allows defining which commands users can execute. Since the article contains old links, the current ones are provided above. I developed nosh at some point from osh (I can't find a homepage for it) because I needed a shell that really only allowed users to do what was absolutely necessary. In critical environments, this is often much simpler than blocking inappropriate areas through Unix permissions (or alternatively building chroot jails). Here is the original article.
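The idea behind such a restricted shell is simple enough to sketch: accept a command line and execute it only if the command is on an explicit allow-list. A minimal Python sketch of the concept (the allow-list here is hypothetical; the real nosh has its own configuration format and stricter rules):

```python
"""A minimal restricted shell in the spirit of nosh/osh: a sketch,
not the real nosh. Only commands on an explicit allow-list run."""
import shlex
import subprocess

# hypothetical allow-list; the real nosh reads its own configuration
ALLOWED = {"ls", "date", "whoami"}

def run_line(line):
    """Parse one command line and run it only if permitted."""
    try:
        argv = shlex.split(line)
    except ValueError:
        return "parse error"
    if not argv:
        return ""
    if argv[0] not in ALLOWED:
        return f"{argv[0]}: not permitted"
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

A real login shell would also loop over stdin and restrict arguments, not just command names - this only illustrates the allow-list principle.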

freshmeat.net: Project details for PostgreSQL Log Analyzer - Log file analysis including statement statistics for PostgreSQL

NdisWrapper - Use NDIS network drivers under Linux

mDNkit installation guide - Patches for various programs to prepare them for multilingual domains

RFC 3492 - Punycode - RFC for representing Unicode strings in domain names

[Inkjet-list] HP Inkjet Linux Driver 1.5 Release - Announcement of the HP inkjet printer driver for Linux

GROKLAW on SCO's alleged victim role in a DDOS attack

SCO claims that its web server was once again crippled by a DDOS (distributed denial of service) attack. Two security experts express their opinion on Groklaw and provide brief analyses on why this claim from SCO should rather be viewed with skepticism.

Here's the original article.

Confusion about Microsoft patches

Confusion, anarchy, chaos!

Well, whether Microsoft releases patches or not doesn't really matter anyway - the security holes come faster than the patches anyway...

At heise online news you can find the original article.

GROKLAW explains exactly what SCO must submit to IBM

According to the schedule, SCO has to provide a large amount of information, in particular exactly where the alleged patent violations are located (with file and line number specifications), as well as stating why SCO believes a patent violation exists, whether and if so who else had access, and some other information surrounding this whole matter. I'm really curious to see what SCO can actually submit to the court.

IBM, by the way, doesn't have to submit anything until SCO has completed their part, and that's also part of the judge's ruling.

Here you can find the original article.

Security? What security?

Ouch. Yes, it really still works:

  simon:/usr/local/sbin# traceroute bell.ca
  traceroute: Warning: bell.ca has multiple addresses; using 204.101.196.36
  traceroute to bell.ca (204.101.196.36), 30 hops max, 38 byte packets
   1  HOSGate.your-server.de (213.133.111.1)
   2  et-2-1.RS86001.RZ3.hetzner.de (213.133.96.121)
   3  gi-2-2.RS8K1.RZ2.hetzner.de (213.133.96.57)
   4  nbg.de.lambdanet.net (213.133.96.234)
   5  F-2-eth100-0.de.lambdanet.net (217.71.105.13)
   6  PZU-1-pos100.fr.lambdanet.net (217.71.96.34)
   7  LDCH-1-ge000.fr.lambdanet.net (217.71.96.86)
   8  109.ge1-0.er1a.cdg2.fr.above.net (62.4.77.225)
   9  pos0-3.cr1.cdg2.fr.above.net (208.184.231.206)
  10  so-5-1-0.cr1.lhr3.uk.above.net (64.125.31.129)
  11  so-0-0-0.cr2.lhr3.uk.above.net (208.184.231.146)
  12  so-7-0-0.cr2.lga1.us.above.net (64.125.31.182)
  13  pos12-0.pr1.lga1.us.above.net (64.125.30.190)
  14  bellnexxia-mfn-oc12.pr1.lga1.us.mfnx.net (64.125.12.34)
  15  bells-network-has-lots-of-security-holes-to-exploit.bell-nexxia. (206.108.103.197)
  16  bells-network-has-lots-of-security-holes-to-exploit.bell-nexxia. (206.108.103.213)
  17  64.230.243.217 (64.230.243.217)
  18  bells-network-has-lots-of-security-holes-to-exploit.bell-nexxia. (206.108.97.206)
  19  bells-network-has-lots-of-security-holes-to-exploit.bell-nexxia. (206.108.105.138)

It's quite embarrassing when someone points out a security problem to you in this way.

Devil's grin

The original article can be found at kasia in a nutshell here.

Software Update is being updated

It would actually make quite a lot of sense. It's been annoying me for a while that I have to check for various software updates separately. From Debian I'm used to only needing one update service. On OS X that's a bit cumbersome sometimes ...

At Industrial Technology & Witchcraft you can find the original article.

Bug in Linux kernel enabled break-in to Debian server

Although an unpleasant problem, it's good that it was discovered, because now it can be fixed. Time for another round of kernel updates.

At heise online news you can find the original article.

Apache: mod_auth_remote

Nice module - instead of checking permissions for a URL locally on Apache, a second request is sent to another server and its verdict is used. This way Apache can tie the authorization for static paths to the authorization system of, e.g., a Zope. Previously there was often the problem that you couldn't move static content to an Apache (to use its performance) if that content was supposed to be under Zope's permission management. With mod_auth_remote this now works, at least if you use Apache 2.
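The principle is easy to mimic outside Apache, too: forward the credentials of an incoming request to a second server and reuse its verdict. A small Python sketch of the idea (the "2xx means allow" convention and the function name are assumptions for illustration, not mod_auth_remote's actual wire protocol):

```python
"""Sketch of remote authentication: delegate the credential check
to another HTTP server and reuse its answer."""
import base64
from urllib import error, request

def remote_auth(auth_url, username, password):
    """Forward credentials as Basic Auth to auth_url; any 2xx
    response counts as 'allow', an HTTP error as 'deny'."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = request.Request(auth_url,
                          headers={"Authorization": f"Basic {token}"})
    try:
        with request.urlopen(req) as resp:
            return 200 <= resp.status < 300
    except error.HTTPError:
        return False
```

The appeal of the design is that the static-file server never needs to know the user database - it only needs network access to the server that does.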

I found the original article at Channel 'python'.

Bill Kearney: MacOS doesn't cut it in the Enterprise

Oh yes, here we go again, another one who thinks he knows what he's talking about. Well, if he had specified anywhere in his tirade against Apple networking which part of networking he actually means, then maybe his comments would be worth considering - but like this? Does he mean AppleTalk? AppleTalk/IP? Samba? NFS? Or one of the many alternative protocols that the Apple Finder can mount directly, like FTP or WebDAV? But it's much easier to make sweeping statements against Apple; you're in good company with the so-called analysts. And if he really said stuff like that 10 years ago, he was talking just as much nonsense then as he is today.

And where does he see the solution? Of course in Windows protocols. The poor sap: with Longhorn, Microsoft will give him another kick in the ass and he gets to start all over again. But surely he'll still claim that his approach was the right one. Anyway, my life is too precious to waste on such nonsense.

Devil's grin

Here's the original article.

Personal Firewall causes DNS disruption

I find it repeatedly shocking how stupid the programmers working on supposedly security-oriented software can be. Something like this is an absolute beginner's mistake! And such software is supposed to protect users from attacks from the Internet...

At heise online news there's the original article.

Security Hole in Movable Type

Ouch. Big hole in Movable Type: the email addresses entered for entry-notification emails are not validated. This gives attackers an opening to abuse the feature - for example, according to this post, spammers have used the hole to send spam. A patch is also provided there that adds validation, so that spammers can no longer easily abuse MT.

So, people, patch your Movable Type! Or better yet: get rid of the script!

Here's the original article.

Internet Explorer vulnerable again

It would be easier to report only on the occasions when it is supposedly not vulnerable...

At heise online news there is the original article.

Criticism of Web Server Statistics

Oh yes, he who pays the piper calls the tune ...

At heise online news you can find the original article.

McBride intimates code cleanup in Linux nigh impossible

And he keeps spinning. If Linux 2.2 - and this is now a statement from SCO itself - really had no problems, then Linux could simply fall back to that level and continue from there. The Linux 2.2 kernel was perfectly functional and usable. So even if he were right (which would be absurd and silly - so far he's done nothing but spout hot air), it certainly wouldn't be the catastrophe he's talking about.

Apart from the fact that he still hasn't grasped that Linux is the kernel and not the system - switching the kernel is really the least of all problems.

And also charming is the claim that suddenly there are millions of lines of code that SCO is now objecting to - if there are that many, why can't he produce even a single example so far that holds up to more than 10 minutes of analysis?

The guy is really amusing. All of McBride's fuss and bluster bears a certain resemblance to the former Iraqi information minister.

At XMLMania.com - Google News Search: SCO I found the original article.

Heise News Ticker: PostgreSQL released in version 7.4

Nice. All the important points I needed have been improved: replication is in the standard source, performance is better, and full-text indexing is usable. Now someone just needs to create Debian packages that I can backport (our servers are all still running Woody), and I can finally tackle a few problems that have been bugging us for a while.

Here's the original article.

InfoWorld: Microsoft prepares security assault on Linux: November 11, 2003: By Kieren McCarthy, ...

The loudmouth from Redmond should probably smoke less of that stuff that gets him so riled up, then he might see reality a bit more clearly too

Devil's grin

Here you can find the original article.

Security Focus has RSS Feeds

Just found at ScriptingNews: quite practical for administrators, the new feeds from Security Focus:

Test of the Influence of Google Spamming on Blog Hosting Users

Well, I have to chime in too. Of course Dirk is right when he points out a weakness in the Google system. Of course he has the free choice what he does with his blog platform - even making it available to Google spammers. But if he does that, he has to deal with the resulting backlash. And it comes.

What is Google spamming about? Not everyone may be clear on what's behind it. So here's an explanation of some phenomena related to it.

Ultimately, so-called search optimizers bet on the fact that the advertised websites are linked in many places. Through linking, the ranking in Google rises. The more links, the higher. The higher the ranking, the higher the advertised website appears in Google's result list. And that's exactly what these people sell: improvement of the position in Google search results.

Search optimizers sometimes do this themselves by finding link partners for the site to be promoted. That's the positive method. It requires work, but that's what they're paid for. The result is usually small link networks between companies with complementary products - in principle a real-life form of "customers who like this product also like that product," as you know it from Amazon.

But there are also other methods that are far from positive. These rely on external Google-juice (that's the jokingly used term for the base ranking of a website). The reason: if a website has a high Google ranking through linking, it can pass some of that on to pages it links to. A link from a site with high ranking is rated better than a link from a site with low ranking. Google spamming targets exactly this.
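The mechanism described here - rank flowing along links, with a link from a high-ranked page worth more than one from a low-ranked page - is essentially PageRank. A toy version in Python (a simplified sketch for illustration, not Google's actual algorithm; pages without outgoing links simply leak rank in this version):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank over a {page: [linked pages]} graph.
    Each iteration, every page passes a damped share of its
    rank along its outgoing links."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # base rank everyone keeps, regardless of inbound links
        new = {p: (1 - damping) / len(pages) for p in pages}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank
```

Running it on a tiny graph shows both effects from the text: a page that many others link to accumulates rank, and a page linked from that hub inherits more "juice" than one linked from an obscure page.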

Some search optimizers operate link farms - these are simply lots of domains and web servers without real meaningful content. Usually the pages are simply filled with words according to usage frequency (so search engines can find something). These link farms form a closed circle of sites that now has to become large enough for Google to use it for ranking. That gives the optimizer a basic ranking. Advertised sites are now linked from these link farms and pushed in ranking. Who operates like this, for example, is the Scientology Church. The disadvantage for the optimizer: it's some work and the costs are there for the many sites in the link farm. But it can be well automated. However, Google can also easily recognize it and ban it from the index!

The second approach to using external Google ranking is simple ranking parasitism. This can happen in two ways. The better known is comment spamming. Many suffer from it.

The reason for comment spamming is simply that the weblog scene, through its index pages, central services, and mutual linking, has a very high Google ranking. This means that links from weblogs to websites are valued highly by Google - and the corresponding links appear multiple times through page replication and content syndication (for example, the excerpts and links at blogg.de are syndication).

Through weblogs.com, the various German indices, the main pages of weblog communities such as Antville servers, all the regional services, GeoURL, open syndicators (the Phantom4 thingies, for example), and who knows what else is crawling around out there, weblogs gain a high ranking extremely fast. Many central pages link only to weblogs, are classified by Google as link hubs because of their many links, and the linked pages inherit some of the central page's Google-juice. Additionally, some central pages replicate the title links and sometimes the content of weblogs. On top of that, weblogs link to each other very strongly and thus push each other's Google-juice up.

Comment spamming relies on the fact that Google indexes the comments on many weblogs - this is especially true for weblog systems where the comments are integrated into the layout, as Movable Type does, or, for another example, the Schockwellenreiter's blog. My own blog is less of a target for comment spammers because my comments are external to the website - and are excluded from Google indexing by a robots.txt (you can also just disable traversal and leave indexing active; that should be enough). Comments on weblogs with lots of Google-juice thus pay off for the target websites.
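For reference, the exclusion mentioned above takes only a couple of lines in robots.txt (the path is hypothetical; adjust it to wherever the comment script actually lives):

```text
# keep well-behaved crawlers out of the external comment script,
# so spammed comment links never earn any ranking
User-agent: *
Disallow: /comments/
```

This only deters crawlers that honor the Robots Exclusion Protocol, but Google does, which is all that matters for the ranking question.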

Here people build on foreign Google-juice without providing their own performance (content) and therefore it is parasitism.

The same applies to what happens at blogger.de. Huh? Why are they parasiting? They have their own blogs, don't they? Yes. But these pseudo-blogs sit on a community server. The blogger.de server - if it's actively used for blogs - will build up quite a bit of Google-juice, just as with all Antville systems. The reason is the integration into the main index. All that's missing is a correspondingly high number of weblogs with real content, preferably with heavy internal linking as seen on other Antville servers (many ants link to other ants), reflected through the main page - that produces plenty of Google-juice. Through the bloggers.

A ranking optimizer can use this well by occasionally posting on their own blogs that point to websites to be promoted. Once the blogger.de main page has good Google-juice through the bloggers themselves, it becomes interesting for them. Ultimately they're also just parasiting from the webloggers here, though not quite as directly as with comment spam - here they're more parasiting from the community than from the individual blogger.

Everyone has to decide for themselves how they handle this and what they think of such methods, but I'm of the opinion that normal, editorially maintained blogs would bring just as much for the companies to be promoted - but they would also give back content for the bloggers. Of course that's work - but it would be an honest form of search engine optimization that would also benefit from external Google-juice, but at least would integrate itself into the system with content.

The current ranking blogs on blogger.de with their nonsense content are a pure parasitic solution that I would strictly reject on hosts I operate, should they ever appear.

At Nochn Blogg. you'll find the original article.

Heise News-Ticker: DNS confusion due to new country code

Reusing historical abbreviations should be avoided, writes the IAB, or only be considered again after 200 years. Right. 200 years. Sure.

Here's the original article.

Soon the directory will be full

Oh yes, another well-researched article from Spiegel Online. The article's headline still prominently announces: in 2005, the numerical address book is full - but nowhere in the article is there evidence for this absurd claim. Nowhere is there mention that, thanks to the return of some large (Class A) networks, more addresses are now available than was previously expected for that time. Nowhere is there mention that further Class A networks exist as reserves that could also be drawn upon. Nowhere is there mention that CIDR (Classless Inter-Domain Routing), NAT (Network Address Translation), and dynamic address assignment for dial-ups have significantly eased the problem of limited address space. And nowhere is there mention of whose numerical address range is actually supposed to be full, or who has claimed this based on what data.

For those who prefer facts and numerical models as the basis for such claims, I recommend the very good article at ISP Column - it presents various models of how the address space is being consumed. Depending on the model, the point of exhaustion lies between 2019 and 2029 - still quite some time, time that is being used to establish IPv6. Well, dear Spiegel Online journalists: the Internet is full. Just go somewhere else.

Devil's grin

At Spiegel Online: Netzwelt there is the original article.

Internet Explorer endangers Windows

Wow. That's serious. All patches installed and still holes without end. People, finally use a proper browser (or better yet, a proper operating system). At heise online news you can find the original article.

Unsigned Java Applets Break Out of Sandbox [Update]

Wow, that's serious. We've seen sandbox breakouts from time to time, but the fact that an unsigned Java applet can access the floppy drive is definitely a sign of insecurity that can reach critical proportions. That's quite a heavy blow to Java's security. But ultimately, it's not surprising: even though the virtual machine specification assumes the sandbox is secure, there are always implementations behind it that can have errors at the Java level or even at the actual machine level (in the implementation of the virtual machine itself).

And the fact that the computer had to be rebooted after the applet ran, and that access to physical media was possible at all, suggests a deep-rooted implementation problem here.

Technologies don't simply become secure through specification and claims...

You can find the original article at heise online news here: the original article.

Online Backup for Small and Medium-Sized Enterprises

Looking at the key figures of the offering, one does indeed look rather bewildered: the price of EUR 11.90 per month includes only 500 MB. An (expensive) CD blank costs EUR 1 and holds 700 MB.

Apart from that, the costs are quite hefty when you consider typical disk usage patterns (the usual hunter-gatherer behavior of a typical user). Let me take my own notebook's hard drive as an example: 30 GB in use. If I subtract the operating system and installed applications, a good 20 GB remains. Of that, another 8 GB is music (all originals, so no comments here!), which I can also subtract - that still leaves 12 GB. Of that, another 2 GB or so is accumulated downloads that don't necessarily need to be backed up. That leaves 10 GB that I can't quite categorize, so sorting through them would be more work than I'm comfortable with. A lot.

But I can't afford to pay for 10 GB per month at T-Com's prices: that's EUR 11.90 for the first 500 MB and then EUR 5.80 for every additional half GB, so a total of EUR 122.10 per month for the storage. Plus I still have to pay EUR 200 for the initial upload of all the data - and if I don't have a flat rate, I also pay the internet costs on top.

If I back up these 10 GB to multiple DVDs, I need 2-3 media (if you organize it yourself it's usually not optimal, so one medium more) - for a total of around EUR 10 for the media at expensive prices.

And the backup won't run any faster than the rather thin upstream channels of typical DSL connections allow. 128 KBit/s is just under 1 MB per minute, or about 56 MB per hour - so a good 186 hours for the 10 GB over the line, if it's free and unoccupied and no disruptions occur.
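Figures like these are easy to sanity-check with a few lines (assumed conversions: 1 kbit = 1000 bit, 1 GB = 1024³ bytes):

```python
def upload_hours(data_gb, upstream_kbit_s):
    """Hours needed to push data_gb gigabytes through an
    upstream of upstream_kbit_s kilobits per second."""
    bytes_total = data_gb * 1024**3
    bytes_per_second = upstream_kbit_s * 1000 / 8  # kbit/s -> bytes/s
    return bytes_total / bytes_per_second / 3600

# 10 GB over a 128 kbit/s DSL upstream
print(round(upload_hours(10, 128)))  # → 186
```

That is over a week of continuous uploading at full line speed - any real-world overhead or interruption only makes it worse.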

And the argument about the lack of qualified staff for backup: if you want to back up the data with T-Com's solution over DSL, and you don't want the costs to eat you alive, and the whole thing needs to run overnight, an employee must select the data and prepare it for backup - gathering it in directories, or structuring the directories accordingly, etc. But that's already the biggest part of the work in any backup - figuring out which data should be backed up and how. The rest is just one click with today's backup solutions for end users and the necessary frequent changing of DVD or CD blanks (or MO or tape if the user prefers reasonably reliable backups).

Somehow I have the feeling that T-Com has calculated things a bit strangely here. Sounds similar to Apple's calculation with .Mac Backup. Except that Apple didn't want to back up mass data over the internet in the first place, but only settings and selected file areas; Apple's backup program backs up mass data like pictures and music locally to hard drives by default.

A usable backup solution on the internet would really be nice. But so far I haven't seen one that would have made sense for DSL users...

The original article is on heise online news at this link.

Report: Online Banking Cracked

A general problem in networks: tools that allow session hijacking make it possible to position themselves between connections. The key point is that the connections are routed transparently through this program: the user doesn't notice it. This also works across switches - the corresponding programs steal the connection via ARP spoofing and then insert themselves in between. The only solution here is a consistent migration to protocols that work with mutual certificates and encryption - where both server and client ensure that they are communicating with the correct partner. But even here, attack vectors are still possible. Absolute security in networks where you have no control over the infrastructure does not exist.

By the way, the technology behind the attack is quite interesting: first, ARP spoofing is used to steal the connection. Then all connections are routed through the intermediate computer. In doing so, the computer presents itself to the server as the client, and vice versa. Encryption is therefore only useful if the protocol regularly performs checks using a shared secret and if the two partners identify themselves to each other using asymmetric methods. Still, the man-in-the-middle can often impersonate the other by using data from a transparently passed-through connection to replay it later (this can crack some encryption setups).

Ultimately, the problem can only be solved at the lowest level - securing connections at the lowest protocol level. Only when appropriate security mechanisms are in place at the IP level can we even hope to get this problem under control.

In the meantime, admins can provide some protection by using ARP watchers and monitoring programs to detect when such attacks occur. But this too is only a very shaky and unreliable tool, since the admin theoretically has to regularly review all protocols - and the signs are often only very minimal (such as the brief appearance of an unknown MAC address in the network).
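A toy version of such an ARP watcher is easy to sketch: remember which MAC address each IP last announced and flag any change. This is a sketch only; real tools (arpwatch, for instance) sniff ARP traffic directly rather than polling a table:

```python
"""Toy ARP watcher: flags IP-to-MAC changes, the telltale sign
of the ARP spoofing described above. Illustration only."""

def parse_arp_table(text):
    """Parse /proc/net/arp-style text into an {ip: mac} dict."""
    table = {}
    for line in text.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            ip, mac = fields[0], fields[3]
            table[ip] = mac.lower()
    return table

class ArpWatcher:
    def __init__(self):
        self.known = {}  # last MAC seen per IP

    def check(self, table):
        """Return alerts for every IP whose MAC changed since last check."""
        alerts = []
        for ip, mac in table.items():
            old = self.known.get(ip)
            if old is not None and old != mac:
                alerts.append(f"{ip} changed {old} -> {mac}")
            self.known[ip] = mac
        return alerts
```

As the text says, this only detects the attack after the fact - and only if someone actually looks at the alerts.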

At RP-Online: Multimedia I found the original article.

Rollup package for Windows XP heralds new update policy from Microsoft

Cool. Security updates only once a month. That's how Windows security issues are addressed.

Devil's grin

At heise online news there's the original article.

Vulnerabilities in Exchange Server

Interesting that Microsoft assumes no one is crazy enough to connect an Exchange server directly to the Internet without a firewall: "To protect against attacks from the Internet, port 25 should be blocked on the firewall."

Devil's grin

At heise online news there's the original article.

Stopping Spam

Paul Graham examines and evaluates all known methods of responding to spam. As an overview of possible (and also possible future) solutions and an initial assessment, it's quite useful.

Here's the original article.

The Resurrection of SimplyGNUstep - OSNews.com

SimplyGNUstep is now based on Debian Sarge (the upcoming Debian release). So it's now just a collection of Debian packages with current GNUstep applications. The previous project of the same name aimed to be a full distribution with its own directory structure, organized just like NeXTstep. I find the current incarnation much more sensible, though - yet another package system and yet another distribution doesn't really make sense, especially when Debian already offers everything in very usable form...

Here's the original article.

Load Balancer in Python

A special feature of this load balancer (besides the fact that it's written entirely in Python): it doesn't use multiple processes or threads, but asynchronous I/O instead. This allows many connections to be handled simultaneously in a single thread, which keeps the system load much lower than with classical balancers that start a process or thread per connection. It uses either Twisted or the asyncore module that ships with Python. And the whole thing is blazingly fast, too - the same approach is used in Medusa, for example, a web server in Python that comes close to Apache's performance when serving static HTML pages.

Here's the original article.
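The single-threaded async pattern described above can be sketched briefly. Note that asyncore, the module mentioned here, has since been removed from the Python standard library, so this sketch uses its modern successor asyncio: one event loop in one thread serving several connections concurrently.

```python
# One thread, one event loop, several connections served concurrently -
# the same pattern asyncore/Medusa used, shown here with asyncio.
import asyncio

async def handle(reader, writer):
    # Echo one message back. A real balancer would instead open a
    # connection to a backend server and shuttle bytes both ways.
    data = await reader.read(100)
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(msg):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(msg)
        await writer.drain()
        reply = await reader.read(100)
        writer.close()
        await writer.wait_closed()
        return reply

    # Three clients handled concurrently - no extra threads or processes.
    replies = await asyncio.gather(*(client(b"ping%d" % i) for i in range(3)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
print(replies)
```

The design win is exactly what the entry describes: while one connection waits on the network, the event loop services the others, so per-connection cost is a small coroutine instead of a whole process or thread.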

Technical Incompetence or Wishful Thinking?

When I read the linked article, I had to grin at first. But then the head-shaking took over at so much nonsense. The article contains so many misconceptions about open source that you can only wonder how so many errors fit into such a short piece. The biggest is probably, once again, the mistaken assumption that open source needs a business model to function. An absurd notion - looking for a business model in the creation and distribution of open source is about as sensible as hunting for a value chain in weblogs. Of course there are companies that build a business model on the existence of open source - similar things exist around weblogs too. But the business model is completely irrelevant to the engine actually driving it.

But then I thought about what it would really mean if SCO won (which, apart from the article's author and maybe Darl McBride, probably nobody seriously believes). What would that mean for open source? Not much - the disputed sources would have to be named sooner or later and would simply be removed from the Linux kernel. According to SCO's own statements, version 2.2 is clean, and it has already proven itself; at worst, some subsystems would fall back to the 2.2 level. Not fatal, at most annoying.

What would happen if the Linux kernel were banned by SCO? Wouldn't that destroy open source? Apart from the fact that this notion is quite absurd, here lies the biggest mistake in the article - a mistake, however, that is made almost consistently in the media. Open source is not Linux - Linux is only one (even relatively small, though significant) component of the entire open source field. Linux is a kernel - and thus important, but only one possible component that can easily be replaced. In the Intel processor environment, one could relatively quickly simply use the FreeBSD kernel (due to its compatibility functions for the Linux API) instead of the original Linux kernel. For other processors, just take NetBSD - much open source is not dependent on Linux anyway, but runs on almost everything that is Unix-like.

And what if companies no longer want to use open source because of the proceedings? Excuse me? Companies are supposed to forgo something they can get for free, just because of a court case in a marginal area? Why would they? How many companies use pirated software, knowing it's illegal and knowing what that could mean, simply because they don't want to spend the money? As long as greed exists, open source will find commercial use. And greed will exist as long as we have a market economy. So for a damn long time.

But surely companies will stop releasing their own work under open source licenses? Why should they? For many companies it's a fairly inexpensive way to get free advertising. Besides, these companies rely on project business rather than on selling software; the SCO proceedings change nothing about that. And even if corporate contributions decrease - much open source is created by individuals, originated at universities, or grew in loose developer groups. Companies have contributed things - but usually only where they had an interest for their own lines of business. If companies stop contributing to open source, they primarily harm themselves. Open source typically arises because someone has a problem that bothers them - and starts building a solution. And that is suddenly supposed to change?

What bothers me most about what the press writes about open source is the authors' complete ignorance of the facts: that there is far more than just Linux, that the companies built on Linux are by no means necessary for the survival of open source, and that the motivation behind open source has absolutely nothing to do with business models. Open source is the enthusiasm of people who create something that other people use with just as much enthusiasm. This motivation, the core of open source, cannot be stopped by court proceedings or bans. Open source would continue to exist even if it were outlawed - just underground. Creative work by people cannot be prohibited or suppressed - that applies in the software world just as it does to writers, painters, or musicians.

Open source will - no matter what the representatives of proprietary software attempt to do - continue to exist. Get ready for that. There is no going back.

Here's the original article.

VeriSign Defends Sitefinder

As suspected, VeriSign shows no insight whatsoever. But the justifications are truly absurd - presenting price gouging based on a technical monopoly as innovation takes quite some audacity.

At heise online news you can find the original article.