No idea is too stupid for lawyers or politicians to come up with: Is the robots.txt file suitable as copy protection? That is what a law firm in the USA is now asking, because historical data remained accessible via the Internet Archive even though newer versions of the website denied the Internet Archive access via robots.txt:

Healthcare Advocates now accuses Harding, Earley, Follmer & Frailey, the law firm it had previously faced in a legal dispute, of violating the DMCA, and accuses the Internet Archive of breach of contract, because it did not, as explained, block access to the historical data. On that basis, Healthcare Advocates also demands compensation from the Internet Archive.

Not only is the function of robots.txt being misunderstood here - it is not copy protection, merely a hint telling robots whether they may crawl the data or not - the plaintiffs are also remarkably bold. The Internet Archive provides its service free of charge - and yet it gets sued for breach of contract and hit with a demand for damages.
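Just how purely advisory robots.txt is becomes clear when you look at how a well-behaved crawler evaluates it: the file is simply parsed and consulted, and nothing stops a crawler from ignoring it. A minimal sketch using Python's standard `urllib.robotparser` (the `ia_archiver` user-agent string and the example URL are illustrative assumptions):

```python
from urllib import robotparser

# A hypothetical robots.txt that disallows the Internet Archive's crawler
# (ia_archiver) while leaving everything else unrestricted.
rules = [
    "User-agent: ia_archiver",
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A polite crawler asks before fetching - but that's all it is: asking.
print(rp.can_fetch("ia_archiver", "https://example.com/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/page"))    # True
```

The "protection" consists entirely of the crawler voluntarily honoring the answer - there is no technical enforcement, which is why calling it copy protection misses the point.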

Pretty dumb, all in all. The ideas these tech illiterates come up with are always amusing ...

(and yes, there is also internet access in Munich)