Last Update: January 2024
Terms Of Service
However, there are other resources that do need some terms. These are mainly compressed (binary) archives of our source code.
Linking To Pages
Obviously, there is no Internet without (inter) links. And therefore, obviously, anyone can link to any page here without restriction. (I feel silly for having mentioned this.)
Linking To Archives
We feel we have the right to require that any site ask permission before linking directly to our source code archives. There are two technical reasons for this: 1) the archives can be moved, deleted or renamed at any time; 2) there exists important information about the archives that needs to be seen by potential users.
We believe these archive link restrictions are fair and are for the benefit of the Internet community as a whole. (Of course, it is up to us to maintain a proper — and stable — base page for our code archives. We did not always do this...)
Any direct linking that affects bandwidth will be prevented.
Spiders, Search Engines and other Bots that do not read or do not abide by the "robots.txt" exclusion standard are in violation of netiquette and will be banned.
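For reference, a well-behaved Bot checks "robots.txt" before fetching anything. A minimal sketch using Python's standard urllib.robotparser (the rules and URLs below are invented for illustration and are not our actual policy):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt. The paths here are examples only.
ROBOTS_TXT = """\
User-agent: *
Disallow: /archives/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved Bot asks before every fetch:
print(rp.can_fetch("ExampleBot/1.0", "https://example.org/archives/code.zip"))  # False
print(rp.can_fetch("ExampleBot/1.0", "https://example.org/index.html"))         # True
```

A Bot that skips this check (or checks and ignores the answer) is exactly the kind we ban.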
Bots that continually download our source code archives (ZIP files) will be banned.
Bots that lie about their shit will (eventually) be banned. This is about Bots that make public statements about how "nice" they are when they aren't. (Like not abiding by the robots.txt standard.)
Bots that are too aggressive will (eventually) be banned. This is about Bots that continually (repeatedly) download images. (You're on the list, AhrefsBot!)
If you consistently lie about your referer [sic] you will be banned.
User Agent Strings
If you consistently vary your user agent string you will be considered suspicious and you might be banned. (A Bot that does not use an agent string will be denied.)
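As a sketch of how this could be spotted in server logs, here is the kind of check we have in mind (the addresses, agent strings, and threshold are all made up for illustration):

```python
from collections import defaultdict

# Hypothetical (ip, user_agent) pairs pulled from an access log.
requests = [
    ("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)"),
    ("203.0.113.7", "curl/8.4.0"),
    ("203.0.113.7", "python-requests/2.31"),
    ("198.51.100.2", "Mozilla/5.0 (X11; Linux x86_64)"),
    ("192.0.2.9", ""),  # empty agent string: denied outright
]

agents_by_ip = defaultdict(set)
for ip, ua in requests:
    agents_by_ip[ip].add(ua)

SUSPICIOUS_UA_COUNT = 3  # arbitrary example threshold

# Clients that keep changing their story are suspicious.
suspicious = {ip for ip, uas in agents_by_ip.items() if len(uas) >= SUSPICIOUS_UA_COUNT}

# Clients with no agent string at all are denied.
denied = {ip for ip, ua in requests if not ua}

print(suspicious)  # {'203.0.113.7'}
print(denied)      # {'192.0.2.9'}
```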
If you explicitly request files or directories that do not exist — and therefore there are no links to them in any of our pages — you will be banned.
The first "Rule of Exploit" is when a Bot requests a non-existent resource. Of course, there is a difference between a deleted blog post and shit like Microsoft Exchange's. So, deleted stuff aside, any request to any non-existing resource is by definition an exploit attempt.
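The rule above boils down to a very small check. A sketch, where the deleted-page list is a made-up example:

```python
# Paths of pages we deleted ourselves; 404s for these are benign.
DELETED_PAGES = {"/blog/2009/old-post.html"}

def is_exploit_attempt(path: str, status: int) -> bool:
    """Deleted stuff aside, any 404 is treated as an exploit probe."""
    return status == 404 and path not in DELETED_PAGES

print(is_exploit_attempt("/wp-login.php", 404))             # True
print(is_exploit_attempt("/blog/2009/old-post.html", 404))  # False
print(is_exploit_attempt("/index.html", 200))               # False
```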
- There are a couple of websites that purport to be Program Archive sites or something, which scour the Internets and create pages around other people's work. Those websites have "hosted" THIS program, for example, without ever having contacted us. They do us and their visitors a disservice, for they falsely represent us as being in partnership with them. We are not. (That code has been long gone...)
- Unauthorized sites linking to our archives do us and their users no favors, for such links bypass important information that may be necessary for the source code to work. Direct links cause Spiders, Search Engines and other Bots to bypass our "robots.txt" policies — our logs show that 99.9% of all offsite direct archive link downloads are by Bots.
- Why do Bots do this, anyway? ZIP file contents do not change per version. Why would a Bot download the same ZIP file dozens of times?
- Why do Bots do this, anyway? Image file contents almost never change.
- A common sign of exploiters. Of course, search engines that vary some UA data, like for mobile tests, are not considered exploitation.
- Most of these requests are to "admin" and "login" pages of common Blog/BBS/CMS software. Hey Administrators! Please write your code so that your default Admin LOGIN PAGE can be renamed! sheesh
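What we are asking for is tiny. A hypothetical sketch (the path names and environment variable are invented) of serving the login page only at a configurable location, so the well-known default simply does not exist:

```python
import os

# Read the admin login path from configuration instead of hard-coding
# the software's default. The fallback value here is just an example.
ADMIN_LOGIN_PATH = os.environ.get("ADMIN_LOGIN_PATH", "/manage-x7k2/")

# The default locations that exploit Bots probe for.
WELL_KNOWN_DEFAULTS = {"/wp-login.php", "/admin/", "/administrator/"}

def handle(path: str) -> int:
    """Return an HTTP status code for a request path."""
    if path == ADMIN_LOGIN_PATH:
        return 200  # serve the (renamed) login page
    if path in WELL_KNOWN_DEFAULTS:
        return 404  # the default location simply does not exist
    return 404

print(handle(ADMIN_LOGIN_PATH))  # 200
print(handle("/administrator/"))
```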