
Mar. 24th, 2011

bikerwalla: (Evil Koala)
Google has an idea.

First, scan all the books.
Then, put them up online.
Then, charge for hard copy reprints.
THEN, once the system is profitable, consider paying royalty claims to the rightful copyright owners!
And even better, let's not make any effort to locate them; let's say the copyright owners have to bring their claims to us!

Not so fast, says the judge.


As I've said before, Google has a history of this.

They started crawling the web in 1997 to gather data for BackRub (the Stanford precursor to Google, built around PageRank). Then they put up a FAQ saying, in effect, "If you don't want your data searched, just make a new kind of file called robots.txt and our servers won't index it."


What they don't say is that they had ALREADY crawled your servers... and that's why you were reading that particular page at Stanford: you had just looked at your httpd logs and found that some program from *.stanford.edu had requested ALL your web pages in one go! Including the secret web server you NEVER posted any links to! All your content is already theirs! Finders keepers! If you didn't want everyone to see it, you shouldn't have put it in a WWW directory. :-P
"5) I have a robots.txt file. Why isn't BackRub obeying it?
In order to save bandwidth BackRub only downloads the robots.txt file every week or so."
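For reference, the robots.txt convention that FAQ is referring to works like this: a crawler fetches `/robots.txt` from your server and is supposed to skip any paths it disallows. A minimal sketch using Python's standard `urllib.robotparser` (the domain, paths, and user-agent here are hypothetical, just to illustrate the check a well-behaved crawler performs before each request):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that hides one directory from all crawlers.
robots_txt = """\
User-agent: *
Disallow: /secret/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler asks this BEFORE fetching each URL.
print(rp.can_fetch("BackRub", "http://example.com/secret/page.html"))  # False
print(rp.can_fetch("BackRub", "http://example.com/public/page.html"))  # True
```

Of course, the whole complaint above is that this check only helps if the crawler actually performs it before fetching, rather than after it has already slurped your site.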

So yeah, when they say "Don't Be Evil", I laugh, because the company was founded on an evil program.


