Don't Be Evil, After Today At Least
Mar. 24th, 2011 02:09 am
Google has an idea.
First, scan all the books.
Then, put them up online.
Then, charge for hard copy reprints.
THEN, once the system is profitable, consider paying royalty claims to the rightful copyright owners!
And even better, let's not make any effort to locate them, and say the copyright owners have to bring their claims to us!
Not so fast, says the judge.
http://www.mnn.com/green-tech/gadgets-electronics/stories/ny-judge-slaps-down-google-book-deal
As I've said before, Google has a history of this.
They started crawling the web in 1997 to gather all their BackRub (PageRank) data. Then, they put up a FAQ saying "If you don't want your data searched, then just make a new kind of file called robots.txt and our servers won't index it."
http://web.archive.org/web/19971210065437/backrub.stanford.edu/FAQ.html
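(For reference, since the FAQ assumes you already know what it's asking for: robots.txt is just a plain text file sitting at the top of your web server. One that tells every well-behaved crawler to stay out of everything is two lines:)

    User-agent: *
    Disallow: /

A crawler that honors the convention fetches that file before anything else and skips your site. That's all the FAQ was asking webmasters to do.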
What they don't say is that they had ALREADY searched your servers... and that's the only reason you were reading that particular page at Stanford in the first place: you had just looked at your httpd logs and found that some program from *.stanford.edu had requested ALL your web pages in one go! Including the ones on the secret web server that you NEVER posted any links to! All your content is already theirs! Finders keepers! If you didn't want everyone to see it, you should never have put it in a WWW directory. :-P;;;
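If you want to reproduce that kind of log check, here's a rough sketch (assuming an Apache-style access log with HostnameLookups on, so the first field is a hostname rather than a bare IP; the filename is whatever your server actually writes to):

    # Tally requests per client host from a common-log-format access log,
    # flagging anything that came out of stanford.edu.
    from collections import Counter

    hits = Counter()
    with open("access_log") as log:           # path is wherever your httpd logs
        for line in log:
            client = line.split(" ", 1)[0]    # first field: remote host
            if client.endswith(".stanford.edu"):
                hits[client] += 1

    for host, count in hits.most_common():
        print(f"{count:6d}  {host}")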
"5) I have a robots.txt file. Why isn't BackRub obeying it?
In order to save bandwidth BackRub only downloads the robots.txt file every week or so."
So yeah, when they say "Don't Be Evil", I laugh, because the company was founded on an evil program.
no subject
Date: 2011-03-24 01:40 pm (UTC)
*bows down to Google*