Artificial intelligence, the Singularity, and Robert J. Sawyer
Totally off-topic here, but I just wanted to highlight an essay of mine that appears in next month’s Walrus magazine (and is now available online). Using Robert J. Sawyer’s WWW trilogy (whose final volume is about to be released) as a jumping-off point, it explores some of the issues surrounding computers whose intelligence is beginning to approach human-like levels:
If you’ve got any spare change, the Lifeboat Foundation of Minden, Nevada, has a worthy cause for your consideration. Sometime this century, probably sooner than you think, scientists will likely succeed in creating an artificial intelligence, or AI, greater than our own. What happens after that is anyone’s guess — we’re simply not smart enough to understand, let alone predict, what a superhuman intelligence will choose to do. But there’s a reasonable chance that the AI will eradicate humanity, either out of malevolence or through a clumsily misguided attempt to be helpful. The Lifeboat Foundation’s AIShield Fund seeks to head off this calamity by developing “Friendly AI,” and thus, as its website points out, “will benefit an almost uncountable number of intelligent entities.” As of February 9, the fund has raised a grand total of $2,010; donations are fully tax deductible in the United States… [READ THE REST OF THE ESSAY]
And for the record, I wrote this before Watson beat the puny humans on Jeopardy!