by Angela Guess
Erik Sofge of Popular Science recently wrote, “During an AMA (ask me anything) session on Reddit this past Wednesday, a user by the name of beastcoin asked the founder of Microsoft a rather excellent question. ‘How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??’ Sadly, Gates didn’t address the second half of the question, but wrote: ‘I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned’.”
Sofge is less than sympathetic to these fears. He opines, “For robo-phobics, the anti-artificial-intelligence dream team is nearly complete. Elon Musk and Bill Gates have the cash and the clout, while legendary cosmologist Stephen Hawking—whose widely-covered fears include both evil robots and predatory aliens—brings his incomparable intellect. All they’re missing is the muscle, someone willing to get his or her hands dirty in the preemptive war with killer machines. Actually, what these nascent A.I. Avengers really need is something even more far-fetched: any indication that artificial superintelligence is a tangible threat, or even a serious research priority.”
He admits, “Maybe I’m wrong, though, or too dead-set on countering the growing hysteria surrounding A.I. research to see the first glimmers of monstrous sentience gestating in today’s code. Perhaps Gates, Hawking and Musk, by virtue of being incredibly smart guys, know something that I don’t. So here’s what some prominent A.I. researchers had to say on the topic of superintelligence.”