
The Bogus Bogeyman of the Brainiac Robot Overlord

By James Kobielus  /  September 7, 2015

There is such a thing as watching too much science fiction. If you spend a significant amount of time immersed in dystopian fantasies of the future, it’s easy to lose your grip on the here and now. And even when science fiction is grounded in plausible alternative futures, it’s far too easy to engage in armchair extrapolation in whatever troublesome direction your mind leads you.

One of the most overused science-fiction tropes is the super-intelligent “robot overlord” that, through human negligence or malice, has enslaved us all. Any of us can name at least one example off the top of our heads (e.g., “The Matrix” series). Fear of this Hollywood-fueled cultural bogeyman has stirred up anxiety about the role of machine learning, cognitive computing, and artificial intelligence (AI) in our lives, as I discussed in this recent IBM Big Data & Analytics Hub blog. It’s even fostering uneasiness about the supposedly sinister potential for our smart devices to become “smarter” than us and thereby invisibly monitor and manipulate our every action. I discussed that matter in this separate blog.

This issue will be with us forever, much the way that UFO conspiracy theorists have kept their article of faith alive in the popular mind since the early Cold War era. In the Hollywood-stoked mindset that surrounds this issue, the supposed algorithmic overlords are evil puppets dangled among us by “Big Brother,” diabolical “technocrats,” and other villains, with no Superman who might come to our rescue.

Hysteria is not too extreme a word for this popular perspective. The fact that renowned scientists (Hawking), tech entrepreneurs (Musk), and iconic geeks (Wozniak) are sounding the alarm on this matter is ample confirmation in the minds of believers. No one can deny that there is rich potential for misuse of AI, machine learning, decision automation, and kindred technologies. However, concerns about their misuse as a tool for mass enslavement are the stuff of demagogic sensationalism, not sound public policy.

My feeling is that this discussion would be more constructive if we broke it down to specific issues that we can address with concrete approaches. Chief among these is the notion of algorithmic transparency, which I’ve addressed on several occasions, most recently in this blog. Another important discussion concerns the extent to which we can build machine-learning algorithms that ensure bots take ethical actions within specific circumstances. I recently discussed that in this blog. Yet another important issue is how to ensure democratic accountability in a world where laws are increasingly monitored and enforced by algorithmic processes operating in tandem with the Internet of Things (IoT), decision automation, and big data analytics. I dissected that issue in this blog.

Fortunately, the “robot overlord” debate has recently shifted away from groundless paranoia and toward a specific type of AI-driven application that should concern us all: autonomous weapons. This recent article discusses efforts by Hawking, Musk, Wozniak, and many others to secure treaties banning such futuristic devices before they stoke an arms race of Armageddon-grade potential. When I say “futuristic,” I’m referring to devices that are everywhere now, or well on their way, such as drones, autonomous vehicles, and unattended IoT endpoints. Considering our species’ track record, it’s a sure bet that any and all of these newfangled devices will be weaponized to the hilt if we don’t defuse the issue in time.

You don’t need to invoke a fictional “robot overlord” to identify the likely future developers and users of autonomous weapons. The responsible parties will almost certainly be the usual human suspects—the world’s militaries and, possibly, terrorist organizations—rather than some 21st-century kindred of HAL 9000. Consequently, any such weapons can be controlled by the usual geopolitical initiatives that, among other successes, have limited nuclear stockpiles and nearly eradicated chemical and biological weapons.

Rest assured that calmer minds are beginning to prevail on the “overlord” issue in high-tech circles. As reported in this Computerworld article, a recent industry panel rated the risk of mass enslavement by a race of AI-powered brainiac machines at roughly the same level of urgency as an asteroid impact or aliens touching down on our home planet.

Of course, I’m not discounting the possibility that robot overlords have already conquered other planets in our galaxy. I like to keep an open mind on such matters.


About the author

James Kobielus is Wikibon’s Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM’s data science evangelist, where he managed IBM’s thought leadership, social, and influencer marketing programs targeted at developers of big data analytics, machine learning, and cognitive computing applications. Prior to his five-year stint at IBM, Jim was an analyst at Forrester Research, Current Analysis, and the Burton Group. He is also a prolific blogger, a popular speaker, and a familiar face from his many appearances as an expert on theCUBE and at industry events.

  • Mark Gubrud

    “any such weapons can be controlled by the usual geopolitical initiatives
    that, among other successes, have limited nuclear stockpiles and nearly
    eradicated chemical and biological weapons.”

    So – you support the call for banning autonomous weapons, i.e. mandating verifiable, accountable human control & decision in every use of violent force?
