
CAPTCHA as CAPTCHA Can

By James Kobielus  /  June 29, 2015

I’m curious whether the Turing test makes sense in a world where the practical boundaries between humans and computers are dissolving rapidly.

Does the distinction between people and machines really matter all that much anymore? The distinction is blurred every time you defer to a recommendation engine or turn left at your GPS’s insistence. And if you’re literally clothing yourself in wearable computers that guide you every step of the way, where do computers end and you yourself begin?

Consider the rationale behind CAPTCHA, which stands for “Completely Automated Public Turing Test to Tell Computers and Humans Apart.” At heart, it’s a bit like “it takes a thief to catch a thief,” but understood as “it takes a machine to catch a machine.” Or, as illustrated in detail in this SlideShare, it’s more like “it takes machine learning (ML) to catch a machine.”
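The challenge/response mechanics being automated here can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names `make_challenge` and `verify` are mine, not any real CAPTCHA service's API); production systems render the string as a distorted image and keep the expected answer server-side rather than in the same scope:

```python
import secrets
import string
import hmac

# Minimal sketch of a text-CAPTCHA challenge/response check.
# Real CAPTCHAs serve the challenge as a distorted image precisely
# so that only (presumed) human vision can read it back.

ALPHABET = string.ascii_uppercase + string.digits

def make_challenge(length: int = 6) -> str:
    """Generate a random alphanumeric challenge string."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def verify(expected: str, response: str) -> bool:
    """Case-insensitive, timing-safe comparison of the responder's answer."""
    return hmac.compare_digest(expected.upper(), response.strip().upper())

challenge = make_challenge()
print(challenge)  # random each run, e.g. "K3P9QZ"
```

The point of the post, of course, is that nothing in `verify` distinguishes a human typing the answer from an ML model that read the distorted image for them.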

This assumes, however, that the machines you’re trying to catch don’t themselves have access to ML, and hence the ability to identify distorted text, random images, and the like on a par with humans exercising their organic cognitive abilities. It also assumes that the ML-wielding machines you’re trying to catch aren’t tapping into crowdsourcing-tuned, cloud-based ML algorithms that aggregate the judgments of humans who have considered similar content. And, taking it one step further, it assumes that the crowdsourced humans making those judgments aren’t themselves using wearables that fine-tune their organic perceptual and cognitive capabilities through ML/deep-learning algorithms for computer vision and the like.

Try as you may in this brave new world, you won’t be able to disentangle the fused decisions of humans and machines in many real-world scenarios, such as authenticating oneself via CAPTCHA. If humans can trust machines (and ML) 100 percent to drive a car in which they’re a passenger, and to do so more safely than the majority of human drivers, then what practical decision scenario can tell humans and computers apart? If human judgment in text, voice, face, image, and other object-recognition scenarios becomes, on average, inferior to ML-driven machine judgment, do CAPTCHAs flip in focus, so that the relative klutzes being sniffed out are people rather than, as in the past, machines?

Perhaps we should retreat to biometrics? What you want to know, down deep, is not whether it’s a human or a machine responding to your security challenge. It’s whether the responder is a unique (albeit machine-empowered, enhanced, extended, and accelerated) human. If a Terminator-style robot from the future is forcing that specific human at gunpoint to respond to your challenge, that’s not your problem. It really isn’t.

 

About the author

James Kobielus is an industry veteran and serves as IBM’s big data evangelist. He spearheads IBM’s thought leadership activities in Big Data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management, and next-best-action technologies. He works with IBM’s product management and marketing teams in Big Data. He has spoken at such leading industry events as Hadoop Summit, Strata, and the Forrester Business Process Forum, has published several business technology books, and is a popular provider of original commentary on blogs and across social media.
