I think we’ve already grown weary of yet another discussion of the myths around big data. And though I doubt that cognitive computing—which obviously builds on big data—has developed its own special myth pantheon just yet, that day is probably fast approaching.
Myths sprout when a paradigm catches hold in the popular imagination, for good or bad, faster than people can dissect it down to its essential elements. With cognitive computing, the mythology stems in great part from inheriting the intersecting auras of both big data and a paradigm of much longer vintage: artificial intelligence (AI). If the likes of Stephen Hawking and Elon Musk are starting to raise the alarm about the supposed evils of AI (by which they mean AI’s practical renaissance in ubiquitous cognitive computing), then it’s probably time to start facing down the myths before they contribute to inflated expectations on one side or counterproductive vilification on the other.
Just off the top of my head, here are the top 5 myths about cognitive computing in circulation today:
- Myth #1: Cognitive computing is just a synonym for “big data analytics”: It most certainly is not. Cognitive computing is about the ability of automated systems to handle the conscious, critical, logical, attentive, reasoning mode of thought. You can do that, after a fashion, at any scale of data volume, velocity, and variety. But you can certainly do more powerful cognitive applications, such as natural language processing, sentiment analysis, and streaming video object recognition, at scale.
- Myth #2: Cognitive computing is high-powered AI of the sort that few organizations will ever need to put into operational applications: Definitely not true. The era of big data analytics has thrust cognitive computing, as powered through classic AI plus machine learning, artificial neural networks, the semantic web, and other leading-edge advanced analytics, into the very heart of the online economy. It’s becoming nearly impossible, for example, to point to a multichannel, online customer-engagement infrastructure that isn’t powered by cognitive computing. Next best actions, decision automation, and recommendation engines depend on it.
- Myth #3: Cognitive computing is a cutting-edge blend of machine learning, artificial neural networks, and deep learning that only a high priesthood of computer scientists can understand: It used to be, but no longer. Cognitive computing is the new power tool for data scientists everywhere, especially as an enabler for the real-world experiments and A/B testing they perform inline to operational business applications. Considering that a deepening portfolio of these analytic algorithms is included in the big data analytics platforms, such as Hadoop and Spark, that they’re operating, it’s clear that knowledge of cognitive computing is rapidly disseminating into the core skillsets of application-focused data scientists everywhere.
- Myth #4: Cognitive computing is just a souped-up search engine for unstructured data and semantic search: Clearly, cognitive computing technologies, as implemented in IBM Watson and in search platforms such as Google’s, are enabling heretofore undreamt-of capabilities such as real-time searches through media streams. But that’s only one of many applications of this technology, which is rapidly expanding in scope to include new frontiers such as graph-based searches through massively parallel analysis of machine data logs sourced from the Internet of Things.
- Myth #5: Cognitive computing is just an IBM marketing slogan being used to drive interest in Watson: Hah! I’ve already spoken at three industry events in the past 9 months, all of them organized by independent third parties, that focused on cognitive computing. Just as important, all of them had avid turnout from a growing number of startups, as well as bigger players such as IBM and Google, who are pushing the frontiers of this hot new space. Yes, it can be (pleasantly) embarrassing when I speak at these events and other (non-IBM) speakers keep mentioning Watson as the leader in this arena (thanks for doing our marketing, but that’s my job). But it’s a free country and I’m certainly not going to shut them up.
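To make the workloads behind Myth #1 a little more concrete, here is a deliberately tiny sketch of one of them, sentiment analysis, in plain Python. The word lists and scoring rule are my own illustrative assumptions; real cognitive systems use trained statistical or neural models rather than hand-built lexicons, but the sketch shows the basic shape of turning raw text into a signal an application can act on.

```python
# Illustrative lexicon-based sentiment scoring (an assumption for this sketch,
# not how any production cognitive platform actually works).

POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor"}

def sentiment_score(text: str) -> int:
    """Crude sentiment score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("great service, love it"))    # → 2
print(sentiment_score("terrible support, bad docs"))  # → -2
```

The point is not the toy lexicon but the pattern: once scoring like this runs inline against streams of customer text at scale, it becomes the kind of operational cognitive workload the myths above are really about.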
All of this reminds me: you can go further with this list by highlighting the cognitive computing myths that overlap with the extant big data myths. And you can add the myths surrounding another piece of the mosaic of cognitive computing: neural networks. Here’s a great recent article that drills into the latter.
For my next trick, I’d like to dispel the myths surrounding the high-tech industry’s mythmaking machinery. No, we techies don’t make them for the explicit purpose of busting them in our monthly columns. And, no, we haven’t outsourced the latter function yet to Jamie and Adam.
But that would be cool, wouldn’t it?