Bringing Clarity to the Topic of Cognitive Computing

July 31, 2014

by Jennifer Zaino

DATAVERSITY’s™ Cognitive Computing Forum is taking place this August in San Jose, and in advance of the event, the team here has also released the 2014 Cognitive Computing Survey and conducted the webinar Understanding The World of Cognitive Computing. Just like the upcoming Forum, with presenters tackling topics ranging from deep neural networks in practice to expressive machines, the survey results and webinar speakers provide some much-needed insight and clarity into just what the new world of Cognitive Computing (CC) encompasses. All of the various projects demonstrate how CC connects with human intelligence and Big Data, and why it matters to businesses.

The webinar was hosted by Steve Ardire, an advisor to Cognitive Computing, Artificial Intelligence, and Machine Learning startups and a contributor to the Cognitive Computing Survey, and included panelists Tony Sarris, founder and principal at semantic technology consultancy N2Semantics; James Kobielus, IBM Big Data evangelist; and Adrian Bowles, founder of STORM Insights, which provides research and advisory services for buyers, sellers, and investors in emerging technology markets including Cognitive Computing, Big Data/analytics, and Cloud Computing.

Below we highlight some of the webinar’s key discussion points, driven in large part by the survey findings:

  • According to the survey, nearly half the respondents are comfortable with Ardire’s definition of Cognitive Computing:

[Slide: Ardire’s definition of Cognitive Computing]

That definition, agreed Kobielus during the webinar, gets a lot of things right – and equally important, it ties what Cognitive Computing is technically to what its business value is. Survey-takers indicated that’s a much-needed perspective, with 53.4 percent of them noting that Cognitive Computing needs more clarity from a business perspective, and more than one-third noting they were unclear about their organization’s plans for implementation because they lacked an understanding of how to present the business case.

Think of Cognitive Computing, Kobielus said, as “AI for the 21st century,” with multi-structured data at the center of its applications, whose goal will be to pull non-obvious insights out of massive data, much as humans themselves could do if they had the capacity. “It’s an extension of [the] human cognition apparatus, it’s our grey matter in the cloud, [leveraging] all the advanced analytics we can throw at it.”

Sarris furthered that observation, commenting that while people may experience Cognitive Computing in various ways, what matters is what you do with it – that is, automating some aspect of what would otherwise be a human cognition process to make decisions, with some business goal in mind, whether that’s improving task efficiency, driving more discovery in content, or enhancing recommendations. “It really is just a helper to humans for better business outcomes,” he said.

Bowles commented that there are some aspects included in the definition that can, but don’t necessarily have to be part of Cognitive Computing – such as NLP, which he noted is not much involved in projects like Deep Learning initiatives such as Google Brain and Microsoft’s Project Adam. The learning-through-experience element, though, is a baseline requirement. “Learning is the fundamental thing for cognitive systems,” he said. “Patterns, relationships and context are the three words that come up.”

  • Where does Big Data end and Cognitive Computing begin? Ardire summed up the key differentiators in the slide below:

[Slide: key differentiators between Big Data and Cognitive Computing]

The webinar participants made some critical observations about the difference between Big Data’s focus on information acquisition, and Cognitive Computing’s role in driving contextual associations that help draw out and take advantage of the knowledge implicit in all that information. In applying computer science for business, Bowles noted, the talk has always been about going from data to information to knowledge, “and if you want to get worked up about it, you talk about wisdom,” he joked. “Here we are looking at how to extract something of value out of a lot of data, which [means] finding patterns and relationships, [and] putting context around [that] for use in business.”

Given the plethora of data, patterns, and relationships possible, and potentially findable, automation is an essential part of Cognitive Computing if the human race is to avoid being swamped by it all, Kobielus noted. “You have to find insight in as close to an automated fashion as possible [and drive that] downstream into all applications and decision points without having to write code,” he said, later adding that “you must take knowledge, decode it, make it explicit and reusable,” so that the patterns that have been exposed are understandable in a business context. The process has to happen “mathemagically.”

While there are similarities between Big Data and Cognitive Computing, such as leveraging the Hadoop infrastructure ecosystem, the gap between the two stacks widens at the top levels, which are defined by Ardire as: application engagement and UI (user interface), and descriptive, predictive and prescriptive analytics. Application engagement, Kobielus said, is indeed fundamental to the Cognitive Computing stack. “Think of Watson or Siri [and how they] are all about engaging human beings in conversations around the data, however conceptualized,” he said. “It is all about engagement to drive decision support and guidance to humans trying to make decisions in various contexts.”

Ardire also offered up the observation that UIs for Cognitive Computing apps are contingent on the vertical use case – on who the targeted user is, whether knowledge worker, consumer, or anyone else. With Watson, Kobielus said, IBM has already been building applications for specific verticals such as healthcare that use very specific interfaces to help medical professionals work through decision trees to drive the outcomes they want to achieve. Now, on the heels of the recently announced IBM and Apple partnership to bring IBM’s Big Data and analytics capabilities to the iPhone and iPad platforms, Kobielus believes it’s extremely likely that efforts will be made to blend the general assistant UI of Siri with Watson’s more verticalized capabilities.

“How will Watson and Siri play together?” Kobielus said. “Both do conversational computing atop contextual computing atop Cognitive Computing – the three C’s. It’s very exciting.”

  • “Machine Learning,” Ardire said, “is the new black.” As examples of its importance to the Cognitive Computing ecosystem, he noted the dollars flowing into Machine Learning startups and the announcement of Microsoft Azure Machine Learning. He also cited a comment by Matthew Zeiler, CEO of visual search startup Clarifai and a former intern on the Google Brain project: “Google is not really a search company. It’s a Machine Learning company.”

But can we democratize a domain that has been primarily the province of Data Scientists?

[Slide: democratizing Machine Learning beyond Data Scientists]

As Ardire’s slide above shows, that thinking is in the air. Added Sarris, “It will be democratized in the future to the extent that [Machine Learning to build explicit knowledge models will be] available everywhere at lower or no cost” to any front-end client. But that also requires, he says, that the tools for tuning these models be so easy and so embedded into developer and knowledge application experiences that users aren’t even aware they are digging into Machine Learning. While democratization will come, he cautions, it won’t come anywhere near enabling ubiquitous Machine Learning on the back- and front-ends.

And, according to panelists, expect a co-dependent relationship between Machine Learning and human learning to continue. The tacit knowledge of human beings – whether that be Data Scientists, Subject Matter Experts, or crowd-sourcing contributors – is necessary to feed back human judgments into automated systems that produce one pattern after another, to help refine Machine Learning models to ensure they are still on track.
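That feedback loop can be made concrete with a small sketch. The toy classifier, data, and oracle below are all hypothetical illustrations (none come from the webinar): a crude word-count model flags its least confident prediction, asks a human “oracle” to label it, and is retrained on the enlarged set – the basic shape of uncertainty sampling in human-in-the-loop learning.

```python
def train(examples):
    """Count word occurrences per label (a deliberately naive model)."""
    counts = {"pos": {}, "neg": {}}
    for text, label in examples:
        for word in text.split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def score(model, text):
    """Return (label, confidence) by naive word-count voting."""
    votes = {label: sum(words.get(w, 0) for w in text.split())
             for label, words in model.items()}
    total = sum(votes.values()) or 1
    label = max(votes, key=votes.get)
    return label, votes[label] / total

# Hypothetical seed data, unlabeled pool, and human judgments.
labeled = [("great product works well", "pos"),
           ("terrible broken waste", "neg")]
unlabeled = ["works great", "broken and terrible", "arrived today"]
oracle = {"works great": "pos", "broken and terrible": "neg",
          "arrived today": "neg"}

model = train(labeled)
# Query the human on the single least confident item, then retrain.
uncertain = min(unlabeled, key=lambda t: score(model, t)[1])
labeled.append((uncertain, oracle[uncertain]))
model = train(labeled)
```

Real systems replace the word counts with serious models and route the uncertain cases to Data Scientists, Subject Matter Experts, or crowd workers, but the loop – predict, measure confidence, ask a human, retrain – is the same.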

The group had plenty of other perspectives to bring to the topic, all of which you can listen to in the full recording, which you’ll find here. You can also visit the Semantic Web Blog here to explore some additional commentary from the webinar, regarding how Cognitive Computing takes the Semantic Web to the next level.

  • There are many streams contending for prominence in cognitive computing. The definition matters!
    I have worked for 30 years in AI R&D. So I remember well the hype waves that accompanied expert systems in the 1980s and machine learning (ML) in the 1990s. We’re now seeing a new round of ML hype in addition to continuing advances in ML substance.
    What ML has always lacked, and what has always limited the business value of ML and AI, is the ability to effectively accumulate and combine the knowledge resulting from many instances of ML, both with each other and with knowledge input directly (“told”) by humans. There are hundreds of different ML techniques, and an overwhelming variety of forms, contexts, and schemas involved. A crucial missing ingredient has been expressive yet scalable knowledge representation, to enable deep reasoning and provide transparency so humans can check, trust, and edit/shape what the ML comes up with. The definition and vision of cognitive computing must incorporate not just (1) ML and (2) natural language (NL) understanding and generation, but also (3) knowledge representation with deep reasoning and explanations, and (4) humans telling complex knowledge. In the last decade, there have been major research advances in areas (3) and (4), not just in (1) and (2). But from most media coverage of cognitive computing, you would not know it.
