
Progress in Cognitive Computing and Machine Learning


In a recent talk at the DATAVERSITY® Smart Data 2015 Conference, Tim Estes of Digital Reasoning discussed how far we have come with Cognitive Computing, AI, and Machine Learning, and how far we still have to go.

Estes began with a slideshow tracing how the reality of Cognitive Computing has measured up to the cultural expectations we have held as a society since the 1960s. Using cultural symbols of our AI familiarity (HAL in the 1960s, the Terminator in the 1980s, and Star Trek's Lt. Commander Data in the 1990s), Estes also tracked the actual progress of AI through benchmarks such as the work of pioneers like Shannon and McCarthy in the 1960s, the mainstreaming of neural networks in the 1980s, Deep Blue in the 1990s, and Watson, a disembodied form of AI, in the early years of the new millennium.

In some ways, Estes said, uncoupling AI from a robot or "body" type structure works, as when Watson, unencumbered by the neurological lag of having to physically press a buzzer, had a slight edge over its human Jeopardy opponents. Nonetheless, Estes added, there are still challenges facing the community of researchers trying to take AI to the next level.

As an example, Estes called out the familiar "personal assistant wizards" of the 1990s, such as the infamous "Clippy," which tried to advise users of Microsoft Office.

“Why were they so bad?” Estes asked of those early attempts, and then answered his own question: those cognitive models, he said, couldn’t understand language, so their inferences were wrong. They didn’t really fit the needs of the human user, and so they were consigned to the dustbin of history.

Moving on, Estes tackled three challenges he sees remaining in the way of deep and effective progress on AI. One is feature selection, the need to draw out particular bits of data from a large and potentially diverse field. Another is knowledge representation. A third is something Estes called “planning and intent.”

We’ve made big steps forward on the first two, he said, but still need to focus on the third.

Looking at the task of feature selection, Estes identified two “curses” that have bedeviled scientists. One is the “curse of sparse data,” where the desired signal is small and hidden within a large, diverse background. The other is the “curse of dimensionality” (a term that dates back to Richard Bellman, and which Bengio has examined in the Machine Learning context): as the number of dimensions in the data grows, the amount of sampling or training data needed to cover the space grows exponentially.
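To make that scaling problem concrete, here is a minimal sketch (my own illustration, not from the talk) showing how the number of cells a grid-based sampler would need to cover grows exponentially with the number of feature dimensions:

```python
# Minimal illustration (not from the talk) of the "curse of dimensionality":
# if each feature axis is split into just 10 bins, the number of cells a
# grid-based sampler must cover grows exponentially with the dimension count.

BINS_PER_AXIS = 10

def cells_needed(dimensions: int) -> int:
    """Number of grid cells required to cover the feature space."""
    return BINS_PER_AXIS ** dimensions

if __name__ == "__main__":
    for d in (1, 2, 5, 10, 20):
        print(f"{d:>2} dimensions -> {cells_needed(d):,} cells to sample")
```

Even at a coarse ten bins per axis, twenty dimensions already demand far more samples than any real data set can supply, which is exactly why high-dimensional data forces smarter learning methods than exhaustive sampling.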

Moving on to knowledge representation, Estes talked about the “guiding principles” that drive the use of data assets, such as the ideas that meaning is inherent in the signal and that models can emerge from raw data. He cautioned against over-defining models at the outset or making processes too rigid. Promoting a “hierarchy of invariances” for capturing meaning in context, Estes described using ideas from different schools of design to build a “learned model of the world.”

“What categories of algorithms do we use in combination?” Estes asked.

Estes used the example of finding aliases for a known terrorist, where, he said, brute-forcing translations can create false positives. By separating instances conceptually, he said, you get results that are more precise than those produced by a plain string-matching model, a model he noted is still common in business architectures.
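The sketch below is a hypothetical illustration of that contrast (it is not Digital Reasoning's method, and the alias and context terms are invented): a naive string match flags every occurrence of a transliterated alias, while a version that also demands supporting context avoids at least some of the false positives Estes described.

```python
# Hypothetical sketch (not Digital Reasoning's method): contrasting plain
# string matching of transliterated alias variants with a simple context
# check that requires supporting terms before flagging a mention.
# The alias and context terms below are invented for illustration only.

ALIASES = {"ali hassan", "aly hasan"}            # transliteration variants
CONTEXT_TERMS = {"courier", "compound", "safe house"}

def string_match(text: str) -> bool:
    """Naive approach: flag any text containing an alias string."""
    lowered = text.lower()
    return any(alias in lowered for alias in ALIASES)

def context_match(text: str) -> bool:
    """Stricter approach: also require at least one supporting context term."""
    lowered = text.lower()
    return string_match(text) and any(term in lowered for term in CONTEXT_TERMS)

if __name__ == "__main__":
    report = "Field report: Aly Hasan acted as a courier near the compound."
    noise = "Ali Hassan's bakery posted its new menu today."
    print(string_match(report), context_match(report))  # True True
    print(string_match(noise), context_match(noise))    # True False (false positive avoided)
```

A real system would of course use learned semantic context rather than a hand-written term list, but the precision gain from looking beyond the surface string is the point Estes was making.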

In going over some of the problems Cognitive Computing practitioners encountered from 2008 to 2014, Estes stressed the importance of linking data, of going beyond simple matching to apply more sophisticated models based on semantic relationships and context.

“When we think about Big Data, people think about the sheer amount of petabytes,” Estes said. “That’s only one axis of Big Data. The other is how much the features within the data interact with each other, the combinatorial aspect of Big Data, and there’s a lot of information if you really exploit that.”
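As a rough illustration of that second, combinatorial axis (my own sketch, not from the talk), the number of possible interactions among features grows far faster than the number of features themselves:

```python
# Rough illustration (not from the talk) of the combinatorial axis of Big Data:
# the number of possible feature interactions grows much faster than the
# number of features themselves.

from math import comb

def interaction_counts(num_features: int) -> tuple[int, int]:
    """Return (pairwise, three-way) interaction counts for a feature set."""
    return comb(num_features, 2), comb(num_features, 3)

if __name__ == "__main__":
    for n in (10, 100, 1_000, 10_000):
        pairs, triples = interaction_counts(n)
        print(f"{n:>6} features -> {pairs:,} pairs, {triples:,} triples")
```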

Estes said recent work on Machine Learning involved tackling the task of learning from limited examples. Another challenge, he said, was figuring out semantic type information from “dirty data” or unstructured, high-volume data sets. Pioneers looked at reducing errors and finding precise results in high-volume fields.

Now, he said, with big advances in data modeling, companies like Digital Reasoning are looking to move clients up a chain of activity that starts at a basic “data” level and moves progressively from data toward information, from information to knowledge, and from knowledge to wisdom, all with much less human involvement than has been needed in the past.

“Cognitive Computing – going from that original ‘intelligence assistance’ idea, there now is definition around what that is, and it’s inescapable that it deals with human language and human signals, and being able to draw knowledge out of that that drives very different outcomes. So it really is about learning systems that are fully integrated assemblies of algorithms. So for us, that has allowed us to take customers on a certain journey.”

A lot of companies, Estes said, are still “living on the data level.” For example, he cited enterprise systems still unable to distinguish a reference to Apple, the tech company, from apple, the fruit. The solution: linking and mapping data.
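A toy sketch of that idea follows (my own illustration, not the Synthesys Platform or Digital Reasoning's actual model): the surface string "apple" is linked to a concept by scoring the words around the mention against context words associated with each candidate concept.

```python
# Toy sketch (not Synthesys, and not Digital Reasoning's actual model):
# linking the surface string "apple" to a concept by scoring the words
# around the mention against context words associated with each concept.

CONCEPT_CONTEXTS = {
    "Apple_Inc": {"iphone", "shares", "cupertino", "earnings", "stock"},
    "apple_fruit": {"pie", "orchard", "juice", "tree", "eat"},
}

def link_mention(sentence: str) -> str:
    """Pick the concept whose context words overlap most with the sentence."""
    words = set(sentence.lower().split())
    scores = {
        concept: len(words & context)
        for concept, context in CONCEPT_CONTEXTS.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(link_mention("Apple shares rose after strong iphone earnings"))  # Apple_Inc
    print(link_mention("She baked an apple pie from the orchard"))         # apple_fruit
```

Production systems learn these contexts from data rather than hard-coding them, but the principle is the same: meaning comes from linked context, not from the string alone.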

Unveiling aspects of Digital Reasoning’s Synthesys Platform, Estes talked at length about how the tool is being used in fields like finance and medicine, for example in FX trading, where it analyzes chat to discover indicators of certain likely behaviors or outcomes. Here, he said, users may be looking for evidence of black swan events like manipulation or collusion, with false positives a distinct possibility. Estes demonstrated how a more precise visual model can help human analysts weed out those false positives and get a better picture of what is actually being monitored; practical applications often involve creating alerts and profiles that need to be as precise as possible.
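Since every false positive is work a human reviewer has to weed out, the quality of such an alerting rule is usually summarized by its precision. Here is a minimal sketch (my own illustration, not part of the Synthesys Platform) of that measurement:

```python
# Minimal sketch (my illustration, not the Synthesys Platform): measuring the
# precision of a surveillance alert rule, i.e. the fraction of fired alerts
# that a human reviewer confirmed as genuine.

def precision(reviewed_alerts: list[bool]) -> float:
    """Fraction of flagged items that a reviewer confirmed as genuine."""
    if not reviewed_alerts:
        return 0.0
    return sum(reviewed_alerts) / len(reviewed_alerts)

if __name__ == "__main__":
    # Each entry is one alert; True means a reviewer confirmed it was real.
    reviewed_alerts = [True, False, False, True, False]
    print(f"Alert precision: {precision(reviewed_alerts):.0%}")  # 40%
```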

In addition, Estes said, these much more powerful knowledge models can be used to help law enforcement combat criminal behaviors such as human trafficking.

In closing, Estes talked about some expectations for Cognitive Computing going forward. These include more developed attention structures in Deep Learning for visual understanding, as well as better memory networks for question answering (QA), advances in structured prediction, and the fusion of graphical and Deep Learning structures.

Overall, Estes said, there’s the hope that future advances will help the reality map to the cultural expectations that we have for AI.

“We’re all part of that process and journey,” he commented.

With so much invested in Machine Learning, Cognitive Computing, and other aspects of AI, we’re likely to see some very interesting applications come out of this kind of progress, not just for companies, but in consumer markets as well. All of it relies on groundbreaking processes to find better ways of sifting through data for intelligent results.
