How far have we really come in Artificial Intelligence (AI), and what will it take to get us beyond our current horizons in creating better smart machines? In a recent talk at the Smart Data 2015 Conference in San Jose, California, Gary Marcus, a Professor of Psychology and Neural Science at NYU, weighed in on the current state of AI and placed it in the context of the past half century.
First, Marcus presented evidence of exponential advances in technology, for instance in the density of transistors, under what is commonly called “Moore’s law,” which paved the way for ever smaller cameras and devices. However, Marcus claimed, there hasn’t been the same type of progress in Artificial Intelligence. Marcus called successive virtual entities like Eliza and Siri “template matching” technologies and suggested that the results of our Artificial Intelligence work over the past 30 years or so have been lackluster.
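Template matching of the sort Marcus attributes to Eliza can be sketched in a few lines. The rules below are hypothetical stand-ins, not Eliza’s actual script, but they show the principle: match a surface pattern, then plug the captured words into a canned reply.

```python
import re

# A few hypothetical Eliza-style rules: a regex template paired with a
# canned response that reuses the captured text. Real systems had many
# more rules, but the mechanism is the same: match, substitute, reply.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r".*"), "Please go on."),  # catch-all when nothing matches
]

def respond(utterance: str) -> str:
    """Return the response for the first rule whose template matches."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about AI"))
# prints: Why do you say you are worried about AI?
```

Nothing here models meaning: the system echoes the user’s own words back regardless of whether they make sense, which is exactly the limitation Marcus is pointing at.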
“We wanted Rosie the Robot,” Marcus said. “Instead, we got Roomba.”
Marcus also showed some introductory data on the 1973 Lighthill Report, which essentially ended British funding for Artificial Intelligence research. Marcus described the report as paving the way for the first “AI Winter,” a period in which Artificial Intelligence and related concepts were for the most part ignored.
Marcus called the result “a collection of idiot savants,” citing driverless cars, chess playing computers, and other projects that may be really good at one thing, but don’t really diversify in the ways that humans do.
“We haven’t yet gone where we hoped that we might,” Marcus said. He also pointed out restrictions and limitations on newer Artificial Intelligence technologies like DeepMind and Watson, which he said are now “at the head of the class.”
First, Marcus talked about how Watson, while being really good at Jeopardy, essentially retrieved many of its answers from Wikipedia pages. He also compared DeepMind’s general-purpose video game player (the company was purchased by Google in 2014) to what an average teenager can do, and suggested that these smart machine technologies still focus on crunching numbers and grouping knowledge rather than exercising judgment and learning facts. Instead, he said, people want “domain general” systems that can really understand what they’re working with, something he called “True AI.”
Part of why AI remains stuck, Marcus said, is the lack of critical demand for change.
“We’ve come to accommodate (the status quo),” he said, while talking about how we settle for virtual assistant technologies that are less advanced than we might expect. Using the examples of a typical Google search and a search on the Wolfram Alpha computational knowledge engine, Marcus showed how the kinds of answers that we get today are really just compilations of lists and algorithms crunching numbers, rather than anything that could be called truly cognitive. As a result, it’s often hard to interpret the answers that we get. Going back to the Google search: because the search engine is not using deeper-level cognitive ordering, a complex question will often return a partial result or a lot of irrelevant suggestions.
“Smart AI can help us enormously in medicine, energy, and science,” Marcus said, giving the example of cancer treatment and how it could be developed with the combination of statistics and the ability to reason. He also cited an estimate by Peter Norvig, Director of Research at Google, suggesting that true AI tools might add between half a trillion and two trillion dollars to the world economy.
To further explain the failure of Artificial Intelligence to live up to its potential, Marcus suggested three things. Artificial Intelligence, he said, has:
- fallen in love with statistics
- fallen in love with Big Data
- forgotten its roots
Presenting the above three items on a PowerPoint slide, Marcus used the example of Google Translate, which he described as having “elegant mathematics” as well as scalability. But while this scaling capacity implies that more data will give truer results, Marcus showed that the results stay relatively flat: no matter how much data you put into Google Translate, you’ll never get the kinds of complex and rich results you get from a truly cognitive system.
In addition, Marcus referenced a paper by Noam Chomsky on statistical language models, written some decades ago, which suggested that systems need a lot of profound, high-level intelligence to really handle the everyday workings of language, for example, to work out long-distance ‘dependencies,’ where one part of a sentence references something else downstream.
“He pointed out some problems that still persist,” Marcus said, looking at certain problems with translating from English to Celtic languages, and back. The result, Marcus suggested, is “word salad,” something vague that doesn’t really provide a true picture.
Referencing an article he wrote in the past, Marcus expounded further on the difficulties of making today’s generic Artificial Intelligence and smart machine technologies into really cognitive tools.
“It’s all correlation and no causation,” Marcus said.
Going back to the example of deep learning, Marcus said it’s relatively good in some narrow domains, but many of these Machine Learning technologies are limited by fairly rigid boundaries, beyond which they fail to provide good or even coherent results.
“The systems are easily fooled if you go outside the training situation,” Marcus said, showing examples of people cracking various AI systems and visual programs “hallucinating” crazy results. He called all of this the “long tail problem” – results, he said, are often rich in the ‘corpus,’ inside the box where data is abundant, but not in the tail, where examples get less frequent and available data becomes sparse. Marcus said his company is dedicated to getting better-quality results further out in the tail of a query.
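The long-tail pattern Marcus describes can be illustrated with a toy word-frequency count. The corpus below is hypothetical and tiny; the only point is the shape of the distribution, where a handful of words dominate while most words appear just once.

```python
from collections import Counter

# A toy "corpus": in real text, a few words account for most tokens,
# while the majority of words occur only once or twice -- the sparse
# tail where data-driven systems have little to learn from.
corpus = (
    "the cat sat on the mat the dog sat on the rug "
    "the quixotic platypus perambulated"
).split()

counts = Counter(corpus)
head = [w for w, c in counts.items() if c >= 3]  # well covered by data
tail = [w for w, c in counts.items() if c == 1]  # one example each

print(head)  # the high-frequency words
print(tail)  # the sparse tail
```

Even in this sixteen-word sample, a single word supplies nearly a third of the tokens while seven words occur exactly once, and scaling up real text keeps roughly that Zipf-like shape, which is why adding data helps the head far more than the tail.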
Taking another tack to illustrate the limitations of today’s Artificial Intelligence tools, Marcus talked about his two-year-old son, and pointed out some examples of basic human reasoning that computers cannot yet match. Showing a situation where his son responded to complex questions about his rabbit, bear, and platypus stuffed animals, Marcus went over issues like understanding complicated syntax, producing logical reasoning, creating novel answers, and understanding what someone or something is likely to do in a situation.
“They’re trying to build theories of the world,” Marcus said. “They’re not just collecting all of this data – there’s a lot more to learning than just statistics.”
Unfortunately, Marcus said, the tech world has rushed toward Big Data, and to an extent, left behind some of the principles that could have advanced AI in specific ways. “AI has abandoned cognitive science to its peril,” he said. “AI’s roots were in trying to understand human intelligence – but hardly anybody asks about that anymore.”
In addition, Marcus produced examples of technologies misidentifying objects, mislabeling images, and producing logical fallacies, where humans use observation and deduction for basic logic and identification.
In closing, Marcus talked about the Turing Olympics, a contest that aims to move the chains on Artificial Intelligence. Referencing the famous ‘Turing Tests,’ in which engineers tried to build computers that fooled humans, Marcus invoked the “13-year-old boy from Odessa” trope, illustrating that although you might be able to fool people into thinking a computer is a person by giving it limited language skills and social cues, that still doesn’t really improve the quality of Machine Learning.
“We’re trying to find some foundational problems that might move things forward,” Marcus said.
All of this criticism might seem easy, but it’s made for a reason – Marcus and others would argue that partly because of the enterprise obsession with Big Data, we’re losing sight of other types of advances. If we’re not careful, they counsel, we could end up with a data rich world that’s poor in analysis and intelligence.