
Pedro Domingos on “Five Machine Learning Tribes”

By Justin Stoltzfus

Many of us have a passing knowledge of the ways that Artificial Intelligence researchers are promoting Machine Learning, but how are computers actually getting smarter and learning to expand their cognitive models and pursue greater knowledge?

In a recent talk, “The Five Tribes of Machine Learning, and What You Can Take from Each,” given at the Smart Data 2015 Conference, Pedro Domingos of the University of Washington provides some pretty precise answers about five different core methods involved in modern Machine Learning.

Answering the essential question of where knowledge comes from, Domingos cites three factors in human intelligence: evolution, experience, and culture. Each one, he says, builds on the one before it – the knowledge that we get from experience is acquired faster and generated more broadly than the knowledge that we get from evolution. The knowledge that we get from culture is acquired faster and developed more broadly than the information we get from experience.

The fourth source of knowledge? Computers.

Domingos suggests that computers are going to provide us with more knowledge than any of the above three factors, and that in the future the majority of knowledge is going to be computerized. That, he says, makes it important to understand how computers extract knowledge from the physical world around them.

Domingos starts by identifying five basic methods of computer knowledge acquisition, as follows:

  • filling in gaps in existing knowledge
  • mimicking the human brain
  • simulating the evolutionary process
  • reducing uncertainties
  • matching new sets of information against old ones

In order to underscore how these methods are valued and used, Domingos describes five ‘camps’ or ‘tribes,’ each of which pairs one of these core methods with its own philosophy and master algorithm to pursue a categorically different kind of Machine Learning.

The Symbolists

Identifying prominent symbolists and their roles in the Machine Learning industry, Domingos shows how these researchers work on the premise of inverse deduction.

Instead of starting with premises and deriving conclusions, inverse deduction starts with some premises and conclusions and essentially works backward to, as Domingos says, fill in the gaps. The system has to ask itself, “What is the knowledge that is missing?” and then acquire that knowledge through analysis of existing data sets.

“It’s an ever-growing virtuous circle of knowledge,” says Domingos. “In many ways, this is a lot like a scientist at work.”
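
To make inverse deduction concrete, here is a minimal, illustrative Python sketch. The facts, the shape of the rule, and the induce_rule helper are all invented for this example; real symbolist systems work over much richer logical representations.

    # Deduction: rule + fact -> conclusion.
    # Inverse deduction: given a fact and a conclusion, ask what
    # general rule is missing, then generalize it from the data.

    facts = {("Socrates", "human")}            # known premise
    observations = {("Socrates", "mortal")}    # known conclusion

    def induce_rule(facts, observations):
        """Propose general rules of the form 'every A is B' that fill the gap."""
        rules = set()
        for entity, category in facts:
            for obs_entity, prop in observations:
                if entity == obs_entity:
                    # Generalize the specific case into a candidate rule.
                    rules.add((category, prop))
        return rules

    for premise, conclusion in induce_rule(facts, observations):
        print(f"Induced rule: every {premise} is {conclusion}")
    # -> Induced rule: every human is mortal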

In fact, Domingos shows the audience a functioning robot scientist named “Eve,” engaged in real scientific experiments and credited with achievements like discovering a new malaria drug. Along with their human handlers, these robot scientists are automating many of the scientific processes that used to be the domain of highly educated and trained humans.

Connectionists

By contrast, Domingos says, a group called “connectionists” wants to reverse engineer the brain.

This very ambitious approach involves creating artificial neurons and connecting them in a neural network. Domingos calls this approach “deep learning” and shows how companies like Google are applying it to areas like vision and image processing, machine translation, and experimental projects like Google’s “cat network,” which taught a computer to recognize cat images.

Taking the example of the cat image network, Domingos talks about how artificial neurons fire based on a weighted sum of their inputs, and how replacing hard binary outputs with a “continuous value” makes it possible to train the network with methods like backpropagation. All of this lets the computer learn more about a given set of criteria – in this case, what is and is not a cat – so that it can more correctly label random sets of images.
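
As a rough illustration of those mechanics, the minimal Python sketch below trains a single sigmoid neuron with gradient descent – the simplest case of backpropagation. The toy features and labels are invented stand-ins for image cues; real deep learning systems stack many such neurons into layers.

    import math
    import random

    def sigmoid(z):
        # Squash the weighted sum into a continuous value between 0 and 1.
        return 1.0 / (1.0 + math.exp(-z))

    # Each example: (features, label). The features might stand in for
    # crude image cues (e.g., pointy ears, whiskers); label 1.0 means "cat".
    data = [([1.0, 1.0], 1.0), ([1.0, 0.0], 1.0),
            ([0.0, 1.0], 0.0), ([0.0, 0.0], 0.0)]

    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    lr = 0.5  # learning rate

    for epoch in range(1000):
        for features, label in data:
            # Forward pass: weighted sum of inputs -> continuous activation.
            z = sum(w * x for w, x in zip(weights, features)) + bias
            pred = sigmoid(z)
            # Backward pass: push the squared error back through the sigmoid.
            grad = (pred - label) * pred * (1.0 - pred)
            weights = [w - lr * grad * x for w, x in zip(weights, features)]
            bias -= lr * grad

    for features, label in data:
        z = sum(w * x for w, x in zip(weights, features)) + bias
        print(features, "->", round(sigmoid(z), 2), "(target", label, ")")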

The Evolutionaries

Another radically different approach, says Domingos, involves looking at evolution as a phenomenon.

“Evolution made your brain and everything else,” says Domingos, articulating the idea and philosophy behind the evolutionary mindset. “So it must be a good thing.”

In essence, Domingos says, evolutionaries apply the idea of genomes and DNA from the evolutionary process to data structures. In an evolutionary model, the units that perform best survive and produce offspring, while weaker performers are discarded. An algorithm for an evolutionary learning project mimics those processes in key ways.

Domingos likens it to what farmers do with selective breeding, but notes that because the process is applied to specific technologies, the model is a bit different. Using the example of robotic selection, he goes into detail about a process of “robot evolution,” in which researchers start with random assemblies and 3D print the best-performing models.

“You wind up with surprisingly smart and robust robots,” says Domingos. “You can learn surprisingly powerful things this way.”
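
The minimal Python sketch below illustrates that recipe on an invented toy task: a population of bit-string “genomes” is scored by a fitness function, the fittest half survives to breed (the selective-breeding step), and crossover plus mutation produce the next generation.

    import random

    GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

    def fitness(genome):
        # Toy objective: the more ones in the genome, the fitter it is.
        return sum(genome)

    def crossover(a, b):
        # Splice two parent genomes at a random cut point.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.02):
        # Occasionally flip a bit, mimicking random mutation.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        # Selection: the fitter half survives to breed.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        offspring = [mutate(crossover(random.choice(survivors),
                                      random.choice(survivors)))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring

    best = max(population, key=fitness)
    print("best fitness:", fitness(best), "out of", GENOME_LEN)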

The Bayesians

The Bayesians, Domingos says, deal in uncertainty. Their master algorithm is called probabilistic inference.

Domingos explains that researchers take a hypothesis and apply a type of “a priori” thinking, believing that some outcomes are more likely than others. They then update their hypotheses as they see more data.

“After some iteration of this,” Domingos says, “some hypotheses become more likely than others.”

Domingos talks about strategies for efficient computation that support this process. He mentions Bayesian learning applied to spam filtering, which is a key way to stop spammers from clogging up user inboxes. As another sort of scientific process, these probabilistic models bring concrete results to Machine Learning.
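
As a rough sketch of that updating process, the toy Python example below applies Bayes’ rule to an invented spam-filtering setting. The word likelihoods are made-up numbers chosen purely for illustration.

    # Start with a prior belief that a message is spam, then update it
    # one word at a time with Bayes' rule as evidence accumulates.
    prior_spam = 0.5  # "a priori" belief before seeing any evidence

    # (P(word | spam), P(word | not spam)) -- invented likelihoods.
    likelihoods = {
        "free":    (0.30, 0.02),
        "meeting": (0.01, 0.10),
        "winner":  (0.25, 0.01),
    }

    def update(p_spam, word):
        """One Bayes-rule update of P(spam) given an observed word."""
        p_word_spam, p_word_ham = likelihoods[word]
        numerator = p_word_spam * p_spam
        denominator = numerator + p_word_ham * (1.0 - p_spam)
        return numerator / denominator

    p = prior_spam
    for word in ["free", "winner"]:
        p = update(p, word)
        print(f"after seeing '{word}': P(spam) = {p:.3f}")
    # The hypothesis "this is spam" becomes more likely with each update.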

The Analogizers

The fifth tribe of Machine Learning philosophers, Domingos says, is made up of analogizers, or pioneers in the field of matching particular bits of data to each other. Although it sounds simple and rudimentary, Domingos says it’s really at the heart of a lot of outcomes that are extremely effective for some kinds of Machine Learning. He cites one of the leading proponents of this method, Douglas Hofstadter, in saying that “all intelligence is nothing but analogy.”

The master algorithm here, he says, is the “nearest neighbor” principle. Nearest neighbor methods can give results similar to those of neural network models. Domingos gives the example of two countries with known city locations but an undefined border: by applying the analogy principle, the computer infers a likely border. Domingos calls this “generalizing from similarity” and suggests that it has economic ramifications for technology. One example, he says, is movie recommendation technology that predicts ratings from known data sets, so users get recommendations based on what similar viewers have watched previously.

“It’s a very nice type of similarity-based learning,” Domingos says, adding another example of how real results can boost profits for companies: one third of Amazon’s sales, he says, are based on recommendations.
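
To make the nearest-neighbor idea concrete, here is a minimal Python sketch of “generalizing from similarity” using invented city coordinates: a new point takes the label held by the majority of its nearest labeled neighbors, which implicitly traces a border between the two countries.

    import math
    from collections import Counter

    # (x, y, country) -- toy coordinates, invented for illustration.
    cities = [
        (1.0, 1.0, "A"), (2.0, 1.5, "A"), (1.5, 2.0, "A"),
        (6.0, 5.0, "B"), (7.0, 6.0, "B"), (6.5, 5.5, "B"),
    ]

    def classify(x, y, k=3):
        """Label a point by majority vote among its k nearest cities."""
        by_distance = sorted(cities,
                             key=lambda c: math.hypot(c[0] - x, c[1] - y))
        votes = Counter(country for _, _, country in by_distance[:k])
        return votes.most_common(1)[0][0]

    # Points near each cluster fall on that country's side of the
    # implied border, even though no border was ever drawn explicitly.
    print(classify(2.0, 2.0))  # -> 'A'
    print(classify(6.0, 6.0))  # -> 'B'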

Tribes Come Together

In closing, Domingos talks about how all five of these tribes have something key to offer and how the best Machine Learning technologies combine all five angles. In addition, he says, some new ideas are also needed to further refine Machine Learning into something that would give us the future outcomes we’ve anticipated for a long time, including things like cancer cures, home robots, and worldwide neural networks.

“This is only the beginning,” Domingos says. “There’s much more that remains to be done.”

Indeed, these Machine Learning technologies are rapidly advancing toward future results that will change the ways that we view our interactions with computers and digital technologies. Some of that future depends on the work of these five “tribes” and how they can push the boundaries of what’s possible with Artificial Intelligence.
