Artificial Intelligence vs. Machine Learning

Currently, Artificial Intelligence (AI) and Machine Learning are being used not only as personal assistants for internet activities, but also to answer phones, drive vehicles, provide insights through Predictive and Prescriptive Analytics, and much more. Artificial Intelligence can be broken down into two categories: Strong (also known as General or Broad) AI and Weak (Applied or Narrow) AI. According to a recent DATAVERSITY® interview with Adrian Bowles, the lead analyst at Aragon Research, Strong AI is the goal of achieving intelligence equal to a human’s, and research continues to evolve in that direction.

The debate over Artificial Intelligence vs. Machine Learning is more about the particulars of use cases and implementations than about real differences between the technologies – they are allied technologies that work together, with AI being the larger concept of which Machine Learning is a part. Deep Learning also fits into this debate; it is a more specialized subset of Machine Learning.

Weak AI describes the status of most Artificial Intelligence entities currently in use, said Bowles: they are highly focused on specific tasks and very limited in their responses. (AI entities answering phones and driving cars are examples of Weak AI.) There is a trend in corporations to replace human workers with AI-controlled robots, rationalizing the practice with the argument that humans don’t actually want to do tedious, boring work. That a corporation saves large amounts of money by using Artificial Intelligence, Machine Learning, and robotics rather than people is mentioned less often.

Artificial Intelligence vs. Machine Learning: Lots of Confusion

Artificial Intelligence and Machine Learning are two popular catchphrases that are often used interchangeably. The two are not the same thing, and the assumption that they are can lead to confusing breakdowns in communication. Both terms come up frequently in discussions of Analytics and Big Data, but they do not have the same meaning. Artificial Intelligence (AI) came first, as a concept, with Machine Learning (ML) emerging later as a method for achieving Artificial Intelligence.

The Future for Human Workers

Theoretically (according to some), truck drivers and taxi drivers will be replaced by Weak AIs by the year 2027. Around the same time, robots controlled by AI will take over flipping burgers in restaurants and assembly line work in factories. Bankers, lawyers, and doctors will begin to rely on Artificial Intelligence for consulting purposes more and more. (Rather than being replaced, people working in these career fields will be “augmented” by AI, at least for a while.) Watson, IBM’s AI, can currently be used to access professional information for lawyers, doctors, bankers, and nonprofessionals. Such prognostications may or may not play out in reality, but to be sure, Artificial Intelligence and Machine Learning are changing the way the world works.

Bowles believes Augmented Intelligence could be a very effective tool for the health care industry, acting as a consultant for doctors. For example, Watson can currently research a patient’s case data and the medical literature, and then cross-reference symptoms. Watson then returns a number of possible diagnoses, graded by confidence level, which the doctor can test. The augmentation makes for a more efficient, smarter diagnostic process.
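To make that concrete, here is a minimal sketch of a classifier returning several candidate diagnoses ranked by confidence. The model, symptom encoding, and diagnosis labels are all hypothetical choices for this sketch, not Watson’s actual API:

```python
# Hypothetical sketch: ranking candidate diagnoses by model confidence.
# The classifier, symptom features, and labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: rows are symptom vectors, labels are diagnoses.
X = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]])
y = np.array(["flu", "cold", "allergy", "allergy"])

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new patient's symptom vector.
patient = np.array([[1, 1, 1]])
probs = model.predict_proba(patient)[0]

# Present the top candidates with confidence levels, as a doctor might review them.
ranked = sorted(zip(model.classes_, probs), key=lambda pair: -pair[1])
for diagnosis, confidence in ranked:
    print(f"{diagnosis}: {confidence:.0%}")
```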

Bowles stated:

“When we’re dealing with humans, we are dealing with some level of uncertainty. But, when we are dealing with machines, we’ve become comfortable with deterministic systems, where the same restricted input always produces the same answer.”

He continued:

“Before, if you said to an AI, ‘I’m in Connecticut and I want to go to Boston. What’s the best route?’ you would get the same answer consistently. But now, if the system has access to more context than you have given it, such as weather data, traffic data, and historical data, you may not get an answer that says this is the best route, or the best three routes. Instead of an answer, you may end up with a conversation.

“It’s possible to look at the conversation (between you and the AI) and decide the chances are high of getting to Boston in three hours. But if certain assumptions and adjustments are made, then there are going to be different answers.”

“If you watched IBM’s Watson on Jeopardy, one of the nice things they did was when Watson came up with the question in response to the answer, it would have the top two or three answers it was evaluating, and their confidence levels,” said Bowles.

If a patient goes to a doctor with a set of symptoms, they want the doctor to say, “This is what I think it is, but it might be something else,” said Bowles. The doctor should be able to justify it with evidence. “Those are the sort of probabilistic situations AI is getting really good at.” The AI, coupled with advanced Machine Learning algorithms, becomes a sort of assistant that can help guide the doctor to the answer.

“This is the real promise of AI. In the past, we relied on highly-paid consultants. Now, an AI based system can give you the sort of guidance you are looking for. So, it’s about probabilistic responses versus deterministic responses.”
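The deterministic versus probabilistic contrast can be sketched in a few lines of Python; the routes, confidence values, and weather rule below are invented purely for illustration:

```python
# Illustrative contrast between a deterministic answer and a probabilistic one.
# Routes, probabilities, and the weather rule are invented for this sketch.

def best_route_deterministic(origin: str, destination: str) -> str:
    # Same restricted input, same answer, every time.
    routes = {("Hartford, CT", "Boston, MA"): "I-84 E to I-90 E"}
    return routes[(origin, destination)]

def best_route_probabilistic(origin: str, destination: str, context: dict) -> list:
    # With added context (weather, traffic, history), the system returns
    # candidate routes with confidence estimates instead of one fixed answer.
    candidates = [("I-84 E to I-90 E", 0.70), ("Route 2 E", 0.30)]
    if context.get("weather") == "snow":
        candidates = [("I-84 E to I-90 E", 0.55), ("Route 2 E", 0.45)]
    return candidates

print(best_route_deterministic("Hartford, CT", "Boston, MA"))
print(best_route_probabilistic("Hartford, CT", "Boston, MA", {"weather": "snow"}))
```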

Developing Artificial Intelligence

Our understanding of how the human mind works has continued to progress, altering our understanding of Artificial Intelligence along the way. The focus of AI has shifted to studying the human decision-making process and using complex algorithms to imitate human behavior. Applied (Weak) AI is much more common than General (Strong) AI systems. General AI takes much more memory and much more training. An AI designed to specialize in trading stocks and shares, or to drive an autonomous vehicle, requires significantly less training and simpler algorithms.

Generalized AIs have not yet achieved the broad array of skills humans have and use, said Bowles. But, they are getting closer. AI personal assistants are becoming more and more popular, and continue to evolve. Part of this evolution includes Machine Learning, Neural Networks, and Deep Learning.

Machine Learning

Machine Learning, at its most basic, is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task.
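As a minimal sketch of that idea, using scikit-learn and a toy spam-filter example (the messages and labels below are invented for illustration):

```python
# Minimal sketch of "training" versus hand-coding rules, using scikit-learn.
# The spam-filter data is a toy example chosen for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win money now", "meeting at noon", "free prize claim", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]

# No hand-written rules: the model learns word/label patterns from examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["claim your free money"]))  # -> ['spam']
```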

Two important realizations supported the development of Machine Learning algorithms as a way to train AI entities quickly and efficiently. In 1959, Arthur Samuel realized it might be possible for a computer to “teach itself” to learn. The second realization came about more recently, and is based on using the Internet, and the incredible amount of digital information available for training AI entities. With the availability of Big Data by way of the Internet, engineers recognized it would be much more efficient to design AI entities to imitate human thinking. They could then be plugged into the Internet, allowing them to learn from a broad, extensive information base.

One of the best applications for Machine Learning is visual recognition. Though trained AI entities could identify stop signs with good visibility, minor problems could block recognition. On a foggy day, when the stop sign wasn’t completely visible, or when a bush partially obscured it, computer visual recognition could not compare to human recognition skills. However, time and improved learning algorithms have narrowed that gap. The Mars rover Curiosity is a good example.

Deep Learning and Neural Networks

Deep Learning is the use of artificial neural networks containing more than a single hidden layer to train AI entities. It is a subdivision of Machine Learning and can be unsupervised, partially supervised, or fully supervised. Research on Artificial Neural Networks (ANNs) has been sporadic over the decades and has been inspired by our limited understanding of the human brain and the interconnections between its neurons. However, unlike a living brain, where neurons can connect only to other neurons within a limited physical distance, Artificial Neural Networks are designed with discrete layers and connections.

For example, an image broken into a number of sections is entered into a neural network’s first layer, and is then passed on to a second layer. Neurons in the second layer do their task, and pass appropriate data on to the next layer, and so on, until the final layer and outputs are complete.
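Roughly, that layer-to-layer hand-off can be sketched in plain NumPy. The layer sizes and random weights here are made up for illustration; a trained network would use learned weights:

```python
# Sketch of a forward pass: each layer transforms its input and hands the
# result to the next layer. Weights here are random, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# An "image" flattened into 64 input values, passing through three layers.
x = rng.random(64)
layer_sizes = [64, 32, 16, 10]  # input -> two hidden layers -> 10 outputs

for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    W = rng.normal(size=(n_out, n_in)) * 0.1
    b = np.zeros(n_out)
    x = relu(W @ x + b)  # this layer's output becomes the next layer's input

print(x.shape)  # the final layer's outputs, e.g. scores for 10 classes
```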

Returning to the stop sign problem, Deep Learning can be used to improve recognition. The AI entity requires hundreds of thousands of visual samples, until the network’s weights are tuned so precisely that it gets the correct answer nearly every time. A neural network can now teach itself to recognize a fuzzy, partial image of a stop sign, or pick a specific face out of a large selection of photos.
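That tuning is typically done by gradient descent over many labeled examples. A toy version, run on synthetic data rather than real stop sign images, might look like this:

```python
# Toy gradient-descent loop: repeatedly adjust weights so predictions on
# labeled examples improve. The data is synthetic; real stop-sign training
# would use hundreds of thousands of labeled images.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 8))                  # 500 tiny "images", 8 features each
y = (X.sum(axis=1) > 4).astype(float)     # synthetic "stop sign present" labels

w = np.zeros(8)
b = 0.0
lr = 0.5

for _ in range(1000):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)       # gradient of the logistic loss
    grad_b = (p - y).mean()
    w -= lr * grad_w                      # nudge weights toward better answers
    b -= lr * grad_b

p = 1 / (1 + np.exp(-(X @ w + b)))
print(f"training accuracy: {((p > 0.5) == y).mean():.1%}")
```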

Bowles believes public Cloud AI services will be the predominant model. He said:

“One of the biggest reasons the Cloud is a particularly good fit for AI is experimentation. Because most organizations are still exploring potential uses for technologies such as machine learning, predictive analytics, or natural language processing, they want an environment that lets them experiment, without significant financial investment or risk.”

At present, image recognition used by Deep Learning AI entities can, in some situations, see and recognize images “better” than humans. In medical situations, indicators for cancer in blood and tumors in MRI scans can be identified more readily by AI entities.

 
