Artificial Intelligence, Machine Learning, and Deep Learning Explained


An early landmark in artificial intelligence was a paper written by Alan Turing in 1950, entitled Computing Machinery and Intelligence, in which he asked the question, “Can machines think?” Turing’s article, as well as a 1977 paper entitled History of Artificial Intelligence by P. McCorduck, are recommended reading for those wanting a greater understanding of the beginnings and history of the field. McCorduck addresses how the field has evolved, initial perceptions of it, and the thought processes behind it, said Pragyansmita Nayak, Senior Data Scientist at Hitachi Vantara Federal, during her DATAVERSITY® Enterprise Analytics Online Conference presentation titled AI, Machine Learning, and Deep Learning. Her presentation focused on the differences and similarities among artificial intelligence (AI), robotics, machine learning, and deep learning. As an overview, she said, artificial intelligence, machine learning, and deep learning have what she calls an “is-a-kind-of” relationship: deep learning is a kind of machine learning algorithm, and machine learning is a kind of AI.

Can Machines Think? Factors Accelerating the Hype

AI has become quite popular, Nayak said, and there are several factors adding to the hype. Although AI algorithms have been around for some time, new learning algorithms and theory have emerged as we learn new ways to use AI. The desire to better understand the current data avalanche is also a factor, and she illustrated just how far we’ve come in recent memory by asking participants to remember that at one time, the “data” for an entire project would fit on a floppy disk.

Mobile devices are generating and consuming large amounts of unstructured data, and the proliferation of sensor-driven IoT applications is adding exponentially more. So much data is being generated every day, she said, “And we want that data to be accessible to us.” Nayak said that according to a study by Statista, 46 percent of companies are using AI in some form, and 32 percent have not yet adopted it but plan to in the future. Only 22 percent have not used AI and have no plans to do so.

AI, Deep Learning, and Machine Learning are All Around Us

The effectiveness of these technologies is a key factor in their expanding adoption. The American Society for Reproductive Medicine published recent findings showing that when a computer equipped with AI was given images of hundreds of embryos, it could predict which would lead to a live birth with 85 percent accuracy.

Voice recognition technology is now at a point where the line between human speech and that of a virtual assistant is blurring:

“The best part of the Google annual conference was where the [Google Duplex] virtual assistant on the phone makes a call to book a haircut appointment and the person on the other end of the line has no clue that they’re talking to a virtual assistant.”

Nayak shared another example of deep learning technology that enables a user to record a message, which is analyzed and reproduced by a machine. The machine uses the replicated voice to call a family member who is unable to tell the difference between the voice of the family member and the voice of the bot. “You can imagine how advanced these technologies have become, when your own mother cannot recognize your own voice.”

Building Blocks of AI

The foundation of AI is built on three concepts: automata, context-free grammar, and the Imitation Game—the latter invented by Alan Turing, and discussed in Computing Machinery and Intelligence. In the 1950s, the concept of “automata,” or “self-acting” machines, emerged to describe a machine able to perform on its own based on certain rules. The Imitation Game places a human listener on one side of a partition. On the other side, there is a person and a robot. The human and the robot speak at different times, and if the listener is unable to discern the human voice from the non-human one, it can be said that the robot has passed the “Turing Test.”

Programming “intelligence” requires a store of knowledge, the ability to learn from experience, and improvement over time without any manual intervention. Other fields that have emerged from these basic concepts were beyond the scope of her presentation, such as machine intelligence, augmented intelligence, and cognitive intelligence, and she encouraged participants to explore those further.

AI General Theory

The general theory is that an artificial intelligence has a human-like intelligence, but it is machine intelligence, she said. Attributes include some type of short- and long-term memory mechanics, the ability to handle a sensor system, some motor skills coordination, and in some cases, the machine may be capable of motivation, thinking, and/or consciousness. An AI solution won’t necessarily have all these traits, but it can have one or more in combination, Nayak said.

Robotics vs. AI

Although often considered interchangeable, there are differences between AI and robotics. The most significant difference is that robots typically perform a specific task repeatedly. The task may involve taking readings from the environment or interacting with objects in the robot’s vicinity, but by definition, a robot is a machine designed to do repeated tasks, she said.

When the robot gathers information, it needs to respond to that information in some form. The response may involve the use of sensors, which allow the robot to respond autonomously, or the response can be entirely human-controlled. The response can be dependent on certain rules or on the performance of certain tasks in a repeated manner.

Showing a photo of a water-bottling machine, Nayak said that bottling water is the only task that a particular robot can do—there is no learning or problem-solving process:

“It’s not going to figure out: ‘Okay. I’m getting these water bottles, but I need to learn and do something around it.’ That’s not one of the traits of this particular robotic system.”

Chatbots, which are quite common now, can respond to a particular word with a pre-defined response, but they are limited to those responses and cannot venture outside of those parameters. There is no learning involved in this process—the robot simply produces a particular response to a trigger word.
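As a sketch (not code from the presentation), such a trigger-word chatbot reduces to a dictionary lookup; the trigger words and canned replies below are hypothetical:

```python
# Minimal rule-based chatbot: a fixed mapping from trigger words to
# pre-defined responses, with no learning involved.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "price": "Pricing information is available on our website.",
}
DEFAULT = "Sorry, I don't understand. Can you rephrase?"

def reply(message: str) -> str:
    """Return the first canned response whose trigger appears in the message."""
    text = message.lower()
    for trigger, response in RESPONSES.items():
        if trigger in text:
            return response
    return DEFAULT  # anything outside the parameters gets the fallback
```

Any input without a known trigger falls through to the fallback, which is exactly the limitation Nayak describes: the bot cannot venture outside its pre-defined responses.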

Machine Learning

Machine learning and data mining are often conflated, but data mining is typically used just for pattern recognition and model formulation, and machine learning is more complex. AI and Machine Learning are also often used interchangeably in publications, but there is a difference between the two concepts. Machine learning evolved from the study of pattern recognition and computational learning theory in artificial intelligence and is closely related to computational statistics, a discipline that also focuses on prediction-making through the use of computers, Nayak said.

The machine learning field of study is a subset of artificial intelligence that gives computers the ability to learn without being explicitly programmed. Based on the data it is handling and how it interacts with certain scenarios, the machine picks up characteristics and information from its environment and stores them in some form. In the process of interacting, the machine is constantly learning and attempting to optimize its responses.

When it receives feedback that the response it has offered is incorrect, it stores that information and uses it to inform a different response the next time it encounters that same scenario. Nayak said that performance is better with larger amounts of data because the more data the machine has available, “The more examples it has to figure out the trends and patterns and be able to formulate a model around those patterns and trends.”
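This error-driven feedback loop can be sketched in a few lines; the one-weight model, learning rate, and toy data (following the assumed rule y = 2x) are illustrative assumptions, not from the presentation:

```python
# Sketch of learning from feedback: a one-parameter model that nudges
# its weight whenever its prediction turns out to be wrong.
def train(pairs, lr=0.1, epochs=100):
    """Learn a weight w so that the prediction w * x approximates y."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            error = y - w * x   # feedback: how wrong was the response?
            w += lr * error * x # store the correction for next time
    return w

w = train([(1, 2), (2, 4), (3, 6)])  # underlying pattern: y = 2x
```

With more (consistent) examples, the weight settles on the underlying trend faster, which mirrors Nayak's point that more data gives the machine more examples from which to formulate a model.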


Machine learning starts with the process of training, validation, testing, and cross-validation. To illustrate, Nayak split a hypothetical dataset of 100 records into three subsets of 30, 30, and 40 records. The first set of 30 (the training set) is given to the machine learning algorithm so it can formulate a model. The second set of 30 (the validation set) serves as a way to tune the parameters of the model to improve predictive accuracy. The third group of 40 records (the test set) provides additional information on how the model works on new, unseen data. These three subsets can then be used in combination to cross-validate and further understand how the model is working.
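Nayak's 30/30/40 split might be sketched as follows (an illustrative sketch; the shuffle seed and the records themselves are assumptions):

```python
import random

def split_dataset(records, seed=42):
    """Shuffle 100 records and split them 30/30/40 into train/validation/test."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = records[:]           # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    return shuffled[:30], shuffled[30:60], shuffled[60:]

train_set, val_set, test_set = split_dataset(list(range(100)))
```

Shuffling before splitting matters: if the records were ordered (say, by date), an unshuffled split would train and test on systematically different data.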

Computational Learning Theory

The basis of machine learning is the inductive learning hypothesis, which Nayak summarized: “a model that has worked successfully on one sufficiently large set of training data can be expected to work on other test data.” She stressed that machine learning works best with a general-to-specific ordering and that the model should not be designed to handle every possible scenario. If the model is too specific, it will be unable to perform when confronted with slight variations in the data. If the model is too general, she said, it will have low predictive accuracy.
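One way to see the too-specific failure mode (an illustrative toy, not from the presentation) is to compare a model that memorizes its training pairs with one that fits a general rule from the same pairs:

```python
# Toy training data following an assumed underlying rule: y = 2x + 1.
train_pairs = [(1, 3), (2, 5), (3, 7)]

def too_specific(x):
    """Pure memorization: only answers for inputs seen during training."""
    table = dict(train_pairs)
    return table.get(x)  # returns None for any unseen input

# Fit a line through the first and last training points instead.
(x0, y0), (x1, y1) = train_pairs[0], train_pairs[-1]
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

def general(x):
    """A fitted rule that extrapolates to inputs it never saw."""
    return slope * x + intercept
```

The memorizing model fails on the slight variation x = 4, while the fitted rule handles it; the opposite danger, a model so general it ignores the data, would predict poorly even on the training inputs.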

Types of Learning Algorithms

  • Supervised Learning: Exploration of labeled data, where target values exist. “You’re formulating your model of y based on x.”
  • Unsupervised Learning: Unlabeled data analysis such as clustering, anomaly detection, and latent variable analysis. “You formulate your model based on the data itself without being targeted for a particular thing.”
  • Reinforcement Learning: Reward/penalty feedback from the environment such as agent-based modeling. Based on its behavior, the machine receives either a reward or a penalty. “If it gets a reward, it continues with that behavior, but if it receives the penalty, then it knows that it has to correct itself and it takes the necessary action for it.” A Roomba uses reinforcement learning—getting feedback about where to go next based on the obstructions it encounters in the room.
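A minimal reinforcement learning sketch (illustrative, not from the presentation) is a two-action agent in a hypothetical environment where one action is rewarded and the other penalized:

```python
import random

def run_bandit(steps=500, seed=0):
    """Epsilon-greedy agent on a two-action task; returns its value estimates."""
    rewards = {"left": -1.0, "right": 1.0}  # environment: penalty vs. reward
    values = {"left": 0.0, "right": 0.0}    # the agent's learned estimates
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < 0.1:
            action = rng.choice(["left", "right"])  # explore occasionally
        else:
            action = max(values, key=values.get)    # exploit best-known action
        # Reward feedback reinforces the behavior; penalty feedback corrects it.
        values[action] += 0.1 * (rewards[action] - values[action])
    return values

values = run_bandit()
```

After enough steps the agent's estimate for the rewarded action dominates, so it keeps repeating that behavior, much like the Roomba correcting course after bumping into obstructions.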

Ten Steps to a Machine Learning Solution

  • Define problem statement—gather available data
  • Identify target variables and measure of predictive accuracy
  • Measure data or select from readily available datasets, such as census data, when external data can provide context
  • Clean and join datasets
  • Select the algorithm that will best help analyze the defined problem statement
  • Train and validate the model
  • Test the model
  • Deploy the model for use
  • Determine execution frequency
  • Identify frequency of model update—set the interval at which the model should be updated or executed again

Deep Learning

Deep learning is a subset of machine learning that works with unstructured data—data that is not in table form. Examples are speech-to-text conversion, voice recognition, image classification, object recognition, and sentiment data analysis. Deep learning is able to capture complicated models by using a hierarchy of concepts, starting with simple understanding and building progressively until a picture emerges.

The foundation of deep learning is in the fields of algebra, probability theory, and machine learning. One way to use deep learning is with image recognition. Using an image of a car, Nayak illustrated how each view of the image creates a layer, and as the number of layers increases, the model becomes closer to understanding the image and the category it belongs in becomes clearer. “If you stop at an early point, of course, you have no idea that it’s actually an image of a car, but if you go into more layers, that’s when you get a better understanding.”

The two most common types of deep learning networks are convolutional and recurrent. Convolutional networks are used primarily for object or image recognition, where the data has a grid-like topology, such as images made up of pixels. Recurrent networks are used for sequential data, cyclical computations, or natural language data. Deep learning requires massive data stores and is therefore often cloud-based.
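The grid-oriented operation at the heart of convolutional networks can be sketched in pure Python; the tiny “image” and edge-detecting kernel below are illustrative assumptions:

```python
def convolve2d(image, kernel):
    """Slide a small kernel over a 2D grid (valid mode, no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel applied to an image that is dark on the left
# and bright on the right: the output peaks where the columns change.
image = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[-1, 1], [-1, 1]]
edges = convolve2d(image, kernel)
```

Stacking many such layers, each building on the previous one's output, is what lets a network progress from simple edges to the full picture of, say, a car.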

Nayak shared a quote from McCorduck’s History of Artificial Intelligence that could explain the enduring interest in this field over the past 60+ years:

“Artificial Intelligence comes blessed with one of the richest and most diverting histories in science because it addresses itself to something so profound and pervasive in the human spirit.”



