
Artificial Neural Networks: An Overview

May 24, 2018

Neural Networks and Deep Learning currently provide some of the most reliable image recognition, speech recognition, and Natural Language Processing solutions available. However, it wasn’t always that way.

One of the earliest and simplest approaches to Artificial Intelligence was only marginally successful. The idea was that loading as much information as possible into a powerful computer, together with exhaustive instructions for interpreting that data, should give the computer the ability to “think.” It was a simple concept, and it was certainly worth a try.

This concept was used to develop chess computers, such as IBM’s famous Deep Blue. By programming every possible move into the chess computer, including known strategies, the machine should be able to predict each possible move and outplay its opponent. The approach worked, up to a point: Deep Blue won its first game against world chess champion Garry Kasparov in 1996, though it lost that match, and went on to defeat him in a rematch in 1997.

This kind of computer training relies on rigid, built-in rules written meticulously by engineers (should this happen, respond this way; should that happen, respond that way). This isn’t thinking. It’s more like an uncontrolled, habitual response.

In the last ten years, scientists have moved away from relying on a gigantic encyclopedic memory and have focused on simpler ways of working with data – ways loosely based on the human thinking process. Known as Deep Learning – and built on Neural Networks, a concept originally developed in the 1940s – this approach is now showing great promise.

Artificial Neural Networks

Artificial Neural Networks are computing systems loosely modeled after the Neural Networks of the human brain. Though not as efficient, they perform in roughly similar ways: the brain learns from what it experiences, and so do these systems. Artificial Neural Networks learn tasks by comparing samples, generally without task-specific programming.

For example, while learning image recognition, Neural Networks in training would learn to identify images containing dogs by examining sample images that have been tagged with “dog” or “no dog” labels and then use those results to locate and identify dogs in new images. These Neural Networks start from zero, with no data about dog characteristics, such as tails, ears, and fur. The systems develop their own understanding of relevant characteristics based on the learning material being processed. (The human brain doesn’t start from zero. Room for a little evolution?)
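The dog/no-dog idea can be sketched with a tiny perceptron-style learner. The feature vectors and labels below are invented purely for illustration (a real image system would learn from raw pixels), but the principle is the same: the learner starts from zero and adjusts itself using only the labeled samples.

```python
import random

# Hypothetical training set: each sample is a tiny feature vector
# (the numbers are invented for illustration) labeled 1 ("dog")
# or 0 ("no dog").
samples = [
    ([0.9, 0.8, 0.7], 1),
    ([0.1, 0.2, 0.1], 0),
    ([0.8, 0.9, 0.6], 1),
    ([0.2, 0.1, 0.3], 0),
]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0
lr = 0.1  # learning rate

# The learner starts with no notion of "dog" features and adjusts
# its weights from the labeled examples alone.
for _ in range(20):
    for features, label in samples:
        total = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if total > 0 else 0
        error = label - prediction
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

predictions = [
    1 if sum(w * x for w, x in zip(weights, f)) + bias > 0 else 0
    for f, _ in samples
]
print(predictions)  # [1, 0, 1, 0] — matches the labels
```

After training, the learned weights separate the two classes, even though nothing about "dogs" was ever hard-coded.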

One significant advantage of Neural Networks is their ability to learn in nonlinear ways. This means they have the ability to spot features in an image that are not obvious. For example, when identifying oranges, Neural Networks could spot some in direct sunlight and others in the shade on a tree, or they might spot a bowl of oranges on a shelf in a picture with a different subject. This ability is the result of an activation layer designed to highlight the useful details in the identification process.
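The activation idea can be illustrated with ReLU, one common activation function (the article does not name a specific one, so ReLU is chosen here as an example): it suppresses weak or negative responses so only the useful detections pass on to the next layer.

```python
# ReLU, one common activation function: it zeroes out negative
# responses so only strongly detected features pass onward.
def relu(values):
    return [max(0.0, v) for v in values]

# Hypothetical feature responses from a filtering step: negative
# (irrelevant) responses are suppressed, positive ones are kept.
print(relu([-2.0, 0.5, -0.1, 3.0]))  # [0.0, 0.5, 0.0, 3.0]
```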

An Artificial Neural Network uses a collection of connected nodes called artificial neurons – a simplistic imitation of biological neurons. The connections act like synapses: an artificial neuron transmits a signal along them to the neurons it is connected to. A neuron that receives a signal can process it and then signal the neurons connected to it in turn.
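A single artificial neuron can be sketched in a few lines: weight each incoming signal, sum the results, and pass the total through an activation function. The input values and weights here are illustrative.

```python
import math

def neuron(inputs, weights, bias):
    """A simplified artificial neuron: weight each incoming signal,
    sum them, and pass the total through an activation function
    (a sigmoid here) to produce the outgoing signal."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Signals from three upstream neurons, each scaled by its
# connection strength (the "synapse").
output = neuron([0.5, 0.3, 0.2], [0.4, -0.6, 0.9], bias=0.1)
print(output)
```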

There are many types of Neural Networks, but two of the most popular are Recurrent and Feedforward. A Feedforward Neural Network sends data in one direction only. Data moves from the input nodes, through the hidden nodes (if any exist), to the output nodes. Feedforward Neural Networks use no loops or cycles and are considered the simplest type of Neural Network, though they can still include many hidden layers.
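The one-way flow of a Feedforward network can be sketched as a chain of layers, each feeding the next. The weights below are made up for illustration; a real network would learn them.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron takes a weighted sum of all inputs plus a bias.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical weights: 2 inputs -> 3 hidden neurons -> 1 output.
hidden_w = [[0.2, 0.8], [-0.5, 0.3], [0.7, -0.1]]
hidden_b = [0.0, 0.1, -0.2]
out_w = [[0.6, -0.4, 0.9]]
out_b = [0.05]

x = [1.0, 0.5]
hidden = layer(x, hidden_w, hidden_b)  # data flows forward only:
output = layer(hidden, out_w, out_b)   # input -> hidden -> output
print(output)
```

Note there are no loops: each layer's output is consumed once by the next layer and never fed back.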

A Recurrent Neural Network, on the other hand, uses connections between nodes that form a directed cycle, allowing data to flow backward as well as forward. These cycles give the network “dynamic temporal behavior”: its internal state changes over time as it processes a sequence. Because a Recurrent Neural Network can use this internal memory while processing a sequence of inputs, it is popular for handwriting and speech recognition.
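The internal-memory idea can be sketched with a single recurrent unit: each step mixes the current input with the previous hidden state, so earlier elements of the sequence influence later ones. The weights are illustrative, not learned.

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    """One step of a simple recurrent unit: the new hidden state
    combines the current input with the previous state (the
    network's memory)."""
    return math.tanh(w_x * x + w_h * h + b)

# Process a sequence one element at a time, carrying state forward.
w_x, w_h, b = 0.8, 0.5, 0.0  # illustrative weights
h = 0.0                       # initial memory is empty
for x in [1.0, 0.5, -0.3]:
    h = rnn_step(x, h, w_x, w_h, b)
print(h)  # the final state reflects the whole sequence
```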

Deep Learning Neural Networks

Deep Learning uses Neural Networks to imitate how the human brain works. Thousands of interconnected artificial neurons are arranged in multiple processing layers, where shallower Machine Learning systems commonly use only one or two. The additional processing layers provide higher-level abstractions, offering better classifications and more accurate predictions. Deep Learning is well suited to working with Big Data, voice recognition, and conversational interfaces.

The connections between artificial neurons typically carry a weight that adjusts as the learning process proceeds. The weight increases or decreases the strength of the signal at a connection. An artificial neuron may also have a threshold, so that it sends a signal only if the aggregate incoming signal crosses that threshold.
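Both ideas – weighted connections and a firing threshold – fit in a short sketch. All the numbers are illustrative; the point is that adjusting a weight changes whether the same input crosses the threshold.

```python
# A threshold neuron: it fires only when the aggregate weighted
# signal crosses its threshold (all numbers here are illustrative).
def fires(inputs, weights, threshold):
    aggregate = sum(w * x for w, x in zip(weights, inputs))
    return aggregate >= threshold

weights = [0.5, 0.5]
print(fires([1.0, 1.0], weights, 0.8))  # True: 1.0 >= 0.8
print(fires([1.0, 0.0], weights, 0.8))  # False: 0.5 < 0.8

# Learning adjusts the weights; strengthening the first connection
# lets the same partial input cross the threshold.
weights[0] += 0.4
print(fires([1.0, 0.0], weights, 0.8))  # True: 0.9 >= 0.8
```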

Typically, artificial neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.

Feature extraction is another facet of Deep Learning: the network automatically constructs meaningful “features” from the raw data to use for learning, training, and understanding.

Deep Neural Network Algorithms

Deep Neural Network algorithms are a subdivision of Machine Learning that learn from previous experience in order to predict patterns. As a form of Machine Learning, Deep Learning uses algorithms to process data in a way that imitates the human thinking process. It passes data through multiple layers of algorithms, which is what allows a computer to visually recognize objects and understand human speech.

Data passes through each layer in turn, with the output of one layer providing the input for the next. The very first layer of a network is referred to as the input layer, and the last is called the output layer. All layers between the two are called hidden layers. Each layer normally applies a simple, uniform transformation built around a single activation function.

The concept of Machine Learning covers both robotics (working with the real world) and processing data (the equivalent of thinking for computers). Machine Learning algorithms search for and find predictable and repeatable patterns which can then be used for Data Management, eCommerce, and other new technologies. The full impact of Machine Learning is just starting to be felt and may significantly alter the way products are created – and the way people earn a living.

Robots use Neural Networks to learn and anticipate problems and patterns. The Mars rover, Curiosity, uses a version of Machine Learning to traverse Martian terrain. Similar algorithms are used for driverless cars.

Convolutional Neural Networks

The latest in image recognition relies heavily on Convolutional Neural Networks (CNN). This concept uses a mathematical system known as “convolution,” which allows computers to analyze images using non-literal strategies. This allows CNNs to identify something partially obscured, for example. Generally speaking, in addition to its input and output layers, a convolutional neural network comes with four essential layers of neurons:

  • Convolution
  • Activation
  • Pooling
  • Fully connected

In the primary convolution layer, thousands of neurons behave as filters, scanning every region and pixel of the image in search of patterns. As more images are processed, each neuron learns to seek out specific features, dramatically improving accuracy.

A convolution layer creates a crude mapping system, producing several broken-down variations of the image, each focused on a different filtered feature. Individual neurons respond to characteristics such as color, shape, and other distinctive features.
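The filtering idea can be sketched with a minimal 2D convolution: slide a small filter over an image grid and record how strongly each patch matches the filter's pattern. The 4x4 "image" and the edge-detecting filter below are invented examples.

```python
# A tiny hand-made "image": a dark left half and a bright right half.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [  # a 2x2 filter that responds to a dark-to-bright vertical edge
    [-1, 1],
    [-1, 1],
]

def convolve(img, k):
    """Slide the kernel over every position and sum the element-wise
    products, producing a feature map of match strengths."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

feature_map = convolve(image, kernel)
print(feature_map)  # strongest response (2) where the edge sits
```

The middle column of the feature map lights up because that is exactly where the dark/bright boundary falls; a full CNN learns many such filters instead of having them hand-written.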

Deep Neural Network Tutorials

Deep Learning, a newer branch of Machine Learning research, works toward the broader goal of Artificial Intelligence, using a variety of methods to make sense of image, sound, and text data. To learn more about Deep Learning algorithms, check out these tutorials:


Photo Credit: vs148/Shutterstock.com

About the author

Keith is a freelance researcher and writer. He has traveled extensively and is a military veteran. His background is in physics and business, with an emphasis on Data Science. He gave up his car, preferring to bicycle and use public transport. Keith enjoys yoga, mini adventures, spirituality, and chocolate ice cream.
