
Artificial Neural Networks: An Overview


Neural networks and deep learning currently provide some of the most reliable image recognition, speech recognition, and natural language processing solutions available. However, it wasn’t always that way.

One of the earliest and simplest approaches to artificial intelligence was only marginally successful. The idea was that loading as much information as possible into a powerful computer, along with exhaustive instructions for interpreting that data, should give the computer the ability to “think.” It was a simple concept, and it was certainly worth a try.

This concept was used to develop chess computers such as IBM’s famous Deep Blue. By programming enormous numbers of possible moves and known strategies into the machine, engineers hoped it could evaluate every position well enough to outplay its opponent. The approach did work: Deep Blue won its first game against world chess champion Garry Kasparov in 1996.

This kind of computer training relies on rigid, built-in rules written meticulously by engineers (should this happen, respond this way; should that happen, respond that way). This isn’t thinking. It’s closer to a scripted, reflexive response.

Over the last decade, researchers have moved away from relying on a gigantic encyclopedic memory and have focused on simpler ways of working with data – ways loosely based on how humans think. Known as deep learning and built on neural networks, this approach traces its roots to the 1940s and is now showing great promise.

Artificial Neural Networks

Artificial neural networks are computing systems loosely modeled on the biological neural networks of the human brain. Though far less efficient, they operate in roughly similar ways. The brain learns from what it experiences, and so do these systems. Artificial neural networks learn tasks by comparing labeled samples, generally without task-specific rules being programmed in.

For example, while learning image recognition, a neural network in training learns to identify images containing dogs by examining sample images that have been tagged with “dog” or “no dog” labels, and then uses those results to locate and identify dogs in new images. The network starts from zero, with no data about dog characteristics such as tails, ears, and fur; it develops its own understanding of the relevant characteristics from the learning material it processes. (The human brain doesn’t start from zero. Room for a little evolution?)
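As a rough illustration of learning from labeled samples, the sketch below trains a single perceptron on a tiny, made-up dataset. The “dog”/“no dog” labels and the two numeric features are purely hypothetical stand-ins for real image data, not anything a production system would use:

```python
import numpy as np

# Hypothetical training data: each row is a hand-made feature vector,
# NOT real image pixels.
X = np.array([[0.9, 0.8],   # dog
              [0.8, 0.9],   # dog
              [0.1, 0.2],   # no dog
              [0.2, 0.1]])  # no dog
y = np.array([1, 1, 0, 0])  # 1 = "dog", 0 = "no dog"

w = np.zeros(2)   # weights start at zero -- the network "starts from zero"
b = 0.0           # bias (threshold) term
lr = 0.1          # learning rate

def predict(x):
    # Fire (output 1) only if the weighted sum crosses the threshold.
    return 1 if np.dot(w, x) + b > 0 else 0

# Classic perceptron learning rule: nudge the weights whenever a
# labeled sample is misclassified.
for _ in range(20):
    for xi, target in zip(X, y):
        error = target - predict(xi)
        w += lr * error * xi
        b += lr * error

print(predict(np.array([0.85, 0.75])))  # expected: 1 ("dog")
print(predict(np.array([0.15, 0.25])))  # expected: 0 ("no dog")
```

The network is never told what a “dog” looks like; it adjusts its weights until its predictions match the labels it has seen.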

One significant advantage of neural networks is their ability to learn in nonlinear ways. In practice, this means they can spot features in an image that are not obvious. For example, when identifying oranges, a network could spot some in direct sunlight and others in the shade on a tree, or it might spot a bowl of oranges on a shelf in a picture whose main subject is something else. This ability is the result of an activation layer designed to highlight the details that are useful for identification.

An artificial neural network is a collection of connected nodes called artificial neurons – a simplified imitation of biological neurons. The connections play the role of synapses: one artificial neuron transmits a signal across a connection to another. The neuron that receives the signal can process it and then signal the neurons connected to it in turn.

There are many types of neural networks, but two are the most common: recurrent and feedforward. A feedforward neural network sends data in one direction only: from the input nodes, through any hidden nodes, to the output nodes. Feedforward networks contain no loops or cycles and are considered the simplest type of neural network, although they can still include many hidden layers.
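A minimal sketch of that one-way flow is shown below; the layer sizes and random weights are arbitrary assumptions, not values from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary layer sizes: 4 input nodes, 5 hidden nodes, 3 output nodes.
W1 = rng.normal(size=(4, 5))   # input -> hidden weights
W2 = rng.normal(size=(5, 3))   # hidden -> output weights

def relu(z):
    return np.maximum(0, z)

def feedforward(x):
    # Data moves strictly in one direction: input -> hidden -> output.
    hidden = relu(x @ W1)
    return hidden @ W2

print(feedforward(rng.normal(size=4)))  # one forward pass, no loops or cycles
```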

A recurrent neural network, on the other hand, has connections between nodes that form a directed cycle, allowing information to flow both forward and back. These cycles give the network “dynamic temporal behavior” – its internal state changes over time as it processes a sequence – so it can use that internal memory to handle sequences of inputs. This type of neural network is popular for handwriting and speech recognition.
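The idea of internal memory can be sketched as follows, assuming arbitrary sizes and random weights (this is a bare-bones recurrence, not a particular architecture such as an LSTM):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary sizes: 3 input features, 4 hidden (memory) units.
Wx = rng.normal(size=(3, 4))   # input -> hidden weights
Wh = rng.normal(size=(4, 4))   # hidden -> hidden (recurrent) weights

def rnn(sequence):
    h = np.zeros(4)  # internal memory, carried from step to step
    for x in sequence:
        # The new state depends on the current input AND the previous state,
        # which is what gives the network its dynamic temporal behavior.
        h = np.tanh(x @ Wx + h @ Wh)
    return h

sequence = [rng.normal(size=3) for _ in range(5)]
print(rnn(sequence))  # the final state summarizes the whole input sequence
```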

Deep Learning Neural Networks

Deep learning uses neural networks to imitate how the human brain works. Thousands of interconnected artificial neurons are arranged in multiple processing layers. (Two or three layers are common in other machine learning systems.) The additional processing layers provide higher-level abstractions, offering better classifications and more accurate predictions. Deep learning is well suited to big data, voice recognition, and conversational interfaces.

The connections between artificial neurons typically carry a weight that adjusts as the learning process proceeds; the weight increases or decreases the strength of the signal passing through that connection. A neuron may also have a threshold, so that it sends a signal onward only if the aggregate incoming signal crosses that threshold.
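For a single neuron, that weighting and thresholding can be sketched in a few lines (the inputs, weights, and threshold value below are arbitrary assumptions chosen for illustration):

```python
import numpy as np

inputs = np.array([0.5, 0.3, 0.9])     # signals arriving from other neurons
weights = np.array([0.8, -0.4, 0.6])   # adjusted during learning
threshold = 0.5                        # the neuron fires only above this

aggregate = np.dot(weights, inputs)    # weighted sum of incoming signals
fires = aggregate > threshold          # send a signal onward, or stay silent

print(aggregate, fires)                # 0.82, True
```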

Typically, artificial neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) layer to the last (output) layer, possibly after traversing the layers multiple times.

Feature extraction is another facet of deep learning: the network automatically constructs meaningful “features” from the raw data to use for learning, training, and understanding, rather than relying on hand-engineered ones.

Deep Neural Network Algorithms

Deep neural network algorithms are a subdivision of machine learning that uses deep learning for training. These algorithms learn to predict patterns from previous experience. As a form of machine learning, deep learning uses layered algorithms to process data and imitate the thinking process, allowing computers to recognize objects visually and understand human speech.

Data passes through each layer in turn, with the output of one layer providing the input for the next. The first layer of a network is called the input layer and the last is called the output layer; all layers between the two are called hidden layers. These layers normally use simple, uniform operations, each applying a single kind of activation function.
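A minimal sketch of that layer-by-layer flow is below, assuming arbitrary layer sizes and one activation function (ReLU) shared by all hidden layers:

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(0, z)

# Arbitrary architecture: 8 inputs -> two hidden layers of 16 -> 2 outputs.
layer_sizes = [8, 16, 16, 2]
layers = [rng.normal(size=(m, n)) * 0.1
          for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # The output of each layer becomes the input of the next.
    for W in layers[:-1]:      # hidden layers share one activation function
        x = relu(x @ W)
    return x @ layers[-1]      # output layer

print(forward(rng.normal(size=8)))
```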

The concept of machine learning covers both robotics (working with the real world) and processing data (the equivalent of thinking for computers). Machine learning algorithms search for and find predictable and repeatable patterns which can then be used for Data Management, eCommerce, and other new technologies. The full impact of machine learning is just starting to be felt and may significantly alter the way products are created – and the way people earn a living.

Robots use neural networks to learn and anticipate problems and patterns. The Mars rover, Curiosity, uses a version of machine learning to traverse Martian terrain. Similar algorithms are used for driverless cars.

Convolutional Neural Networks

The latest advances in image recognition rely heavily on convolutional neural networks (CNNs). These networks use a mathematical operation known as “convolution,” which lets computers analyze images in a non-literal way – identifying an object that is partially obscured, for example. Generally speaking, in addition to its input and output layers, a convolutional neural network contains four essential kinds of layers (a minimal sketch of these operations follows the list):

  • Convolution
  • Activation
  • Pooling
  • Fully connected
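The sketch below is an illustration of what these layers compute, not an actual CNN implementation: it applies a single hand-written 3×3 convolution filter, a ReLU activation, and 2×2 max pooling to a made-up grayscale image, then flattens the result as a fully connected layer would receive it. In a real CNN, the filter values are learned rather than written by hand.

```python
import numpy as np

rng = np.random.default_rng(3)

image = rng.random((8, 8))                 # made-up 8x8 grayscale "image"
kernel = np.array([[ 1,  0, -1],           # hand-written 3x3 edge filter;
                   [ 1,  0, -1],           # real CNNs learn these values
                   [ 1,  0, -1]], dtype=float)

def convolve(img, k):
    # Convolution layer: slide the filter over the image (no padding).
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

def relu(z):                               # activation layer
    return np.maximum(0, z)

def max_pool(fm):                          # pooling layer: keep the strongest
    h, w = fm.shape[0] // 2, fm.shape[1] // 2   # response in each 2x2 patch
    return fm[:h*2, :w*2].reshape(h, 2, w, 2).max(axis=(1, 3))

feature_map = max_pool(relu(convolve(image, kernel)))
flattened = feature_map.flatten()          # input to the fully connected layer
print(feature_map.shape, flattened.shape)  # (3, 3) and (9,)
```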

In the first convolution layer, thousands of neurons act as filters, scanning every region and pixel of the image in search of patterns. As more images are processed, each filter learns to seek out specific features, dramatically improving accuracy.

A convolution layer produces a crude mapping of the image: a set of broken-down variations of it, each focused on what a different filter responds to, such as color, edges, or shape.

Deep Neural Network Tutorials

Deep learning, a newer branch of machine learning research, pushes toward the broader goal of artificial intelligence, using a variety of methods to make sense of image, sound, and text data. A wide range of tutorials is available for learning more about deep learning algorithms.
