Researchers have tried for decades to create computers capable of learning. Recently, using the human brain as a model, they have had some success. Complicated algorithms have been developed, allowing computers to learn on a limited scale. Deep Learning (DL) is the name used for the process by which computers "learn" appropriate responses as they interact with their users, or seek patterns in Big Data. This pattern-seeking aspect has the potential to take over some of the pattern-finding work currently done by Data Scientists.
More primitive computers have information and responses installed at the factory, with a secondary layer of pre-programmed responses (a software program) installed later by the owner; no "learning" is involved. Deep Learning, by contrast, learns as it goes, and it also has significant advantages over traditional Machine Learning when processing unstructured data.
Examples of Deep Learning currently being used in real-world applications:
- Voice Recognition is used in UX/UI design, the automotive industry, and security
- Sentiment Analysis is used in CRM (customer relationship management)
- Recommendation Engines are used by E-Commerce, the media, and social networks
- Threat Detection is used by the government, social media, security, and airports
- Facial Recognition is used by security and social media
Many people associate algorithms with mathematical equations, but more precisely, an algorithm is a series of steps, or an established process, for solving a problem. The word comes from the name of the mathematician Mohammed ibn-Musa al-Khwarizmi (the word "algorithm" is a distortion of "al-Khwarizmi"), who worked in Baghdad in the early ninth century and whose treatise on equations gave algebra its name. Al-Khwarizmi was also an astronomer and geographer, and his works included large sections on how to survey plots of land and divide up inheritances.
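That definition, a fixed series of steps for solving a problem, can be made concrete with one of the oldest known algorithms: Euclid's method for finding the greatest common divisor. A minimal illustrative sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat the same simple step until finished."""
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, remainder of a / b)
    return a

print(gcd(48, 18))  # 6
```

The steps are completely mechanical, which is exactly what makes them suitable for a computer.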
The idea of imitating the human brain in computer design started in the 1950s, when biologists were just beginning to develop simple theories of how learning takes place through signals passed between neurons in the brain. The theory assumed the connections between neurons were strengthened when those cells communicated frequently, and predicted that the neural stimulation set off during a new experience readjusts the brain's connections, allowing it to respond a little better after a second, and then a third, similar experience.
How Deep Learning Works
In order for a computer to operate, a computer program must be created using a series of algorithms. The program and its algorithms tell the computer, step by step, exactly what to do. The program directs the computer, allowing it to mechanically follow each step until it reaches the intended objective. When you are programming a computer, you tell it what to do and how to do it… unless, of course, the computer is learning these steps by itself. As Deep Learning algorithms process information, they make guesses at the best response, and later measure erroneous guesses against the established "best answer." The Deep Learning program then attempts to correct the way it makes guesses, in an effort to become a better "thinking machine."
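The guess, measure, correct cycle described above can be sketched in a few lines of Python. This is a hypothetical toy, not any particular DL library: a single-weight model repeatedly adjusts itself toward the established best answers.

```python
# Toy illustration of the guess / measure / correct loop (not a real DL system).
# The "model" is one weight w; the hidden rule behind the best answers is y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, established "best answer")

w = 0.0      # the initial guess at the rule
lr = 0.05    # how strongly each error corrects future guesses

for epoch in range(200):
    for x, y in data:
        guess = w * x          # make a guess at the best response
        error = guess - y      # measure the guess against the best answer
        w -= lr * error * x    # correct the way future guesses are made

print(round(w, 3))  # the learned weight settles near 3.0
```

No one ever tells the program that the rule is "multiply by 3"; the weight converges there purely by correcting its own erroneous guesses.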
Deep Learning is sometimes described as a set of algorithms that imitates the brain. A more accurate description would be that the algorithms organize the computer to "learn in layers." The human brain can be viewed as a gigantic parallel analog computer containing over 10 billion simple processors (neurons). In the "early" model, each individual neuron was replaced with a small processor that imitated the neuron. Each processor was connected to many other processors, imitating the neural network of the brain. This sounds good, but in practice it didn't actually work very well (suggesting humanity has a less than thorough understanding of how the brain actually works).
After experimenting with modifications and simplifications, the model did begin to work. The finalized network was named the "feed-forward back-propagation network." In simplifying the system, the connectivity between neuron/processors was changed to create distinct, individual layers. Each neuron/processor in one layer is connected to every neuron/processor in the next layer, and signals flow in one direction only. Another change simplified the neuron/processor design so that it "fires" only after receiving stimulation from a minimum number of other neurons. This simplified network is much more practical to build and use. Deep Learning involves learning through layers, which allows a computer to develop a hierarchy of complex concepts, based on a foundation of simpler concepts.
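A layered feed-forward network of this kind can be sketched directly. The weights and the firing threshold below are invented for illustration; a real network learns its weights rather than having them chosen by hand.

```python
# Illustrative sketch of the simplified feed-forward network described above:
# distinct layers, every neuron connected to every neuron in the next layer,
# and signals flowing in one direction only.

def fires(stimulation: float, threshold: float = 0.5) -> float:
    """A simplified neuron/processor fires once stimulation reaches a threshold."""
    return 1.0 if stimulation >= threshold else 0.0

def forward(inputs, layers):
    """Pass the signal layer by layer; nothing flows backward."""
    signal = inputs
    for weights in layers:  # one weight matrix per layer
        signal = [fires(sum(w * s for w, s in zip(neuron, signal)))
                  for neuron in weights]  # each row holds one neuron's connections
    return signal

# Two inputs -> two hidden neurons -> one output neuron (hand-picked weights).
layers = [
    [[0.6, 0.6], [0.9, -0.9]],  # hidden layer: each neuron sees both inputs
    [[1.0, 1.0]],               # output layer: one neuron sees both hidden outputs
]
print(forward([1.0, 1.0], layers))  # [1.0]
```

The "back-propagation" half of the name refers to how training errors are sent backward to adjust the weights; the sketch above shows only the forward, signal-passing half.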
Deep Learning algorithms became news in 2012, when researchers at Google fed 10 million random, unlabeled YouTube images into their experimental system. The researchers then instructed the computer to recognize the most basic components in a picture and how those components fit together. The experimental Deep Learning system identified images sharing similar characteristics (such as images of cats or birds). The experiment demonstrated that DL algorithms have great potential, and they can be applied to a number of areas including Pattern Recognition, Image Recognition, and Behavior Recognition.
Infinite Problem Domains (The Real World)
Earlier efforts to develop a learning program had focused on a "Top-Down" approach, which involves writing rules for all possible circumstances. While this might be a bureaucrat's dream, it doesn't actually work: there is simply no way to write rules covering every circumstance. The limitations of this approach are the inflexibility of the rules and the finite number of rules available.
Problem Domains in the "real world" compound this problem, because they present an infinite number of alternative situations. Consider the Problem Domain of chess, which is complex but still has a finite number of chess pieces and a finite number of allowable moves. The moves can be programmed without the need for Deep Learning; because there are a limited number of options, explicit programming is more efficient than DL. In the real world, however, at any specific point in time there can be an unbounded variety of alternative responses.
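The chess example can illustrate the point: in a finite domain, a handful of explicit rules covers every case. A knight's legal moves, for instance, follow from eight fixed offsets, with no learning required (an illustrative sketch):

```python
# A finite problem domain handled by explicit rules: every legal knight move
# follows from eight fixed offsets, so no learning is needed.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(file: int, rank: int):
    """List legal knight destinations on an 8x8 board (0-indexed squares)."""
    return [(file + df, rank + dr)
            for df, dr in KNIGHT_OFFSETS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

print(len(knight_moves(0, 0)))  # a corner knight has only 2 legal moves
print(len(knight_moves(3, 3)))  # a central knight has all 8
```

No comparable rule table exists for recognizing a face in a photograph, which is why open-ended domains push toward learning instead.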
These limitations were overcome by completely reversing the Top-Down model and using a Bottom-Up approach, which allows for learning from experience. The Top-Down model starts with a big-picture format and breaks it down into smaller segments. The Bottom-Up model starts by piecing together simple concepts, which build into more complex systems. Pre-training can be provided by what is called Labeled Data, which is loaded into a DL system and trains it on the appropriate responses. This method of pre-training works well for labeled applications, such as spam filtering, but does not work as well for unlabeled data (sounds, pictures, video feeds).
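The Labeled Data idea can be sketched with the spam-filtering example mentioned above. The word list, messages, and labels below are invented for illustration; the point is only that each training example carries its appropriate response.

```python
# Hypothetical sketch of pre-training from Labeled Data (a tiny spam filter).
SPAM_WORDS = ["winner", "free", "click"]

def features(message: str):
    """Mark which spam-associated words appear in the message."""
    words = message.lower().split()
    return [float(w in words) for w in SPAM_WORDS]

# Labeled Data: each message arrives with its appropriate response (1 = spam).
labeled = [
    ("click here free winner", 1),
    ("lunch at noon tomorrow", 0),
    ("free winner click now", 1),
    ("project notes attached", 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
for _ in range(10):  # repeated training passes over the labeled examples
    for text, label in labeled:
        x = features(text)
        guess = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        err = label - guess  # compare the guess to the labeled answer
        weights = [w + err * xi for w, xi in zip(weights, x)]
        bias += err

x = features("free click prize")
score = sum(w * xi for w, xi in zip(weights, x)) + bias
print("spam" if score > 0 else "not spam")  # spam
```

With unlabeled sounds, pictures, or video feeds there is no `label` column to compare against, which is why this style of pre-training breaks down there.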
Deep Learning in the Marketplace
Facebook, Google, Microsoft, and IBM entered this arena quite quickly, primarily because Deep Learning outperforms earlier Artificial Intelligence training techniques. Typically, early AIs required a human expert to examine and update the system, essentially acting as the computer's learning process. Deep Learning software organizes data into patterns and possible solutions without constant reprogramming. Some DL systems have learned to recognize faces and images nearly as well as human beings.
Researchers have now shifted their focus to combining language and Deep Learning. Using the latest Machine Learning techniques together with Deep Learning networks, they aim to develop more intuitive, smarter virtual assistants and chatbots: software with both the common sense and the language skills needed to carry on a basic conversation. The goal is to communicate with computers verbally, as though with another person. Listed below are some of the newsworthy highlights of Deep Learning.
- In 2013, Yann LeCun was hired by Facebook to head its new Artificial Intelligence (AI) lab and develop DL techniques to help Facebook perform tasks such as automatically tagging uploaded photos with subjects' names.
- In 2014, Google purchased DeepMind Technologies, a British start-up that had created a system able to learn to play Atari video games using nothing more than raw pixels as the data source. In 2015, Google's AlphaGo system met a "grand challenge" of the AI community by learning the ancient board game Go and beating a human expert player.
- In 2015, Blippar shared a new mobile augmented reality application that makes use of DL to recognize objects in real time.
- Lazada, a Southeast Asian e-commerce marketplace selling millions of competitively priced products, uses image recognition technology capable of telling the difference between individual bags, jackets, and shoes.
- Deep Instinct, a cybersecurity company, uses Deep Learning networks to find, predict, and stop severe threats in real time. A program tailored to each customer's security needs acts as a middleman between the DL network and the client's desktop/mobile platforms.
- Deep Genomics uses Deep Learning networks to predict how natural and therapeutic gene variations change cellular processes. The company predicts changes in DNA/RNA transcription and gene splicing, providing a better understanding of disease mutations and genetic therapies.
Deep Learning is an exciting new field that will likely change Data Management and its associated practices, along with so much else, for decades to come.