If you have ever shopped online at eBay, Amazon, or another big retailer, you are familiar with typing in a search description such as “Women’s Flip Flops” or “Men’s Neck Ties” and seeing a slew of search results. Then you select or deselect options and your search results get narrower and narrower. You may speed up the process by typing in an exact description such as “Samsung Galaxy S6 Edge 32GB” to find all the items that match it. This technique relies on Deep Learning, a branch of Machine Learning.
One aspect of Deep Learning helps searches become more intuitive and predictive. Deep Learning can help machines learn how consumers act and know what they want. It can identify items from single images taken at various angles. It makes people and businesses smarter. Many businesses are already using Deep Learning for real-time data analysis, to understand users’ activities and then recommend products they might want to buy. It has hundreds of uses, such as voice search and voice-activated assistants, recommendation engines, image recognition, image tagging and image search, advertising, and pattern recognition.
“We help other companies take Deep Learning, implement it, and put it in their existing products; we offer a framework and consulting services around that,” said Dave Sullivan, Co-founder and Chief Executive Officer of Ersatz Labs, while speaking at the DATAVERSITY® Smart Data 2015 Conference. Mr. Sullivan asserted that Deep Learning is the neural network research that has emerged since 2006. Neural networks are general-purpose, complex pattern finders.
Ersatz Labs offers a Deep Learning platform that is available either as a Cloud service or as a Deep Learning appliance. Deep Learning and neural networks are beyond the reach of most companies because they lack the technical expertise; with Ersatz Labs, companies can build and deploy neural networks without needing to hire neural network experts. Ersatz uses Graphics Processing Units (GPUs), which let it crunch numbers up to forty times faster than CPU-based equivalents. Mr. Sullivan said:
“The role of the Data Scientist in Deep Learning is to formulate the problem as classification or regression. Images, video, audio, text or DNA data, you can treat them as a time series, all of that is where Deep Learning is being applied today.”
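To illustrate what “formulate the problem as classification” can look like, here is a minimal sketch, assuming a toy two-feature dataset with made-up numbers: a single logistic “neuron” (the simplest one-layer neural network) trained by gradient descent to separate two classes.

```python
import math

def sigmoid(z):
    """Squash a real number into (0, 1) so it can be read as a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset (hypothetical numbers): inputs with two features, labels 0 or 1.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.1, 0.3], 0), ([0.8, 0.9], 1)]

# A single "neuron": two weights and a bias, trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y  # gradient of the log-loss with respect to the pre-activation
        w = [w[0] - lr * err * x[0], w[1] - lr * err * x[1]]
        b -= lr * err

# The trained neuron now separates the two classes.
print(sigmoid(w[0] * 0.9 + w[1] * 0.9 + b) > 0.5)  # True: classified as 1
```

Regression works the same way, except the output is a real number instead of a class probability.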
Sullivan remarked, “There is legitimate debate around what, if anything, this has to do with human brains and artificial intelligence (AI).” If you take an input, say a picture of a cat, and a desired output, will the network find the connection between the two to make it work? There is a lot of discussion about brain connections and progress toward AI that is mostly hype.
Don’t these neural networks have to be deep?
“Don’t they have to have multiple layers?” he asked. Sullivan said no: anyone searching for more information about Deep Learning and researching it will find several areas where work is being performed. Improvements have been made to optimization methods, such as Adam and RMSProp, two recent optimization techniques. Network depth is very important, but there are other advances being made in the industry.
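To make the optimization point concrete, here is a minimal sketch of the Adam update rule, applied to a hypothetical one-dimensional objective f(x) = (x - 3)^2 rather than a real network:

```python
import math

# Hypothetical objective: f(x) = (x - 3)^2, with gradient 2 * (x - 3).
def grad(x):
    return 2.0 * (x - 3.0)

# Adam keeps running averages of the gradient (m) and its square (v),
# then takes bias-corrected, adaptively scaled steps.
x, m, v = 0.0, 0.0, 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = grad(x)
    m = beta1 * m + (1 - beta1) * g       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g * g   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)          # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    x -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(x)  # close to the minimum at x = 3
```

In a real network the same update is applied to every weight; the adaptive scaling is what makes these methods easier to tune than plain gradient descent.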
ImageNet is a dataset, created a few years ago, that contains 14 million images. Microsoft COCO is a similar dataset that makes that kind of data accessible in the public domain. Encoder and decoder networks can provide better results on Microsoft COCO. Sullivan said, “You develop a representation of an image and then train the neural network to convert that to a result.” The task is essentially the same as with ImageNet: you take image features and train on them.
The role of GPUs
GPUs have played a very important role in Deep Learning, and the trend in graphics cards is set to continue. Sullivan said, “We are going to see a Moore’s Law effect of performance gains in graphics cards.” Deep Learning is changing quickly. Neural networks need fast matrix multiplication, which is where graphics cards proved useful and why they are used in Deep Learning; convolutional neural networks in particular are trained on this kind of hardware. Neural networks allow machines to learn in a way loosely similar to the brain.
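To see why fast matrix multiplication matters, note that a dense neural-network layer is essentially one matrix multiplication followed by a nonlinearity. Here is a minimal sketch in pure Python with toy, made-up sizes and values; a GPU performs exactly this operation in parallel at enormous scale:

```python
# A dense layer is, at its core, a matrix multiplication: every output unit
# is a weighted sum of every input unit. GPUs accelerate exactly this.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def relu(m):
    """Elementwise nonlinearity; without it, stacked layers collapse into one."""
    return [[max(0.0, x) for x in row] for row in m]

inputs = [[1.0, 2.0]]                   # one example with two features
weights = [[0.5, -1.0, 0.25],           # toy 2 x 3 weight matrix
           [0.5,  1.0, 0.75]]
hidden = relu(matmul(inputs, weights))  # forward pass through one layer
print(hidden)  # [[1.5, 1.0, 1.75]]
```

Real layers multiply matrices with thousands of rows and columns, which is why the forty-fold GPU speedups mentioned above translate directly into faster training.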
A New, Harder Application
There is a new, harder application. In his presentation, Sullivan showed two images: a picture of a brown dog standing next to a large body of water and a picture of a dog with reddish hair looking out over a river. Sullivan said,
“The issue is, can you show a picture and, in natural text, describe what is in that picture by generating the description one word at a time? This is a very difficult problem to solve. An example: a picture of a dog with reddish hair looking out over a river.”
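The “one word at a time” idea can be sketched as a greedy decoding loop. Everything here is hypothetical: the scoring function is a hard-coded stand-in for a trained captioning network, and the feature vector is dummy data, but the control flow matches what such a decoder does.

```python
# Hypothetical sketch of caption generation: a decoder repeatedly picks the
# most likely next word given the image features and the words so far.

def next_word_scores(image_features, words_so_far):
    """Stand-in for a trained decoder network (hard-coded, hypothetical logic)."""
    script = ["a", "dog", "looks", "over", "a", "river", "<end>"]
    position = len(words_so_far)
    word = script[position] if position < len(script) else "<end>"
    return {word: 1.0}  # a real model returns a probability for every vocabulary word

caption, features = [], [0.1, 0.9, 0.3]   # dummy image feature vector
while True:
    scores = next_word_scores(features, caption)
    word = max(scores, key=scores.get)    # greedy decoding: take the best word
    if word == "<end>" or len(caption) > 20:
        break
    caption.append(word)
print(" ".join(caption))  # a dog looks over a river
```

The hard part, of course, is training the network behind `next_word_scores`; the generation loop itself is simple.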
The Big Deal with Deep Learning
Sullivan remarked that models can learn their own features, and those features can be used on other problems. This is the big deal with Deep Learning: letting the data dictate what features you use. The depth of the neural network helps because each layer is an operation performed on the data. Deep Learning develops multiple layers of simplification.
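The idea that “each layer is an operation performed on the data” can be sketched as function composition. These toy layer functions are hypothetical stand-ins for learned transformations; the point is only the structure, where each layer consumes the previous layer’s output:

```python
# Depth as composition: each "layer" transforms the previous layer's output,
# so the network builds progressively more processed representations.

def layer1(x):                  # e.g. detect simple local patterns
    return [abs(v) for v in x]

def layer2(x):                  # e.g. combine patterns into larger ones
    return [x[i] + x[i + 1] for i in range(len(x) - 1)]

def layer3(x):                  # e.g. summarize into a final feature
    return max(x)

raw = [-1.0, 2.0, -3.0, 0.5]    # toy raw input
representation = raw
for layer in (layer1, layer2, layer3):   # the "depth" of the network
    representation = layer(representation)
print(representation)
```

In a trained network these operations are not hand-written; they are learned from the data, which is exactly the feature-learning point above.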
Pipeline for Training and Predicting
During the presentation, he showed images from ImageNet and Microsoft COCO. He said,
“We now look at ImageNet and Microsoft COCO and show an image dataset that it hasn’t seen. This network can take an image and generate some features. Those features will inhabit a space that was learned off of this dataset.”
These are the features that you are going to be working with. How do we come up with dense representations, where the model learns its own word features and then combines those into word vectors? This is what the word2vec tool does: it takes words and outputs a dense vector that you can use however you want.
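The mechanics can be illustrated with a toy version of the skip-gram idea behind word2vec, using a hypothetical twelve-word corpus and a tiny four-dimensional embedding. Real word2vec adds techniques omitted here (negative sampling, subsampling, large windows), so this is only a sketch of the core training signal: predict a word’s neighbors from its vector.

```python
import math
import random

# Toy corpus and sizes (hypothetical); each word gets a dense 4-d vector.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
dim, lr = 4, 0.05
random.seed(0)
vec_in = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in vocab]
vec_out = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in vocab]

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

for _ in range(200):
    for pos, word in enumerate(corpus):
        # Window of one word on each side.
        for ctx in corpus[max(0, pos - 1):pos] + corpus[pos + 1:pos + 2]:
            w, c = idx[word], idx[ctx]
            scores = [sum(vec_in[w][k] * vec_out[o][k] for k in range(dim))
                      for o in range(len(vocab))]
            probs = softmax(scores)
            grad_in = [0.0] * dim
            for o in range(len(vocab)):
                err = probs[o] - (1.0 if o == c else 0.0)
                for k in range(dim):
                    grad_in[k] += err * vec_out[o][k]
                    vec_out[o][k] -= lr * err * vec_in[w][k]
            for k in range(dim):
                vec_in[w][k] -= lr * grad_in[k]

print(len(vec_in[idx["cat"]]))  # 4: each word now has a dense vector
```

After training, `vec_in` holds the dense word vectors; words that appear in similar contexts end up with similar vectors, which is what makes them reusable features.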
What are the odds that benchmark performance increases as quickly as ImageNet did?
According to Sullivan, some experts say that over the next few years convolutional networks will become better at image recognition than humans. Similar techniques are being applied to language translation and, more recently, to chatbots.
What are the odds that this gets better quickly?
Things improve pretty rapidly: once you can set a benchmark and set a goal, progress comes very quickly. Sullivan said, “I think that is one of the next areas we are going to see, and that’s going to be applied to a lot of things. But in particular, we are going to end up with better personal assistants, better Siri-type software, probably starting with Google Now.”
What’s coming up next?
Rapid progress will be made on existing datasets such as Microsoft COCO and Flickr, and new datasets will appear. We will have better, more accessible tools, more people will be experimenting with Deep Learning, and new applications will be created. It is a great career for anyone interested in the field. According to Sullivan, the Machine Learning of the future, whether it is called Deep Learning or something else, is going to draw from the concepts emerging from this discipline.
Here is the video of the Smart Data 2015 Conference Presentation: