
A Beginner’s Guide to Deep Learning


by Angela Guess

Bob O’Donnell, founder of Technalysis Research, recently wrote in Re/Code, “Deep learning refers to the number, or depth, of filtering and classification levels used to recognize an object. While there seems to be debate about how many levels are necessary to justify the phrase ‘deep learning,’ many people seem to suggest 10 or more. (Microsoft’s research work on visual recognition went to 127 levels!) A key point to understanding deep learning is there are two critical but separate steps involved in the process. The first involves doing extensive analysis of enormous data sets and automatically generating ‘rules’ or algorithms that can accurately describe the various characteristics of different objects. The second involves using those rules to identify the objects or situations based on real-time data, a process known as inferencing.”
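The two-phase split O’Donnell describes can be sketched in a few lines of Python. This is purely illustrative and not from the article: the “rule” here is a single learned threshold, a toy stand-in for the many-level filters a real deep network would produce, but the division of labor is the same: an expensive offline training step, then cheap repeated inferencing on new data.

```python
# Illustrative sketch (assumed example, not O'Donnell's): the two phases of
# deep learning -- offline "rule" creation, then real-time inferencing --
# shown with a toy one-feature classifier.

def train(samples):
    """Offline phase: analyze a labeled data set and derive a 'rule'.

    The rule is just a threshold midway between the average feature
    value of each class -- a stand-in for the multi-level filters a
    deep network would learn from enormous data sets.
    """
    cats = [x for x, label in samples if label == "cat"]
    dogs = [x for x, label in samples if label == "dog"]
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def infer(threshold, x):
    """Online phase: apply the learned rule to new, real-time data."""
    return "cat" if x < threshold else "dog"

# Toy training set of (feature value, label) pairs.
data = [(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")]
rule = train(data)       # expensive step, done once, offline
print(infer(rule, 1.5))  # cheap step, repeated per new observation
print(infer(rule, 8.5))
```

In a real system the training step runs for hours or days on GPU clusters, while inferencing must answer in milliseconds, which is why the two phases are often deployed on very different hardware.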

O’Donnell continues, “The ‘rule’ creation efforts necessary to build these classification filters are done offline in large data centers using a variety of different computing architectures. Nvidia has had great success with their Tesla-based (the chip, not the car) GPU-compute initiatives. These leverage the floating-point performance of graphics chips and the company’s GPU Inference Engine (GIE) software platform to help reduce the time necessary to do the data input and analysis tasks of categorizing raw data from months to days to hours in some cases. We’ve also seen some companies talk about the ability of other customizable chip architectures, notably FPGAs (Field Programmable Gate Arrays), to handle some of these tasks, as well. Intel recently purchased Altera to specifically bring FPGAs into their data center family of processors, in an effort to drive the creation of even more powerful servers and ones uniquely suited to performing these (and other) types of analytics workloads.”

Read more here.

Photo credit: Flickr
