
Deep Learning Lives and Dies By Dimensionality Reduction

By James Kobielus  /  May 27, 2015

We don’t live in a simple binary world, or one where everything of interest can be counted on our fingers and toes. Hence, humans invented higher mathematics to bridge the gap between the numbers we grasp organically and the complex numerical relationships too abstract to explain in simple terms.

In other words, we live in an n-dimensional world. By “dimension,” I’m referring to any conceivable yardstick we might use to measure any property, operation, or relationship pertaining to any combination of objects and entities. That is, I’m using “dimension” in the strictly mathematical sense (http://bit.ly/1PuyEZI) of the irreducible set of quantifiably descriptive attributes needed to uniquely specify any datum of interest. For example, one might specify someone’s face with just a few physical dimensions, such as height from crown to chin, width from ear to ear, and depth from bridge of nose to base of eye sockets. Or we might describe that face with a different set of quantitative dimensions, such as gender (1 = male, 0 = female), age (0 to 100+), and beauty (0 to 10).
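
To make this concrete, here is a small illustration of my own (the measurement values are invented, not taken from any real face): the same face encoded as two different three-dimensional feature vectors, one physical and one demographic.

    import numpy as np

    # Illustration only: example values, not real measurements.
    physical_face = np.array([
        21.5,   # height from crown to chin, in cm
        15.2,   # width from ear to ear, in cm
        2.8,    # depth from bridge of nose to base of eye sockets, in cm
    ])

    demographic_face = np.array([
        1,      # gender (1 = male, 0 = female)
        42,     # age (0 to 100+)
        7,      # beauty (0 to 10)
    ])

    # Both vectors describe the same face, but along different dimensions.
    print(physical_face.shape, demographic_face.shape)  # (3,) (3,)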

Dimensions are both a core resource for subsequent analysis and a potential drag. The dimensions with which you measure an entity enable certain types of analyses. If key dimensions of interest are missing from the data, you may find it difficult or impossible to do some of those analyses. And if the dimensions sit at too high or too low a level of abstraction for the analyses you have in mind, you must either transform them to a more meaningful level of abstraction or make bold inferences from them about the specific attributes you want to analyze. For example, you can’t easily do gender-based analyses that incorporate facial data if the data itself lacks a gender dimension. By the same token, you can’t easily infer gender from facial data if all you have is grainy bit-mapped images of faces that aren’t tagged as male or female.

Deep learning lives and dies by high-dimensional data analysis. As I discussed here (http://linkd.in/1BL2qGN), the high-dimensional objects to which deep learning algorithms are typically applied include streaming media, photographic images, aggregated environmental feeds, rich behavioral data, and geospatial intelligence. None of this comes cheap. In their attempts to algorithmically replicate the unfathomable intricacies of the mind, data scientists must leverage the fastest chips, the largest clusters, and the most capacious interconnect bandwidth available.

When dealing with images, media, and other bit-mapped content, data scientists quickly realize that they can’t easily do machine-driven analysis unless they reduce the number of dimensions in the data to a manageable subset. As I stated here (http://linkd.in/1ddI8Vh), every dimension you add to your data expands the potential number of interrelationships among them. High-dimensional modeling is the biggest resource hog in the known universe and, left unchecked, can quickly consume the full storage, processing, memory, and bandwidth capacity of even the most massive big-data cluster.
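
To give a rough sense of the scale, here is a back-of-the-envelope sketch of my own (the image size is an arbitrary example, not a figure from any of the linked pieces): a single modest RGB image already carries millions of raw dimensions, and the pairwise interrelationships among them number in the trillions.

    def pixel_dimensions(width: int, height: int, channels: int = 3) -> int:
        """Each pixel channel in a bit-mapped image is one raw dimension."""
        return width * height * channels

    def pairwise_interactions(n_dims: int) -> int:
        """Distinct pairwise relationships among n dimensions: n*(n-1)/2."""
        return n_dims * (n_dims - 1) // 2

    dims = pixel_dimensions(1024, 768)  # one ordinary RGB image
    print(f"raw dimensions: {dims:,}")                                # 2,359,296
    print(f"pairwise interactions: {pairwise_interactions(dims):,}")  # roughly 2.8 trillion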

Consequently, dimensionality reduction is critical if we’re going to do efficient analyses of any complex data set. As discussed in this recent article (http://ow.ly/MtKPS), dimensionality reduction is a core technique for data scientists in many domains, though it’s especially important in face, voice, video, gesture, and other media-driven pattern-recognition challenges. The article by Priya Rana discusses the mathematics of two principal approaches to dimensionality reduction: principal component analysis (PCA) and singular value decomposition (SVD). It’s a great primer for anybody who needs the basics on the practical magic of how machine-learning tools can quickly boil the staggeringly high dimensionality of media objects down to a manageable subset that uniquely identifies some entity of interest.
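
The two techniques are closely related: PCA can be computed directly from the SVD of a mean-centered data matrix. Here is a minimal numpy sketch of my own (the random “faces” matrix is just a stand-in for real, flattened image data):

    import numpy as np

    def pca_via_svd(X: np.ndarray, k: int) -> np.ndarray:
        """Project an (n_samples x n_features) matrix onto its top-k principal components."""
        X_centered = X - X.mean(axis=0)          # PCA operates on mean-centered data
        # Economy-size SVD: X_centered = U @ diag(S) @ Vt
        U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
        components = Vt[:k]                      # top-k right singular vectors = principal axes
        return X_centered @ components.T         # each row becomes a k-dimensional representation

    rng = np.random.default_rng(0)
    faces = rng.normal(size=(100, 4096))         # stand-in for 100 flattened 64x64 images
    reduced = pca_via_svd(faces, k=50)
    print(reduced.shape)                         # (100, 50)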

Dimensionality reduction is also essential for scalable graph analysis, which is used to crunch through high-dimensional data in behavioral, social, semantic, and other complex data sets. As I discussed here (http://linkd.in/1ifRYJE), another mathematical approach—topological data analysis (TDA)—reduces large, raw multi-dimensional data sets down to compressed representations with fewer dimensions while preserving properties of relevance to subsequent analyses. Not just that, but TDA is adept at the inverse: connecting many low-dimensional data points so that we may infer the higher-dimensional picture in which they are mere details.
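
As an illustration of that inverse direction, here is a small sketch of my own (not the TDA tooling the linked piece describes): the first step in many topological pipelines is simply to connect nearby low-dimensional points into a neighborhood graph, from which loops and other higher-order structure can then be inferred.

    import numpy as np

    def neighborhood_graph(points: np.ndarray, epsilon: float) -> list[tuple[int, int]]:
        """Return edges (i, j) linking points that lie within epsilon of each other."""
        dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        i_idx, j_idx = np.where((dists > 0) & (dists < epsilon))
        return [(int(i), int(j)) for i, j in zip(i_idx, j_idx) if i < j]

    rng = np.random.default_rng(1)
    theta = rng.uniform(0, 2 * np.pi, size=200)
    circle = np.column_stack([np.cos(theta), np.sin(theta)])  # 2-D points sampled from a loop
    edges = neighborhood_graph(circle, epsilon=0.3)
    print(f"{len(edges)} edges stitch the individual points back into a loop")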

Mathematical approaches such as PCA, SVD, and TDA are important tools in wrestling these high-dimensional analytics challenges down to earth. At the very least, progress in the frontier disciplines of deep learning, cognitive computing, and machine learning depends on our ability to build high-performance hardware substrates—such as quantum computing—that can make mincemeat of any unbounded set of dimensions.

About the author

James Kobielus is an industry veteran and serves as IBM’s big data evangelist. He spearheads IBM’s thought leadership activities in Big Data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management, and next best action technologies. He works with IBM’s product management and marketing teams in Big Data. He has spoken at such leading industry events as Hadoop Summit, Strata, and the Forrester Business Process Forum. He has published several business technology books and is a popular source of original commentary on blogs and social media.

Comments

  • rpantony

    What about natural language sounds? Does it make sense to try to represent them mathematically or geometrically? Are phonemes like smoke rings which have a shape? Padlet.com / the_ideas_guy / 3Dphonemes (no spaces). Phonemes come within digital signals processing but “in the air” they’re analog and geometrical. Is our math and processing able to solve this?

  • bobdc

    Great piece, Jim. http://ow.ly/MtKPS points to an article “Understanding Dimensionality Reduction- Principal Component Analysis And Singular Value Decomposition” with a byline of Manu Jeevan. Which article is the Priya Rana one?
