
How DeepHand Uses Deep Learning to Improve Virtual Reality

by Angela Guess

Researchers at Purdue University have created a new system for virtual and augmented reality that gives users full use of their hands. According to an article out of the university, “A new system, DeepHand, uses a ‘convolutional neural network’ that mimics the human brain and is capable of ‘deep learning’ to understand the hand’s nearly endless complexity of joint angles and contortions. ‘We figure out where your hands are and where your fingers are and all the motions of the hands and fingers in real time,’ [Karthik] Ramani said.” Ramani is the director of the C Design Lab at Purdue.
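To make the idea concrete, here is a minimal sketch in PyTorch of a convolutional network that regresses hand joint angles from a single depth frame. The layer sizes, input resolution, and the choice of 21 joints with 3 angles each are illustrative assumptions for the example, not DeepHand's actual architecture.

```python
import torch
import torch.nn as nn

class HandPoseCNN(nn.Module):
    """Toy convolutional network that regresses hand joint angles
    from a one-channel depth image. All layer sizes are illustrative
    assumptions, not DeepHand's published architecture."""
    def __init__(self, num_joint_angles=21 * 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a single 128-d descriptor
        )
        self.regressor = nn.Linear(128, num_joint_angles)

    def forward(self, depth_image):
        x = self.features(depth_image)   # (N, 128, 1, 1)
        x = torch.flatten(x, 1)          # (N, 128)
        return self.regressor(x)         # predicted joint angles

# One synthetic 96x96 depth frame in, 63 joint-angle values out.
model = HandPoseCNN()
frame = torch.randn(1, 1, 96, 96)
angles = model(frame)
print(angles.shape)  # torch.Size([1, 63])
```

A real-time system like the one described would run a forward pass of this kind on every frame coming off the depth camera.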

The article goes on, “A research paper about DeepHand will be presented during CVPR 2016, a computer vision conference in Las Vegas from Sunday (June 26) to July 1 (http://cvpr2016.thecvf.com/). DeepHand uses a depth-sensing camera to capture the user’s hand, and specialized algorithms then interpret hand motions. (A YouTube video is available at https://youtu.be/ScXCqC2SNNQ.) ‘It’s called a spatial user interface because you are interfacing with the computer in space instead of on a touch screen or keyboard,’ Ramani said.”

The article adds, “The research paper was authored by doctoral students Ayan Sinha and Chiho Choi and Ramani. Information about the paper is available on the C Design Lab Web site at https://engineering.purdue.edu/cdesign/wp/deephand-robust-hand-pose-estimation/. The Purdue C Design Lab, with the support of the National Science Foundation, along with Facebook and Oculus, also co-sponsored a conference workshop (http://www.iis.ee.ic.ac.uk/dtang/hands2016/#home). The researchers ‘trained’ DeepHand with a database of 2.5 million hand poses and configurations. The positions of finger joints are assigned specific ‘feature vectors’ that can be quickly retrieved.”
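The fast retrieval the article describes can be pictured as a nearest-neighbor lookup: each stored pose is summarized as a feature vector, and the vector computed from the live frame is matched against the database. The sketch below, using NumPy, is a simplified assumption about how such a lookup might work; the database size and vector dimension are placeholders (DeepHand's actual database held 2.5 million poses).

```python
import numpy as np

# Hypothetical stand-in for a pose database: each row is a feature
# vector encoding finger-joint positions for one stored hand pose.
# (Kept small here; the real DeepHand database had 2.5 million poses.)
rng = np.random.default_rng(0)
pose_db = rng.standard_normal((100_000, 64)).astype(np.float32)

def nearest_pose(query_vec, db):
    """Return the index of the stored pose whose feature vector is
    closest to the query vector in Euclidean distance."""
    dists = np.linalg.norm(db - query_vec, axis=1)
    return int(np.argmin(dists))

# Feature vector extracted from the current camera frame (synthetic here).
query = rng.standard_normal(64).astype(np.float32)
print(nearest_pose(query, pose_db))
```

A production system would replace this brute-force scan with an approximate nearest-neighbor index so lookups stay fast at millions of entries.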

Read more here.

Photo credit: C Design Lab
