
Using Artificial Human Brains to Understand Real Ones

November 4, 2013

Lucas Laursen of IEEE Spectrum reports, “As [Henry] Markram has been telling everyone since he got the €1 billion nod to lead the Human Brain Project, the way researchers study the brain needs to change. His approach—and it’s not the only one—stands on an emerging type of computing that he and others claim will let machines learn more like humans do. They could then offer generalizations from what’s known about a handful of neural pathways and find shortcuts to understanding the rest of the brain, he argues. The concept will rely as much on predictions of neural behavior as on experimental observations.”

Laursen continues, “Yet such predictions will have to come from people until they can better train their computers to do it. So-called cognitive computing, which relies on recognizing elements of a familiar thing in new settings, is difficult to achieve through the kind of raw calculation to which most supercomputers are suited. It’s not like winning at chess or even Jeopardy!, two tasks IBM machines have mastered. But IBM researchers are already turning Watson, the supercomputer that beat Jeopardy!, into a recipe-remixing machine, and they are sure to program it for other tasks that require massive data sifting and some level of semantic analysis.”

He goes on, “That’s the direction Markram expects computing to go for biologists, who need their computers to think more like people do. Human intelligence seems to rely on the art of the analogy, as Douglas Hofstadter writes in his new book on artificial intelligence, which James Somers explores at length in The Atlantic this month. That’s why CAPTCHAS have been so hard to defeat: the letters are easy for a computer to learn but difficult to recognize out of context. Yet we can quickly hypothesize what’s important enough about a letter to recognize it when it is distorted.”

Read more here.

Image: Courtesy Flickr/ joestump
