
DarwinAI Announces Explainability Platform for Neural Network Performance

November 15, 2018

According to a new press release, “DarwinAI, a Waterloo, Canada startup creating next-generation technologies for Artificial Intelligence development, today announced the next milestone in its product roadmap with the release of its explainability toolkit for network performance diagnostics. Based on the company’s Generative Synthesis technology, this first iteration of the tool provides granular insights into neural network performance. Specifically, the platform provides a detailed breakdown of how a model performs for specific tasks at the layer or neuron level. This deep understanding of the network’s components and their involvement in specific tasks enables a developer to fine-tune the model designs for efficiency and accuracy. The introduction of explainability comes two months after the company announced its emergence from stealth, its Generative Synthesis platform, and $3 million in seed funding, co-led by Obvious Ventures and iNovia Capital, as well as angels from the Creative Destruction Lab accelerator in Toronto.”

The release continues, “Explainability is key in addressing the ‘black box’ problem at the heart of deep learning. Given the tremendous complexity of neural networks (hundreds of layers with millions of parameters), it is virtually impossible for a human to understand how such a network makes a decision and, more generally, what makes a good network. Generative Synthesis, DarwinAI’s core technology and the product of years of academic research, uses AI itself to understand the capacity of each neuron and its impact on network performance. This data yields valuable insights into how developers can improve the neural network for specific tasks.”
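DarwinAI has not published the internals of Generative Synthesis, but the general idea of scoring individual neurons by their impact on task performance can be illustrated with a simple ablation probe. The sketch below uses a synthetic two-layer network and synthetic data (all names and numbers are illustrative, not DarwinAI's method): each hidden neuron is zeroed out in turn, and its importance is taken as the resulting drop in accuracy.

```python
import numpy as np

# Toy illustration (not DarwinAI's actual technique): estimate each
# hidden neuron's importance by ablating it and measuring the drop in
# accuracy. Network weights and data are synthetic.
rng = np.random.default_rng(0)

# Tiny fixed network: 4 inputs -> 8 hidden (ReLU) -> 2 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))
X = rng.normal(size=(200, 4))

def forward(x, mask):
    h = np.maximum(x @ W1, 0.0) * mask  # mask zeroes ablated neurons
    return h @ W2

# Labels come from the unablated network itself, so the full model is
# 100% accurate by construction and every ablation drop is >= 0.
y = forward(X, np.ones(8)).argmax(axis=1)

def accuracy(mask):
    return (forward(X, mask).argmax(axis=1) == y).mean()

base = accuracy(np.ones(8))
importance = []
for i in range(8):
    mask = np.ones(8)
    mask[i] = 0.0  # ablate neuron i
    importance.append(base - accuracy(mask))

for i, drop in enumerate(importance):
    print(f"neuron {i}: accuracy drop {drop:.3f}")
```

A developer could use such per-neuron scores to prune low-impact neurons or focus fine-tuning on the components that matter most for a given task, which is the kind of insight the release describes.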

Read more at Globe Newswire.

