
Indico Launches Enso Open Source Project for Machine Learning

June 27, 2018

According to a new press release, “Indico, a provider of Enterprise AI solutions for unstructured content, today announced the launch of a new open source project focused on simplifying the use of transfer learning with natural language. Enso is an open-source library designed to streamline the benchmarking of embedding and transfer learning methods for a wide variety of natural language processing tasks. It provides machine learning engineers and software developers with a standard interface and useful tools for the fair comparison of varied feature representations and target task models. ‘The Open Source community is the driving force for innovation in machine learning, and Indico has benefitted from it and embraces the open source effort fully,’ said Slater Victoroff, co-founder and CTO at Indico. ‘Enso is a way for us to give back to the community and continue to promote the benefits of transfer learning to accelerate its adoption and reduce the barriers to machine learning’.”

The release continues, “Transfer learning is the practice of applying knowledge gained on one machine learning task to aid the resolution of subsequent tasks. It has seen historic success in the field of computer vision and image classification. Tasks that would typically require hundreds of thousands of images can be tackled with just dozens of training examples per class thanks to the use of these pre-trained models. The field of natural language processing, however, has seen fewer gains from transfer learning. The Enso project is focused on addressing a core set of interrelated problems that underlie these limitations: (1) A lack of academic reproducibility. Due to the use of custom datasets and variations in coding practices, it is difficult to determine whether a new methodology is truly effective. (2) Weak baseline benchmarks that limit general applicability. It is important to evaluate new methods on a broad range of datasets to determine whether or not a new approach represents a substantial improvement over alternatives. (3) ‘Overfitting’ to specific datasets. Many of the models used for benchmarking are tied to specific datasets making it too difficult to take a model trained for one domain and train it on another.”
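The fair-comparison idea the release describes can be illustrated with a minimal sketch: hold the downstream classifier fixed, swap in different feature representations of the same texts, and score each with the same cross-validation protocol. This is an illustrative example using scikit-learn, not Enso's actual API; the toy texts, labels, and featurizer choices are assumptions for demonstration only.

```python
# Illustrative sketch of benchmarking feature representations
# (not Enso's actual API): train the same downstream model on
# each representation and compare cross-validated scores.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Tiny toy dataset standing in for a real target task.
texts = ["great product", "terrible service", "loved it",
         "awful experience", "would buy again", "never again",
         "fantastic quality", "broke instantly"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Two candidate featurizers standing in for pre-trained
# embedding or transfer-learning methods under comparison.
representations = {
    "counts": CountVectorizer(),
    "tfidf": TfidfVectorizer(),
}

results = {}
for name, featurizer in representations.items():
    # Same target-task model for every representation,
    # so differences in score reflect the features alone.
    pipeline = make_pipeline(featurizer, LogisticRegression())
    scores = cross_val_score(pipeline, texts, labels, cv=2)
    results[name] = scores.mean()

for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean accuracy {score:.2f}")
```

Keeping the evaluation protocol identical across representations is the point: it addresses the reproducibility and weak-baseline problems the release lists, since every method is scored on the same data splits with the same downstream model.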

Read more at Globe Newswire.

Photo credit: Indico
