
Building an Artificial Intelligence Playbook with MindMeld’s Tim Tuttle


Sports coaches and sales teams know the value of a good playbook. Now anyone in the data profession who wants to get involved with Artificial Intelligence (AI) can access a playbook to help them develop useful applications for their organizations.

MindMeld, which offers a platform to help companies create intelligent conversational voice and chat assistants for any app or device, recently released The Conversational AI Playbook. It’s intended as a free, practical tool to help walk developers through the art of building good conversational applications using supervised Machine Learning based on high-quality, representative training data, and state-of-the-art algorithms.

DATAVERSITY® recently had a chance to talk with MindMeld CEO and Founder Tim Tuttle about why the data industry needs an AI playbook, and what readers can expect from it.

DATAVERSITY (DV): Why is the time right to put out a conversational Artificial Intelligence playbook?

Tim Tuttle: The world really woke up in 2016 and became excited about the idea of talking to our devices and systems. We’ve gone from Siri to Amazon Echo and Google Home, with these devices at our command, helping us handle many tasks. I think the trend will continue – it’s inevitable considering technology improvements.

There’s been a lot of academic literature on conversational Artificial Intelligence, but that’s a step removed from the realities of product implementation. There really haven’t been any resources out there to tell you how to build your own version of Siri. And Google’s, Amazon’s, and Microsoft’s proprietary technology is not broadly known outside those companies. This playbook is a first attempt to make more knowledge available so that voice applications that rely on data and Machine Learning can get much better over the coming years.

DV: What are some of the issues developers face in creating useful conversational applications?

Tim Tuttle: People are realizing that building good conversational interfaces is challenging. Many of the experiments launched last year were either so narrowly focused that they were trivial, or so ambitious that their accuracy was poor.

Today a lot of developers and companies don’t fully understand how AI works, and as a result they believe it can do far more than is actually possible. We emphasize that today’s Artificial Intelligence systems are all powered by training data. That’s the foundation of everything intelligent your conversational bot will be able to do. If you are unable to get high-quality data that reflects your use case, it will be very challenging to build a bot that users will love. So, as a prerequisite to building a bot, you need to think through: is it possible to create thousands or even millions of pieces of training data to power the intelligence? For people who know Machine Learning that’s obvious, but we run into lots of customers every day who don’t realize it.
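To make that point concrete, here is a minimal sketch, using scikit-learn rather than MindMeld’s own tooling and with made-up example utterances, of how labeled training data drives an intent classifier; everything the model “knows” comes from those examples.

# Illustrative only: a tiny intent classifier trained on labeled utterances.
# A production bot needs thousands or millions of examples that reflect how
# real users actually phrase requests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("I'd like a large latte with oat milk", "order_drink"),
    ("can I get two espressos to go", "order_drink"),
    ("what time do you close today", "store_hours"),
    ("are you open on Sunday", "store_hours"),
    ("cancel my last order", "cancel_order"),
    ("never mind, scrap that order", "cancel_order"),
]
texts, intents = zip(*training_utterances)

# Bag-of-words features plus a linear classifier: the "intelligence" here is
# entirely a function of the labeled data it was trained on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

print(model.predict(["can I get a latte"]))  # likely ['order_drink'], given the overlapping vocabulary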

The second challenge is selecting the right use case – it’s as important as using the right technology. There are a lot of use cases where the utility of providing a voice interface is marginal at best, so you’ve wasted your time on something users don’t care about.

DV: How do you know a good use case from a bad one?

Tim Tuttle: We suggest that you pick a use case that resembles a familiar real-world interaction – people would probably find it intuitive, for example, to order coffee through a mobile conversational bot app. It would be just like talking to a barista. Without that familiarity and context, if you just tell people to use a bot and tap on the mic button, they wind up at a loss for words. And you don’t have a lot of opportunity to educate consumers, so a lot of times these bots fail right out of the gate. For instance, there are a lot of conversational eCommerce shopping bots on Facebook now, but nobody talks their way through a product catalogue. So when you tell users they can buy products using a bot, they aren’t sure what question to ask first. That might change over time, but you’ve got to ease people into it.

We emphasize that if you’re going to use voice or conversation, it’s got to be because it’s more convenient or will save users time over using a basic native app. With poor technology, a lot of the time using voice just gets in the way. But it can be a very convenient choice: if users are looking for something very specific, or want to do a very specific task and know how to articulate it, then a voice query can be faster than working through a tree of menus. That’s an obvious win. A voice app is also an obvious win if people can use it while they’re busy doing other things, like driving.

DV: Tell us a little about the makeup of the playbook.

Tim Tuttle: We’ve been on the front lines of this for several years and have launched some of the most advanced conversational systems that exist today, so we’ve learned a lot about what it takes to build a good conversational app.

The playbook essentially captures all of our learnings here. It’s a crash course on what you need to know to build a useful conversational assistant. It’s designed to be very hands-on, for software engineers and data scientists. It’s not about futuristic research; it covers the practical constraints you have to deal with to launch production apps.

You’ll find takeaways including step-by-step procedures to walk you through all the components of building a good assistant. Think of it as a recipe to follow because until now the cookbook hasn’t been there.

DV: Can you give us some examples of recipe “ingredients”?

Tim Tuttle: Yes. We walk you through how to get training data resources, for one thing.

What we say first is that for the app you envision there must be a practical way for you to generate that training data, either by looking at in-product traffic as your users use your current app, or by using crowdsourcing or polling to generate it artificially – that’s very possible for many use cases.

For example, if you are creating a conversational assistant to help you order coffee, crowdsourcing is probably a fairly easy way to get a few hundred or a few thousand people to voice their orders, and using mechanisms that exist today you can generate training data from that set. But creating a voice bot that provides financial advice to high-net-worth individuals is an entirely different thing. There probably aren’t many experts who know the right questions and answers, and it’s probably hard to get the ones who do to share that information, even if you pay them. A lot of financial services companies want to create voice bots to automate things like investment banking, but you need the resources to get that data to learn from. Even if you have the money, you are going to have a hard time getting the six experts in the world who know everything about corporate tax compliance and the other issues involved into a room to download their knowledge. In Artificial Intelligence, so much depends on the domain you’re working in.
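As an illustration, crowdsourced training data for a coffee-ordering bot might end up looking something like the following once intents and entities have been annotated; the field names and labels here are hypothetical, not any specific product’s annotation format.

# A hedged sketch of annotated, crowdsourced training examples for a
# coffee-ordering bot. The schema is illustrative only.
crowdsourced_examples = [
    {
        "text": "get me a medium iced mocha with two shots",
        "intent": "order_drink",
        "entities": [
            {"span": "medium", "type": "size"},
            {"span": "iced mocha", "type": "drink"},
            {"span": "two shots", "type": "modifier"},
        ],
    },
    {
        "text": "one small drip coffee, no room for cream",
        "intent": "order_drink",
        "entities": [
            {"span": "small", "type": "size"},
            {"span": "drip coffee", "type": "drink"},
            {"span": "no room for cream", "type": "modifier"},
        ],
    },
]

# Crowd workers supply varied phrasings of the same task; annotators mark the
# intents and entities, and the resulting set becomes the supervised training
# data the models learn from.
for example in crowdsourced_examples:
    print(example["intent"], "->", [e["span"] for e in example["entities"]])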

We also provide advice about tools you can use to manage training data sets and how to leverage machine learning models.
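One basic practice along those lines, sketched below with scikit-learn and placeholder data rather than any particular product’s tooling, is holding out part of the annotated training set so you can measure whether the model actually improves as more data is collected.

# Illustrative only: split an annotated corpus into training and held-out
# evaluation sets, then report accuracy on the held-out slice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder utterances and intent labels standing in for a real corpus
# collected from app traffic or crowdsourcing.
texts = [
    "large latte please", "two espressos to go", "one medium cold brew",
    "can I get a cappuccino", "small black coffee", "iced mocha with soy",
    "what time do you open", "are you open on holidays", "when do you close",
    "do you open early on weekdays",
]
labels = ["order_drink"] * 6 + ["store_hours"] * 4

train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=0
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_x, train_y)
print("held-out accuracy:", accuracy_score(test_y, model.predict(test_x)))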

DV: Can you tell us about how some MindMeld customers are using your technology today?

Tim Tuttle: A lot of retail stores are investing heavily in mobile apps that let you place orders as you walk down the street so your food or drinks or even clothes will be ready for pickup when you get there. In December, when Starbucks announced its barista voice-interface mobile app for ordering anything off the menu, that got a lot of attention in the retail and national restaurant category. We can’t say whether we are working with Starbucks, but we do know a lot about that kind of application.

DV: What do you envision the world of AI-enabled voice apps to look like a few years down the road?

Tim Tuttle: Everybody is excited about the potential of these conversational interfaces, but unfortunately they have been slow to arrive and a lot of companies are launching really bad ones. The way I recommend viewing this industry is like where web apps were in the early 1990s – web pages were terrible when they were first put up, and it took three to four years for standards, best practices, and new software tools to become available to give companies the scaffolding to create good apps. We are in the very early days of these conversational apps, but it’s inevitable that 10 years from now it will be routine to walk up to an app and have the option to speak if you want to. A lot of work has to happen to get there, though, including educating a new generation of developers on this new type of interaction modality. That’s just starting to happen.

 

Photo Credit: agsandrew/Shutterstock.com
