
The Three Pillars of Trusted AI

By Jett Oristaglio

As AI becomes ubiquitous across dozens of industries, the initial hype of new technology is beginning to be replaced by the challenge of building trustworthy AI systems. We’ve all heard the headlines: Amazon’s AI hiring scandal, IBM Watson’s $62 million failure in oncology, the now-infamous COMPAS recidivism model that discriminated against Black defendants. AI failures are becoming commonplace among large organizations, and they draw justified scrutiny and ire from the public, media, and regulators alike.

AI can radically transform an organization, but just as in human decision making, there are many ways an AI system can go wrong – inaccuracy, overconfidence, bias, privacy violations, and dozens of other risks can be encoded into an organization’s AI. And because a single point of failure can have massive repercussions when it comes to automated decision making, one-off solutions and tools don’t solve the broader problem of AI trust. For an organization to trust its AI models, it has to approach the problem of trust from a holistic perspective – understanding the high-level picture of how AI can fail at every stage of development, from data preprocessing to model building and deployment.

Ultimately, there are three main pillars of trusted AI that are necessary to successfully implement trustworthy enterprise AI:

1. Performance

2. Operations

3. Ethics

Performance relates to the question: “How well can my model use data to make predictions?” Model accuracy is the most commonly discussed dimension of performance, but trusting your AI’s predictions requires much more than accuracy. Performance also includes criteria like Data Quality, your model’s robustness to dirty or missing data, and the speed with which it can make predictions.

Operations relates to the question: “How reliable is the system that my model is deployed on?” This pillar ensures that you can trust your model in the real world – where data is messy and dynamic, regulations abound, and security is always a concern. Many models that perform perfectly in a sandbox end up breaking once they’re deployed and tested with real data.

Ethics relates to the question: “Does my model align with the ethics and values of my organization?” Or, put another way: “What is the impact of my model on the world?” This is the most important requirement for trusted AI, and also the most overlooked. Ethics includes criteria like bias and fairness, the value generated by the model, and the explainability of its decisions. Ultimately, it doesn’t matter whether your model is accurate and reliable if its impact on the world and on your organization is negative.

It’s important to understand each of these pillars in detail, which I will do over a three-part series. First, let’s dive deeper into the first pillar of AI success – performance – and what it takes to get it right.

Performance: The First Pillar of Trusted AI

Performance is important throughout the entire AI lifecycle, but it is first evaluated during the data cleaning and model building phase. This is when the model is tested in a sandbox, and the goal is to build a model with the highest possible performance before deploying it out into the real world.

The main criteria tested under performance are:

1. Data Quality

2. Accuracy

3. Speed

Data Quality

Data Quality is the foundation of all trustworthy AI: As the old saying goes, “Garbage in, garbage out.” Even the most advanced machine learning model can’t make up for low-quality data.

The first way to ensure Data Quality is to track the data’s provenance. Many AI projects require combining data from multiple sources: in-house data warehouses, third-party data, and even open datasets such as census records or weather reports. It’s critical to understand the different data sources feeding the AI system – this can help identify problems like incompatible data and poor data collection methodologies before they cause real-world failures.
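As a minimal sketch of what provenance tracking can look like in practice, the snippet below tags each row with its source as datasets are combined. The file names and source labels are hypothetical, not part of any particular system:

import pandas as pd

# Hypothetical sources: an in-house export, purchased data, and a public dataset.
sources = {
    "warehouse": "customers.parquet",
    "vendor": "third_party_scores.csv",
    "census": "census_income.csv",
}

frames = []
for name, path in sources.items():
    df = pd.read_parquet(path) if path.endswith(".parquet") else pd.read_csv(path)
    df["_source"] = name  # tag every row with where it came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# Suspicious records can later be traced back to their origin:
print(combined.groupby("_source").size())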

The second way to ensure Data Quality involves performing data cleaning as part of the AI pipeline. You can draw meaningful insights about your data by computing summary statistics on each feature and calculating each feature’s correlation with both the target and the other features. From there, you can dramatically improve your model’s performance with techniques like imputing missing values, dropping duplicate rows, and removing “leaky” features – features that encode information unavailable at prediction time and that lead to overconfident models.
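The sketch below illustrates a few of these checks with pandas; the file name and the column names ("target", "settled_amount") are assumptions made purely for illustration:

import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical training file

# Summary statistics and missingness per feature surface obvious problems.
print(df.describe(include="all"))
print(df.isna().mean().sort_values(ascending=False))

# A near-perfect correlation with the target often flags a leaky feature.
print(df.corr(numeric_only=True)["target"].sort_values())

df = df.drop_duplicates()                     # remove duplicate rows
df = df.drop(columns=["settled_amount"])      # drop a feature unknown at prediction time
df = df.fillna(df.median(numeric_only=True))  # simple median imputation for numeric gaps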

Trusted AI has to be robust to dirty data. This also means that data cleaning can’t be considered a singular process performed only once before modeling begins. Instead, these data cleaning techniques used during model training have to be built into the same repeatable pipeline that is used for the model’s predictions – each time the model receives new data, it has to perform all necessary data cleaning again. This ensures that the model will not break as soon as it’s deployed into the real world.
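One common way to make cleaning repeatable is a scikit-learn Pipeline, where the preprocessing steps are fit on the training data and then re-applied to every new batch automatically. The toy dataset, feature names, and model choice below are illustrative assumptions, not a prescription:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Tiny hypothetical training set with missing values in both column types.
X_train = pd.DataFrame({
    "age": [34, None, 52, 23],
    "income": [40_000, 55_000, None, 31_000],
    "region": ["north", "south", None, "north"],
})
y_train = [0, 1, 0, 1]

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age", "income"]),
    ("cat", Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),
    ]), ["region"]),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("classify", GradientBoostingClassifier()),
])

# Cleaning is fit on training data only, then re-applied to every new batch.
model.fit(X_train, y_train)
print(model.predict(X_train.head(2)))

Because imputation and encoding live inside the pipeline, a deployed model performs the same cleaning on every prediction request, which is exactly the robustness to dirty data described above.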

Accuracy

Accuracy is the most commonly analyzed component of performance, but it encompasses a wide range of analyses. Accuracy measures aggregate the model’s predictions to generate insight into its error rate, and there are many different ways to compute them.

Solid Data Science foundations like out-of-sample testing and cross-validation have to be table stakes when evaluating your model. You also have to ensure that you’re using an error metric well-suited to the problem at hand – Log Loss and RMSE are common defaults for binary classification and regression problems, respectively, but some problems call for less common metrics, so select your accuracy metric carefully.
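As a brief illustration, the snippet below scores a model with five-fold cross-validation using Log Loss as the metric; the synthetic dataset and model are purely for demonstration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Five-fold cross-validation; scikit-learn reports Log Loss negated.
scores = cross_val_score(LogisticRegression(max_iter=1_000), X, y,
                         cv=5, scoring="neg_log_loss")
print("mean log loss:", -scores.mean())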

Additionally, you can dig deeper into your model’s accuracy with insights like the Confusion Matrix, which lets you evaluate what kinds of errors your model is most likely to make, such as “false positives” vs. “false negatives.” Lift charts and ROC curves can also help fill in other pieces of the accuracy puzzle. Evaluating accuracy is critical to trustworthy AI, and there are many different techniques that can be used to generate deeper understanding.
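A minimal sketch of this kind of error-type analysis with scikit-learn, again on synthetic data:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Rows are actual classes, columns are predicted classes; the off-diagonal
# cells count the false negatives and false positives.
print(confusion_matrix(y_test, clf.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))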

Speed

Imagine being told that your self-driving car can only make decisions once every three seconds. It seems obvious that you shouldn’t trust that AI system with your life. Every model, regardless of its function, has some limit on how quickly it can make predictions – whether that’s three milliseconds, three seconds, or three weeks.

Oftentimes, the most accurate model is also the slowest – complex ensembles (blenders) and deep neural networks running on high-end hardware, for example. Optimizing purely for accuracy can lead to model failures along other dimensions, such as cost, explainability, and, most relevant to performance, speed. Before selecting a model, you have to ensure that it can return predictions within an appropriate time frame for the use case.
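One straightforward sanity check is to time single-row predictions before committing to a model. In the sketch below, the 100-millisecond budget is an assumed, use-case-specific requirement, and the data and model are synthetic:

import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5_000, n_features=50, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

start = time.perf_counter()
model.predict(X[:1])  # latency of a single prediction
single_ms = (time.perf_counter() - start) * 1_000
print(f"single-row latency: {single_ms:.2f} ms")

# Assumed, use-case-specific budget; adjust to your own requirements.
assert single_ms < 100, "model too slow for this use case"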

Beyond model selection, there are a few ways to improve model speed, such as using sparse matrices for numeric data and removing unnecessary or unimportant features from the dataset. It’s often possible to get the same or even better accuracy by training a model on only the 10 most important features of a dataset, even if it contains thousands of features in total.
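As an illustration, the sketch below keeps only the ten most important features as ranked by a random forest; the cutoff of ten mirrors the rule of thumb above rather than a universal value, and the dataset is synthetic:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=2_000, n_features=200,
                           n_informative=10, random_state=0)

# Rank features with a random forest and keep exactly the top 10.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0),
    max_features=10, threshold=-float("inf"),
).fit(X, y)

X_small = selector.transform(X)
print(X.shape, "->", X_small.shape)  # (2000, 200) -> (2000, 10)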

Performance is the first pillar of successfully implementing trustworthy enterprise AI. But none of these pillars can exist independently. For example, once the model is deployed, these performance criteria have to be tested continuously to ensure there is no degradation. Performance and operations intersect in a field known as ML Ops. In our next article, we’ll dive deeper into operations as the second pillar of trusted AI.
