
Thinking Inside the Box: How to Audit an AI Project

By Paul Barba


Over the years we’ve read about far too many AI fails. But it’s not all AI’s fault.

AI’s less-than-stellar track record is in large part due to poorly defined project goals and organizations’ tendency to treat the technology as inscrutable high-tech “magic”.

Without transparency, accountability or an understanding of what you’re trying to achieve, it’s almost impossible for an AI to deliver the goods. That’s where audits come in. Whether you’re part-way through a project or celebrating a job well done (or not), an audit helps you figure out where you’re going or how you got there.

There are two key aspects to conducting an AI audit: the audit itself and building AI systems that lend themselves to an effective audit in the first place.

Here’s how to achieve both.

Creating an Auditable AI

1. Build transparent systems

Clear-box AI systems are gaining ground for a reason: the more you can see into the system, the more you can find. When spinning up an AI system, build in the capability to show how the system makes each decision and which factors mattered most in making it. This makes it possible to map expectations against reality and figure out what's causing the "gap."
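As an illustration, here's a minimal sketch of what a per-decision report might look like, assuming a scikit-learn-style model on tabular data. The dataset, model choice and "top three factors" cutoff are all illustrative, not a prescription.

```python
# A minimal sketch of a "clear-box" decision report: for any single input,
# show the prediction and the features that pushed it hardest.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

def explain(row):
    """Return the prediction plus the three features that contributed most."""
    scaled = model[:-1].transform(row.reshape(1, -1))[0]   # scaler output
    coefs = model[-1].coef_[0]                              # model weights
    contributions = sorted(
        zip(names, scaled * coefs), key=lambda p: abs(p[1]), reverse=True
    )
    return model.predict(row.reshape(1, -1))[0], contributions[:3]

label, top_factors = explain(X[0])
print(f"prediction={label}, top factors={top_factors}")
```

Even a simple report like this gives an auditor something concrete to compare against expectations.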

2. Use best practices

Best practices for AI projects are analogous to those for securing a network: use professional-grade products, keep detailed logs, and build in fallback mechanisms. Together with rigorous testing, examination of the raw data, and continued monitoring, this will make interim or forensic audits far more fruitful. A simple sketch of the logging-plus-fallback idea follows.
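In practice, "detailed logs and fallback mechanisms" can be as simple as a thin wrapper around the model. The sketch below assumes a classifier that exposes a predict_proba method; the confidence threshold and the default answer are illustrative assumptions.

```python
# A sketch of a prediction wrapper that logs every call and falls back to a
# safe default when the model errors or is unsure.
import logging

logging.basicConfig(filename="predictions.log", level=logging.INFO)
log = logging.getLogger("ai_audit")

def predict_with_fallback(model, features, threshold=0.7, default="needs_review"):
    try:
        probs = model.predict_proba([features])[0]
        label = probs.argmax()
        confidence = probs[label]
        if confidence < threshold:
            log.warning("low confidence %.2f for %s; falling back", confidence, features)
            return default
        log.info("input=%s prediction=%s confidence=%.2f", features, label, confidence)
        return label
    except Exception:
        log.exception("model failure for input=%s; falling back", features)
        return default
```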

3. Mitigate risk

While AI systems can't always be completely understood, it's possible to mitigate the risk of them acting in ways you don't want them to. Build additive or iterative systems, and review outputs at regular intervals to confirm that the machine is learning what you assumed it would (see the sketch below). It's much easier to right a wayward AI early in the process than after the project has been completed.
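One way to review outputs at regular intervals is to score each checkpoint against a fixed review set and flag regressions. A rough sketch, assuming a scikit-learn-style classifier; the metric and tolerance are illustrative choices.

```python
# A sketch of checkpoint reviews for an iteratively trained model, using a
# held-out review set that stays fixed across checkpoints.
from sklearn.metrics import accuracy_score

def checkpoint_review(model, X_review, y_review, history, tolerance=0.02):
    """Score the current model and flag it if it slipped versus the last checkpoint."""
    score = accuracy_score(y_review, model.predict(X_review))
    if history and score < history[-1] - tolerance:
        print(f"Checkpoint flag: accuracy fell from {history[-1]:.3f} to {score:.3f}")
    history.append(score)
    return score
```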

4. Use active experimentation

AI is full of unknown unknowns. Have team members actively try to "break" the system to surface as-yet unidentified issues: attempt to hack it, fool it, or guess where problems might arise.
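A lightweight way to start is to perturb inputs slightly and measure how often predictions flip. The sketch below assumes numeric features and a scikit-learn-style model; the noise scale is an arbitrary illustration, not a standard.

```python
# A sketch of "try to break it" testing: nudge inputs with small random noise
# and count how often the prediction changes.
import numpy as np

def flip_rate_under_noise(model, X, noise_scale=0.05, trials=10, seed=0):
    """Share of predictions that change when inputs are perturbed slightly."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = 0
    for _ in range(trials):
        noisy = X + rng.normal(0, noise_scale, size=X.shape)
        flips += (model.predict(noisy) != baseline).sum()
    return flips / (trials * len(X))
```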

Conducting the Audit

1. Have a plan

Don't approach an AI audit as an open-ended, exploratory exercise. Define your success criteria up front, as well as how you'll determine whether they've been met. If you know what you're trying to achieve, it's a lot easier to tell whether you're succeeding.

2. Know what you’re looking for

To conduct a successful audit, you'll need to know what you're aiming to find and correct. Common issues to look for include bugs, bias, security risks, changing behaviors, and plain errors.

3. Check the whole chain

Watch your system for changing results. There are many steps in a machine learning pipeline, and issues can crop up at any point. Set up a process to watch for and identify unaccounted-for variations across the whole system.
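For example, a simple drift check at one step of the chain might compare live feature distributions against the training baseline. This sketch uses a two-sample Kolmogorov-Smirnov test from SciPy; the alert threshold is an illustrative assumption.

```python
# A sketch of whole-chain monitoring at one step: flag features whose live
# distribution has shifted away from the training baseline.
from scipy.stats import ks_2samp

def drift_report(train_X, live_X, feature_names, alpha=0.01):
    """Return (feature, KS statistic) pairs for features that appear to have drifted."""
    drifted = []
    for i, name in enumerate(feature_names):
        stat, p_value = ks_2samp(train_X[:, i], live_X[:, i])
        if p_value < alpha:
            drifted.append((name, round(stat, 3)))
    return drifted  # an empty list means no unexplained variation was detected
```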

Towards Better AI Auditing

Too often the mystique of AI is used as an excuse for pressing forward with inefficient systems or poorly performing projects. These excuses would never fly in other areas of business. While there is some degree of opacity around AI, we have all the tools we need to evaluate and improve these systems.

By building robust, transparent systems subject to checks and monitoring along the way, and applying outcomes-based analyses at specified project checkpoints, we can confirm that our projects are performing as intended, and if they're not, pinpoint where the variances are arising and how to manage them.

AI may have its own special cachet, but it’s like any other organizational process. Making sure it’s working as intended is just part of doing business.
