
Ask a Data Ethicist: Can We Trust Unexplainable AI?

By Katrina Ingram

In last month’s column, I asked readers to send in their “big questions” when it comes to data and AI. This month’s question more than answered that call! It encompasses two enormous areas: trust in AI tools and explainability.

How can we know if an AI tool is delivering an ethical result if we have no idea how it is getting to its answers?

Before we directly answer the question, there are a few important things to touch on first:

AI Is Not One Thing

There is a whole range of technologies being marketed under the umbrella of AI – everything from facial recognition technologies using computer vision to recommendation systems to large language model chatbot-type tools like ChatGPT, to name just a few. The specific ways in which these technologies work and what they are used for play into the question of explainability and trust. Generally speaking, machine learning involves finding patterns in a lot of data in order to produce a result or output. There are a host of general ethical concerns related to that process. However, to fully address the question, we should be as specific as we can about which AI tool we are discussing.
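As a rough illustration of what “finding patterns in data” looks like in practice, here is a minimal, hypothetical sketch in Python using the scikit-learn library. The data and the prediction task are invented; the point is simply that the model’s output is whatever pattern it fit to the examples it was given.

```python
# A minimal, illustrative sketch: a model "finds patterns" in example data
# and uses those patterns to produce an output. All data here is made up.
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours_of_sunlight, rainfall_mm] -> did the plant thrive? (1 = yes)
X = [[8, 10], [6, 40], [2, 80], [9, 5], [3, 70], [7, 20]]
y = [1, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)  # the "pattern finding" step: fit weights to the examples

# The output for a new, unseen case is just the learned pattern applied to it
print(model.predict([[5, 30]]))        # predicted label
print(model.predict_proba([[5, 30]]))  # probabilities behind that prediction
```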

Ethics in Context

Similar to the term AI, ethics also covers a whole range of issues, and depending on the particular situation, certain ethical concerns can become more or less prominent. To use an extreme example, most people will care less about their privacy in a life-and-death situation. When a person goes missing, the primary concern is locating them. This might involve using every means possible to find them, including divulging a lot of personal information to the media. However, once the missing person is located, all of the publicity about the situation should be removed. The ethical question now centers on ensuring the story doesn’t follow the victim throughout their life, introducing possible stigma. In this example, the ethical thing to do completely shifts in light of the contextual circumstances.

Human Agency and Explanations

In order for a person to exercise their agency and to be held accountable as a moral agent, it’s important to have some level of understanding about a situation. For example, if a bank denies a loan, it should provide the applicant with an explanation of how that decision was made. This ensures it wasn’t based on irrelevant factors (you wore blue socks) or factors outside a person’s control (race, age, gender, etc.) that could prove discriminatory. The explanation needs to be reasonable and understandable for the person who requires it. Thus, giving a highly technical explanation to a layperson will be inadequate. There’s also a human dignity aspect to explanations. Respecting people means treating them with dignity.
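To make that idea concrete, here is a small, purely hypothetical sketch of how a lender might translate a model’s most influential factors into a plain-language reason for a denial. The factor names, weights, and wording are all invented for illustration – the point is that the explanation refers to relevant, understandable factors rather than technical jargon or protected characteristics.

```python
# Hypothetical sketch: turn a model's top contributing factors into an
# explanation a loan applicant could actually understand. The factor names
# and contribution values are invented for illustration only.

def explain_denial(factor_contributions: dict[str, float], top_n: int = 2) -> str:
    """Return the most influential factors behind a denial in plain language."""
    plain_language = {
        "debt_to_income_ratio": "your monthly debt is high relative to your income",
        "recent_missed_payments": "there are recent missed payments on your credit file",
        "short_credit_history": "your credit history is relatively short",
    }
    # Sort factors by how strongly they pushed the decision toward denial
    top = sorted(factor_contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    reasons = [plain_language.get(name, name) for name, _ in top]
    return "Your application was declined mainly because " + " and ".join(reasons) + "."

print(explain_denial({
    "debt_to_income_ratio": 0.42,
    "short_credit_history": 0.11,
    "recent_missed_payments": 0.31,
}))
```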

The Elements of Trust

Trust is multi-faceted. Our society has built infrastructures that help enable trust in using technologies. For example, in the 1850s, when the elevator was a new technology, it was designed in ways that were not always safe. Ropes were used as cables, and those could fray and break. Over time, designs improved, and we put processes in place to oversee elevator operations. There are laws that require regular safety checks. How do we know the safety checks are done? We trust the system that mandates compliance. We no longer need to wonder if we’ll arrive safely at the 77th floor before we step inside the little metal box. Trust, in this case, is a construct of reliable, safely designed technology as well as appropriate systems of oversight and governance.

On to Our Question …

With these elements in mind, let’s dive into our question. The super-short and likely unsatisfying answer to the question is “we can’t know for sure.” However, let’s try to fill in some of the specifics about the tool and context that will help us get to a more useful response.

Let’s assume that we are end users and we’re using a generative AI tool to help us make content for a presentation we’re giving at work. How might we ensure we’re making good choices so we can responsibly use this tool given this context?

Ethics of How It’s Made

There are ethical issues involving generative AI that we, as end users, cannot address. Most generative AI was made using questionably acquired data from the internet, and that data is often biased and unrepresentative. There are also labor supply chain issues and environmental issues related to training large language models. What’s more, it’s not (currently) possible to have full interpretability – a detailed technical understanding of how a large language model produces a given output. For a layperson, it might be enough of an explanation to understand that a large language model uses probabilistic methods to determine the next word that seems plausible, and that it will always aim to provide an answer even if that answer is not accurate.
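If it helps to picture what “probabilistic next-word prediction” means, here is a toy sketch. A real large language model computes these probabilities with billions of learned parameters over a vast vocabulary; the numbers below are simply made up to show the mechanism, and to show why the model will happily produce a plausible-sounding answer whether or not it is correct.

```python
# A toy illustration of probabilistic next-word prediction. The probabilities
# here are invented; a real model would compute them from learned parameters.
import random

# Hypothetical probabilities assigned to possible next words after the
# prompt "The capital of Australia is"
next_word_probs = {
    "Canberra": 0.55,
    "Sydney": 0.30,    # plausible-sounding but wrong
    "Melbourne": 0.10,
    "a": 0.05,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The model samples a word; it always produces *something*, whether or not
# that something is accurate.
print(random.choices(words, weights=weights, k=1)[0])
```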

As an end user, you cannot resolve any of these ethical issues. The best you can do is decide whether you still want to use the tool given how it was made. Over time, my hope is that some companies will design better, more responsibly developed tools that address these issues, or that regulations will require them to be fixed.

Using AI Responsibly

Assuming you decide to proceed, the next step is to take responsibility for the outcomes. This means knowing that generative AI does not understand anything. There have been many stories about how these tools “hallucinate” and why they should not be used for high-stakes tasks like legal work. Given this information, where does it make sense for you to use the generative AI tool? Perhaps it helps with brainstorming. Maybe it can create an outline or help you with a first draft.

There are also differences between generative AI tools that can make them more or less safe to use. For example, an enterprise solution deployed within the confines of your business is likely to have more privacy protections and other guardrails than a public-facing tool like ChatGPT. If you’re using an enterprise tool, you can ask your company’s IT department what due diligence was conducted before the tool was adopted. (Hint: If you are procuring AI, you should be asking vendors tough questions and doing due diligence!) In addition, your company should have policies and procedures in place for using the tools in accordance with its expectations.

You can also double-check the outputs and use other sources to verify information. If you’re generating images, you can use specific prompts to ensure you get greater diversity of representation. Be aware of stereotypes, and make sure you are not asking the system to generate images that reproduce copyrighted material.

Finally, what are the stakes involved in this work? Is it for an internal presentation or will the generated content be used in a national ad campaign? The higher the stakes, the more due diligence and review you should do – including involving external stakeholders for something that could have major impacts.

Send Me Your Questions!

I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at hello@ethicallyalignedai.com or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high-level and anonymous as well.