
Ask a Data Ethicist: How Can We Ethically Assess the Influence of AI Systems on Humans?


From entertainment to online shopping to chatbots, AI systems are exerting influence across many aspects of our lives. The impacts are widespread, shaping our belief systems, our voting decisions, and our well-being. Yet, not all influence is unethical, which leads to this question …

How can we ethically assess the influence of AI systems on humans?

Bezou-Vrakatseli et al. provide some guidance in this paper, which outlines the S.H.A.P.E. framework. S.H.A.P.E. stands for secrecy, harm, agency, privacy, and exogeneity. Let’s break it down.

S Is for Secrecy

If you are not aware that you are being influenced, or are unaware of the way in which the influence is taking place, there might be an ethical issue. Intending to influence while keeping that intent a secret speaks to deception or trickery. The authors of the paper point out that the exploitation of “information asymmetry” is a big part of the problem. Deepfakes are an obvious example, but this can also be less overt, such as designing a feature in an online environment that makes it easier to take action A than action B.

Even if you are aware that you are being influenced, there could still be an ethical concern if you don’t understand the means by which the influence is occurring. For example, you might know that a social media app intends to keep you engaged with its content, but the algorithmic means by which that influence happens – the data used, the way the system works – may be opaque. The same holds for a chatbot or generative AI system.

The Fix: Transparency, explainability, and interpretability. Mitigate information asymmetries. Beware of adversarial attacks and manipulation of AI systems and agents.
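To make that a bit more concrete, here is a minimal sketch of one common explainability technique – permutation feature importance in scikit-learn. The dataset and model are made-up stand-ins (this is not from the paper itself); the point is simply that surfacing which inputs drive a model’s decisions is one way to chip away at information asymmetry.

```python
# A minimal sketch of one explainability technique: permutation feature
# importance. The dataset and model here are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```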

H Is for Harm

This one likely feels self-explanatory: influencing someone in a way that causes harm is not ethical. You might be wondering – what actually constitutes harm? It’s not just physical harm. There is a range of possible harms, including harms to mental health and well-being, psychological safety, and representational harms. The authors note that what counts as harm – ethically speaking – is contestable, and that lack of consensus can make it difficult to address. This is particularly prevalent when it comes to something like online speech on a social media platform – is that free speech or hate speech? Where is the line and who decides? Should there even be a line? The authors also point out that not all harm is unethical. For example, a surgeon causes bodily harm, but does so to address a medical issue and act in the patient’s best interest.

Intent also plays into the discussion of harm – and, generally speaking, intent to harm carries greater moral weight. We can understand this in terms of how we evaluate perhaps the biggest harm of all – killing someone. First-degree murder is premeditated and deliberate, while manslaughter speaks to negligence and lack of intent (the person didn’t mean to do it). The corresponding punishment recognizes the issue of intent. In the context of AI, we can think of intent in terms of the system’s designers or those who provide the system for use, rather than the system itself. As machines, AI systems do not have intent (IMHO) – though this area is hotly debated and linked to issues of moral agency, sentience, and consciousness – all of which are well beyond the scope of this month’s column.

The Fix: Actively seek to mitigate or minimize harms. Pay attention to the design of the reward system or what is being optimized. Work with cross-functional domains to seek a range of inputs and perspectives.
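As an illustration of paying attention to what is being optimized, here is a toy sketch of a composite objective that prices a harm signal into the reward. Every name and number here (engagement_score, harm_proxy, the weight) is hypothetical – a thinking aid, not a production recipe.

```python
# A toy sketch of attending to "what is being optimized."
# All names and numbers here are hypothetical, for illustration only.
HARM_WEIGHT = 0.5  # how heavily the harm signal counts against the objective

def reward(engagement_score: float, harm_proxy: float) -> float:
    """Composite objective: value engagement, but subtract a penalty
    for a measured harm signal (e.g., a toxicity or well-being proxy)."""
    return engagement_score - HARM_WEIGHT * harm_proxy

# A recommender optimizing raw engagement would pick item B here;
# once harm is priced in, item A wins.
item_a = reward(engagement_score=0.70, harm_proxy=0.10)  # 0.65
item_b = reward(engagement_score=0.80, harm_proxy=0.60)  # 0.50
print(item_a, item_b)
```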

A Is for Agency

If your ability to make self-determined choices is at stake, this might indicate unethical influence. Human agency has “intrinsic moral value” – that is to say, we value it in and of itself. Thus, anything that interferes with human agency is generally seen as unethical. There can be exceptions, and we sometimes make these when the human in question might not be able to act in their own best interests. For example, a person with dementia might no longer be able to make good choices, so there may be grounds for a caregiver to intervene and reduce that person’s agency. Generally speaking, though, we try to uphold agency. The authors outline five aspects “of what it means for influence to reduce agency: removing options, imposing conditional costs or offers, influencing without consent, bypassing reason, or being irresistible.” For brevity, I won’t expand on all of these, but you can generally think of coercion or pressure applied by algorithmic or AI means as examples. A countdown timer on a “special offer” that is about to expire is one example; narrowing choices in a digital environment to steer you toward a particular outcome is another.

The Fix: Apply technical means to quantify degrees of agency. Use AI to support human agency.
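What might quantifying agency look like in practice? Here is one hypothetical sketch: using Shannon entropy over a user’s realistic options as a crude proxy for how open or steered a choice environment is. The probabilities are invented for illustration, and this is just one of many possible measures.

```python
# A hypothetical sketch of "quantifying degrees of agency": measure how
# evenly a system distributes the likelihood of each option a user has.
# Shannon entropy is one crude proxy; a heavily steered choice scores low.
import math

def choice_entropy(option_probs: list[float]) -> float:
    """Shannon entropy (in bits) of the probability that a user ends up
    with each option. Higher = more open choice; 0 = one forced path."""
    return -sum(p * math.log2(p) for p in option_probs if p > 0)

open_choices = [0.25, 0.25, 0.25, 0.25]  # four equally easy options
steered = [0.94, 0.02, 0.02, 0.02]       # dark-pattern-style funneling

print(choice_entropy(open_choices))  # 2.0 bits
print(choice_entropy(steered))       # ~0.42 bits
```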

P Is for Privacy

Influence may be unethical if there is a violation of privacy. Much has been written about why privacy is valuable and why breaches of privacy are an ethical issue. The authors cite the following: limiting surveillance of citizens, restricting access to certain information, and curtailing intrusions into places deemed private or personal. In the context of AI and data, we can think about privacy concerns both in terms of the training/testing data and the data collected through the use of the system, which may also find its way back into training data for a subsequent release. AI systems make inferences – using various data points to make best guesses. These inferences might reveal sensitive, private information. The infamous story of Target using data analytics to infer a teen’s pregnancy comes to mind. Privacy also introduces legal obligations around data – which aren’t a core focus of this paper but should be noted.

The Fix: Determine if it’s worth proceeding given privacy concerns. Apply differential privacy and other technical fixes.
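For readers curious about the technical side, here is a minimal sketch of differential privacy’s classic Laplace mechanism, which adds calibrated noise to an aggregate statistic so that no single individual’s data can be confidently inferred from the released number. The epsilon values shown are illustrative, not recommendations.

```python
# A minimal sketch of the Laplace mechanism from differential privacy:
# add calibrated noise to a count so one person's presence or absence
# cannot be confidently inferred. Parameter choices are illustrative.
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise. A counting query has
    sensitivity 1 (one person changes the count by at most 1),
    so the noise scale is 1/epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 412  # e.g., users matching some sensitive attribute
print(dp_count(true_count, epsilon=0.5))  # noisier, stronger privacy
print(dp_count(true_count, epsilon=5.0))  # closer to the true value
```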

E Is for Exogeneity

Exogeneity likely isn’t a word we encounter a lot. In the context of this work, it simply means that other interests are being advanced that don’t align with the interests or goals of the person being influenced. We might think of this in terms of mis- or disinformation created by a third party intending to manipulate an election by exploiting social media. It could also simply be the misaligned incentives of an influencer or activist. Power also comes into play when a system provides a means for actors to “steer” an outcome to their advantage, thereby exerting control. It can allow certain parties to wield significant power, which is an ethical concern even if that power is never actually exercised.

The Fix: Understand and monitor whose interests are being served. Look for issues of fairness. Develop mechanisms to decentralize power.
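Monitoring whose interests are being served can be made concrete, too. Here is a hypothetical sketch that turns the question into a simple disparity check between stakeholder groups – the group names, rates, and threshold are all invented for illustration.

```python
# A hypothetical monitoring sketch: compare how often a system's
# "preferred" outcome lands with each stakeholder group, so that
# "whose interests are being served" becomes a number someone reviews.
outcomes_by_group = {
    "platform_partners": 0.81,  # share of recommendations favoring them
    "ordinary_users":    0.43,
}

DISPARITY_THRESHOLD = 0.20  # illustrative alert level

rates = outcomes_by_group.values()
disparity = max(rates) - min(rates)
if disparity > DISPARITY_THRESHOLD:
    print(f"Review needed: outcome disparity of {disparity:.2f} "
          "suggests one party's interests dominate.")
```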

While S.H.A.P.E. doesn’t claim to address every ethical issue related to influence, it does provide a handy mechanism for thinking about some of the big, recurring ethical issues. As the authors of the paper note: “We envisage the SHAPE framework being used by designers of influential AI systems as a way to structure their thinking when considering the ethical impacts of their systems.” Designers and developers can use the framework as a self-assessment tool to check for S.H.A.P.E. concerns and then work to address them. Users can also benefit from being aware of these issues – and perhaps from asking designers and developers whether they have addressed them.
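As a loose illustration of how a team might operationalize that self-assessment, here is a hypothetical sketch of the framework as a design-review checklist – the wording of the questions is mine, not the authors’.

```python
# A simple, hypothetical way a team might turn the SHAPE framework
# into a structured self-assessment artifact for design reviews.
SHAPE_CHECKLIST = {
    "secrecy":    "Is the influence, and how it works, disclosed to users?",
    "harm":       "What harms could this cause, and how are they mitigated?",
    "agency":     "Does the design remove options, coerce, or bypass reason?",
    "privacy":    "What data is collected or inferred, and is it justified?",
    "exogeneity": "Whose interests does this serve besides the user's?",
}

def run_assessment(answers: dict[str, str]) -> list[str]:
    """Return the SHAPE dimensions still missing a written answer."""
    return [dim for dim in SHAPE_CHECKLIST if not answers.get(dim)]

draft = {"secrecy": "Influence disclosed in onboarding and settings."}
print("Unaddressed dimensions:", run_assessment(draft))
```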

This is a very high-level summary – the full paper is well worth a read for anyone interested in applying this framework.

Resources

Bezou-Vrakatseli, E., Brückner, B., & Thorburn, L. (2023, September). SHAPE: A Framework for Evaluating the Ethicality of Influence. In European Conference on Multi-Agent Systems (pp. 167-185). Cham: Springer Nature Switzerland.

Send Me Your Questions!

I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at hello@ethicallyalignedai.com or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well. 

This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.