Risks often dominate our discussions about the ethics of artificial intelligence (AI), but we also have an ethical obligation to look at the opportunities. In my second article about AI ethics, I argue there is a way to link the two.
“Our future is a race between the growing power of technology and the wisdom with which we use it,” Stephen Hawking famously said about AI in 2015. What makes this statement so powerful is the physicist’s understanding that AI, like all technology, is ethically neutral: it has the power to do good – and equal power to do bad. It is a necessary antidote to the more unreflective technology cheerleading of the past two decades. But we cannot let the risks of AI sap our resolve to put its advances to wise use.
I worry that at the moment we are moving in that direction. We are witnessing ever broader and, in some cases, louder public debates about AI-driven information bubbles, data privacy violations, and discrimination coded into algorithms (based on ethnicity, gender, disability, and income, to name but a few). In the public imagination, the perceived risks of AI currently outweigh its opportunities – and lawmakers and policymakers in the EU, the U.S., and China are discussing the regulation of algorithms, or of AI more generally, albeit to varying degrees.
In the summer of 2021, the World Health Organization (WHO) published “Ethics and Governance of Artificial Intelligence for Health.” It quoted Hawking and praised the “enormous potential” of AI in the field – before warning about the “existing biases” of health care systems being encoded in algorithms, the “digital divide” that makes access to AI-powered health care uneven, and “unregulated providers” (and all the resulting dangers to personal-data security and patient safety, including decisions taken by machines).
For one, this demonstrates how the ethics of intent and implementation I discussed in my first piece are linked to the ethics of risk and opportunity. The WHO has (rightly) decided that what AI is meant to achieve in this case – the provision of the best health care in the most equitable way for the greatest number of people – is an ethical goal worth pursuing. Having done that, the WHO asks how this goal can be achieved in the most ethical way – it assesses how good intentions might be undermined in the process of implementation.
What the WHO’s argument also points to are the dangers of an overcautious appraisal of risk and opportunity. Its worries about entrenching or amplifying systemic biases, increasing the inequality of access, and opening the field to buccaneering for-profit operators will no doubt convince some to reject the use of AI – better the devil you know than the devil you don’t. Yet that caution can blind them to the ethical dilemma it creates: Are these reasons sufficient to simply ignore the benefits of AI?
When it comes to health care, the WHO’s answer is an emphatic no. AI, it tells us, can greatly improve “the delivery of health care and medicine” and help “all countries achieve universal health coverage,” including through “improved diagnosis and clinical care, enhancing health research and drug development” and public health by way of “disease surveillance, outbreak response.” The ethical requirement is to weigh risks and opportunities honestly. In this case, it leads to the conclusion that AI-driven health care is a devil we ought to get to know.
We have to look at the risks of AI, but in becoming aware of them, we cannot lose sight of the opportunities. The ethical obligation to consider risks should not outweigh our ethical obligation to consider opportunities. What right would, say, Europe have to ban the use of AI in health care? Such a step might protect its citizens from some forms of harm, but also exclude them from potential advantages – and quite possibly billions more around the globe, by slowing the development of AI in diagnosing, treating, and preventing diseases.
Once we agree that the ethics of intent for using AI in a particular area are acceptable, we will not be able to solve ethical problems arising from implementation through blanket prohibitions. Once we are aware of the risks that exist alongside opportunities, we must aim to seize the latter while reducing the former – risk mitigation, not banning AI, is the key. Or, as the WHO puts it: “Ethical considerations and human rights must be placed at the centre of the design, development, and deployment of AI technologies for health.”
Ethically founded and enforceable rules – and, yes, regulations – are the “missing link” between risk and opportunity. In health care, rules will have to mitigate AI risks by taking biases out of health care algorithms, addressing the digital divide, and making private buccaneers work in the patient’s interest, not their own. The right kind of rules will make sure that AI works for us, not we for it. Or, to borrow a phrase from Stephen Hawking from that day in 2015, they will help us “make sure the computers have goals aligned with ours.”