
Fundamentals of AI Ethics


In the world of enterprise technology, AI is a fast-growing sector, with no end in sight. A recent survey by PwC found that 86% of executives expected AI to become mainstream in their organization soon. Data-driven businesses want to reap the benefits of AI implementation, including better customer relationships, more efficient business processes, and valuable marketing data. But the rise of AI also means more risks and challenges, such as data privacy violations and unintentional bias, and business leaders today must be extra-mindful of AI ethics.

What does it mean to make AI ethical? Below, we’ll look at the most significant ethical issues associated with AI, as well as how to tackle them.

What Is AI Ethics?

AI ethics is the study of how to use AI in responsible, trustworthy ways, with an ethical mindset, looking out for various types of problems that could have negative effects on people, communities, or the environment. Some definitions include:

  • “A broad collection of considerations for responsible AI that combines safety, security, human concerns and environmental considerations” (Forbes)
  • “A system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technology” (TechTarget)
  • “A set of guidelines that advise on the design and outcomes of artificial intelligence” (IBM)

From Google to Facebook to the Department of Defense, organizations are establishing codes of AI ethics, featuring the principles that align most with their core business values. 

Getting Started with AI Ethics

While the approach to AI ethics varies across industries, with no one-size-fits-all set of rules, a few common guidelines stand out:

1. AI should help, and not harm, human users.

Part of this means being responsive to human needs. As AI and machine learning (ML) evolve, they must not bypass humans – the technology should, in a sense, keep checking in and getting the necessary guidance. AI engines are not here to steal our jobs or hand us over to outcomes no human has reviewed.

For example, self-checkout kiosks do not fully replace cashiers – a human still monitors the machines and steps in when problems arise. The same holds true in the medical world, where instead of autonomously diagnosing conditions, an AI program can help a doctor or radiologist spot suspicious features on a scan.

Using AI for the greater good of society is a growing trend in AI ethics, with promising advances in AI tackling poverty, hunger, climate change, and other societal challenges.

2. Explainable AI creates transparency and trust.

In a move toward more transparent AI results and solving the “black box” problem of AI, organizations are also increasingly focusing on explainable AI.

Transparency in AI ethics means guarding against uses of AI that are not fully clear to operators or others. Neural networks are one example – when the networks become too complex and automated, engineers may find themselves staring into a black box, unable to fully explain or guide the results.

If the engineers can’t figure out exactly how the AI engine does its work, they cannot provide full transparency to the people who are either using, or are affected by, the systems themselves.

What does AI transparency look like in the real world? When AI makes recommendations about whether someone should be insured or get a higher credit limit on a credit card, it’s important to know how AI “chooses” the people it finds eligible. That way, decision-makers can evaluate whether they can trust and use those recommendations.
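What might that kind of explanation look like in practice? Below is a minimal sketch using permutation feature importance – one common explainability technique among many, not a prescribed method – to score how much each input drives a credit model's decisions. The applicant features, data, and model here are invented purely for illustration:

```python
# Sketch: permutation feature importance for a hypothetical credit model.
# All feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "payment_history", "account_age"]
X = rng.normal(size=(500, 4))  # hypothetical applicant features
# Hypothetical approve/deny labels driven mostly by the first two features.
y = (X[:, 0] - X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A large score means shuffling that feature noticeably hurts accuracy – in other words, the model leans on it heavily, which is exactly the kind of signal a decision-maker needs before trusting a recommendation.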

3. Work proactively against AI bias.

One of the biggest challenges in AI, bias can stem from several sources. The data used to train AI models might reflect real societal inequalities, or the AI developers themselves might hold conscious or unconscious biases about gender, race, age, and more that wind up in ML algorithms. Discriminatory decisions can ensue, such as when Amazon’s recruiting software penalized applications that included the word “women,” or when a health care risk prediction algorithm exhibited racial bias affecting 200 million hospital patients.

To combat AI bias, AI-powered enterprises are incorporating bias-detecting features into AI programming, investing in bias research, and making efforts to ensure that the training data used for AI and the teams that develop it are diverse. Gartner predicts that by 2023, “all personnel hired for AI development and training work will have to demonstrate expertise in responsible AI.”
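What might a basic bias check look like in code? As a rough sketch (the predictions and group labels below are invented for illustration), one widely used signal is the demographic parity gap – the difference in favorable-outcome rates between two groups:

```python
# Sketch: demographic parity gap – the difference in favorable-outcome
# rates between two groups. Predictions and group labels are invented.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute gap in positive prediction rates between group 1 and group 0."""
    rate_g1 = predictions[groups == 1].mean()
    rate_g0 = predictions[groups == 0].mean()
    return abs(rate_g1 - rate_g0)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = favorable decision
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # hypothetical group labels

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

A gap near zero is no guarantee of fairness – it is one metric among many – but a large gap is a concrete, monitorable red flag.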

Continually monitoring, analyzing, and improving ML algorithms using a human-in-the-loop (HITL) approach – where humans and machines work together, rather than separately – can also help reduce AI bias.
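A minimal HITL sketch might route low-confidence predictions to a reviewer rather than applying them automatically. The confidence threshold and reviewer callback below are illustrative assumptions, not a prescribed design:

```python
# Sketch: a human-in-the-loop gate. Predictions the model is unsure about
# are routed to a human reviewer instead of being applied automatically.
# The threshold value and reviewer callback are illustrative assumptions.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per application

def decide(confidence: float, label: str, ask_human: Callable[[str], str]) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return label          # high confidence: accept the model's output
    return ask_human(label)   # low confidence: defer to a person

# Usage: a stub callback stands in for a real review queue.
review = lambda label: f"queued for human review (model said {label!r})"
print(decide(0.97, "approve", ask_human=review))  # -> approve
print(decide(0.62, "deny", ask_human=review))     # -> queued for human review
```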

4. Ensure data privacy and security.

To train machine learning algorithms, AI systems typically need large amounts of data, some of which may be personally identifiable and sensitive. Because of this, it has become more complicated to harness the full potential of AI while also complying with data regulations and ensuring that training data stays secure from cyber threats.

Some techniques that have been used to mitigate these risks include leveraging synthetic data sets, encrypting personal data, and operationalizing AI governance – the organizational structures and processes that harness, control, and direct AI efforts.
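One related safeguard – pseudonymization, a close cousin of encrypting personal data – replaces direct identifiers with salted hashes before records ever reach a training set. The field names and salt handling below are assumptions for the sketch; a real deployment would pair this with proper key management and broader de-identification:

```python
# Sketch: pseudonymizing direct identifiers with a salted hash before a
# record enters a training set. Field names and salt handling are
# assumptions; real systems need key management and fuller de-identification.
import hashlib

SALT = b"rotate-me-and-store-securely"  # assumption: kept in a secrets manager

def pseudonymize(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}
training_row = {
    "user_id": pseudonymize(record["email"]),    # stable, non-reversible join key
    "purchase_total": record["purchase_total"],  # non-identifying feature, kept
}
print(training_row)  # no raw name or email leaves the ingestion step
```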

Organizations starting out with AI governance can look to guides such as the World Economic Forum’s model AI governance framework and IBM Watson’s AI governance maturity model, which focuses on the AI governance lifecycle.

Benefits of AI Ethics

Incorporating AI ethics into a wider data strategy requires a great deal of investment, but there are numerous benefits:

  • Identifying problems in AI algorithms before they can cause harm
  • Retaining the trust and loyalty of customers and employees
  • Ensuring AI-based decisions are ethical and easily understood 
  • Avoiding the reputational damage associated with AI bias
  • Reducing the legal and financial risk of privacy violations
  • Protecting the data security of AI systems

The Future of AI Ethics

At UNESCO, teams are working with partners to promote better uses of AI and guide the technology’s vanguard in the right direction. In one site resource, a UNESCO spokesperson describes “creating a multi-stakeholder forum that advances the international AI policy dialogue in support of human rights and the democratic order.” Other agencies and national governments are on board too, anticipating the challenges of AI implementations.

Despite the best efforts of leading organizations, concerns about AI ethics loom. Notable figures like Elon Musk and Bill Gates have warned that the risks and dangers of AI should be front and center in any new AI/ML development. Still, experience shows that issues like AI bias can be hard to eradicate and that more powerful AI may, in general, be harder to control. It’s incumbent on those pushing AI forward to keep their efforts and results as open to the public, and as accountable to real human beings, as possible.
