
The Growing Importance of AI Governance


New technologies often engender fear and foreboding among people outside tech industries. The latest example of this trend is artificial intelligence (AI), which is a topic of much concern and misunderstanding among the public. It’s easy to dismiss these qualms as the common human tendency to mistrust the unknown. However, much of the alarm about AI is now being voiced by the scientists and researchers at the forefront of the technology. Technologists and public policymakers are joining forces to emphasize the importance of AI governance as both a code of ethical conduct and a regulatory framework.

In May 2023, more than 350 artificial intelligence researchers, engineers, and executives signed an open letter issued by the nonprofit Center for AI Safety warning that AI posed a “risk of extinction.” The group claims that mitigating the dangers of AI to society needs to be a “global priority” on the same scale as pandemics and nuclear war. 

AI governance is the key to the safe, fair, and effective implementation of the technology. Efforts are underway by technology firms and public policymakers to create and deploy guidelines and regulations for the design and implementation of systems and products based on AI technology. This article examines the current state of AI governance and the outlook for the secure and prosperous use of AI systems in years to come.

What Is AI Governance?

The goal of AI governance is to ensure that the benefits of machine learning algorithms and other forms of artificial intelligence are available to everyone in a fair and equitable manner. AI governance is intended to promote the ethical application of the technology so that its use is transparent, safe, private, accountable, and free of bias. To be effective, AI governance must bring together government agencies, researchers, system designers, industry organizations, and public interest groups. It will:

  • Allow AI vendors to profit from the technology and realize its many benefits while minimizing societal harms, injustices, and illegal uses
  • Provide developers with practical codes of conduct and ethical guidelines
  • Create and deploy mechanisms for measuring AI’s social and economic impact
  • Establish regulatory frameworks that enforce safe and reliable application of AI

The ethical use of artificial intelligence depends on six core principles:

  • Empathy: AI systems must account for the social implications of their responses to humans and must respect human emotions.
  • Transparency: The decision-making mechanisms programmed into AI algorithms must be clear to promote accountability and scrutiny.
  • Fairness: The systems must be prevented from perpetuating existing societal biases, whether intentionally or unintentionally, so that they don’t discriminate on the basis of sex, race, religion, gender, or disability.
  • Unbiased: The data that machine learning systems are trained on must be regulated and assessed to detect and remove biases that the systems might otherwise perpetuate (a minimal sketch of one such check follows this list).
  • Accountability: Users of the systems must be able to determine who is responsible for protecting against any adverse outcomes generated by the use of AI.
  • Safety and reliability: Individuals and society in general must be protected against any potential risks posed by AI systems, whether due to data quality, system architecture, or decision-making processes programmed into the algorithms.
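To make the fairness and bias principles concrete, here is a minimal sketch of one common check, the demographic parity gap, which compares positive-outcome rates across groups. The function name, the 10-point threshold, and the toy data are illustrative assumptions, not part of any governance standard.

```python
# Minimal sketch of a bias check: the demographic parity gap, i.e., the largest
# difference in positive-outcome rates between groups. Names and the threshold
# below are illustrative assumptions, not a regulatory requirement.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """predictions: 0/1 model decisions; groups: protected-attribute labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: approval decisions for two demographic groups.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
if gap > 0.10:  # flag the model for human review if rates diverge too far
    print(f"Review required: positive-outcome rates differ across groups: {rates}")
```

In practice, a governance team would run checks like this on every protected attribute identified during review, and treat a flagged result as a trigger for human investigation rather than an automatic verdict.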

The Impact of Generative AI

Traditional AI focuses on pattern recognition and forecasts based on existing data sources. Generative AI goes a step further by using AI algorithms to create new images, text, audio, and other content based on the data it has been trained on rather than simply analyzing that data to recognize patterns and make predictions. The dangers of generative AI include job displacement and unemployment, the creation of massive amounts of fake content, and the possibility that AI systems will become sentient and develop a will of their own.

An immediate, pervasive, and surreptitious threat posed by generative AI is the technology’s ability to create content designed to influence the beliefs and actions of specific individuals:

  • Targeted generative advertising appears like a typical ad but has been personalized in real time based on the viewer’s age, gender, education level, purchase history, and other demographic data, including political affiliation and personal biases.
  • Targeted conversational influence uses interactive AI systems such as ChatGPT, Google Bard, Microsoft Bing Chat, and Jasper.ai, which personalize their responses based on a person’s unique characteristics. Advertisers can embed marketing messages in the machine-generated responses to users’ questions and statements.

In both instances, the real-time and individualized nature of the interaction makes it difficult to hold the system’s designers accountable for any misuse of the AI algorithms that power the responses. The large language models (LLMs) at the heart of generative AI also threaten the ability of constituents to have their voices heard by public officeholders because the technology can be used to overwhelm government offices with automated content that is indistinguishable from human-generated communications.

Guidelines for Businesses Implementing AI Governance

The long-term success of AI depends on gaining public trust as much as it does on the technical capabilities of AI systems. In response to the potential threats posed by artificial intelligence, the U.S. Office of Science and Technology Policy (OSTP) has issued a Blueprint for an AI Bill of Rights that’s intended to serve as “a guide for a society that protects all people” from misuse of the technology. The blueprint identifies five principles to follow in designing and applying AI systems:

  • The public must be protected from unsafe and ineffective AI applications.
  • Designers must protect the public from algorithmic discrimination and ensure that AI-based systems behave equitably.
  • Data privacy protections must be built into AI design by adopting privacy by default.
  • The public must have notice and a clear understanding of how they are affected by AI systems.
  • The public should be able to opt out of automated systems and have a human alternative whenever it’s appropriate to do so.

The World Economic Forum’s AI Governance Alliance brings together AI industry executives and researchers, government officials, academic institutions, and public organizations to work toward the development of AI systems that are reliable, transparent, and inclusive. The group has issued recommendations for responsible generative AI that serve as guidelines for responsible development, social progress, and open innovation and collaboration. 

The European Union’s proposed Artificial Intelligence Act creates three levels of risk for AI systems (a simplified classification sketch follows the list):

  • Unacceptable risks are systems that pose a clear threat to people. They include cognitive behavioral manipulation of individuals or vulnerable groups; social scoring that classifies people based on behavior, socio-economic status, or personal characteristics; and real-time biometric identification in publicly accessible spaces. All such systems are banned.
  • High risks are systems that affect the safety or fundamental rights of people. Examples are AI used in toys, aviation, medical devices, and motor vehicles, as well as AI used in education, employment, law enforcement, migration, and the administration of justice. This category also includes generative AI. All such systems require evaluation before release and while on the market.
  • Limited risks are systems subject only to minimal transparency requirements: users must be told that they are interacting with AI so they can make informed decisions about its use. Examples include systems that generate or manipulate content, such as deepfakes.
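As an illustration of how a compliance team might operationalize these tiers internally, here is a simplified sketch that maps use cases to the three risk levels. The lookup table, the example use cases, and the obligations text are assumptions for demonstration; they are not the legal tests defined in the AI Act itself.

```python
# Simplified, illustrative mapping of AI use cases to the three risk tiers
# described above. The table and wording are demonstration assumptions, not
# the legal criteria of the proposed Artificial Intelligence Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "assessment required before release and while on the market"
    LIMITED = "transparency obligations: users must know they are interacting with AI"

# Hypothetical lookup table a compliance team might maintain internally.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cognitive behavioral manipulation": RiskTier.UNACCEPTABLE,
    "automated employment screening": RiskTier.HIGH,
    "medical device control": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "image generation": RiskTier.LIMITED,
}

def obligations_for(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return f"{use_case}: unclassified, route to legal review"
    return f"{use_case}: {tier.name.lower()} risk, {tier.value}"

print(obligations_for("automated employment screening"))
```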

To protect against the risks of AI, companies can adopt a four-pronged strategy for AI governance:

  1. Review and document all uses of AI in the organization. This includes conducting a survey of algorithmic tools and machine learning programs that involve automated decision-making, such as automated employment screening. (A minimal inventory sketch follows this list.)
  2. Identify key internal and external users and stakeholders of the company’s AI systems. Potential stakeholders include employees, customers, job seekers, members of the community, government officials, board members, and contractors.
  3. Perform an internal review of AI processes. The review should examine the objectives of each AI system and the principles on which it is based, and document its intended uses and outcomes, including specific data inputs and outputs.
  4. Create an AI monitoring program that documents the organization’s policies and procedures. Regular reviews will ensure that the systems are being applied as intended and within ethical guidelines, including transparency for users and identification of algorithmic biases.
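A minimal sketch of the inventory described in steps 1 through 4 might look like the following. The field names, the 180-day review cadence, and the example entry are assumptions to illustrate the idea, not a prescribed schema.

```python
# Minimal sketch of an AI use-case inventory covering steps 1-4. Field names,
# the 180-day review cadence, and the example entry are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIUseCase:
    name: str
    purpose: str                 # objectives and principles (step 3)
    stakeholders: list[str]      # internal and external users (step 2)
    data_inputs: list[str]       # documented inputs and outputs (step 3)
    data_outputs: list[str]
    automated_decision: bool     # e.g., automated employment screening (step 1)
    last_review: date            # feeds the monitoring cadence (step 4)

def due_for_review(use_case: AIUseCase, cadence_days: int = 180) -> bool:
    """Flag systems whose last governance review is older than the cadence."""
    return date.today() - use_case.last_review > timedelta(days=cadence_days)

inventory = [
    AIUseCase(
        name="resume screener",
        purpose="rank applicants for recruiter review",
        stakeholders=["job seekers", "HR", "legal", "board"],
        data_inputs=["resumes", "job descriptions"],
        data_outputs=["ranked shortlist"],
        automated_decision=True,
        last_review=date(2023, 1, 15),
    ),
]

for uc in inventory:
    if uc.automated_decision and due_for_review(uc):
        print(f"{uc.name}: schedule governance review")
```

Keeping the inventory as structured data rather than a static document makes the step-4 monitoring reviews easy to automate and audit.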

The Future of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles:

  • The complexity of the systems requires rule-making by technologists rather than by politicians, bureaucrats, and judges.
  • The thorniest issues in AI governance involve value-based decisions rather than purely technical ones.

An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules.

AI governance under this model would rely on licensed private regulators charged with ensuring that AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for overseeing the safe operation of autonomous vehicles, ensuring unbiased hiring practices, and identifying organizations that fail to comply with the outcome-based regulations.

To prepare for the future of AI governance, businesses can take a six-step approach:

  1. Create a set of AI principles, policies, and design criteria, and maintain an inventory of AI capabilities and use cases in the organization.
  2. Design and deploy an AI governance model that applies to all parts of the product development life cycle.
  3. Identify gaps in the company’s current AI risk-assessment program and potential opportunities for future growth.
  4. Develop a framework for AI systems composed of guidelines, templates, and tools that accelerate and otherwise enhance your firm’s operations.
  5. Identify and prioritize the algorithms that are most important to your organization’s success, and mitigate risks related to security, fairness, and resilience. (A simple prioritization sketch follows this list.)
  6. Implement an algorithm-control process that doesn’t impede innovation or flexibility. This may require investing in new governance and risk-management technologies.
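Step 5 can be made concrete with a simple scoring sketch that ranks algorithms by business importance and by security, fairness, and resilience risk. The weighting scheme and the example ratings are illustrative assumptions, not an established methodology.

```python
# Illustrative prioritization sketch for step 5: rank algorithms by business
# importance and by security, fairness, and resilience risk (each rated 1-5).
# The weighting scheme and ratings are assumptions, not an established method.
def priority_score(business_impact: int, security: int, fairness: int, resilience: int) -> int:
    """Higher scores mean the algorithm should get governance attention sooner."""
    return business_impact * (security + fairness + resilience)

algorithms = {
    "credit scoring model": priority_score(business_impact=5, security=4, fairness=5, resilience=3),
    "fraud detection model": priority_score(business_impact=5, security=5, fairness=3, resilience=4),
    "marketing copy generator": priority_score(business_impact=2, security=2, fairness=3, resilience=2),
}

for name, score in sorted(algorithms.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:3d}  {name}")
```

Whatever scoring scheme a company adopts, the point is to make the ranking explicit and reviewable so that mitigation effort goes to the highest-risk, highest-impact systems first.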

Translating Governance into Business Success

Historian Melvin Kranzberg’s first law of technology states that “technology is neither good nor bad; nor is it neutral.” The full impact of a new technology is nearly impossible to anticipate with any degree of accuracy.

Whether AI is applied for the good of the public or to its detriment depends entirely on the people who create, develop, design, implement, and monitor the technology. As AI researcher and educator Yanay Zaguri has stated, “The AI genie is out of the bottle.” AI governance is the key to applying the technology in ways that enhance our lives, our communities, and our society. 
