
AI Governance Best Practices

AI governance is meant to promote the responsible use of artificial intelligence for the betterment of humankind. Artificial intelligence has proven itself quite useful in completing a large variety of tasks quickly and efficiently. Unfortunately, it can also be used to support criminal behavior or to create and distribute misinformation. AI governance is an effort to minimize the use of artificial intelligence for criminal and unethical behavior.

Recent advancements in artificial intelligence – ChatGPT, generative AI, large language models – have galvanized both industry and government leaders into recognizing the need for ethical guidelines and regulations when using AI.     

The development of AI governance programs, regulations, and guidelines is an effort to govern the development and application of AI technology. Writers, for example, have expressed significant concerns about being replaced by artificial intelligence, and the Writers Guild of America went on strike demanding, among other things, increased wages and strict limits on the use of AI for writing purposes.

The ability to create life-like images and videos (referred to as “deepfakes”) of individuals saying anything the person controlling the AI desires has become a concern for politicians and political groups. Ransom demands have been made after a business’s computer system was infected by AI-generated malware, or after an AI re-creation of a daughter’s voice on the phone, sobbing and telling a parent she has been kidnapped.

AI governance deals with a variety of critical issues, such as privacy, built-in biases, impersonation, theft, and fraud. It is unfortunate that laws and regulations are necessary to protect people from individuals with weak or no ethics. Businesses should stay current on emerging laws and regulations and ensure that the development and deployment of their AI systems are in compliance.

Organizations can navigate the ethical concerns raised by the use of artificial intelligence by adhering to a system of AI governance best practices, in turn promoting its responsible use for the betterment of humankind.

Government Efforts to Regulate Artificial Intelligence

In the United States, freedom of speech can be confused with the freedom to lie. The situation is perplexing and makes it difficult to craft laws that restrict misinformation. Artificial intelligence can also be a remarkably useful tool in aiding fraud.

In order to develop a system that protects both individuals and the freedom to innovate, governments must take certain steps. The first step involves developing an understanding of the problems resulting from the unethical use of artificial intelligence. In the United States, the Senate initiated this when it invited several tech CEOs to attend nine sessions, the first taking place on September 13, 2023, to discuss the CEOs’ concerns about AI.

On October 30, 2023, President Biden issued an executive order regarding AI concerns. With the exception of a mandate requiring that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government,” the executive order deals with “developing” standards and guidance. At this time, the United States has not enacted any laws controlling or limiting the behavior of artificial intelligence.

The European Union (EU) was one of the first governmental bodies to decide it needed AI-focused regulations. The final text of its proposed legislation, the EU AI Act, is still being developed following a provisional agreement reached on December 8, 2023. The act defines a series of risk levels, with unacceptable-risk AI systems described as a threat to people; such systems will be banned. Unacceptable risks include:

  • The deliberate cognitive or behavioral manipulation of people or of specific vulnerable groups. An example would be voice-activated toys that encourage children to engage in dangerous behavior.
  • Social scoring, the process of classifying people using their socio-economic status, behavior, or personal characteristics.
  • The use of real-time and remote biometric identification systems.

China does not share the free-speech considerations that democratic governments support, and as a consequence its AI priorities are different. Its Interim Administrative Measures for Generative Artificial Intelligence Services took effect on August 15, 2023. These regulations require businesses offering generative AI services to complete a security assessment and to file their algorithms with the government. Providers must also work to improve the accuracy, objectivity, authenticity, and reliability of generated content, and to provide oversight of that content.

Generally speaking, those countries that are concerned with AI governance are still in the process of developing appropriate laws and regulations.

Developing AI Governance Best Practices Within a Business

Business managers should consider the impact of AI on their customers and employees, and implement policies that minimize risks and avoid doing harm. By developing a system of AI governance best practices, businesses can support the responsible use of artificial intelligence for the advancement of humankind.

Best practices include:

Identify AI-generated materials: Many governments are discussing requiring watermarks as a way of identifying AI-generated art. For organizations that are honest and responsible, a watermark provides an easy way of communicating that the art was created by AI, not by a human. The problem with watermarks is that they can be removed quite easily; worse, watermarks can be added to art created by humans, increasing the potential for confusion and misinformation.

Honest and responsible organizations should include a watermark on any AI-generated art. Articles written by AI should carry “AI generated” in the spot where the author’s name is normally located, regardless of whether the person who initiated the article wants to claim authorship. (To do otherwise is simply misleading and deceitful.)
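As a lightweight illustration of the labeling practice above (a sketch only: the label text, function names, and header layout are invented here, and any legally mandated wording would take precedence), the disclosure can be placed programmatically in the author slot:

```python
AI_LABEL = "AI generated"  # hypothetical wording; regulators may mandate specific text

def article_header(title: str, human_author: str, ai_generated: bool) -> str:
    """Build an article header, placing the AI disclosure where the
    author's name would normally appear."""
    author_line = AI_LABEL if ai_generated else f"By {human_author}"
    return f"{title}\n{author_line}\n"

def is_disclosed(header: str) -> bool:
    """Simple audit check: does a header disclose AI authorship?"""
    return AI_LABEL in header

print(article_header("Quarterly Outlook", "A. Writer", ai_generated=True))
```

A check like `is_disclosed` can then be run over published content as part of a routine compliance audit.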

Deal with algorithmic biases: Unintentional (or deliberately planted) biases and prejudices built into an AI’s algorithms can skew an organization’s hiring practices and customer service along demographic lines such as race or gender.

To determine whether an AI system is biased, test it, and test it repeatedly: present inputs that differ only in sensitive attributes such as race or gender, and compare the outcomes. Open-source fairness toolkits such as Fairlearn and IBM’s AI Fairness 360 can help automate these checks. No tool can guarantee that an AI operates without discrimination, but regular testing surfaces problems before they reach hiring decisions or customers.
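One common screen can be sketched in a few lines (a minimal illustration, assuming a hypothetical `model` function standing in for the AI under test): compute the approval rate per demographic group, then compare the lowest rate to the highest.

```python
from collections import defaultdict

def model(applicant: dict) -> bool:
    # Hypothetical stand-in for the AI under test: approves on income alone.
    return applicant["income"] >= 50_000

def selection_rates(applicants: list[dict], group_key: str = "group") -> dict:
    """Approval rate per demographic group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for a in applicants:
        totals[a[group_key]] += 1
        approved[a[group_key]] += model(a)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict) -> float:
    """Ratio of the lowest to the highest selection rate.
    Values below 0.8 fail the common 'four-fifths rule' screen."""
    return min(rates.values()) / max(rates.values())

applicants = [
    {"group": "A", "income": 60_000},
    {"group": "A", "income": 55_000},
    {"group": "B", "income": 45_000},
    {"group": "B", "income": 52_000},
]
rates = selection_rates(applicants)
print(rates)                    # per-group approval rates
print(disparate_impact(rates))  # 0.5 here: group B is approved half as often
```

Even when a protected attribute is not an input to the model, proxies such as income or postal code can produce gaps like this one, which is why testing on outcomes matters.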

Customer security: There are two basic types of customer information that businesses gather. The first is supplied directly by the customer, and includes such things as their home address and phone number, possibly a birth date. Everyone agrees this information should be secure and protected.

Artificial intelligence can be combined with Data Governance to support data privacy and security laws. By developing an AI supported Data Governance program and security rules, a business can significantly reduce the risks of stolen and exploited data. 
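One such security rule can be sketched as follows (a minimal illustration, not a production design: the field names, salt handling, and `pseudonymize` helper are all assumptions): directly identifying fields are replaced with salted hashes before records leave the secure store, so analytics can proceed without exposing raw values.

```python
import hashlib

SALT = b"rotate-me-regularly"  # hypothetical secret; keep out of source control
PII_FIELDS = {"home_address", "phone", "birth_date"}

def pseudonymize(record: dict) -> dict:
    """Replace directly identifying fields with salted hashes."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:12]
            out[key] = f"pii:{digest}"
        else:
            out[key] = value
    return out

customer = {"name": "A. Customer", "phone": "555-0100", "plan": "basic"}
print(pseudonymize(customer))
```

Because the same input and salt always yield the same digest, records can still be joined for analysis, while rotating the salt severs old pseudonyms when required.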

The second form of customer information is purchased from other organizations and includes data ranging from online shopping patterns to social media activity. This type of information (referred to as “third-party data”) is collected with the intention of manipulating a person into making a purchase. 

Most people don’t like the idea of their personal preferences and needs being observed and exploited. Honest and responsible businesses should not support the use of artificial intelligence in manipulating humans, nor third-party data in general.

Develop a philosophy of “do no harm” when using AI: There are businesses whose only goal is short-term profits in which deceit is fine, so long as it brings in a profit. But would you do business with them more than once? In the continuous pursuit of profits, it can be easy to lose sight of the big picture.

When ethics are included in the business model, a philosophy of do no harm develops. Honest, accurate information rarely damages a business’s customer base, but deceit or theft typically results in the loss of any future business with that customer. Additionally, accurate information streamlines the efficiency and flow of the larger society, in turn promoting the advancement of humankind. The introduction of misinformation can result in chaos and confusion. 

Artificial intelligence can be used to promote chaos and confusion, or it can be used for purposes of good communication. 

Develop a code of ethics for both the organization and the AI: An AI governance code of ethics should outline the organization’s desire and commitment to ethical behavior. This code of ethics may include a commitment to “use artificial intelligence to provide accurate information” and “artificial intelligence shall not be used to create or distribute misinformation.”

Creating an AI governance code of ethics helps an organization to establish clear standards of behavior. If made available to the public, a business’s code of ethics can help in developing the trust of customers and stakeholders, mitigating legal risks, and demonstrating social responsibility.

The data steward and AI ethics reports: An AI governance program should include a series of policies and procedures that support ethical concerns. One of these policies should require regularly scheduled ethics reports, and the data steward seems to be an appropriate person to assign this responsibility to. By creating a reporting mechanism on the ethical use of the organization’s artificial intelligence, senior leadership can ensure accountability. Routine audits can also help to identify potential legal issues and promote compliance. 
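The reporting mechanism might begin as simply as an append-only log that the data steward compiles into the scheduled report (a sketch under assumed field names; a real program will need far richer records):

```python
import json
from datetime import datetime, timezone

def ethics_log_entry(system: str, finding: str, severity: str) -> str:
    """Serialize one finding as a JSON line for an append-only ethics log."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "finding": finding,
        "severity": severity,
    })

entry = ethics_log_entry("resume-screener", "approval-rate gap between groups", "high")
print(entry)
```

Timestamped, append-only entries give auditors a trail to verify, which is what turns an ethics policy into something senior leadership can actually hold teams accountable to.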

These measures collectively strengthen the implementation of an AI governance program and promote responsible AI practices throughout the organization.

Educate management and staff: Creating a comprehensive AI governance program requires that all staff and management have an understanding of the organization’s code of ethics and long-term goals. The education process ensures that all staff are working to achieve the same goals, and that no one on staff is misguidedly working against those goals.

The Use of Algorithms in AI Governance

If we, as humans, find a way to separate and identify accurate information from misinformation, we might be able to develop algorithms that prevent artificial intelligence from performing criminal acts and distributing misinformation.