Now Is the Time for Executives to Deploy Ethical Rules Around AI

By Usman Shuja

For better or worse, AI is disrupting almost every field imaginable. Corporations around the world are embracing its possibilities to make work more efficient, and the success of ChatGPT and other generative AI tools has caught the attention of nearly every industry looking to meet profitability, efficiency, and sustainability goals.

Money is pouring into the technology. In 2023, more than 25% of all U.S. investment dollars in American startups went into AI-related companies. What’s more, AI startups are now expected to see an annual growth rate of 37.3% from 2023 to 2030. This investment is allowing companies around the globe to operate with unprecedented efficiency, speed, and sustainability by revisiting and revamping old processes.

While we are developing ways to use AI to build a better future, we are also at a pivotal moment in how companies deploy such disruptive technology. We need to ensure that these tools are used cautiously and responsibly.

The evolution of technology is a runaway train. In many respects, the field of AI is in its adolescence, yet in the past year alone, advances in AI and other nascent technologies have continued at a breakneck pace. Globally, the public and private sectors and academia are engaged in ongoing debates over the promise, peril, and appropriate uses of AI. As a result, we can expect 2024 to be a year of enhanced government regulation of the technology. I believe it will ultimately fall to governments to set the standards and laws defining those parameters. But while governments will undoubtedly play a critical regulatory role, the speed of AI adoption will require that company executives create ethical guidelines of their own around AI.

Specifically, companies should be concerned with the following:

Work with unbiased data: AI raises many questions about the data used to train its algorithms. Companies should take the lead in making sure that any AI products they develop or source draw from datasets that are fair and transparent [1]. The relationship between privacy and fairness is complex, and design choices and assumptions about both should be considered at the outset and made explicit. 

Making sure all AI tools are being formed using unbiased and balanced data will be of particular importance. Consider facial recognition technology powered by generative AI. If an AI dataset is trained to favor a certain type of ethnicity, unfair biases and outcomes will almost certainly be baked into the tool. Working to create AI tools that avoid these types of dangerous and inequitable outcomes will be critical to the technology’s continued development. 
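One practical starting point is a simple audit of how demographic groups are represented in the training data before any model is trained. The sketch below is a minimal, hypothetical example: the group labels, the 15% threshold, and the function name are all assumptions for illustration, not a prescribed standard.

```python
from collections import Counter

def audit_group_balance(groups, min_share=0.15):
    """Return the share of each demographic group whose representation
    in the training data falls below min_share (a hypothetical threshold).

    An empty result means no group is flagged as underrepresented."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Example: a toy dataset heavily skewed toward group "A".
labels = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
underrepresented = audit_group_balance(labels)
print(underrepresented)  # {'C': 0.05}
```

A check like this is only a first pass; representation parity alone does not guarantee fair outcomes, but a skew caught here is far cheaper to fix than one discovered after deployment.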

Deploy with transparency: The recent U.S. executive order on AI by President Biden underscores a renewed commitment to advancing research that fosters fairness, transparency, and ethical practices in the AI landscape. There are concerns in the tech community about the interpretability of the deep neural networks (DNNs) and large language models (LLMs) that most generative AI tools are built on. Executives will increasingly need transparency on these factors and others before broadening their applications of AI. Companies should adopt and publicize policies governing their AI algorithms to uphold transparency and ethical standards.

Deploy with purpose: Create an environment that encourages and incentivizes the rapid adoption of mature technologies in low-risk applications, and that emphasizes and prioritizes greater precaution and scrutiny in applications that are less mature. In other words, don’t jump on a new AI development just because it’s there. AI should be deployed with purpose.

Engage employees: Powerful technologies such as AI can appear threatening to many people, whether the concern is privacy, job security, or something else. Leaders therefore need to understand and empathize with their teams, and educate them on the hazards, challenges, and opportunities that lie ahead in order to realize the technology's potential. 

I predict that with AI, productivity gains could be exponential, and skills will need to evolve more dynamically. Specifically, employees should think through how they can measure performance and strategize proactively with a goal in mind vs. focusing on reactive tasks that are more easily outsourced to AI.

The evolution of AI and other digital tools will undoubtedly bring its share of unwanted change. But with ethical corporate applications and collaborative government oversight, these tools, and others to come, will ultimately increase the bottom line and make life better and easier for those who build, manage, and protect our world. Waiting to create and deploy ethical guidelines for AI's use in your corporation isn't an option. To stay ahead of the issues that are bound to arise, now is the time to lead in this area. Establish an internal committee responsible for ensuring that the oversight and execution of your AI strategy and tools remain consistent with the ethics that matter most to your organization. 

[1] Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. (2011). "Fairness Through Awareness."