The importance of artificial intelligence (AI) cannot be overstated. From a 5% increase in EBIT to the potential to create trillions of dollars in economic value, AI is the technology businesses are rushing to scale – and failing to do so could be costly. But while AI has come a long way, there is one challenge that continues to hold it back: the bias and prejudice of those who built the technology.
While some companies have taken steps to reduce bias as much as possible, the problems associated with AI bias run deep. They may occur when unconscious feelings about race, gender, sexual orientation, religion, or age creep into AI’s development, creating issues that may not be easy to detect. This was particularly apparent in a recent study from DataRobot, which found that more than one-third (36%) of businesses suffered due to the presence of AI bias in at least one algorithm. Of those businesses, more than half (62%) lost revenue and nearly as many (61%) lost customers. More than two-fifths (43%) of organizations lost employees, while 35% incurred legal fees due to a subsequent lawsuit or other legal action.
These are tangible side effects of a technology that, when implemented properly, could make every company more productive by eliminating routine work. But if businesses are to take full advantage of AI and create value for both businesses and their customers, the risk of AI bias must be minimized as much as possible. Let’s take a look at how these problems begin and how enterprises can weed them out.
Turn to Diverse Voices to Avoid AI Bias
AI is not inherently biased, but as long as humans are the ones developing it, any conscious or unconscious feelings could seep through. According to a report by Gartner, businesses will respond to this risk by requiring all personnel hired for AI development and training to demonstrate expertise in responsible AI by 2023.
In addition to increased accountability, AI programmers and designers can achieve greater success by being trained to recognize and avoid bias in the technology they’re creating. This is easier said than done, but it is essential. Biased algorithms can negatively impact a person’s ability to get health care or a home loan, and that’s just the tip of the iceberg. McKinsey found that bias in training data and models – which inform AI – is more difficult to detect if the company lacks diverse employees who are capable of noticing these issues in the first place. In other words, AI should always be built by people representative of the same demographics the end product or service is intended to serve.
The good news is that McKinsey also found that nearly 70% of businesses believe that promoting diversity, equity, and inclusion (DEI) is very or extremely critical. I couldn’t agree more – my own company created a Women in AI program, which empowers our women to share their stories of success and provides our clients and partners with a chance to learn more about their incredible journeys.
We hope their inspiring tales will encourage more women to pursue a career in STEM fields, which will go a long way toward shrinking the industry’s gender gap. It will also do a lot for our industry. By bringing more diverse voices to the table, and by ensuring that AI is built by a greater variety of consumers, businesses can avoid the pitfalls of deploying algorithms that were built by select individuals.
Keep Humans in the Loop to Stop AI Bias in Its Tracks
Diversity is an important step forward, but Harvard Business Review noted that “vigorous human review” is another essential aspect that should not be overlooked. Some major businesses have already put this into action, including Sony Group. During a conference last fall, Alice Xiang, who leads the company’s AI Ethics Office, explained how she regularly instructs her business units to conduct fairness assessments. Instead of waiting to take action when something goes wrong, she wants her business units to monitor continuously and prevent AI bias from becoming a problem in the first place.
Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, echoed Xiang’s strategy to prevent AI bias. Mahmoudian also spoke at that conference and emphasized the importance of monitoring AI at every step of development, allowing AI teams to determine whether their product is ready to be deployed.
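To make the idea of a fairness assessment concrete, here is a minimal sketch of one common check – the disparate impact ratio between two demographic groups. This is an illustrative example only, not DataRobot’s or Sony’s actual tooling; the function name, data, and the 0.8 threshold (the “four-fifths rule” of thumb) are all assumptions for the sake of illustration:

```python
# Hypothetical fairness check: disparate impact ratio between two groups.
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").

def disparate_impact(predictions, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(protected) / positive_rate(reference)

# Illustrative loan-approval predictions (1 = approved) for two groups.
preds  = [1, 0, 1, 0, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(preds, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 for this toy data
if ratio < 0.8:
    print("Potential bias detected - hold deployment for human review.")
```

Running a check like this at every stage of development, rather than once before launch, is what turns a fairness assessment into the continuous monitoring both Xiang and Mahmoudian describe.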
Businesses can also take steps to ensure AI remains unbiased by developing procedures to stop bias in its tracks. For example, a truly intelligent AI may draw upon past experiences with users to create new business processes whenever an unfamiliar workflow is encountered. That’s great – but if the AI is allowed to proceed on its own, unintended consequences may occur. However, if businesses require that newly created processes be approved by human subject matter experts before they are deployed, they can reduce the risk that biased AI will make its way to end users.
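The approval procedure described above can be sketched as a simple gate that queues AI-proposed processes until a human signs off. The class, method names, and example workflow below are illustrative assumptions, not a specific vendor’s API:

```python
# Hypothetical human-in-the-loop gate: AI-proposed workflows are queued
# and only deployed after a human subject matter expert approves them.

class ProcessApprovalGate:
    def __init__(self):
        self.pending = []    # proposals awaiting human review
        self.deployed = []   # approved, live processes

    def propose(self, name, steps):
        """Called by the AI when it generates a new workflow."""
        self.pending.append({"name": name, "steps": steps})

    def review(self, name, approved, reviewer):
        """Called by a human expert; only approval triggers deployment."""
        for proposal in list(self.pending):
            if proposal["name"] == name:
                self.pending.remove(proposal)
                if approved:
                    proposal["reviewer"] = reviewer
                    self.deployed.append(proposal)
                return approved
        raise KeyError(f"No pending proposal named {name!r}")

gate = ProcessApprovalGate()
gate.propose("refund-escalation", ["classify ticket", "route to agent"])
gate.review("refund-escalation", approved=True, reviewer="sme@example.com")
print([p["name"] for p in gate.deployed])  # only human-approved processes go live
```

The design choice here is that rejection and approval both clear the queue, so nothing the AI generates can linger in limbo or slip into production without an explicit human decision.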
Start Now to Build a Better AI
AI is one of the most important technologies businesses will deploy in the years to come. It has the power to add trillions of dollars in value as it replaces tasks, not jobs. But in order for AI to do its very best – and provide both B2B and B2C end users with the results they demand – businesses must take the necessary steps to prevent AI bias from creeping in. While AI should be continually monitored for bias, businesses can also benefit by bringing diverse voices into the fold. And by empowering all people to develop, perfect, and refine AI, enterprises can increase the likelihood that the tech will properly serve its target audience.