AI Regulation Must Focus on Building Public Trust

By CF Su

Mere months after generative AI captured the world’s attention, leaders like OpenAI’s Sam Altman and Google’s Sundar Pichai testified before Congress with a simple message: Prioritize AI regulation before the technology gets out of hand.

The message surprised many – especially coming from the leaders who unveiled the revolutionary tools themselves – but it has become clear that some form of oversight is needed to safely guide generative AI’s growth. However, there’s an even better reason to regulate how these tools are introduced to daily life, and that’s to build public trust. 

The Biggest Challenge Facing Generative AI

Public opinion surrounding generative AI can have a profound impact on the technology's growth and future implementation. People who misuse AI tools could pose a danger to others, and worse yet, they could shake humans' ability to trust any digital interaction.

Consider robocalls and robotexts as examples. Nowadays, most people hesitate to answer the phone unless they recognize the number, because automated calls are so often intrusive or outright fraudulent. Similarly, many have seen a surge in scam texts that impersonate someone else to extract information, whether by getting recipients to click a link or hand over personal details. These scams have only grown more convincing as the technology advances, so how can individuals really tell the difference?

When generative AI starts to creep into everyday life, that trust gap could widen. Some won't be able to trust who's on the other end of an email, a chat, or even a video call. Keeping humans involved in guiding AI is critical to building trust.

How Do We Get There?

While there has been discussion around regulating the development of AI itself, that approach is not pragmatic. It's much easier to regulate commercial applications than research and development, which is why governments should regulate specific use cases, such as licensing the business applications of AI models, rather than requiring licenses for creating them.

Self-driving cars are a great example of a tech innovation that has generated a lot of exciting buzz in recent years. Despite the hype, these vehicles inherently create a public safety issue: What if the car's AI model misreads a situation or misses an oncoming driver? By regulating specific use cases on the commercial side, governments can show the public that they are taking this technology seriously and ensuring it is applied ethically and safely. That is a significant step toward building public trust in generative AI, and it can help people feel more at ease using the technology.

The Future Is Bright

The future of generative AI – and all emerging, innovative technologies, for that matter – is exciting. These tools will help us focus on value-added activities while freeing up time spent on mundane tasks, such as data entry or scouring the internet for a piece of information.

It will be especially interesting to see how the U.S. government responds in the coming months, including how responsibility is divided between industry and legislators. But one thing is certain: The industry must converge on a set of high-level AI regulation principles to continue the conversation. The fate of the technology depends on it.