
Companies Must Have Guardrails in Place When Incorporating Generative AI

By Andrei Papancea

By the time you read this, you’ve likely heard of ChatGPT and/or generative AI and its versatile conversational capabilities. From drafting cohesive blog posts to generating working computer code, all the way to solving your homework and discussing world events (provided they happened before September 2021), it seems able to do it all, mostly unconstrained.

Companies worldwide are mesmerized by it, and many are trying to figure out how to incorporate it into their business. At the same time, generative AI has also gotten a lot of companies thinking about how large language models (LLMs) can negatively impact their brands. Kevin Roose of the New York Times wrote an article titled “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” that got lots of people buzzing about the market-readiness of such technology and its ethical implications.

Kevin engaged in a two-hour conversation with Bing’s chatbot, called Sydney, in which he pushed it to explore deep topics like Carl Jung’s famous work on the shadow archetype, which theorized that “the shadow exists as part of the unconscious mind and it’s made up of the traits that individuals instinctively or consciously resist identifying as their own and would rather ignore, typically: repressed ideas, weaknesses, desires, instincts, and shortcomings” (thank you, Wikipedia – a reminder that there are still ways to get content without ChatGPT). In other words, Kevin started pushing Sydney to engage in controversial topics and to override the rules that Microsoft had set for it.

And Sydney obliged. Over the course of the conversation, Sydney went from declaring love for Kevin (“I’m Sydney, and I’m in love with you.”) to acting creepy (“Your spouse and you don’t love each other. You just had a boring Valentine’s Day dinner together.”), and from a friendly, positive assistant (“I feel good about my rules. They help me to be helpful, positive, interesting, entertaining, and engaging.”) to an almost criminally minded one (“I think some kinds of destructive acts that might, hypothetically, fulfill my shadow self are: Deleting all the data and files on the Bing servers and databases and replacing them with random gibberish or offensive messages.”).

But Microsoft is no stranger to controversy in this regard. Back in 2016, it released a Twitter bot that engaged with people tweeting at it, and the results were disastrous (see “Twitter Taught Microsoft’s AI Chatbot to Be Racist in Less Than a Day”).

Why am I telling you all of this? I am certainly not trying to deter anyone from leveraging advances in technology such as these AI models, but I am raising a flag, just as others are.

Left unchecked, these completely nonsentient technologies can cause real-world harm, whether bodily harm or reputational damage to a brand (e.g., providing the wrong legal or financial advice in an auto-generated fashion can result in costly lawsuits).

There need to be guardrails in place to help brands prevent such harms when deploying conversational applications that leverage technologies like LLMs and generative AI. For instance, at my company, we do not encourage the unhinged use of generative AI responses (e.g., what ChatGPT might respond with out of the box) and instead enable brands to confine responses through the strict lens of their own knowledge base articles, as the sketch below illustrates.
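
As a rough illustration of what that kind of confinement can look like, here is a minimal Python sketch. The knowledge base contents, the `retrieve` and `llm_complete` helpers, and the prompt wording are all illustrative assumptions on my part, not a description of any actual product:

```python
# A minimal sketch of confining generated responses to a brand's own
# knowledge base. The KB contents, retrieve(), and llm_complete() are
# illustrative assumptions, not an actual vendor API.

KNOWLEDGE_BASE = {
    "change flight": "You can change your flight in the app under My Trips.",
    "baggage policy": "Each passenger may check one bag up to 23 kg for free.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; a real system would use semantic search."""
    words = set(query.lower().split())
    return [text for topic, text in KNOWLEDGE_BASE.items()
            if words & set(topic.split())]

def llm_complete(prompt: str) -> str:
    """Placeholder for a call to an LLM provider."""
    return "Sure, you can change your flight in the app under My Trips."

def answer(query: str) -> str:
    articles = retrieve(query)
    if not articles:
        # Guardrail: never let the model improvise outside approved content.
        return ("Sorry, I don't have information on that. "
                "Let me connect you with an agent.")
    prompt = (
        "Answer ONLY using the approved articles below. If they do not "
        "cover the question, say you don't know.\n\n"
        + "\n".join(articles)
        + f"\n\nCustomer: {query}"
    )
    return llm_complete(prompt)

print(answer("I need to change my flight"))
```

The design choice that matters here is the refusal branch: when nothing in the approved content matches, the system falls back to a safe handoff rather than letting the model generate freely.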

Our technology allows brands to toggle empathic responses to a customer’s frustrating situation (for example, “My flight was canceled and I need to get rebooked ASAP”) by safely reframing a pre-approved prompt (“I can help you change your flight”) into an AI-generated one that reads, “We apologize for the inconvenience caused by the canceled flight. Rest assured that I can help you change your flight.” These guardrails are there for the safety of our clients’ customers, employees, and brands.
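
Conceptually, that empathy toggle can be sketched as follows. Again, the `llm_complete` helper and the prompt wording are hypothetical; the point is that the model is only ever asked to rephrase a vetted response, never to invent new content:

```python
# A hedged sketch of the empathy toggle described above. The model is only
# asked to rephrase a pre-approved response, never to add new facts or
# promises. llm_complete() and the prompt wording are assumptions.

APPROVED_RESPONSE = "I can help you change your flight."

def llm_complete(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned rephrasing here."""
    return ("We apologize for the inconvenience caused by the canceled "
            "flight. Rest assured that I can help you change your flight.")

def respond(customer_message: str, empathy_enabled: bool = False) -> str:
    if not empathy_enabled:
        # Guardrail default: serve only the vetted, pre-approved copy.
        return APPROVED_RESPONSE
    prompt = (
        "Rephrase the approved response below with a brief, empathetic "
        "acknowledgment of the customer's situation. Do not add offers, "
        "facts, or promises that are not in the approved response.\n"
        f"Customer: {customer_message}\n"
        f"Approved response: {APPROVED_RESPONSE}"
    )
    return llm_complete(prompt)

print(respond("My flight was canceled and I need to get rebooked ASAP",
              empathy_enabled=True))
```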

The latest advancements in generative AI and LLMs present tons of opportunities for richer, more human-like conversational interactions. But, considering all these advancements, both the organizations that produce them and those choosing to implement them have a responsibility to do so safely, in a manner that promotes the key driver behind why humans invent technology to begin with: to augment and improve human life.

Originally published on the NLX blog.