
Lessons Learned from Implementing AI Governance

The rapidly accelerating use of artificial intelligence (AI) has brought with it a host of challenges – not least that AI is hard to govern, because existing data governance frameworks were built for traditional IT, not AI. Now, organizations must contend with everything from creating new decision-making processes, such as deciding who approves AI use cases, to managing the workforce challenges that arise as AI changes employee responsibilities and workflows.

Enterprises need to drive AI governance programs that stick, even as AI capabilities are evolving rapidly. “AI governance has to grasp the idea that uncertainty is a feature, not a flaw, and be able to set up an organizing framework as a concept to leverage that uncertainty and ensure that there is value delivered to the organization,” said Kelle O’Neal, the founder and CEO of First San Francisco Partners, in her Leading AI Governance webinar, a new DATAVERSITY series aimed at helping individuals articulate the value of AI governance and influence its deployment in the enterprise.

AI is scaling faster than governance – just 38% of organizations polled by TrendAI have AI policies. Addressing this through a series of four lessons, O’Neal and Lisa Wintrick, executive adviser at First San Francisco Partners, dove into issues such as how to talk about AI governance with executives, focus on guiding principles, and accelerate efforts through active and transparent decision-making.

Lesson 1: Determine AI Roles

Where to start? “As an AI governance leader, you should really look closely at what you expect AI to do,” Wintrick said. “We found this evidence in a couple of different places to say that what executives are expecting AI to do and how AI will behave is a lot like some of the roles that we all play in our organization.” By that she was referring to tasks such as assistant-level jobs and coding, as well as the use of agentic AI to autonomously carry out various processes – and even make decisions.

Human resources organizations have already defined the kinds of roles that AI will take on, and if the AI is supposed to act as your agentic peer, she pointed out, “then maybe we need to start considering the way that we think about it in more of an HR-centric way.” What she and O’Neal call “the aha moment” is that this paradigm of what AI will bring and how it will behave means “we have to think about it as both qualitative and quantitative, and give it performance reviews like we do each other.” Just as people’s roles will be rethought and expanded as AI’s functionality gains acceptance in the workplace, HR will become a new stakeholder in the AI governance process, working with IT to assess AI assets accordingly.

As an example, O’Neal asked: If an AI agent does something wrong, is it a mistake, as it likely would be classified if a human made the wrong move, or is it a defect? Is that error addressed by better learning and management, as would likely be the case for a human-made error, or by changing code? Or both?

Additionally, O’Neal asked that we consider the impact this will have on how an organization is constructed: “We’re looking at work charts, right? And how is work getting executed? Does that relate to an org chart? Probably not,” she said.

Lesson 2: Align AI for Impact

The value of AI is measured by its impact on the business. No surprise there, O’Neal noted. The issue, however, is that when people discuss AI’s business value, “there is a consistent and somewhat alarming disconnect” between what leaders understand and expect, and what is actually being built and instantiated in an organization.

That governance gap must be closed, and the discussion starts in the boardroom. “What organizations need to do is that they really need to truly decide, as they jump into AI, what does it mean to be a business enabler?” she said. It’s fine to run silo-based AI experiments, but organizations need to understand whether those experiments will create a path to success that delivers the expected value, or whether they will simply lead to further exploration.

The board and executives are likely thinking at a more holistic level, where the operating model is rewired with AI as the enterprise nervous system and as the engine of growth and reinvention. That level of business enablement does not mesh well with an organization that treats AI as a series of experiments conducted in functional silos to automate their own processes: functional reinvention, where AI drives targeted, disciplined improvements rather than strategic use cases. The end result is that decision-making, from resource allocation through accountability, reflects the disconnect, ultimately producing more difficulty and higher expense relative to the value returned.

It’s not that siloed AI work doesn’t add value; adding value is what everybody wants to do for their organizations, O’Neal said. But, “when it’s done in those functional silos, it doesn’t necessarily translate up to the board of directors.” At the same time, board discussions cannot stay behind closed doors; they must be opened up so that expectations set at the very highest level filter down through the enterprise.

“We want to identify impact and share that through the rest of the organization, so that when leadership does set strategic goals, they are actually setting the direction for the organization, and the organization can ensure that the work that they’re doing meets those strategic goals,” she said.

Wintrick added that “breaking down that disconnect really makes the people on the front lines feel like they’re contributing to the whole.”


Lesson 3: Building a Safety Net

As a still-maturing technology, AI often comes with a long string of issues that AI governance programs may not be able to handle, said O’Neal, starting with the fact that, because it is a technology, it is often first tossed over to IT for governance. Frustration grows with the feeling that AI has gotten out ahead of us, and with every unanswered question about who owns AI and what the strategy for its usage is.

“Thinking differently about how you instantiate a program, knowing that this adolescent child is still growing, but needs some really strict oversight, we have come up with something that we like to call a safety net,” which protects the organization, its people, and its IP, said Wintrick. Adaptability and agility are critical, especially in these early stages. To that end, business leaders should concentrate on getting out of the gate, not boiling the ocean, she suggested.

Follow these steps to stay secure as your AI governance evolves:

  • Core values: Your ethical charter and principles are your launching point. Your governance program will reflect what your organization believes in.
  • Business imperatives: You need to compete and win, so how are you going to do it? Will you adopt agentic AI? Or are you just trying to augment your data science team, for instance?
  • Accountability: Only a few people should be involved at the outset to ensure agility and nimbleness. That keeps the program better attuned to the speed at which AI is moving.
  • Measures: Communicate value in terms of how you are making progress and can get to the next level.

According to O’Neal, this safety net also provides a way to elevate the conversation to executives and the board.

Lesson 4: The Hows of Decision-Making

O’Neal and Wintrick also offered a decision-making framework to align the organization on how decisions are made, how communication occurs, and how to ensure transparency, so that organizational change can happen as effectively as possible. They positioned this as an orchestration layer that pairs decision-making with accountability, leveraging cross-functional, category-specific working groups.

Participants in the framework include the board; executive sponsors and steering committees; ethics and compliance leaders; data, analytics, and leadership committees; and data and AI stewards, for a start. “Every part of the organization needs to show up in this framework,” said Wintrick. “The point is that there is bi-directional flow, and that it is surrounded with governance, and that everyone’s voice is heard.”

The delegation of authority for board-level, enterprise, functional, and operational decision-making “helps to empower people while reducing the number of issues that need to be escalated up the decision-making framework,” said O’Neal. A decision framework allows authority to be delegated appropriately – not just to senior leaders – and at speed. Diving into the idea of where decisions are made is a great way to think about what AI governance needs to be in order to achieve board-level goals and functional goals, she said, and achieve innovation and acceleration without limitations.

Said O’Neal: “The organization at its core will change if decision-making is pushed to the right level.”
