
Data Governance in the AI Era: Are We Solving the Wrong Problem?

In April 2023, a major telecom company’s engineers made headlines for uploading proprietary source code and confidential meeting notes to a notable LLM. The tech world erupted and news outlets screamed about catastrophic data leaks. Within weeks, companies everywhere rushed to ban or heavily restrict AI tools. The fear was understandable, but was it the right lesson to learn?

These early cases drove a wave of FUD (fear, uncertainty, and doubt) that continues to linger across the enterprise. Instead of enabling thoughtful AI adoption, many organizations defaulted to paranoia. Bans were put in place. Policies were written that effectively prohibited AI use altogether, or required every AI-assisted work product to be labeled as if it were radioactive. The result? Companies believed they were protecting themselves when, in reality, they were crippling their ability to benefit from AI.

The issue is that these measures often miss the actual risk. The danger isn’t that AI tools are inherently malicious; it’s that they’re cloud-based internet services, and just like any cloud product, if you put sensitive data in the wrong place, you can lose control over it. What companies should be doing isn’t banning AI, but putting in place common-sense policies that address data leakage the same way we already manage other cloud services.

Understanding the Real Risk

The telecom company’s incident revealed something important about how organizations currently think about data security in the AI era. The immediate reaction treated AI tools as a new class of threat requiring new defenses. In reality, they represent a problem we’ve seen before, this time with different interfaces.

When employees upload data to any cloud service, whether it’s a generative AI (GenAI) platform or a document collaboration tool, the core risk remains unchanged: data leaves your perimeter and you lose direct control. The vendor’s policies, security practices, and data handling procedures then become your responsibility by proxy. That isn’t a generalized AI problem; it’s a fairly typical data governance problem that AI simply made more visible to executives and security teams who hadn’t been paying close attention to how cloud adoption had already transformed organizational data boundaries. The distinction matters because it fundamentally changes how you should respond to the risk.

If you treat AI as a unique and novel threat, you build walls that isolate it from productive use. But if you treat it as another cloud service requiring appropriate governance, you can move forward with guardrails that enable safe adoption. The real challenge organizations face is balancing safe adoption with genuine productivity gains, while avoiding the overcorrection that leaves transformative AI capabilities locked away in a compliance vault while competitors move forward.

The Cost of Fear-Based Policy

Policies designed to prevent data leakage often increase it, because they push adoption into spaces where governance is impossible. Organizations operating under blanket AI bans face a hidden problem: accelerated shadow IT adoption. Employees are still using these tools, just on personal accounts, outside company networks, beyond visibility and control. The irony is stark.

Many organizations adopted acceptable use policies that mirrored the major telecom’s post-incident reaction. This approach remains common across enterprises today: security teams tasked with protecting data default to total restriction. It’s the safe play, technically defensible, easy to explain, and it transfers risk away from the security function, but it limits innovation and caps the organization’s potential.

The answer for security and IT leaders isn’t paranoia, it’s structure. By creating clear, enforceable rules, organizations can enable AI adoption while protecting the business.

Building Governance That Enables AI Adoption

The foundation of any effective AI governance model starts with visibility and control. Create a living list of sanctioned AI tools tied to enterprise accounts, not personal accounts or shadow IT.
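
As a concrete illustration, here’s a minimal sketch of what such a registry could look like. The field names, tool entry, and SANCTIONED_TOOLS structure are assumptions for illustration, not a prescribed schema; in practice this list would live in your asset inventory or access management tooling, not in application code.

    from dataclasses import dataclass

    @dataclass
    class SanctionedTool:
        """One entry in the living list of approved AI tools (illustrative schema)."""
        name: str                      # vendor or product name
        enterprise_account: bool       # tied to a company-managed account, not a personal one
        sso_enforced: bool             # access goes through the corporate identity provider
        max_data_classification: str   # highest sensitivity tier allowed, e.g. "internal"
        last_reviewed: str             # when the entry was last re-evaluated

    # Hypothetical entry -- replace with your organization's actual sanctioned list.
    SANCTIONED_TOOLS = {
        "example-llm-enterprise": SanctionedTool(
            name="example-llm-enterprise",
            enterprise_account=True,
            sso_enforced=True,
            max_data_classification="internal",
            last_reviewed="2025-01-15",
        ),
    }

    def is_sanctioned(tool_name: str) -> bool:
        """A tool counts as sanctioned only if it is listed and meets baseline controls."""
        tool = SANCTIONED_TOOLS.get(tool_name)
        return tool is not None and tool.enterprise_account and tool.sso_enforced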

Once you have that visibility, require all AI usage to go through company-issued credentials, ensuring every login is accountable and logged. Users authenticate through your identity provider, and audit trails capture usage patterns. When you can trace who accessed which tool and when, you create records that support both compliance requirements and incident investigation. More importantly, when employees know their access is logged, they’re more likely to follow data classification policies.
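
To make the audit-trail idea concrete, here’s a hedged sketch of how unsanctioned usage might be flagged from identity provider or CASB logs. The event fields, user names, and tool names are assumptions for illustration; real log exports will differ.

    from collections import Counter

    # Hypothetical SSO audit events -- the field names and tool names are assumptions
    # for illustration; a real identity provider or CASB export will look different.
    audit_events = [
        {"user": "alice", "tool": "example-llm-enterprise", "timestamp": "2025-03-01T09:12:00"},
        {"user": "bob",   "tool": "unapproved-ai-notes",    "timestamp": "2025-03-01T10:05:00"},
    ]

    SANCTIONED = {"example-llm-enterprise"}  # names drawn from the registry sketched earlier

    def flag_unsanctioned(events):
        """Yield any event that references a tool outside the sanctioned list."""
        for event in events:
            if event["tool"] not in SANCTIONED:
                yield event

    def usage_summary(events):
        """Count logins per (user, tool) pair to support audits and investigations."""
        return Counter((e["user"], e["tool"]) for e in events)

    for event in flag_unsanctioned(audit_events):
        print(f"REVIEW: {event['user']} used {event['tool']} at {event['timestamp']}")
    print(usage_summary(audit_events))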

One of the biggest mistakes organizations make is treating all data the same way, imposing blanket bans that create friction without proportional security benefit. A more effective approach classifies data by sensitivity level and creates rules aligned with that classification: highly confidential data stays out of unauthorized AI tools unless specifically approved, while public or low-sensitivity data can be used with guardrails (a simple version of this check is sketched below).

Establishing that AI is a tool rather than a scapegoat means employees are accountable for the quality of their work, no matter how it was generated. This shifts the focus from policing the tool to empowering the user.

Treating AI vendors the same way you treat any SaaS partner is essential, but it requires additional scrutiny: Does the vendor train on your data? If so, is it for your benefit only or to improve their global model? What are their data retention policies? Understanding these distinctions before adoption prevents surprises later.

Finally, policies that aren’t enforced don’t matter. Build in audits, reporting, and consequences for violations, with exceptions requiring explicit sign-off. When people understand that violations are detected and addressed, compliance improves naturally.
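
As a rough illustration of the classification rules above, this sketch compares a piece of data’s sensitivity tier against the highest tier a sanctioned tool is approved to handle. The tier names and ordering are assumptions; substitute your organization’s own classification scheme.

    # Sensitivity tiers, ordered lowest to highest. The tier names are illustrative
    # assumptions -- align them with your own classification scheme.
    CLASSIFICATION_ORDER = ["public", "internal", "confidential", "highly_confidential"]

    def is_use_permitted(data_classification: str, tool_max_classification: str) -> bool:
        """Permit the request only if the data's tier does not exceed the tier the
        sanctioned tool has been approved to handle."""
        data_rank = CLASSIFICATION_ORDER.index(data_classification)
        tool_rank = CLASSIFICATION_ORDER.index(tool_max_classification)
        return data_rank <= tool_rank

    # A tool approved up to "internal" may receive public or internal data, but a
    # request tagged "confidential" should be blocked or routed for explicit sign-off.
    assert is_use_permitted("public", "internal")
    assert not is_use_permitted("confidential", "internal")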

Time to Choose

The telecom company’s case became a cautionary tale, but it distracted the industry from the real conversation around GenAI. The narrative turned into “AI is too dangerous to use” instead of “we need to manage it like any other powerful technology.” The truth is straightforward: AI doesn’t need to be treated like a bomb. It’s just another enterprise tool that requires similar, and sometimes more stringent, governance, clear boundaries, and real accountability.

If your policy today looks like a wall of “no,” you’re probably protecting yourself from the wrong consequence. The real risk isn’t that AI will suddenly go rogue; it’s that your people will use it without guidance, visibility, or control. Unmanaged adoption creates the very data leakage you’re trying to prevent. Managed adoption, through clear policy and good governance, creates visibility, accountability, and the ability to detect and respond to actual incidents.

Data professionals occupy a critical position in this conversation: they own the data architecture, the classification systems, and the audit trails that make AI governance possible. The success of enterprise AI adoption depends on your ability to translate security requirements into workable data policies. You’re not just protecting data, you’re enabling the business to move forward responsibly.

The next phase of enterprise AI adoption will likely separate organizations into two groups: those that built strong governance structures and encouraged safe use, and those that built walls that drove adoption into hiding. The first group will likely benefit from AI’s productivity gains while maintaining security. The second will face data incidents they can’t prevent because they can’t see them. The choice isn’t between security and innovation, it’s between governance and chaos. Choose governance, and you get both.
