Introduction
Organizations have invested heavily in technology-enabled governance over the past several years, building data governance platforms, catalogs, and assorted technology-assisted toolsets, yet many remain reliant on fragile, human-dependent operating models. Policies are defined, ownership is assigned, and workflows are documented, but the vast majority of governance outcomes still depend on manual effort, tribal knowledge, and, when those fail, escalation paths.
I contend that human-in-the-loop governance fails not because of poor-quality tools or a lack of technical knowledge, but because of a structural inability to scale. As the volume, velocity, and regulatory pressure on data increase, particularly under the EU AI Act, a manual data stewardship model accumulates hidden risk and will eventually collapse under its weight.
Drawing on experience with enterprise delivery models in this field, this article presents a governance-first paradigm within which autonomous data systems will have to operate. It explains why automating governance tasks does not automate responsibility for them, why artificial intelligence alone will not eliminate an organization's historic governance debt, and which issues data leaders must resolve by 2026 to avoid governance-driven failure.
The Illusion of Control in Modern Data Governance
Over the last decade, organizations have deployed catalogs, stewardship workflows, policy repositories, and governance councils. On paper, these tools appear to provide adequate governance. In practice, enforcement relies heavily on individuals noticing issues, interpreting intent, and acting appropriately within limited time.
This reliance on human action creates a false sense of control: governance artifacts exist, but accountability sits outside the formal system and is therefore difficult to enforce. Policies are procedural rather than structural, and as long as data changes little and organizational complexity stays low, the model may appear effective. As organizations grow larger and more complex, however, the model's effectiveness comes to depend on human vigilance rather than system-enforced guarantees.
How AI Exposes Data Governance Debt
AI does not obscure governance issues; it makes them visible. When organizations begin to adopt AI-based analytics and automation, gaps in lineage, trust, and accountability become plain to see, and questions that were once deferred until after the fact can no longer be avoided:
- Where did this data come from?
- Can we trust this data?
- If we challenge the outputs from our automated systems, who is responsible?
Human operators have traditionally resolved these questions implicitly. AI, by contrast, requires explicit, enforced answers, that is, actual governance, for all three.
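What an explicit, enforced answer looks like can be made concrete: an automated pipeline can refuse any dataset that cannot answer the three questions above. The following is a minimal sketch; the field names and checks are illustrative assumptions, not a real catalog schema.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative metadata record; the point is that the pipeline, not a
# person, asks the three governance questions before data is consumed.
@dataclass
class DatasetMetadata:
    name: str
    source_system: Optional[str] = None   # where did this data come from?
    quality_attested: bool = False        # can we trust this data?
    owner: Optional[str] = None           # who is responsible for it?

def governance_gaps(meta: DatasetMetadata) -> list:
    """Return the governance questions this dataset cannot yet answer."""
    gaps = []
    if not meta.source_system:
        gaps.append("lineage: no recorded source")
    if not meta.quality_attested:
        gaps.append("trust: no quality attestation")
    if not meta.owner:
        gaps.append("accountability: no named owner")
    return gaps

def admit_to_pipeline(meta: DatasetMetadata) -> None:
    """Refuse data that cannot answer all three questions."""
    gaps = governance_gaps(meta)
    if gaps:
        raise ValueError(f"dataset '{meta.name}' rejected: {gaps}")
```

A dataset with recorded lineage but no attestation and no named owner is rejected with two named gaps rather than flowing silently downstream.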
As a result, many organizations discover that their perceived governance maturity has been overstated: what was acceptable for reporting and dashboards fails the test of explainable, auditable, regulated processes. In essence, AI acts as a stress test, revealing years of governance debt that manual processes had quietly accumulated.
Human-in-the-Loop vs. Governance-First Systems
The dominant governance structures today follow a human-in-the-loop model. Technology automates the execution of specific tasks, such as moving data between systems, validating it, and enriching it, but responsibility for outcomes is not part of the automated system itself. Humans resolve disputes between systems, approve exceptions, and decide what is true when different systems reach different conclusions.
A governance-first model treats governance differently. The system itself is built to enforce policies, limits, and responsibilities to the same extent that humans would. The human role is not continuous intervention but policy design and exception administration.
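One common way to realize this is policy-as-code: policies are evaluated by the system at the point of action, and humans only see the exception queue. The sketch below is a simplified illustration under assumed policy names and action fields, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative policy-as-code sketch: each policy is a named predicate the
# system evaluates before an action runs; policy names and action fields
# here are hypothetical.
@dataclass
class Policy:
    name: str
    check: Callable[[dict], bool]   # returns True when the action is allowed

POLICIES = [
    Policy("no-pii-export",
           lambda a: not (a.get("contains_pii") and a.get("destination") == "external")),
    Policy("owner-required",
           lambda a: bool(a.get("owner"))),
]

def enforce(action: dict) -> None:
    """Block the action unless every policy passes. Humans design the
    policies and administer exceptions; they do not approve routine actions."""
    violations = [p.name for p in POLICIES if not p.check(action)]
    if violations:
        raise PermissionError(f"blocked by policies: {violations}")
```

The design point is that enforcement is deterministic and happens before the action, whereas a human-in-the-loop model would let the action run and rely on someone to catch the violation afterward.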
The distinction between automated tasks and automated responsibilities is critical when designing governance systems. Automated tasks primarily increase throughput; automated responsibilities increase reliability. Confuse the two, and the result is a system that rapidly scales activity while scaling the associated risk just as fast.
What Breaks First at Scale
As data ecosystems expand, recurring failure patterns emerge. Stewardship teams become bottlenecks as exception volumes grow faster than the capacity to resolve them. Escalation paths delay decisions and produce inconsistent outcomes.
Over time, informal workarounds become accepted as standard operating procedure. Controls are bypassed to deliver projects on time. Under the weight of exceptions, governance shifts from prevention to damage mitigation. This is not a failure of effort or tooling but of cognitive load: humans cannot reliably reason through highly complex, fast-moving systems with little usable information.
Governance applied at scale, in other words, does not degrade gracefully as human capacity is exceeded; it breaks down altogether.
What 2026 Will Demand from Data Leaders
Regulatory expectations for AI are rising rapidly, with Europe leading the way. The EU AI Act identifies accountability, traceability, and explainability as critical elements of compliance, and all three rest on data governance maturity: an organization cannot adequately explain an AI outcome unless it can vouch for the data behind it.
By 2026, boards and regulators will expect organizations to demonstrate that responsibility is embedded in their operational systems, not merely documented in policies. Governance will evolve from a support function into an operational capability.
The resilience of an organization's governance system will matter more than the precision of the models it governs.
Practical Steps for Data Leaders Today
Organizations that already have platforms in place may not need to invest in new systems; they need practical action today:
- Identify the areas of governance where individual human discretion currently carries the load.
- Separate task automation (e.g., data movement and validation) from responsibility enforcement (e.g., compliance), and make the latter explicit in system design.
- Limit informal, ad hoc requests that bypass formal governance channels.
Leaders should also focus on making governance outcomes deterministic, observable, and auditable, treating governance as an architectural practice rather than a workflow-based one. Systems that institutionalize responsibility will scale with confidence; systems that depend on heroic individual effort will struggle in an era of increasing complexity and regulatory scrutiny.
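"Auditable" can itself be an architectural property rather than a reporting exercise. One illustrative pattern, sketched below under assumed record fields, is a hash-chained audit trail in which every governance decision is appended as a tamper-evident record.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Illustrative append-only audit trail: each record is hash-chained
    to the previous one, so editing any past decision breaks the chain."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, decision: str, subject: str, outcome: str) -> dict:
        entry = {
            "decision": decision,
            "subject": subject,
            "outcome": outcome,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited record invalidates every later hash."""
        prev = "0" * 64
        for e in self._records:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["hash"] != prev:
                return False
        return True
```

The design choice here is that auditability is enforced by structure rather than by policy: an after-the-fact edit to any record is detectable, which is the property regulators and boards will increasingly expect systems to demonstrate.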