Preventing Strategy Outrunning Governance in AI

One of the clearest AI governance challenges facing companies today is not a failure of ambition. It is a failure of pacing. Put simply, strategy is moving faster than governance. Business teams want results. Senior executives hear daily about efficiency gains, lower costs, faster decision-making, enhanced customer engagement, and competitive advantage. Vendors are more than happy to promise it all. Employees are already experimenting with AI tools on their own. In that environment, the pressure to move quickly is relentless.

That is where the compliance function must step forward. Not to say no. Not to slow innovation for the sake of slowing it. But to ensure that innovation moves with structure, discipline, and accountability. Governance is not the enemy of AI strategy. Governance is what allows an AI strategy to scale without becoming an enterprise risk event.

The Central Question for Boards and CCOs

For boards, Chief Compliance Officers, and business leaders, the central question is straightforward: has the company defined the rules of the road before putting AI into production? If the answer is no, the company is already behind.

This is not a theoretical problem. It is happening every day. A business unit buys an AI-enabled tool before legal, compliance, IT, privacy, and security have reviewed it. A vendor pitches a product as low-risk automation, even though it actually makes consequential recommendations. An employee uploads sensitive data into a generative AI platform for convenience. A use case that began as internal support quietly migrates into customer-facing decision-making. A pilot project becomes business as usual without anyone documenting who approved it, what risks were considered, or what human oversight is supposed to look like.

That is what it means when strategy outruns governance. The business has a faster process for adopting AI than it has for understanding, controlling, and monitoring AI risk.

What the DOJ Expects

The Department of Justice has been telling compliance professionals for years that an effective compliance program must be dynamic, risk-based, and integrated into the business. That lesson applies directly here. Under the DOJ's Evaluation of Corporate Compliance Programs (ECCP), prosecutors ask whether a company has identified and assessed its risk profile, whether policies and procedures are practical and accessible, whether responsibilities are clearly assigned, whether decisions are documented, and whether the program evolves as risks change. AI governance sits squarely in that framework.

What “Rules of the Road” Means in Practice

What do the “rules of the road” look like in practice?

First, the company must define which AI use cases are permissible. These are lower-risk applications that can be used within established controls. Think internal drafting support, workflow automation for non-sensitive administrative tasks, or summarization tools used on approved data sets. Even here, there should be basic conditions: approved tools only, no confidential data unless authorized, user training, logging, and manager accountability.

Second, the company must identify restricted or high-risk use cases. These are situations where AI may be allowed, but only after enhanced review. This can include uses involving personal data, HR decisions, customer communications, pricing, fraud detection, credit or eligibility decisions, compliance surveillance, or any function where bias, opacity, or error could create legal, regulatory, or reputational harm. These use cases should trigger a more formal process that includes a documented risk assessment, legal and compliance review, data governance checks, testing, defined human oversight, and ongoing monitoring.

Third, the company must be clear about prohibited use cases. If an AI application cannot be used consistently with the company’s values, control environment, legal obligations, or risk appetite, it should be off-limits. That might include tools that process sensitive data in unapproved environments, systems that make fully automated consequential decisions without human review, or applications that cannot be explained, tested, validated, or monitored sufficiently for their intended use.

Fourth, the company must establish escalation thresholds. Not every AI decision belongs at the board level, but some certainly do. Use cases involving strategic transformation, material legal exposure, major customer impact, significant third-party dependency, or high-consequence decision-making may need escalation to senior management, a designated AI or risk committee, or the board itself. If management cannot explain when a matter gets elevated, governance is too vague to be trusted.
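
To make this concrete, here is a minimal sketch of how an intake tool might encode the three tiers and the escalation thresholds described above. The trigger lists, attribute names, and routing rules are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    PERMISSIBLE = "permissible"   # lower-risk, usable within standard controls
    RESTRICTED = "restricted"     # allowed only after enhanced review
    PROHIBITED = "prohibited"     # off-limits under the current risk appetite

# Attributes that push a use case into a higher tier, drawn from the
# categories discussed above. Names and groupings are illustrative.
RESTRICTED_TRIGGERS = {
    "personal_data", "hr_decisions", "customer_communications",
    "pricing", "credit_or_eligibility", "compliance_surveillance",
}
PROHIBITED_TRIGGERS = {
    "unapproved_data_environment",
    "fully_automated_consequential_decision",
    "cannot_be_explained_or_tested",
}

@dataclass
class UseCase:
    name: str
    attributes: set = field(default_factory=set)

def classify(use_case: UseCase) -> Tier:
    """Assign a governance tier based on a use case's risk attributes."""
    if use_case.attributes & PROHIBITED_TRIGGERS:
        return Tier.PROHIBITED
    if use_case.attributes & RESTRICTED_TRIGGERS:
        return Tier.RESTRICTED
    return Tier.PERMISSIBLE

def escalation_path(tier: Tier, material_exposure: bool) -> str:
    """Route a classified use case to the right level of review."""
    if tier is Tier.PROHIBITED:
        return "reject and document the rationale"
    if tier is Tier.RESTRICTED and material_exposure:
        return "escalate to the AI/risk committee or the board"
    if tier is Tier.RESTRICTED:
        return "enhanced review: legal, compliance, privacy, security, IT"
    return "standard intake review"

uc = UseCase("resume screening assistant", {"personal_data", "hr_decisions"})
tier = classify(uc)
print(uc.name, "->", tier.value, "->", escalation_path(tier, material_exposure=False))
```

The point is not the specific triggers. It is that the tiers and escalation rules are written down, applied consistently, and auditable after the fact.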

Why the NIST AI RMF Matters

This is where the NIST AI Risk Management Framework is so useful. NIST does not treat AI governance as a one-time signoff exercise. It organizes governance as an ongoing discipline through four connected functions: Govern, Map, Measure, and Manage. For compliance professionals, that is a practical operating model.

Govern means setting accountability, policies, oversight structures, and risk tolerances. It answers who is responsible, who decides, and what standards apply. Map means understanding the use case, context, stakeholders, data, and risks. It answers what the system is actually doing and where exposure lies. Measure means testing, validating, and assessing performance and controls. It answers whether the system works as intended and whether the company can prove it. Manage means acting on what is learned through oversight, remediation, change management, and continual improvement. It answers whether the company is prepared to respond when reality diverges from the plan.
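
One way to operationalize the four functions is as a recurring checklist rather than a one-time gate. The sketch below, with illustrative question text and record fields of my own devising, shows the idea: each function must have a documented, current answer, and gaps are surfaced rather than assumed.

```python
# The four NIST AI RMF functions, each paired with the question it answers.
# The question phrasing and the record structure are illustrative assumptions.
CHECKLIST = {
    "Govern":  "Who is responsible, who decides, and what standards apply?",
    "Map":     "What is the system actually doing, and where does exposure lie?",
    "Measure": "Does the system work as intended, and can we prove it?",
    "Manage":  "Are we prepared to respond when reality diverges from plan?",
}

def governance_gaps(answers: dict) -> list:
    """Return the functions that still lack a documented answer."""
    return [fn for fn in CHECKLIST if not answers.get(fn)]

record = {
    "Govern": "AI committee charter, approved 2025-01",
    "Map": "use-case intake form #42",
    "Measure": "",   # validation report not yet filed
    "Manage": "",
}
print("open gaps:", governance_gaps(record))  # -> ['Measure', 'Manage']
```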

How ISO 42001 Reinforces Governance Discipline

ISO 42001 reinforces the same message from a management systems perspective. It brings structure, accountability, controls, and continual improvement to AI governance. That matters because many organizations do not fail because of a lack of policy language. They fail because they do not operationalize accountability. ISO 42001 pushes companies to embed AI governance into defined processes, assign responsibilities, document controls, conduct internal reviews, and take corrective action. In other words, it turns aspiration into a management discipline.

What Happens When Strategy Outruns Governance

What happens when none of this is done well?

Shadow AI is usually the first warning sign. Employees use public or lightly reviewed tools because they are easy to use, fast, and readily available. Sensitive data may be entered without approval. Outputs may be used in business decisions without validation. The organization tells itself it is still in the experimentation phase, while the risk has already gone live.

Vendor-driven deployment is another danger. The company relies too heavily on what the vendor says the product can do and not enough on its own evaluation of what the product should do, how it works, what data it uses, and what controls are required. When something goes wrong, accountability becomes murky. Procurement says the business wanted speed. The business says IT approved the integration. IT says legal reviewed the contract. Legal says compliance owns the policy. Compliance says no one submitted the use case for formal review. That is not governance. That is institutional finger-pointing.

Undocumented approvals are equally dangerous. An AI tool is launched because everyone generally agrees it seems useful. But there is no record of the intended purpose, risk rating, required controls, human review standard, or approval rationale. Six months later, the company cannot explain why the system was deployed, what guardrails were put in place, or whether its use has drifted beyond its original scope.
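
A minimal approval record closes that gap. The sketch below shows the kind of fields such a record might capture; the field names and example values are illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovalRecord:
    use_case: str
    intended_purpose: str
    risk_rating: str                 # e.g., "permissible" or "restricted"
    required_controls: list
    human_review_standard: str
    approved_by: str
    approval_rationale: str
    approved_on: date
    review_due: date                 # forces periodic re-review

record = ApprovalRecord(
    use_case="contract summarization",
    intended_purpose="internal drafting support only",
    risk_rating="permissible",
    required_controls=["approved tool only", "no confidential data", "logging"],
    human_review_standard="attorney reviews all outbound summaries",
    approved_by="AI review committee",
    approval_rationale="low-risk, internal, human-in-the-loop",
    approved_on=date(2025, 3, 1),
    review_due=date(2026, 3, 1),
)
print(record.use_case, "approved until", record.review_due)
```

Six months later, the company that keeps records like this can answer exactly the questions the company in the paragraph above cannot.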

The Compliance Mechanisms Companies Need Now

That is why companies need concrete compliance mechanisms now:

  1. Intake process. AI use cases enter a formal review channel before deployment.
  2. Risk tiering. Not every use case gets the same treatment, but higher-risk applications receive enhanced scrutiny.
  3. Approval workflows. Defined roles for the business, legal, compliance, privacy, security, IT, and, where appropriate, model risk or internal audit.
  4. Board reporting triggers. Leadership is informed when AI adoption crosses materiality or risk thresholds.
  5. Current model and use-case inventory. The company knows what is in operation.
  6. Change management. Updates, retraining, vendor changes, and scope shifts are reviewed rather than assumed (see the sketch after this list).
  7. Periodic review. AI risk does not stand still after launch.
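
A simple way to operationalize the change-management item is to compare how a system is actually being used against what was approved and force re-review on drift. The attribute schema below is a hypothetical illustration.

```python
def needs_rereview(approved: dict, observed: dict) -> list:
    """Return the attributes where current use has drifted from approval."""
    return [k for k in approved if observed.get(k) != approved[k]]

approved = {
    "audience": "internal",
    "data_class": "non-sensitive",
    "decision_role": "drafting support",
    "vendor_version": "2.1",
}
observed = {
    "audience": "customer-facing",   # the quiet migration described earlier
    "data_class": "non-sensitive",
    "decision_role": "drafting support",
    "vendor_version": "3.0",         # vendor update never re-reviewed
}
drift = needs_rereview(approved, observed)
if drift:
    print("re-review required; drifted attributes:", drift)
```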

The Special Role of Compliance

The compliance professional has a special role here. Compliance is often the function best positioned to connect governance, process, accountability, documentation, and escalation. That is precisely what the DOJ expects in an effective program. If the company can buy AI faster than it can classify risk, document controls, assign accountability, and test outcomes, the program is not keeping pace with the business. That gap will not stay theoretical for long. It will harden into enterprise risk.

Conclusion: Governance Must Keep Pace With Strategy

The lesson is direct. Strategy and governance must move together. AI governance is not a brake pedal. It is the steering system. A company that wants the benefits of AI must be disciplined enough to define where AI can go, where it cannot go, who decides, what gets documented, and when the business must stop and reassess. If the company can move faster on AI strategy than on AI governance, it is creating risk faster than it can manage it. That is not innovation. That is exposure.

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as "What AI systems do we have, who owns them, and what decisions do they influence?" you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.
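
As a sketch of how those coverage metrics might be computed from an asset inventory, assuming a hypothetical inventory schema:

```python
# Illustrative inventory records; real programs would pull these from a
# system of record rather than hard-coded entries.
inventory = [
    {"asset": "pricing model",   "classified": True,  "impact_assessed": True,  "high_impact": True},
    {"asset": "resume screener", "classified": True,  "impact_assessed": False, "high_impact": True},
    {"asset": "chat summarizer", "classified": False, "impact_assessed": False, "high_impact": False},
]

def pct(part: int, whole: int) -> float:
    return round(100 * part / whole, 1) if whole else 0.0

total = len(inventory)
classified = sum(a["classified"] for a in inventory)
high_impact = [a for a in inventory if a["high_impact"]]
assessed = sum(a["impact_assessed"] for a in high_impact)

print(f"assets identified: {total}")
print(f"risk-classified: {pct(classified, total)}%")
print(f"high-impact models with pre-deployment impact assessment: {pct(assessed, len(high_impact))}%")
```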

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.
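
One way such a questionnaire might be scored is sketched below. The questions, weights, and threshold are illustrative assumptions, not an industry standard.

```python
# Weight per satisfied diligence control; higher weights for controls
# that matter more to assurance. All values are illustrative.
QUESTIONS = {
    "documents_training_data": 2,
    "supports_audit_rights": 3,
    "notifies_on_model_updates": 2,
    "provides_performance_evidence": 3,
}

def assurance_score(answers: dict) -> int:
    """Sum the weights of the controls the vendor actually satisfies."""
    return sum(w for q, w in QUESTIONS.items() if answers.get(q))

def enhanced_diligence_required(score: int, high_risk: bool, threshold: int = 7) -> bool:
    """High-risk vendors scoring below the threshold get enhanced diligence."""
    return high_risk and score < threshold

answers = {
    "documents_training_data": True,
    "supports_audit_rights": False,
    "notifies_on_model_updates": True,
    "provides_performance_evidence": False,
}
score = assurance_score(answers)  # 4
print("score:", score, "| enhanced diligence:", enhanced_diligence_required(score, high_risk=True))
```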

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving: 100 percent of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.
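
Mean time to investigate is also easy to compute once incidents are logged consistently. A minimal sketch, assuming a hypothetical incident-log format:

```python
from datetime import date
from statistics import mean

# Illustrative incident log; the fields are an assumption, not a standard.
incidents = [
    {"id": "AI-101", "opened": date(2025, 1, 6),  "closed": date(2025, 1, 20)},
    {"id": "AI-102", "opened": date(2025, 2, 3),  "closed": date(2025, 2, 10)},
    {"id": "AI-103", "opened": date(2025, 2, 17), "closed": date(2025, 3, 31)},
]

def mean_time_to_investigate(log: list) -> float:
    """Average days from an AI concern being opened to being resolved."""
    return mean((i["closed"] - i["opened"]).days for i in log)

print(f"MTTI: {mean_time_to_investigate(incidents):.1f} days")  # 21.0 days
```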

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.
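
For drift detection specifically, one common statistical check (not mandated by the DOJ or NIST, but widely used in model risk practice) is the population stability index, which compares a model's current input distribution against its validation baseline. A minimal sketch; the binning and the 0.2 alert threshold are a common rule of thumb, not a regulatory standard:

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population stability index over matching distribution bins:
    PSI = sum((a - e) * ln(a / e))."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]   # share of inputs per bin at validation
current  = [0.10, 0.20, 0.30, 0.40]   # share of inputs per bin in production

score = psi(baseline, current)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```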

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST's Map function. ECCP's continuous improvement aligns with NIST's Measure and Manage functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.