Categories
Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it actually has in place, and the protections it believes will respond if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
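The inventory-and-tiering process described above can be sketched in code. The following is a minimal illustrative example, not a prescribed implementation; the field names, tier labels, and tiering criteria are assumptions drawn from the examples in the text (customer interactions, financial reporting, third-party due diligence as high-impact markers).

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., internal productivity tools
    MEDIUM = "medium"  # e.g., tools that inform internal decisions
    HIGH = "high"      # e.g., financial reporting, due diligence, customer-facing

@dataclass
class AIUseCase:
    name: str
    business_owner: str            # who owns the use case and its risks
    decisions_informed: list[str]  # what decisions the tool informs
    data_sources: list[str]        # what data it relies on
    customer_facing: bool = False
    affects_financial_reporting: bool = False
    affects_third_party_diligence: bool = False

def tier(use_case: AIUseCase) -> RiskTier:
    """Assign a risk tier using the high-impact criteria named in the text."""
    if (use_case.customer_facing
            or use_case.affects_financial_reporting
            or use_case.affects_third_party_diligence):
        return RiskTier.HIGH
    if use_case.decisions_informed:
        return RiskTier.MEDIUM
    return RiskTier.LOW
```

A real program would add more dimensions (vendor provenance, data sensitivity, regulatory exposure), but even this skeleton forces the enterprise to record ownership, inputs, and impact for every use case before a control design conversation begins.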

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.


When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or take control from the seller management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control, and it reinstated the seller’s CEO and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, it creates a record. The court noted that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. A final decision alone does not prove good governance; the process surrounding it does. Was there dissent? Was there an analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.
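Lesson 2 above, treating high-risk prompts as governed business records, can be operationalized even in simple form. The sketch below is purely illustrative and assumed, not a real product or the approach taken in the case: a keyword screen that routes prompts touching litigation, terminations, deal rights, or board matters into a retention-and-escalation workflow. A production system would use richer classification, but even a crude gate establishes the principle.

```python
# Hypothetical topic lists; real deployments would tune these with counsel.
HIGH_RISK_TOPICS = {
    "litigation": ["lawsuit", "litigation", "discovery", "subpoena"],
    "terminations": ["terminate", "termination", "layoff", "dismissal"],
    "deal_rights": ["earnout", "covenant", "breach", "renegotiate"],
    "board_matters": ["fiduciary", "board resolution", "proxy"],
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the high-risk topic categories a prompt touches, if any."""
    text = prompt.lower()
    return [topic for topic, words in HIGH_RISK_TOPICS.items()
            if any(w in text for w in words)]

def requires_escalation(prompt: str) -> bool:
    """Flagged prompts fall under retention, legal review, and escalation policy."""
    return bool(flag_prompt(prompt))
```

The design point is less the matching logic than the policy it enforces: flagged interactions are retained, reviewed, and escalated rather than treated as an informal thinking space.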

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk. It creates a record, and that record can become the evidence.


Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

Underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.
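The idea of turning individual reports into organizational intelligence can be illustrated with a small sketch. This is an assumed, minimal example, not a recommended system design: it records AI concerns with function and category, then surfaces the "repeated concerns within a single function" pattern the text identifies as a systemic signal.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AIConcern:
    function: str   # e.g., "HR", "Finance", "Customer Service"
    tool: str       # which AI tool the concern involves
    category: str   # e.g., "bias", "accuracy", "privacy", "misuse"
    escalated: bool = False

def repeated_concerns(concerns: list[AIConcern],
                      threshold: int = 3) -> list[tuple[str, str]]:
    """Flag (function, category) pairs reported at or above the threshold,
    i.e., the repeated-concern pattern that may indicate a systemic problem."""
    counts = Counter((c.function, c.category) for c in concerns)
    return [key for key, n in counts.items() if n >= threshold]
```

The threshold of three is arbitrary here; the governance point is that concerns are structured enough to be counted, trended, and fed back into training, vendor management, and control design.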

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 

Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in.

Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo!Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.


Preventing Strategy Outrunning Governance in AI

One of the clearest AI governance challenges facing companies today is not a failure of ambition. It is a failure of pacing. Put simply, strategy is moving faster than governance. Business teams want results. Senior executives hear daily about efficiency gains, lower costs, faster decision-making, enhanced customer engagement, and competitive advantage. Vendors are more than happy to promise it all. Employees are already experimenting with AI tools on their own. In that environment, the pressure to move quickly is relentless.

That is where the compliance function must step forward. Not to say no. Not to slow innovation for the sake of slowing it. But to ensure that innovation moves with structure, discipline, and accountability. Governance is not the enemy of AI strategy. Governance is what allows an AI strategy to scale without becoming an enterprise risk event.

The Central Question for Boards and CCOs

For boards, Chief Compliance Officers, and business leaders, the central question is straightforward: has the company defined the rules of the road before putting AI into production? If the answer is no, the company is already behind.

This is not a theoretical problem. It is happening every day. A business unit buys an AI-enabled tool before legal, compliance, IT, privacy, and security have reviewed it. A vendor pitches a product as low-risk automation, even though it actually makes consequential recommendations. An employee uploads sensitive data into a generative AI platform for convenience. A use case that began as internal support quietly migrates into customer-facing decision-making. A pilot project becomes business as usual without anyone documenting who approved it, what risks were considered, or what human oversight is supposed to look like.

That is what it means when strategy outruns governance. The business has a faster process for adopting AI than it has for understanding, controlling, and monitoring AI risk.

What the DOJ Expects

The Department of Justice has been telling compliance professionals for years that an effective compliance program must be dynamic, risk-based, and integrated into the business. That lesson applies directly here. Under the ECCP, prosecutors ask whether a company has identified and assessed its risk profile, whether policies and procedures are practical and accessible, whether responsibilities are clearly assigned, whether decisions are documented, and whether the program evolves as risks change. AI governance sits squarely in that framework.

What “Rules of the Road” Means in Practice

What do the “rules of the road” look like in practice?

First, the company must define which AI use cases are permissible. These are lower-risk applications that can be used within established controls. Think internal drafting support, workflow automation for non-sensitive administrative tasks, or summarization tools used on approved data sets. Even here, there should be basic conditions: approved tools only, no confidential data unless authorized, user training, logging, and manager accountability.

Second, the company must identify restricted or high-risk use cases. These are situations where AI may be allowed, but only after enhanced review. This can include uses involving personal data, HR decisions, customer communications, pricing, fraud detection, credit or eligibility decisions, compliance surveillance, or any function where bias, opacity, or error could create legal, regulatory, or reputational harm. These use cases should trigger a more formal process that includes a documented risk assessment, legal and compliance review, data governance checks, testing, defined human oversight, and ongoing monitoring.

Third, the company must be clear about prohibited use cases. If an AI application cannot be used consistently with the company’s values, control environment, legal obligations, or risk appetite, it should be off-limits. That might include tools that process sensitive data in unapproved environments, systems that make fully automated consequential decisions without human review, or applications that cannot be explained, tested, validated, or monitored sufficiently for their intended use.

Fourth, the company must establish escalation thresholds. Not every AI decision belongs at the board level, but some certainly do. Use cases involving strategic transformation, material legal exposure, major customer impact, significant third-party dependency, or high-consequence decision-making may need escalation to senior management, a designated AI or risk committee, or the board itself. If management cannot explain when a matter gets elevated, governance is too vague to be trusted.
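The four rules of the road above can be sketched as a simple classification routine. This is a minimal illustration only: the tier names, the attributes on the use case, and the classification rules are assumptions drawn loosely from the examples in this post, not a prescribed taxonomy.

```python
from dataclasses import dataclass

# Illustrative tier labels; each company must define its own taxonomy.
PERMISSIBLE, RESTRICTED, PROHIBITED = "permissible", "restricted", "prohibited"

@dataclass
class UseCase:
    name: str
    uses_personal_data: bool = False
    fully_automated_decision: bool = False
    explainable: bool = True
    consequential: bool = False  # e.g. pricing, HR, credit, eligibility

def classify(uc: UseCase) -> str:
    """Assign a risk tier; hypothetical rules mirroring the post's examples."""
    if uc.fully_automated_decision and uc.consequential:
        return PROHIBITED  # consequential decisions without human review
    if not uc.explainable:
        return PROHIBITED  # cannot be tested or validated for its use
    if uc.uses_personal_data or uc.consequential:
        return RESTRICTED  # enhanced review before deployment
    return PERMISSIBLE     # approved tools only, basic conditions apply

def needs_escalation(uc: UseCase, tier: str) -> bool:
    """Hypothetical escalation threshold: any non-permissible, consequential use."""
    return tier != PERMISSIBLE and uc.consequential
```

In practice the rules would be far richer, but even this sketch makes the governance point: the tiers and the escalation threshold are written down and testable, not resolved ad hoc in someone's head.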

Why the NIST AI RMF Matters

This is where the NIST AI Risk Management Framework is so useful. NIST does not treat AI governance as a one-time signoff exercise. It organizes governance as an ongoing discipline through four connected functions: Govern, Map, Measure, and Manage. For compliance professionals, that is a practical operating model.

Govern means setting accountability, policies, oversight structures, and risk tolerances. It answers who is responsible, who decides, and what standards apply. Map means understanding the use case, context, stakeholders, data, and risks. It answers what the system is actually doing and where exposure lies. Measure means testing, validating, and assessing performance and controls. It answers whether the system works as intended and whether the company can prove it. Manage means acting on what is learned through oversight, remediation, change management, and continual improvement. It answers whether the company is prepared to respond when reality diverges from the plan.

How ISO 42001 Reinforces Governance Discipline

ISO 42001 reinforces the same message from a management systems perspective. It brings structure, accountability, controls, and continual improvement to AI governance. That matters because many organizations do not fail because of a lack of policy language. They fail because they do not operationalize accountability. ISO 42001 pushes companies to embed AI governance into defined processes, assign responsibilities, document controls, conduct internal reviews, and take corrective action. In other words, it turns aspiration into a management discipline.

What Happens When Strategy Outruns Governance

What happens when none of this is done well?

Shadow AI is usually the first warning sign. Employees use public or lightly reviewed tools because they are easy to use, fast, and readily available. Sensitive data may be entered without approval. Outputs may be used in business decisions without validation. The organization tells itself it is still in the experimentation phase, while the risk has already gone live.

Vendor-driven deployment is another danger. The company relies too heavily on what the vendor says the product can do and not enough on its own evaluation of what the product should do, how it works, what data it uses, and what controls are required. When something goes wrong, accountability becomes murky. Procurement says the business wanted speed. The business says IT approved the integration. IT says legal reviewed the contract. Legal says compliance owns the policy. Compliance says no one submitted the use case for formal review. That is not governance. That is institutional finger-pointing.

Undocumented approvals are equally dangerous. An AI tool is launched because everyone generally agrees it seems useful. But there is no record of the intended purpose, risk rating, required controls, human review standard, or approval rationale. Six months later, the company cannot explain why the system was deployed, what guardrails were put in place, or whether its use has drifted beyond its original scope.

The Compliance Mechanisms Companies Need Now

That is why companies need concrete compliance mechanisms now. They need an intake process for AI use cases to enter a formal review channel before deployment. They need risk tiering so not every use case gets the same treatment, but higher-risk applications receive enhanced scrutiny. They need approval workflows with defined roles for the business, legal, compliance, privacy, security, IT, and, where appropriate, model risk or internal audit. They need board reporting triggers to inform leadership when AI adoption crosses materiality or risk thresholds. They need a current model and use-case inventory so the company knows what is in operation. They need change management, so updates, retraining, vendor changes, and scope shifts are reviewed rather than assumed. And they need periodic review because AI risk does not stand still after launch.

The Special Role of Compliance

The compliance professional has a special role here. Compliance is often the function best positioned to connect governance, process, accountability, documentation, and escalation. That is precisely what the DOJ expects in an effective program. If the company can buy AI faster than it can classify risk, document controls, assign accountability, and test outcomes, the program is not keeping pace with the business. That gap will not stay theoretical for long. It will harden into enterprise risk.

Conclusion: Governance Must Keep Pace With Strategy

The lesson is direct. Strategy and governance must move together. AI governance is not a brake pedal. It is the steering system. A company that wants the benefits of AI must be disciplined enough to define where AI can go, where it cannot go, who decides, what gets documented, and when the business must stop and reassess. If the company can move faster on AI strategy than on AI governance, it is creating risk faster than it can manage. That is not innovation. That is exposure.

Categories
Blog

Ongoing Monitoring: Why AI Governance Begins After Launch

In this blog post, we turn to the fourth major governance challenge in AI: ongoing monitoring. This is one of the most persistent weaknesses in AI governance. Organizations may build an intake process. They may create an approval committee. They may conduct risk reviews, privacy assessments, and validation testing before launch. All of that is important. But it is not enough.

AI risk does not freeze at the moment of approval. It changes over time. Use cases evolve. Employees adapt tools in unexpected ways. Vendors modify models. Controls weaken in practice. Regulatory expectations shift. What looked reasonable at launch may become inadequate six weeks later.

That is why ongoing monitoring is not an optional enhancement to AI governance. It is a core governance requirement. For boards and CCOs, the central question is not simply whether the company approved AI responsibly. It is whether the company has the discipline to govern it continuously once it is in the wild.

Approval Is Not Governance

One of the great temptations in AI governance is to confuse approval with control. A business unit proposes a use case, a committee reviews it, guardrails are listed, and the tool goes live. At that point, many organizations behave as though the governance work is largely complete. It is not.

Approval is a moment. Governance is a process. The problem is that companies often put their best people, clearest thinking, and highest scrutiny into the approval stage, then shift immediately into operational mode without building the same discipline around post-launch oversight. That leaves management blind to how the system actually performs under real-world conditions.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) is especially instructive here. The ECCP does not ask merely whether a company has policies on paper. It asks whether the program works in practice, whether controls are tested, whether issues are investigated, and whether lessons learned are incorporated back into the compliance framework. AI governance should be viewed through the same lens. The question is not whether a control was described at launch. The question is whether that control continues to function and whether management would know if it stopped.

Why AI Risks Change After Launch

Post-deployment risk in AI does not arise because management failed to care on implementation day. It arises because AI systems operate in dynamic environments. A model may begin to drift as conditions change. A tool approved for one limited purpose may gradually be used for broader or higher-risk decisions. Employees may find workarounds that bypass the intended controls. Human reviewers may begin by scrutinizing outputs closely but, over time, may become overconfident, overloaded, or simply too reliant on the system. Vendors may update underlying functionality without the company fully appreciating the consequences. New regulations or regulatory interpretations may alter the risk landscape. Inputs may change. Outputs may become less reliable. Bias may surface in ways not identified in initial testing.

In other words, AI governance risk is not static. It is operational. That is why boards and CCOs must resist the notion that initial approval is the hardest part. In many respects, ongoing monitoring is harder because it requires sustained attention, clear metrics, escalation discipline, and the willingness to revisit prior assumptions.

The Governance Question

After implementation, the governance question changes. It is no longer simply, “Was this use case approved?” It becomes, “Is the use case still operating as expected, within risk tolerance, and under effective control?” That sounds simple, but it requires a much more mature oversight model than many companies currently have. It requires management to define what should be monitored, how frequently, by whom, and what changes or anomalies trigger escalation. It requires a reporting structure that does not simply celebrate adoption or efficiency gains, but surfaces incidents, deviations, near misses, and control fatigue.

For the board, the challenge is to insist on post-launch visibility. Board reporting on AI should not end with inventories and implementation updates. It should include information about ongoing performance, exception trends, complaints, incidents, validation results, vendor changes, policy breaches, and remediation efforts. A board that hears only that AI adoption is accelerating may never learn whether AI governance is working.

For the CCO, the challenge is even more immediate. Compliance must ask whether the organization is gathering evidence that controls continue to function in practice. If it is not, then the governance program is still immature, no matter how polished its approval process may appear.

Monitoring What Matters

It all begins by identifying the right things to monitor. This cannot be a generic exercise. Monitoring should be tied to the specific use case, its risk classification, and its control environment. But there are some recurring categories that boards and CCOs should expect to see.

  1. Performance should be monitored. Is the tool still delivering outputs that are accurate, reliable, and appropriate for the intended purpose? Have error rates changed? Are there signs of drift or degraded quality?
  2. Control effectiveness should be monitored. Are human review requirements actually being followed? Are approval restrictions, access controls, or usage limitations still operating as designed? Is there evidence that employees are bypassing or weakening controls?
  3. Incidents and complaints should be monitored. Has the tool produced problematic results? Have customers, employees, or managers raised concerns? Have there been internal reports about bias, inaccuracy, misuse, or confidentiality risks?
  4. Changes in scope should be monitored. Is the tool still being used for the original purpose, or has it drifted into new contexts? Scope creep is one of the oldest compliance problems in business, and AI is no exception.
  5. External change should be monitored. Has a vendor updated the model? Have relevant laws, guidance, or industry expectations changed? Has a new regulatory concern emerged that requires reevaluation?

This is where the NIST AI Risk Management Framework is especially useful. NIST emphasizes that organizations must govern, measure, and manage AI risk over time, not simply identify it once. ISO/IEC 42001 reaches the same conclusion from a management systems perspective by requiring continual improvement, internal review, and adaptive controls. Both frameworks point to the same truth: effective AI governance is iterative, not episodic.

The CCO’s Role in Governance

For compliance professionals, ongoing monitoring is where the AI governance conversation becomes most familiar. This is where the CCO brings real institutional value. Compliance understands that controls weaken over time. Training decays. Workarounds emerge. Policies lose operational traction. Reporting channels capture issues others do not see. Root cause analysis matters. Corrective action must be tracked to closure. These are not new lessons. They are the daily work of compliance. AI gives them a new domain.

The CCO should insist that AI use cases have documented post-launch monitoring plans. These should identify the responsible owner, the metrics to be reviewed, the review frequency, the escalation triggers, and the process for documenting findings and remediation. High-risk use cases should not be left to passive observation. They should be actively governed.
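A documented monitoring plan of the kind described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the metric names, thresholds, and plan fields are hypothetical, and a real plan would carry far more context. What the sketch shows is that the owner, the metrics, the review cadence, and the escalation triggers are written down, and that findings accumulate as a record rather than evaporating.

```python
from dataclasses import dataclass, field

# Illustrative monitoring plan; metric names and thresholds are assumptions.
@dataclass
class MonitoringPlan:
    use_case: str
    owner: str                               # responsible owner, named in advance
    review_frequency_days: int               # review cadence, defined up front
    escalation_triggers: dict[str, float]    # metric -> maximum tolerated value
    findings: list[str] = field(default_factory=list)

    def check(self, observed: dict[str, float]) -> list[str]:
        """Compare observed metrics to documented triggers; log any breaches."""
        breaches = [metric for metric, limit in self.escalation_triggers.items()
                    if observed.get(metric, 0.0) > limit]
        # Findings are documented for follow-up, not handled informally.
        self.findings.extend(f"escalate: {m}" for m in breaches)
        return breaches
```

High-risk use cases would layer much more on top of this, but even a structure this small converts passive observation into active governance: a breach is detected against a pre-agreed threshold and leaves a trace.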

The CCO should also ensure that AI monitoring is connected to the broader compliance ecosystem. Employee concerns raised through speak-up channels may reveal issues with the model. Internal investigations may expose misuse. Third-party due diligence may uncover changes to vendors. Training gaps may explain repeated incidents. AI governance should not be isolated from these functions. It should be integrated with them.

This is also where the CCO can most effectively help the board. Rather than presenting AI as a series of isolated technical matters, the CCO can frame post-launch governance in familiar compliance terms: monitoring, testing, escalation, remediation, and lessons learned.

Board Practice: Ask for More Than Adoption Metrics

One of the most important disciplines boards can develop is to stop mistaking usage information for governance information.

Management may report that AI adoption is growing, that productivity gains are material, or that pilot programs are expanding. Those data points may be relevant, but they are not a form of governance assurance. A board should want to know whether controls are operating, whether incidents are increasing, whether certain business units generate more exceptions, whether human review remains meaningful, and whether management has paused or modified any use cases based on real-world experience.

This is where board oversight becomes genuinely valuable. When the board asks for evidence of ongoing monitoring, it changes management behavior. It signals that AI success will not be measured solely by speed or efficiency, but also by discipline and resilience.

Boards should also ensure that high-risk use cases receive enhanced visibility. Not every AI tool merits the same level of board attention. But where AI affects regulated interactions, employment decisions, sensitive data, financial reporting, significant customer outcomes, or reputationally sensitive functions, ongoing board-level reporting should be expected.

Escalation and Remediation Must Be Built In

Monitoring matters only if it leads to action. There must be clear escalation and remediation protocols. When a material issue emerges, who gets notified? Can the use case be paused? Who determines whether the problem is technical, operational, legal, or cultural? How are facts gathered? How are corrective actions assigned? When is the board informed? How is the lesson fed back into policy, training, vendor management, or approval standards?

These processes should not be improvised. They should be documented. The organization should know in advance which incidents require escalation, which temporary controls may be imposed, and how remediation is tracked.

This is another place where the ECCP provides a useful governance model. DOJ expects companies not only to identify misconduct but also to investigate it, understand its root causes, and implement improvements that reduce the risk of recurrence. AI governance should work the same way. If a model fails or a control weakens, management should not merely fix the immediate problem. It should ask what the failure reveals about the program itself.

Documentation Is the Proof

As with every other element of effective governance, documentation is what turns intention into evidence. Post-launch AI governance should generate records that demonstrate monitoring occurred, issues were surfaced, escalations were handled, and remediation was completed. That may include performance reviews, validation updates, incident logs, committee minutes, complaint summaries, control testing records, vendor change notices, and corrective action trackers.

Without such documentation, management may believe it is effectively monitoring AI, but it will struggle to prove it to internal audit, regulators, or the board. More importantly, it will struggle to learn from experience in a disciplined way. A company that documents ongoing monitoring creates institutional memory. It can compare use cases, detect patterns, and refine its oversight model over time. That is how governance matures.

AI Governance Starts After Launch

The hardest truth in AI governance may be this: launching the tool is often the easiest part. The real challenge begins afterward. That is when optimism meets operational reality. That is when human reviewers become tired. That is when vendors update products. That is when regulators begin asking harder questions. That is when small problems become visible, or invisible, depending on whether the company has built a monitoring system capable of finding them.

For boards and CCOs, this is where governance earns its name. If the organization can monitor, escalate, remediate, and improve, then AI oversight has substance. If it cannot, then the company has not really governed AI at all. It has only approved it.

In the next and final blog post in this series, I will turn to the fifth governance challenge: culture, speak-up, and human judgment, because in many organizations, the first people to see an AI problem will not be the board, the CCO, or the governance committee. It will be the employee closest to the work.

Categories
AI Today in 5

AI Today in 5: April 8, 2026, The AI in Professional Services Edition

Top AI stories include:

  1. AI is increasing social engineering scams. (FT)
  2. Advancing compliance efficiency with AI. (Yahoo!Finance)
  3. AI governance really matters. (HR Brew)
  4. Privacy and AI. (BlufftonToday)
  5. AI to automate professional services. (FinTechGlobal)

Categories
Blog

Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out the five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when structure is absent. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
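A documented escalation matrix of this kind can be as simple as a lookup table. The sketch below is illustrative only: the trigger names, recipients, and default route are assumptions, not a recommended org design. The governance point is that who gets notified, and whether a deployment pause is on the table, is decided in policy ahead of time rather than improvised during an incident.

```python
# Illustrative escalation matrix; trigger names and recipients are assumptions.
ESCALATION_MATRIX: dict[str, dict] = {
    "employment_decision": {"notify": ["CCO", "CHRO"],               "pause_available": True},
    "model_drift":         {"notify": ["AI committee"],              "pause_available": True},
    "control_failure":     {"notify": ["CCO", "internal audit"],     "pause_available": True},
    "vendor_issue":        {"notify": ["procurement", "AI committee"], "pause_available": False},
}

def route(trigger: str) -> dict:
    """Look up who is notified and whether a deployment pause is available.
    Unknown triggers default to the CCO rather than being silently dropped."""
    return ESCALATION_MATRIX.get(trigger, {"notify": ["CCO"], "pause_available": True})
```

The default route matters as much as the named entries: an edge case no one anticipated still reaches a responsible person instead of falling through the cracks.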

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, where we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.


AI Today in 5: April 7, 2026, The AI Governance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the worlds of business, compliance, ethics, risk management, leadership, and general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, from the Compliance Podcast Network.

Top AI stories include:

  1. AI for auditing. (FT)
  2. AI is creating compliance gaps in the mortgage industry. (National Mortgage Professional)
  3. AI-enabled compliance reduces healthcare risks. (The Palm Beach Post)
  4. AI issues in the workplace. (Mintz)
  5. Compliance priorities are shifting towards AI governance. (BDO)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.


Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must address what I call the “Day Two” problem: what happens after launch?

This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist that pull. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, where we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail.