René Descartes and the Discipline of Internal Investigation

This week, we move to the Enlightenment thinkers to see their influence on modern compliance programs. The category is broader than philosophers alone, as many of these men excelled in numerous fields such as science, mathematics, calculus, and medicine. However, each contributed a key component that relates directly to our modern compliance regimes. In this post, we consider René Descartes and what he teaches as the next step beyond Bacon: that evidence must be rigorously examined.

If Francis Bacon taught us that a compliance program must be grounded in evidence, René Descartes teaches the next step: evidence must be examined with rigor. That is why Descartes is the natural second installment in this series on what Enlightenment thinkers can teach us about modern corporate compliance. Bacon gave us empiricism. Descartes gives us a method. Bacon tells us to look. Descartes tells us how to think about what we find.

For the compliance professional, that is no small matter. Modern compliance programs do not fail only because they lack information. They often fail because organizations do not ask the right questions, challenge convenient assumptions, or investigate troubling facts with sufficient discipline. A hotline report comes in, and management prematurely dismisses it. A financial anomaly is explained away because the business result looks attractive. A third-party red flag is rationalized because the market opportunity seems too important to slow down. In each case, the problem is not simply a lack of data. The problem is a lack of disciplined inquiry.

That is where Descartes has something important to say to the modern Chief Compliance Officer.

Why Descartes Matters to Compliance

René Descartes is best known for methodical doubt. He believed that if one wanted to arrive at reliable knowledge, one had to strip away weak assumptions and test what could be known. He did not advocate doubt for its own sake. He advocated doubt as a disciplined tool, a way to avoid error and reach sound conclusions. His method required breaking problems into parts, analyzing them carefully, proceeding in an orderly manner, and ensuring nothing important was overlooked. That is remarkably close to what an effective compliance investigation function should do.

The compliance professional cannot assume an allegation is false because it is inconvenient. Nor can one assume it is true because it is emotionally compelling. The task is to examine. What happened? Who knew what, and when? What documents exist? What controls should have operated? Where are the inconsistencies? What explanation fits the evidence, and what explanation merely sounds comforting? Descartes would have recognized this immediately. A sound conclusion requires method, not instinct.

In a corporate environment, that is especially important because organizations are full of narratives. Managers tell stories about performance. Employees tell stories about why something was necessary. Third parties tell stories about local customs or business necessities. The compliance function should listen, but it cannot stop there. It must test those stories against facts.

The DOJ Expects More Than a Quick Answer

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) does not use philosophical language, but its expectations align closely with Cartesian thinking. The ECCP asks whether investigations are properly scoped, whether the company has adequate resources to conduct them, whether the company preserves and analyzes relevant data, whether reporting structures support independence, and whether lessons learned are used to improve the compliance program. That is not a request for superficial closure. It is a demand for disciplined inquiry.

The ECCP is not interested in whether a company can produce a memo that says the matter has been reviewed. It wants to know whether the review was credible. Did the company ask hard questions? Did it follow the evidence even when the evidence was uncomfortable? Did it look at underlying causes or accept a narrow explanation that minimized institutional responsibility? These are Descartes’ questions as much as the DOJ’s.

Method Beats Reaction

One of the most important lessons Descartes offers is that method matters more than reaction. Too many organizations still respond to reports of misconduct in an ad hoc fashion. The identity of the reporter, the subject’s seniority, or the business sensitivity of the issue can distort the process from the outset. Some matters draw an overreaction because they are visible. Others are under-investigated because they are politically awkward. That is not a system. That is improvisation. A mature compliance program requires a clear, repeatable investigative method.

That begins with triage. Allegations should be assessed based on risk, scope, subject matter, and potential impact. Matters involving senior leadership, financial controls, corruption risk, retaliation, or systemic process failures may require immediate escalation and greater independence. Low-risk issues may still require attention, but not every matter needs the same level of response. Cartesian thinking does not mean treating every problem identically. It means applying a coherent method to determine what level of inquiry is warranted.
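
To make that triage method concrete, here is a minimal sketch in Python of how a risk-based triage rubric might be encoded. The factors mirror those mentioned above, but the weights and response tiers are illustrative assumptions rather than a prescribed standard; the point is only that the decision about the level of inquiry is explicit and repeatable rather than improvised.

```python
from dataclasses import dataclass
from typing import Dict

# Illustrative triage factors drawn from the discussion above; the weights
# and tier thresholds are assumptions for this sketch, not a standard.
RISK_WEIGHTS: Dict[str, int] = {
    "senior_leadership_involved": 3,
    "corruption_risk": 3,
    "financial_controls_implicated": 2,
    "retaliation_alleged": 2,
    "systemic_process_failure": 2,
}

@dataclass
class Allegation:
    summary: str
    factors: Dict[str, bool]  # factor name -> present or not

def triage(allegation: Allegation) -> str:
    """Apply the same rubric to every allegation so the response level
    reflects risk, not the reporter's identity or the subject's seniority."""
    score = sum(
        weight
        for factor, weight in RISK_WEIGHTS.items()
        if allegation.factors.get(factor, False)
    )
    if score >= 5:
        return "escalate: independent investigation, report to audit committee"
    if score >= 2:
        return "standard investigation led by compliance"
    return "local review with documented closure"

report = Allegation(
    summary="Questionable payment approved by a regional vice president",
    factors={"senior_leadership_involved": True, "corruption_risk": True},
)
print(triage(report))  # -> escalate: independent investigation, ...
```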

From there, the matter should be broken down into manageable components. What is the allegation? What business process is implicated? What documents are likely relevant? Who are the key custodians? What data sources exist? What is the working timeline? What controls should have operated? What policy provisions may have been implicated? This is classic Descartes: divide complex problems into smaller parts so they can be understood.

Disciplined Skepticism Is a Compliance Strength

Compliance professionals sometimes worry that skepticism will be perceived as mistrust. But disciplined skepticism is not cynicism. It is not hostility. It is professional rigor. It is the recognition that people often explain events in self-protective ways, that organizations prefer neat stories to messy truths, and that important facts are often buried inside routine processes. Descartes would have understood that skepticism is a necessary safeguard against error.

Consider a common internal reporting scenario. A manager says that a questionable payment was simply an administrative oversight. Perhaps that is true. But a compliance professional guided by Descartes would ask several follow-up questions. Was it really isolated? Have similar payments occurred before? Were approval thresholds bypassed? Was the vendor properly vetted? Were invoice descriptions vague or coded? Did someone raise concerns earlier? Was the explanation consistent across all available records? None of those questions accuse. They clarify.

Documentation Turns Inquiry Into Credibility

Another Cartesian lesson for compliance is the importance of orderly reasoning. An investigation cannot simply be sound in substance. It must also be documented in a way that shows how the conclusion was reached. This is essential for institutional memory, for regulatory defensibility, and for credibility with boards and senior management.

A well-documented investigation answers basic but vital questions. What was alleged? Who handled the matter? What evidence was reviewed? Which witnesses were interviewed? What facts were established? What policy or control failures were identified? What conclusion was reached, and why? What remediation followed? This kind of documentation is not bureaucratic excess. It is proof of intellectual discipline.

Without it, the company cannot show that it acted reasonably. It cannot identify patterns across matters. It cannot demonstrate consistency. It cannot revisit earlier decisions when new facts emerge. Most importantly, it cannot turn an individual case into organizational learning. Descartes’ method was about structured thinking. In corporate compliance, documentation is how structured thinking becomes durable.
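
As a sketch of how that documentation discipline can be made systematic, the structure below captures the questions listed above as a single record. The field names are assumptions for illustration and would map to whatever case-management system the company already uses; the substance is that every matter answers the same questions in the same order before it is closed.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A minimal investigation record mirroring the questions above. Field names
# are illustrative and would be adapted to the company's case-management tool.
@dataclass
class InvestigationRecord:
    allegation: str
    handled_by: str
    evidence_reviewed: List[str] = field(default_factory=list)
    witnesses_interviewed: List[str] = field(default_factory=list)
    facts_established: List[str] = field(default_factory=list)
    control_failures: List[str] = field(default_factory=list)
    conclusion: Optional[str] = None
    rationale: Optional[str] = None
    remediation: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """A matter is not closed until the conclusion, the reasoning behind
        it, and the remediation that followed are all recorded."""
        return bool(self.conclusion and self.rationale and self.remediation)
```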

Independence Matters When the Facts Get Uncomfortable

No discussion of investigations would be complete without addressing independence. The most elegant methodology in the world will not help if investigators are pressured to protect favored executives, minimize business disruption, or avoid awkward findings. Cartesian rigor requires a willingness to follow the facts wherever they lead. That, in turn, requires real autonomy.

The ECCP addresses this directly through its focus on stature, authority, resources, and access. Can the compliance function investigate senior personnel? Can it escalate concerns to the board or audit committee when necessary? Is it empowered to challenge management narratives? These are not secondary governance questions. They are central to whether the investigation process can produce reliable conclusions.

There is a reason so many compliance failures involve not merely misconduct, but management interference with the review of misconduct. When power shapes the investigation, facts become negotiable. Descartes would have seen that as a fundamental corruption of method.

Investigations Must Lead to Remediation

A Cartesian compliance program does not end with a finding. It asks what the finding means for the system. That is why investigations must connect to remediation and root cause analysis. If an allegation is substantiated, the question is not simply who violated what rule. The question is what enabled the failure.

Was the training insufficient? Were incentives pushing employees toward bad decisions? Was a manager creating pressure that undermined ethical judgment? Did the approval process invite shortcuts? Was the policy too vague to guide real-world conduct? These questions push the company from conclusion to improvement.

This is where Descartes connects back to Bacon. Bacon teaches that we need evidence. Descartes teaches that we must reason carefully from the evidence. Together, they create a powerful model for compliance effectiveness. The company observes, investigates, documents, learns, and improves.

The Compliance Officer as a Guardian of Clear Thinking

If Bacon cast the compliance officer as an institutional scientist, Descartes casts the compliance officer as a guardian of clear thinking. In a corporation full of pressure, narrative, hierarchy, and urgency, that role is vital. Someone must insist that facts be tested, that assumptions be challenged, that conclusions be explained, and that the process remain disciplined when the easier path is to settle for a quick answer.

That is not merely an investigative skill. It is a governance function. It protects employee fairness, the board’s credibility, and the company’s defensibility. It also builds trust over time, because people learn that reports are taken seriously, that outcomes are reasoned rather than political, and that the system values truth over convenience.

René Descartes may seem an unlikely guide for corporate compliance. Yet his method of doubt, order, and careful reasoning belongs squarely within the modern best-practices compliance program. In an era where companies are judged not simply on whether they responded, but on how they responded, Descartes offers an enduring lesson: clear thinking is a control.

Five Lessons Learned for the Modern Compliance Professional

First, allegations should trigger a method, not a reaction. A repeatable investigative framework reduces bias and improves consistency.

Second, disciplined skepticism is a professional obligation. Compliance must test explanations against facts rather than accept convenient narratives.

Third, complex matters should be broken into parts. Scoping, evidence review, interviews, control mapping, and timeline construction all improve rigor.

Fourth, documentation is essential. It is how the company proves that its inquiry was credible and how it preserves institutional learning.

Fifth, an investigation is not complete until it informs remediation. Findings should lead to enhancements in control, policy changes, training updates, or broader governance improvements.

Coming Next: John Locke and the Legitimacy of Compliance Governance

If Francis Bacon teaches us to gather evidence and René Descartes teaches us to examine it rigorously, John Locke asks an equally important question: why should anyone trust the system in the first place? In Part 3, I will explore how Locke’s ideas about legitimacy, rights, and accountable authority provide a powerful framework for speak-up culture, non-retaliation, fairness, and board oversight. In the world of compliance, authority alone is never enough. It must also be credible.

Enlightenment Philosophers Week: Part 1 – Francis Bacon and the Compliance Program That Works in Practice

I have explored the work of ancient Greek and Roman philosophers to understand the underpinnings of the modern corporate compliance program. This week, I want to move to Enlightenment Thinkers. Our category is broader than that of philosophers, as many of these men excelled in numerous fields, including science, mathematics, calculus, and medicine. However, each contributed a key component that relates directly to our modern compliance regimes.

The five we will explore are Francis Bacon, René Descartes, John Locke, Thomas Hobbes, and Isaac Newton. Today, we begin with Francis Bacon and the design of a compliance program that works not simply in theory but in practice.

There is a reason Francis Bacon is the right place to begin a series on what Enlightenment thinkers can teach us about modern corporate compliance. Bacon did not simply advance a philosophical idea. He changed the way serious people were supposed to think. He pushed inquiry away from inherited assumptions and abstract theorizing and toward observation, testing, evidence, and disciplined learning from experience. In many ways, that is the same journey corporate compliance has had to take.

For too long, compliance programs were judged by what they had on paper. Did the company have a code of conduct? Did it conduct annual training? Did it maintain a hotline? Did it have policies and procedures? Those questions still matter, of course, but they are no longer enough. The Department of Justice has made that point repeatedly through its Evaluation of Corporate Compliance Programs. The DOJ does not simply ask whether a company has a program. It asks whether the program is well designed, whether it is being applied earnestly and in good faith, and whether it works in practice. That final phrase could have been written by Bacon himself.

Why Bacon Matters to Compliance

Francis Bacon is most closely associated with empiricism, the idea that knowledge should be grounded in observation and experience rather than assumption or pure deduction. He believed that if you want to understand the world, you do not begin with what you hope is true. You begin with facts. You gather information. You test propositions. You challenge your own biases. Then you refine your conclusions based on the evidence. That mindset is at the heart of every effective compliance program.

A Chief Compliance Officer cannot assume that a policy is effective because it was well-drafted. A board cannot assume that a training program changes behavior because employees clicked through an online module. A legal department cannot assume that third-party due diligence is functioning because questionnaires are being completed. In each case, the real question is Baconian: what evidence do you have that the control is working as intended?

This is where philosophy becomes practice. Bacon gives compliance professionals a method. He reminds us that the difference between performative compliance and effective compliance is proof.

The DOJ Standard Is a Baconian Standard

The modern DOJ approach is deeply consistent with Bacon’s philosophy. The ECCP has moved the compliance conversation away from formalism and toward effectiveness. Prosecutors are instructed to consider whether a company has access to relevant data, whether it uses that data to monitor performance, whether it investigates red flags, whether it adapts the program based on lessons learned, and whether it performs root-cause analysis after misconduct occurs. That is not a paper exercise. That is evidence-based governance.

The DOJ is effectively saying that compliance must be a living system of observation, testing, response, and continuous improvement. In Bacon’s world, knowledge advances by disciplined interaction with reality. In the DOJ’s world, compliance credibility advances the same way. A company earns trust not because it announces a program, but because it can demonstrate through data, testing, and response that the program actually functions.

From Risk Assessment to Real Measurement

A Bacon-inspired compliance program begins with risk assessment, but it does not end there. Too many organizations treat the risk assessment as an annual exercise that produces a polished heat map and then disappears into a slide deck. Bacon would reject that approach. A risk assessment should be a working hypothesis about where misconduct and control failure are most likely to occur. That hypothesis must then be tested through monitoring, internal reporting, auditing, and data review.

Consider a company that identifies third-party risk as a top concern. A paper-based approach might stop with enhanced due diligence procedures and contract clauses. A Baconian approach goes further. It asks whether third parties are actually being onboarded according to policy, whether approvals are properly documented, whether high-risk distributors are subject to enhanced monitoring, whether payments match contractual terms, whether red flags are closed or merely noted, and whether the company can identify trends across geographies, business units, or product lines. That is where compliance becomes operational.
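
As one small illustration of what testing the hypothesis can look like in practice, the sketch below compares payments to a third party against its contractual terms and flags exceptions for review. The data sources, field names, and limits are assumptions for the example; in a real program they would come from the contract management and payment systems.

```python
from typing import Dict, List

# Hypothetical contract terms and payment records; in practice these would
# come from the contract management and ERP systems.
contract_terms: Dict[str, dict] = {
    "VENDOR-001": {"max_invoice": 50_000, "approved_services": {"logistics"}},
}

payments: List[dict] = [
    {"vendor": "VENDOR-001", "amount": 72_000, "service": "consulting"},
    {"vendor": "VENDOR-001", "amount": 18_000, "service": "logistics"},
]

def flag_exceptions(payments: List[dict], terms: Dict[str, dict]) -> List[str]:
    """Test whether payments actually match contractual terms, rather than
    assuming the due diligence file tells the whole story."""
    flags = []
    for p in payments:
        t = terms.get(p["vendor"])
        if t is None:
            flags.append(f"{p['vendor']}: no contract on file")
            continue
        if p["amount"] > t["max_invoice"]:
            flags.append(f"{p['vendor']}: amount {p['amount']} exceeds contract limit")
        if p["service"] not in t["approved_services"]:
            flags.append(f"{p['vendor']}: service '{p['service']}' outside contract scope")
    return flags

for flag in flag_exceptions(payments, contract_terms):
    print(flag)
```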

Monitoring Is How a Program Proves Itself

One of the clearest lessons Bacon offers is that observation must be ongoing. In compliance terms, that means monitoring is not an optional add-on. It is how the program proves itself. COSO has long emphasized monitoring as a core element of an effective internal control framework. The same logic applies to compliance more broadly. Monitoring tells a company whether its controls are operating consistently, whether local business practices are drifting from policy expectations, and whether emerging risks are being detected early enough to matter.

Hotline data is a good example. Many organizations report the number of calls received, but that is only the beginning. A Baconian compliance officer looks beneath the surface. Are certain allegations rising in a specific region? Are retaliation claims increasing after a business reorganization? Are reports being substantiated at a lower rate because employees do not understand what should be reported? Are investigation closure times lengthening in a way that undermines confidence in the process? Those are not just operational questions. They are questions about whether the compliance system is learning.
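
A minimal sketch of that kind of look beneath the surface, assuming hotline reports are available as simple records with illustrative field names: it computes report counts and substantiation rates by region or by quarter so trends can be observed rather than guessed at.

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical hotline records; the field names are illustrative.
reports: List[dict] = [
    {"region": "EMEA", "quarter": "2024Q1", "category": "retaliation", "substantiated": True},
    {"region": "EMEA", "quarter": "2024Q2", "category": "retaliation", "substantiated": False},
    {"region": "APAC", "quarter": "2024Q2", "category": "conflicts", "substantiated": True},
]

def trend_by(reports: List[dict], key: str) -> Dict[str, dict]:
    """Count reports and the substantiation rate for each value of `key`
    (for example, region or quarter) so shifts become visible over time."""
    counts: Dict[str, int] = defaultdict(int)
    substantiated: Dict[str, int] = defaultdict(int)
    for r in reports:
        counts[r[key]] += 1
        substantiated[r[key]] += int(r["substantiated"])
    return {
        k: {"reports": counts[k], "substantiation_rate": substantiated[k] / counts[k]}
        for k in counts
    }

print(trend_by(reports, "region"))
print(trend_by(reports, "quarter"))
```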

Root Cause Analysis Is Bacon in Action

If there is one area where Bacon’s influence should be explicit, it is root cause analysis. When misconduct happens, the least useful response is to identify the wrongdoer, discipline the individual, and move on. That may satisfy a desire for closure, but it does not satisfy the demands of an effective compliance program.

Bacon would ask a different set of questions. What conditions allowed this to happen? What signals were missed? Were incentives misaligned? Was a manager pressuring a sales team in ways that made policy noncompliance more likely? Did the control exist on paper but fail in operation? Was a prior warning sign identified but not escalated?

Those questions matter because substantive compliance violations are never random. They are often the product of pressure, weak controls, poor communication, bad assumptions, or failures to learn from earlier warning signs. Root cause analysis is the process by which a company examines the conditions that led to a failure and turns that failure into institutional knowledge.

Culture Needs Evidence Too

Compliance professionals often speak about culture, and they should. But here, too, Bacon has a warning for us. Culture cannot be measured only by slogans or tone-at-the-top statements. A company that wants to claim a strong ethical culture should be able to point to supporting evidence.

Do employees raise concerns without fear of retaliation? Are managers evaluated in part on ethical leadership? Do exit interviews reveal pressure points that formal reporting channels miss? Are discipline outcomes consistent across levels of seniority? Does the organization respond to bad news constructively or defensively? These are empirical questions. They require information, not aspiration.

This is where compliance, internal audit, legal, and HR can work together in a mature governance model. Surveys, hotline trends, investigation data, audit findings, and employee feedback all become part of the evidence base. Culture, in this framework, is not soft. It is observable. It can be tested, assessed, and strengthened.

The Compliance Officer as Institutional Scientist

Perhaps Bacon’s greatest gift to the compliance profession is this: he offers a model for what the compliance officer should be. Not merely a policy custodian. Not merely a trainer. Not merely an investigator. The modern compliance leader is, in part, an institutional scientist.

That phrase may sound grand, but it captures something important. The CCO studies how the organization really works. Which incentives shape conduct? Which controls hold under pressure? Where are the blind spots? What do the data show? What must change? In that sense, the compliance function is not external to the business. It is one of the primary ways the business learns about itself.

That is why evidence matters so much. It is the basis for credibility with the board, with regulators, and with employees. It is how a program shows that it is more than a collection of good intentions. Francis Bacon would have understood that immediately.

Five Lessons Learned for the Modern Compliance Professional

First, a compliance program must be judged by evidence, not by appearance. Policies and training matter, but proof of effectiveness matters more.

Second, risk assessments should be treated as working hypotheses that must be tested through monitoring, auditing, and ongoing review.

Third, data is central to the credibility of compliance. Hotline trends, investigation outcomes, audit findings, and control testing demonstrate that a company’s program works in practice.

Fourth, root cause analysis is essential. Misconduct should trigger institutional learning, not merely individual discipline.

Fifth, culture itself must be supported by evidence. Speak-up, non-retaliation, consistency in discipline, and employee trust are all observable markers of program health.

Coming Next: René Descartes and the Discipline of Internal Investigation

If Francis Bacon teaches us how to gather evidence, René Descartes teaches us what to do with it. In Part 2, I will examine how Descartes’ method of disciplined doubt provides a blueprint for internal investigations, allegation triage, and rigorous compliance inquiry. In a world of management narratives, incomplete facts, and pressure to reach quick conclusions, Descartes reminds us that the compliance professional’s first duty is not comfort. It is clear thinking.

Trust Is Not a Control: The Drop-In AI Audit

There is a hard truth at the center of modern AI governance that every compliance professional needs to confront: trust is not a control. For too long, organizations have approached AI oversight with a familiar but outdated mindset. They collect a vendor certification. They review a policy statement. They ask whether a third party is “aligned” with a recognized framework. Then they move on, assuming the governance box has been checked. In today’s enforcement and risk environment, that approach is no longer good enough.

The Department of Justice has repeatedly made this point in its Evaluation of Corporate Compliance Programs. The DOJ does not ask whether a company has a policy on paper. It asks whether the program is well designed, whether it is applied earnestly and in good faith, and, most importantly, whether it works in practice. That final phrase matters. Works in practice. It is the dividing line between performative governance and effective governance.

That is why every compliance program now needs a drop-in AI audit. It is not simply another diligence exercise. It is a mechanism for proving that governance is real. It is a practical third-party risk tool. And it is one of the clearest ways to operationalize the ECCP in the age of artificial intelligence.

The Problem: Third-Party AI Risk Is Moving Faster Than Oversight

Most companies do not build every AI capability internally. They rely on vendors, service providers, cloud platforms, embedded applications, analytics partners, and other third parties whose tools increasingly shape business processes and compliance outcomes. In many organizations, these third parties now influence investigations, due diligence, monitoring, onboarding, reporting, customer interactions, and internal decision-making. That creates a new class of third-party risk.

The problem is not only whether a vendor has responsible AI language in its contract or whether it can point to a certification. The problem is whether your organization can verify that the relevant controls are functioning as represented in the real-world use case affecting your business. That is where too many compliance programs still fall short.

Under the ECCP, the DOJ asks whether a company’s risk assessment is updated and informed by lessons learned. It asks whether the company has a process for managing risks presented by third parties. It asks whether controls have been tested, whether data is available to compliance personnel, and whether the company can demonstrate continuous improvement. These are not abstract questions. They go directly to how you oversee AI-enabled third parties. If your third-party AI governance begins and ends with a questionnaire and a PDF certification, you do not have evidence of governance. You have evidence of intake.

What a Drop-In Audit Really Does

A drop-in AI audit changes the question from “What does the third party say?” to “What can the third party prove?” That is a profound shift.

The value of the drop-in audit is that it brings compliance discipline directly into third-party AI oversight. Instead of accepting broad claims about safety, control, and alignment, you examine operational evidence. Instead of relying solely on design statements, you test for performance in practice. Instead of treating governance as a one-time approval event, you treat it as a repeatable audit process. In that sense, the drop-in audit becomes proof of governance.

It also becomes a far more mature third-party risk tool. You are no longer merely assessing whether a vendor appears sophisticated. You are assessing whether a third party can withstand scrutiny on the questions that matter most: scope, controls, traceability, escalation, and evidence.

And from an ECCP perspective, that is precisely the point. The DOJ has emphasized that compliance programs must move beyond paper design into operational reality. A drop-in audit is one of the few mechanisms that let you do that in a disciplined, documentable way.

From Vendor Oversight to Third-Party Governance

This discipline should not be limited to classic vendors. The better view is to expand the concept across all third parties that provide, influence, host, or materially shape AI-enabled services. That includes software providers, outsourced service partners, embedded AI functionality in enterprise tools, cloud-based analytics environments, compliance technology vendors, and any external party whose systems affect business-critical decisions or regulated processes.

Risk does not care about the label on the contract. If the third party’s AI affects your organization’s screening, monitoring, investigations, decision support, or disclosures, the compliance risk is real. Your governance process must be equally real. This is why “trust but verify” is no longer just a slogan. It is a design principle for third-party oversight of AI.

The Core Elements of the Drop-In Audit

A strong drop-in audit has three features: sampling, contradiction testing, and escalation.

1. Sampling: Evidence of Operation, Not Merely Design

Sampling is where governance becomes tangible. A company requests specific artifacts tied to actual use cases and actual control operations. This may include scope documents, Statements of Applicability, system documentation, training data summaries, access controls, incident records, runtime logs, or evidence of human review. The point is simple: operational evidence is what matters.

This is where a compliance function moves from hearing about controls to seeing them in action. It is also where internal audit can add real value by testing whether the evidence supports the stated control environment.

2. Contradiction Testing: Where Real Risk Emerges

This is one of the most important and underused concepts in third-party AI oversight. Inconsistencies between claims and reality are where governance failures emerge. If a third party says its certification covers a given service, does the scope document confirm it? If it claims strong incident response, does the record back it up? If it represents strong human oversight, do the runtime traces show meaningful intervention or only theoretical review points?

Contradiction testing is powerful because it goes to credibility. It tests whether the governance narrative matches the operating reality. Under the ECCP, that is exactly the kind of inquiry prosecutors and regulators will care about. It speaks to effectiveness, honesty, and control discipline.

3. Escalation: Governance in Action

Governance without consequences is not governance. A drop-in audit must include clear escalation triggers. Missing evidence, mismatched certification scope, unexplained gaps, unresolved incidents, or inconsistent remediation should not be noted in isolation. They should trigger action.

That action may include enhanced diligence, contractual remediation, independent validation, temporary use restrictions, or deeper audit review. The important point is that the program responds. This is where the drop-in audit operationalizes the ECCP. It demonstrates that the company not only identifies risk but also acts on it.
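
To show how sampling, contradiction testing, and escalation fit together, here is a minimal sketch that compares a third party's claims against the evidence actually sampled and maps each gap to a predefined escalation action. The claim names, evidence flags, and actions are assumptions for illustration, not a fixed taxonomy.

```python
from typing import Dict, List

# Hypothetical audit inputs: what the third party claims, and what the
# sampled evidence actually shows. Names are illustrative only.
claims: Dict[str, bool] = {
    "certification_covers_service": True,
    "incident_response_process": True,
    "human_review_of_outputs": True,
}

sampled_evidence: Dict[str, bool] = {
    "certification_covers_service": True,   # scope document confirms coverage
    "incident_response_process": False,     # no incident records produced
    "human_review_of_outputs": False,       # runtime logs show no real intervention
}

ESCALATION_ACTIONS: Dict[str, str] = {
    "certification_covers_service": "independent validation of certification scope",
    "incident_response_process": "contractual remediation plan plus enhanced diligence",
    "human_review_of_outputs": "temporary use restriction pending validation",
}

def contradiction_test(claims: Dict[str, bool], evidence: Dict[str, bool]) -> List[str]:
    """Flag every control that is claimed but not supported by the sampled
    evidence, and map each gap to a predefined escalation action."""
    return [
        f"{control}: claim not supported -> {ESCALATION_ACTIONS[control]}"
        for control, claimed in claims.items()
        if claimed and not evidence.get(control, False)
    ]

for finding in contradiction_test(claims, sampled_evidence):
    print(finding)
```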

How the Drop-In Audit Maps to the ECCP

The drop-in audit aligns tightly with the DOJ’s framework for an effective compliance program. Risk assessment is addressed because the audit focuses attention on where AI-enabled third parties create actual operational and control exposure. Policies and procedures are tested because the company does not merely accept them at face value. It assesses whether the stated controls are supported by evidence. Third-party management is strengthened by making oversight continuous, risk-based, and verifiable. Testing and continuous improvement are built into the audit process, which identifies gaps, contradictions, and corrective actions. Investigation and remediation principles are reinforced by documenting, escalating, and using findings to improve the control environment.

Most importantly, the audit answers the ECCP’s central practical question: Does the program work in practice?

How the Drop-In Audit Maps to NIST AI RMF

The NIST AI Risk Management Framework provides a highly useful structure for the drop-in audit, especially through its Govern, Map, Measure, and Manage functions.

  1. Govern is reflected in defined ownership, accountability, and escalation when issues are identified.
  2. Map is reflected in understanding the third party’s actual AI use case, scope, dependencies, and business impact.
  3. Measure is reflected in the use of evidence, runtime observations, contradiction testing, and performance assessment.
  4. Manage is reflected in remediation, ongoing oversight, and updates to controls based on audit findings.

In this way, the drop-in audit becomes a practical tool for taking the NIST AI RMF from concept to execution.

How the Drop-In Audit Maps to ISO/IEC 42001

ISO/IEC 42001 adds the management-system discipline that compliance programs need. Its value lies in documented scope, role clarity, control applicability, monitoring, corrective action, and continual improvement. A drop-in audit fits naturally into that structure because it tests whether those elements are visible in operation, not merely stated in documentation.

The Statement of Applicability becomes meaningful when the company verifies that the controls identified there actually correspond to the deployed service. Monitoring becomes meaningful when evidence is examined. Corrective action becomes meaningful when gaps trigger follow-up. Continual improvement becomes meaningful when findings are fed back into governance. That is why the documentation you generate should serve your board, regulators, and internal stakeholders without additional work. Producing evidence that travels is one of the most strategic benefits of this approach.

Why Every Compliance Program Needs This Now

The strategic payoff is straightforward. Strong AI governance is not a drag on innovation. It is what allows innovation to scale with trust. A drop-in audit gives compliance and internal audit a mechanism to test what matters, document their findings, and create evidence that withstands scrutiny. It moves governance from assertion to proof. It transforms third-party diligence into a repeatable, auditable process. It helps ensure that when regulators, boards, or business leaders ask how the company knows its third-party AI governance is working, there is a real answer.

Because, in the end, evidence of governance matters. Not narratives. Not slide decks. Evidence. President Reagan was right in the 1980s, and he is still right today: “Trust but verify.”

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
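
A minimal sketch of that inventory-and-tiering step, under assumed field names and an assumed tiering rule: each use case records what decisions it informs, what data it relies on, and who owns it, and the resulting tier drives which controls are required.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AIUseCase:
    name: str
    decisions_informed: List[str]
    data_relied_on: List[str]
    owner: str
    affects_regulated_process: bool = False
    customer_facing: bool = False

def risk_tier(uc: AIUseCase) -> str:
    """Illustrative tiering rule: use cases touching regulated processes get
    the highest tier, customer-facing ones the middle tier. The cut lines are
    assumptions and would be set by the company's own risk assessment."""
    if uc.affects_regulated_process:
        return "high"
    if uc.customer_facing:
        return "medium"
    return "low"

REQUIRED_CONTROLS: Dict[str, List[str]] = {
    "high": ["documented governance", "human review", "testing protocol",
             "escalation triggers", "ongoing monitoring"],
    "medium": ["documented owner", "periodic testing"],
    "low": ["inventory entry"],
}

screening = AIUseCase(
    name="third-party screening assistant",
    decisions_informed=["onboarding approval"],
    data_relied_on=["sanctions lists", "vendor master data"],
    owner="Head of Third-Party Risk",
    affects_regulated_process=True,
)
tier = risk_tier(screening)
print(tier, REQUIRED_CONTROLS[tier])
```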

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.

AI Concentration Risk: A New Third-Party and Operational Resilience Challenge for Compliance

For years, concentration risk was treated as someone else’s problem. Procurement worried about sole-source vendors. Treasury worried about counterparty exposure. Supply chain teams worried about bottlenecks. Compliance, by contrast, often sat one step removed from those conversations. In the age of enterprise AI, that separation no longer works.

Today, AI concentration risk is a front-line compliance issue. When a company’s most important AI-enabled processes depend on a small number of cloud providers, model vendors, chip suppliers, or geographic regions, that dependency is not merely an operational detail. It is a governance decision. And when that dependency is not identified, documented, tested, and managed, it becomes evidence of weak oversight that regulators and prosecutors understand very well.

That is why Chief Compliance Officers (CCOs) need to move AI concentration risk out of the technology silo and into the compliance program. This is not simply about resilience. It is about whether the company can demonstrate, under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP), that it has identified a material risk, assigned ownership, designed controls, tested those controls, and escalated what matters. In other words, AI concentration risk is now a test of whether governance is real.

Why AI Concentration Risk Belongs in Compliance

At its core, AI concentration risk arises when a company becomes overly dependent on a small number of external providers, infrastructure layers, or geographic regions to support key AI-enabled operations. This is a classic third-party risk problem because it involves reliance on outside parties for critical services. It is also an operational resilience problem because a failure at one of those chokepoints can disrupt business continuity, customer commitments, internal reporting, investigations, monitoring, or other compliance-relevant functions.

For compliance professionals, that should sound familiar. The ECCP has long required companies to identify their risk universe, tailor controls accordingly, allocate resources to higher-risk areas, and continuously assess whether those controls are working in practice. The DOJ asks whether compliance programs are well designed, adequately resourced, empowered to function effectively, and tested for real-world performance. AI concentration risk fits squarely within that framework.

If your company relies on a single model provider for third-party screening, a single cloud region for transaction monitoring, or a single AI vendor for investigation triage, then a disruption is not simply an IT problem. It may affect the company’s ability to prevent misconduct, detect red flags, escalate allegations, and maintain reliable controls. If management cannot explain those dependencies and cannot show what has been done to mitigate them, that is evidence of under-governance.

The ECCP as the Primary Lens

The ECCP provides a highly practical framework for thinking about AI concentration risk by forcing compliance professionals to ask implementation questions rather than merely conceptual ones.

  1. Has your company conducted a risk assessment that includes AI dependency and concentration? Many organizations assess AI bias, privacy, and cybersecurity risk, but far fewer assess whether a small number of vendors represent single points of failure.
  2. Has your company translated that risk assessment into policies, procedures, and controls? It is not enough to know that dependency exists. The compliance question is whether there are controls in place for vendor onboarding, backup arrangements, portability, incident escalation, contractual protections, and contingency planning.
  3. Have those controls been tested? The ECCP is clear that paper programs are not enough. A company needs to know whether its controls function in practice. If there is a multi-cloud failover plan or an alternate-model runbook, has it actually been exercised?
  4. Has ownership been assigned? The DOJ repeatedly focuses on accountability. Someone must own the risk, someone must own the mitigation plan, and someone must report it to leadership.
  5. Is there evidence? Under the ECCP, documentation matters because it shows that a company did not merely talk about governance but operationalized it. In the AI context, this means inventories, risk rankings, contracts, testing logs, escalation protocols, incident reviews, and committee reporting. It is still Document, Document, Document.

Where Compliance Should Look First

For CCOs, the best way to begin is to map AI concentration risk across three layers.

The first is the infrastructure layer. Which GPU, accelerator, or compute providers support the organization’s most important AI functions? Is there heavy dependence on a single supplier or downstream foundry chain? Even if compliance does not make technical decisions, it should understand whether there is material operational exposure concentrated in a single location.

The second is the cloud and hosting layer. Which cloud providers and regions support production AI workloads? Are critical applications concentrated in one geography or one platform? Have failover and disaster recovery been tested, or are they merely theoretical?

The third is the model and application layer. Which model vendors, API providers, or AI-enabled workflow tools sit inside key business processes? Here is where the third-party risk lens becomes especially important. If one provider supports sanctions screening, hotline triage, policy search, transaction monitoring, or investigation workflows, the disruption risk is directly relevant to compliance effectiveness.

This is where a CCO should work closely with procurement, legal, IT, enterprise risk, and internal audit. The goal is not to take over technology governance. The goal is to ensure that AI concentration risk is incorporated into the company’s existing compliance and third-party risk architecture.

Building Practical Controls

Your approach should be practical and programmatic. First, start with inventory and classification. You cannot govern what you have not identified. Compliance should push for an inventory of AI use cases and the vendors, cloud environments, and model providers that support them. Those use cases should then be tiered based on business criticality, regulatory sensitivity, and operational dependency.

Next, update third-party due diligence. Traditional diligence questions around financial stability, security, and legal compliance remain important, but AI vendors should also be assessed for concentration-related risks. Can data and workflows be ported? Are there fallback options? What are the provider’s subcontracting dependencies? What audit rights exist? How are outages escalated?

Then move to contract design. This is where many compliance programs can add real value. Contracts should address incident notification, business continuity, data export, transition assistance, audit rights, service levels, and escalation expectations. Where concentration is likely to become significant, enhanced contractual protections should be mandatory.

After that, build contingency runbooks. If a model provider becomes unavailable, what happens? If a cloud region goes down, how quickly can key compliance processes be rerouted? If a vendor changes pricing or access terms, what is the escalation path? These runbooks should be documented, assigned to owners, and tested.

Finally, establish escalation thresholds. Governance is strongest when the company decides in advance what degree of concentration requires mitigation. For example, if more than half of a key compliance workflow depends on a single external provider, that may trigger a review by the board or executive committee. If a single region hosts a material portion of compliance-critical AI activity, failover testing may become mandatory.
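
As a sketch of how such a threshold can be fixed in advance, the check below computes each workflow's dependency share per provider and flags any that cross the agreed line. The 50 percent trigger mirrors the example above; the workflow names, providers, and shares are assumptions for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical mapping of compliance workflows to the external providers
# that support them, weighted by the share of the workflow each carries.
dependencies: List[dict] = [
    {"workflow": "sanctions screening", "provider": "ModelVendorA", "share": 0.80},
    {"workflow": "sanctions screening", "provider": "ModelVendorB", "share": 0.20},
    {"workflow": "hotline triage", "provider": "CloudRegionX", "share": 0.45},
]

ESCALATION_THRESHOLD = 0.50  # agreed in advance, e.g., by the risk committee

def concentration_flags(deps: List[dict], threshold: float) -> List[str]:
    """Flag any workflow in which a single provider's share meets or exceeds
    the pre-agreed threshold, triggering board or committee review."""
    share: Dict[Tuple[str, str], float] = defaultdict(float)
    for d in deps:
        share[(d["workflow"], d["provider"])] += d["share"]
    return [
        f"{workflow}: {provider} supports {pct:.0%} (threshold {threshold:.0%}) -> escalate"
        for (workflow, provider), pct in share.items()
        if pct >= threshold
    ]

for flag in concentration_flags(dependencies, ESCALATION_THRESHOLD):
    print(flag)
```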

Where NIST AI RMF and ISO/IEC 42001 Help

This is where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly valuable for compliance officers. They help translate high-level concern into disciplined governance.

The NIST AI RMF emphasizes the Govern, Map, Measure, and Manage functions. That structure is especially useful here. Governance means assigning responsibility and setting risk appetite. Mapping means identifying where concentration exists and which business processes depend on it. Measuring means assessing the degree of dependency and resilience. Managing means putting in place mitigation, monitoring, and response mechanisms.

ISO/IEC 42001 adds an equally important management system discipline. It pushes organizations to define roles, document controls, monitor performance, conduct periodic reviews, and drive continual improvement. In other words, it helps turn AI governance into an operating system rather than a one-time project.

For compliance professionals, the lesson is clear. Use ECCP to define what effectiveness and accountability should look like. Use NIST AI RMF to structure the risk analysis. Use ISO 42001 to embed the resulting controls into a repeatable management process.

Proof of Governance in the AI Era

The deeper point is that AI concentration risk is no longer a hidden architecture issue. It is a test of whether the compliance function can help the enterprise identify dependencies before they fail. Under the ECCP, regulators are not simply asking whether a company had good intentions. They are asking whether it identified real risks, assigned responsibility, implemented controls, tested those controls, and learned from experience.

That is why AI concentration risk matters so much. It reveals whether the company understands how fragile its AI-enabled processes may be. It reveals whether third-party governance is keeping up with technological dependence. And it reveals whether compliance is engaged early enough to shape resilience rather than merely respond to disruption.

For the modern CCO, this is not a niche issue. It is a live example of how compliance adds value by helping the company operationalize governance before a crisis arrives.

Conclusion

In the end, AI concentration risk is not about servers, chips, or software contracts. It is about whether a company understands its vulnerabilities and has the discipline to govern them before they become failures. That is the heart of modern compliance. The issue is not whether disruption will come. The issue is whether your organization has done the hard work in advance to map dependency, build resilience, assign accountability, and prove that its controls can hold under pressure.

That is why this issue belongs squarely on the CCO’s agenda. Under the ECCP, a company must do more than claim it takes risk seriously. It must show its work. It must show that it identified the risk, assessed it, built controls around it, tested those controls, and updated them as the business evolved. The NIST AI Risk Management Framework and ISO/IEC 42001 help provide the structure. But the real challenge, and the real opportunity, belongs to compliance.

Because in the AI era, concentration risk is not merely a technical fragility. It is a governance signal. And the companies that can identify it, manage it, and document it will not only be more resilient. They will be able to demonstrate something even more valuable: that their compliance program is working exactly as it should.

Corporate Value(s), Corporate Risk, and the Board’s Oversight Challenge

There was a time when many executives could treat corporate values as a branding exercise, a recruiting line, or a paragraph on the company website. That time is over. Today, corporate values are operational. They shape customer loyalty, employee engagement, regulatory attention, shareholder expectations, and public trust. Most importantly for boards and compliance professionals, they shape risk.

That is the central lesson of Corporate Value(s) by Jill Fisch and Jeff Schwartz. Their insight is both practical and profound: managers should select the corporate values that maximize long-term economic value, and to do that, they need reliable information about what stakeholders actually care about. The paper does not argue that corporations should become moral philosophers. It argues for something more useful for the compliance function. Corporate values are part of the long-term value equation, and management ignores them at its peril.

Why This Matters to Compliance

For a corporate compliance audience, this is not an abstract governance debate. It is a board oversight issue. It is a cultural issue. It is an internal controls issue. And it is a warning that values misalignment can become a business crisis long before it shows up in a formal investigation or on a quarterly earnings call.

The paper is particularly strong in rejecting two simplistic views. First, it rejects the notion that companies can operate as if values do not matter. Second, it rejects the idea that companies should chase social legitimacy untethered from business reality. Instead, the authors land where sophisticated boards and chief compliance officers should land: values matter because they affect value, and management needs disciplined ways to understand that connection.

Culture as a Control

That is where compliance comes in. Too often, companies treat culture as a soft concept and values as a public relations topic. Yet every experienced compliance professional knows that culture is a control. It influences decision-making when policy manuals are silent, when incentives are misaligned, and when leaders face pressure. Corporate values, when operationalized correctly, help define that culture. They tell employees, managers, and third parties what the company stands for when the choice is not easy, the answer is not obvious, and money is on the line.

The paper notes that values-based concerns now influence a broad range of business decisions, from product design and sourcing to employment policies and public positioning. It also emphasizes that employees, customers, governments, and shareholders all communicate their values and preferences in different ways, and that management must stay attuned to those preferences, as misalignment can carry real economic consequences. That is precisely the language of risk management.

A Governance Issue for the Board

For boards, this means values cannot be siloed in human resources, investor relations, or communications. Values belong in governance. Boards need to ask not only what the company says its values are, but how those values are translated into operations, incentives, escalation, and response. If culture is a control, then values are part of the control environment.

This is also why corporate values should be viewed as a business risk issue. A values mismatch can trigger employee walkouts, consumer backlash, shareholder agitation, government retaliation, or a reputational spiral amplified through social media. The paper offers multiple examples showing how value-related decisions can carry material economic consequences. For the modern board, that means values are no longer a side conversation. They are part of enterprise risk management.

The paper offers another insight that compliance professionals should take seriously. Management often lacks perfect information about stakeholder values, and shareholders face structural impediments in communicating their views clearly. The authors argue that shareholder input can help management better understand public sentiment, reputational risk, and the tradeoffs between values and value. Whether one agrees with every detail of their governance analysis, the broader compliance lesson is straightforward: management needs listening mechanisms before a crisis hits.

Compliance as an Information System

That point should resonate deeply with compliance professionals. A mature compliance program is, at its core, an information system. It is supposed to tell management what it needs to know before misconduct metastasizes. The same is true for values-based risk. If the only time leadership learns that employees, customers, or investors believe the company is out of step is when a boycott begins, or a viral post explodes, the company’s information channels have already failed.

What Boards Should Do

  1. Boards should insist that management identify the company’s most material values-sensitive risk areas. These will vary by industry. For one company, it may be product safety. For another, environmental performance. For another, labor standards, DEI, or political engagement. The important point is that these issues should be mapped as risk categories, not simply discussed as messaging challenges.
  2. Boards should ask whether the company has credible mechanisms to hear from stakeholders before controversy becomes a crisis. The paper emphasizes that employees and customers often have clearer channels to express their values and preferences than shareholders do. A compliance-minded board should ask: Are we learning from all of them? Are we capturing concerns through speak-up systems, culture assessments, employee town halls, customer trends, market testing, and investor engagement? Or are we waiting for a public backlash to tell us what we should already know?
  3. Boards should evaluate whether management is treating corporate culture as a control. This means looking beyond tone at the top to the systems beneath it: incentives, middle-management behavior, escalation pathways, decision rights, and accountability. Values that live only in a code of conduct are decorative. Values that influence promotions, discipline, product choices, third-party oversight, and crisis response become operational.
  4. Boards should ensure that compliance has a seat at the table when values-laden business decisions are made. The compliance function should not decide corporate values. That is not its role. But it should help management test assumptions, identify blind spots, assess stakeholder reactions, and determine whether a proposed course is consistent with the company’s culture and risk appetite. In that sense, compliance serves as both translator and challenger.
  5. Boards should resist the temptation to turn every values issue into a political debate. The paper wisely cautions against viewing corporations as moral leaders first and economic institutions second. That is a sound warning. But there is an equal and opposite danger in pretending that values are irrelevant to business. They are not. The board’s job is not to moralize. It is to govern. And governance today requires management to understand how stakeholder values affect long-term value.

Steps for Chief Compliance Officers

For chief compliance officers, there are some clear, practical steps to take.

Begin by incorporating values-sensitive issues into risk assessment and culture reviews. Build a process to identify where stakeholder expectations may materially affect the company’s operations, reputation, and control environment. Make sure that speak-up and escalation systems can capture values-based concerns, not only legal violations. Work with management to develop an early-warning capability around stakeholder sentiment. Bring boards concrete reporting on culture trends, employee concerns, reputational flashpoints, and areas where the company may be drifting away from its stated values. Finally, pressure-test whether the company’s incentives, communications, and business decisions align with the culture it claims to have.

The Bottom Line

The bottom line is this: corporate values are not soft. They are not ornamental. They are not outside the compliance function’s field of vision. They are part of how companies create value, lose trust, and invite risk. The real challenge for boards and CCOs is not to choose values in the abstract. It is to build the governance and information systems that help management understand stakeholder values before a crisis hits. That is not politics. That is good governance.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Surveying Retaliation Against Compliance Officers

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss a new anonymous Radical Compliance survey, launched with Case IQ and Compliance Week, to quantify retaliation against compliance officers who raise compliance concerns to senior management.

The survey asks what misconduct was reported, who retaliated, what forms of retaliation occurred (such as firing, demotion, harassment, budget cuts, or blacklisting), and what actions followed. Matt also encourages responses from those who have not experienced retaliation. Tom and Matt have previously discussed this problem anecdotally but have never studied it systematically; they plan to publish their findings and host a webinar later in the spring, likely in June. They also discuss potential structural protections informed by the data, such as disclosure expectations around CCO departures (e.g., 8-K concepts) and contract or regulatory-approval models like those in India’s banking sector, and suggest that the findings could inform DOJ views on compliance autonomy and effective compliance programs.

Key highlights:

  • Survey Launch Explained
  • Retaliation Questions
  • Why This Study Matters
  • Defining Prevalence
  • Using Findings for Change
  • Final Call to Participate

Resources:

Matt on Radical Compliance

Survey on Retaliation Against Compliance Professionals

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has received a Davey Award, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Blog

When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or take control from the seller management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control, and as a remedy it ordered the seller’s CEO reinstated and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, it creates a record. The Court notes that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. Good governance is not proved by a single final decision; it is proved by the process surrounding it. Was there dissent? Was there an analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk; it records it. It can become the evidence.

Categories
Blog

Ongoing Monitoring: Why AI Governance Begins After Launch

In this blog post, we turn to the fourth major governance challenge in AI: ongoing monitoring. This is one of the most persistent weaknesses in AI governance. Organizations may build an intake process. They may create an approval committee. They may conduct risk reviews, privacy assessments, and validation testing before launch. All of that is important. But it is not enough.

AI risk does not freeze at the moment of approval. It changes over time. Use cases evolve. Employees adapt tools in unexpected ways. Vendors modify models. Controls weaken in practice. Regulatory expectations shift. What looked reasonable at launch may become inadequate six weeks later.

That is why ongoing monitoring is not an optional enhancement to AI governance. It is a core governance requirement. For boards and CCOs, the central question is not simply whether the company approved AI responsibly. It is whether the company has the discipline to govern it continuously once it is in the wild.

Approval Is Not Governance

One of the great temptations in AI governance is to confuse approval with control. A business unit proposes a use case, a committee reviews it, guardrails are listed, and the tool goes live. At that point, many organizations behave as though the governance work is largely complete. It is not.

Approval is a moment. Governance is a process. The problem is that companies often put their best people, clearest thinking, and highest scrutiny into the approval stage, then shift immediately into operational mode without building the same discipline around post-launch oversight. That leaves management blind to how the system actually performs under real-world conditions.

The Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) is especially instructive here. The ECCP does not ask merely whether a company has policies on paper. It asks whether the program works in practice, whether controls are tested, whether issues are investigated, and whether lessons learned are incorporated back into the compliance framework. AI governance should be viewed through the same lens. The question is not whether a control was described at launch. The question is whether that control continues to function and whether management would know if it stopped.

Why AI Risks Change After Launch

Post-deployment risk in AI does not arise because management failed to care on Implementation Day. It arises because AI systems operate in dynamic environments. A model may begin to drift as conditions change. A tool approved for one limited purpose may gradually be used for broader or higher-risk decisions. Employees may find workarounds that bypass the intended controls. Human reviewers may begin by scrutinizing outputs closely but, over time, may become overconfident, overloaded, or simply too reliant on the system. Vendors may update underlying functionality without the company fully appreciating the consequences. New regulations or regulatory interpretations may alter the risk landscape. Inputs may change. Outputs may become less reliable. Bias may surface in ways not identified in initial testing.

In other words, AI governance risk is not static. It is operational. That is why boards and CCOs must resist the notion that initial approval is the hardest part. In many respects, ongoing monitoring is harder because it requires sustained attention, clear metrics, escalation discipline, and the willingness to revisit prior assumptions.

The Governance Question

After implementation, the governance question changes. It is no longer simply, “Was this use case approved?” It becomes, “Is the use case still operating as expected, within risk tolerance, and under effective control?” That sounds simple, but it requires a much more mature oversight model than many companies currently have. It requires management to define what should be monitored, how frequently, by whom, and what changes or anomalies trigger escalation. It requires a reporting structure that does not simply celebrate adoption or efficiency gains, but surfaces incidents, deviations, near misses, and control fatigue.

For the board, the challenge is to insist on post-launch visibility. Board reporting on AI should not end with inventories and implementation updates. It should include information about ongoing performance, exception trends, complaints, incidents, validation results, vendor changes, policy breaches, and remediation efforts. A board that hears only that AI adoption is accelerating may never hear whether AI governance is holding up.

For the CCO, the challenge is even more immediate. Compliance must ask whether the organization is gathering evidence that controls continue to function in practice. If it is not, then the governance program is still immature, no matter how polished its approval process may appear.

Monitoring What Matters

It all begins by identifying the right things to monitor. This cannot be a generic exercise. Monitoring should be tied to the specific use case, its risk classification, and its control environment. But there are some recurring categories that boards and CCOs should expect to see.

  1. Performance should be monitored. Is the tool still delivering outputs that are accurate, reliable, and appropriate for the intended purpose? Have error rates changed? Are there signs of drift or degraded quality?
  2. Control effectiveness should be monitored. Are human review requirements actually being followed? Are approval restrictions, access controls, or usage limitations still operating as designed? Is there evidence that employees are bypassing or weakening controls?
  3. Incidents and complaints should be monitored. Has the tool produced problematic results? Have customers, employees, or managers raised concerns? Have there been internal reports about bias, inaccuracy, misuse, or confidentiality risks?
  4. Changes in scope should be monitored. Is the tool still being used for the original purpose, or has it drifted into new contexts? Scope creep is one of the oldest compliance problems in business, and AI is no exception.
  5. External change should be monitored. Has a vendor updated the model? Have relevant laws, guidance, or industry expectations changed? Has a new regulatory concern emerged that requires reevaluation?

This is where the NIST AI Risk Management Framework is especially useful. NIST emphasizes that organizations must govern, measure, and manage AI risk over time, not simply identify it once. ISO/IEC 42001 reaches the same conclusion from a management systems perspective by requiring continual improvement, internal review, and adaptive controls. Both frameworks point to the same truth: effective AI governance is iterative, not episodic.

The CCO’s Role in Governance

For compliance professionals, ongoing monitoring is where the AI governance conversation becomes most familiar. This is where the CCO brings real institutional value. Compliance understands that controls weaken over time. Training decays. Workarounds emerge. Policies lose operational traction. Reporting channels capture issues others do not see. Root cause analysis matters. Corrective action must be tracked to closure. These are not new lessons. They are the daily work of compliance. AI gives them a new domain.

The CCO should insist that AI use cases have documented post-launch monitoring plans. These should identify the responsible owner, the metrics to be reviewed, the review frequency, the escalation triggers, and the process for documenting findings and remediation. High-risk use cases should not be left to passive observation. They should be actively governed.
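To make that expectation concrete, here is a minimal sketch of how such a monitoring plan might be captured in structured form. The field names, metrics, and thresholds are hypothetical illustrations, not a prescribed standard, and any real plan would need to reflect the company’s own intake and risk-classification process.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EscalationTrigger:
    # A hypothetical condition that moves an issue out of routine review
    metric: str      # e.g., "error rate"
    threshold: str   # e.g., "> 5% month over month"
    route_to: str    # e.g., "CCO and internal audit"

@dataclass
class MonitoringPlan:
    use_case: str                 # the approved AI use case
    risk_classification: str      # e.g., "high" for regulated or employment decisions
    owner: str                    # the accountable post-launch owner
    metrics: List[str]            # what will be reviewed
    review_frequency: str         # how often the review occurs
    escalation_triggers: List[EscalationTrigger] = field(default_factory=list)
    remediation_log: str = ""     # where findings and corrective actions are documented

# Illustrative example only; every name and number below is an assumption.
plan = MonitoringPlan(
    use_case="Third-party invoice triage assistant",
    risk_classification="high",
    owner="Director, Financial Compliance",
    metrics=["error rate", "scope of use", "human review completion", "incident count"],
    review_frequency="monthly",
    escalation_triggers=[
        EscalationTrigger("error rate", "> 5% month over month", "CCO and internal audit"),
    ],
    remediation_log="Tracked to closure in the GRC system",
)
```

The value of writing the plan down this way is not the format. It is that ownership, cadence, and escalation triggers are decided before the first anomaly appears, not negotiated after it.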

The CCO should also ensure that AI monitoring is connected to the broader compliance ecosystem. Employee concerns raised through speak-up channels may reveal issues with the model. Internal investigations may expose misuse. Third-party due diligence may uncover changes to vendors. Training gaps may explain repeated incidents. AI governance should not be isolated from these functions. It should be integrated with them.

This is also where the CCO can most effectively help the board. Rather than presenting AI as a series of isolated technical matters, the CCO can frame post-launch governance in familiar compliance terms: monitoring, testing, escalation, remediation, and lessons learned.

Board Practice: Ask for More Than Adoption Metrics

One of the most important disciplines boards can develop is to stop mistaking usage information for governance information.

Management may report that AI adoption is growing, that productivity gains are material, or that pilot programs are expanding. Those data points may be relevant, but they are not a form of governance assurance. A board should want to know whether controls are operating, whether incidents are increasing, whether certain business units generate more exceptions, whether human review remains meaningful, and whether management has paused or modified any use cases based on real-world experience.

This is where board oversight becomes genuinely valuable. When the board asks for evidence of ongoing monitoring, it changes management behavior. It signals that AI success will not be measured solely by speed or efficiency, but also by discipline and resilience.

Boards should also ensure that high-risk use cases receive enhanced visibility. Not every AI tool merits the same level of board attention. But where AI affects regulated interactions, employment decisions, sensitive data, financial reporting, significant customer outcomes, or reputationally sensitive functions, ongoing board-level reporting should be expected.

Escalation and Remediation Must Be Built In

Monitoring matters only if it leads to action. There must be clear escalation and remediation protocols. When a material issue emerges, who gets notified? Can the use case be paused? Who determines whether the problem is technical, operational, legal, or cultural? How are facts gathered? How are corrective actions assigned? When is the board informed? How is the lesson fed back into policy, training, vendor management, or approval standards?

These processes should not be improvised. They should be documented. The organization should know in advance which incidents require escalation, which temporary controls may be imposed, and how remediation is tracked.
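As a simple illustration of what “documented in advance” might look like, the sketch below maps a few hypothetical incident types to pre-approved notification paths and interim controls. The categories, recipients, and responses are assumptions for illustration only, not a template any framework requires.

```python
# A minimal sketch of a pre-defined escalation protocol for AI incidents.
# Incident types, recipients, and interim controls are hypothetical examples.

ESCALATION_PROTOCOL = {
    "model_drift_detected": {
        "notify": ["use-case owner", "AI governance committee"],
        "interim_controls": ["increase human review sampling"],
        "board_report": False,
    },
    "bias_or_discrimination_complaint": {
        "notify": ["CCO", "general counsel", "AI governance committee"],
        "interim_controls": ["pause affected decisions pending review"],
        "board_report": True,
    },
    "confidential_data_exposed_in_prompt": {
        "notify": ["privacy officer", "CISO", "CCO"],
        "interim_controls": ["suspend tool access for the affected group"],
        "board_report": True,
    },
}

def route_incident(incident_type: str) -> dict:
    """Return the pre-approved response for a known incident type.

    Unknown incident types default to the most conservative path,
    so novel failures are escalated rather than ignored.
    """
    default = {
        "notify": ["CCO", "AI governance committee"],
        "interim_controls": ["pause use case pending triage"],
        "board_report": True,
    }
    return ESCALATION_PROTOCOL.get(incident_type, default)
```

The design point is the default: anything the protocol did not anticipate should escalate upward by default, not disappear into operational noise.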

This is another place where the ECCP provides a useful governance model. DOJ expects companies not only to identify misconduct but also to investigate it, understand its root causes, and implement improvements that reduce the risk of recurrence. AI governance should work the same way. If a model fails or a control weakens, management should not merely fix the immediate problem. It should ask what the failure reveals about the program itself.

Documentation Is the Proof

As with every other element of effective governance, documentation is what turns intention into evidence. Post-launch AI governance should generate records that demonstrate monitoring occurred, issues were surfaced, escalations were handled, and remediation was completed. That may include performance reviews, validation updates, incident logs, committee minutes, complaint summaries, control testing records, vendor change notices, and corrective action trackers.

Without such documentation, management may believe it is effectively monitoring AI, but it will struggle to prove it to internal audit, regulators, or the board. More importantly, it will struggle to learn from experience in a disciplined way. A company that documents ongoing monitoring creates institutional memory. It can compare use cases, detect patterns, and refine its oversight model over time. That is how governance matures.

AI Governance Starts After Launch

The hardest truth in AI governance may be this: launching the tool is often the easiest part. The real challenge begins afterward. That is when optimism meets operational reality. That is when human reviewers become tired. That is when vendors update products. That is when regulators begin asking harder questions. That is when small problems become visible, or invisible, depending on whether the company has built a monitoring system capable of finding them.

For boards and CCOs, this is where governance earns its name. If the organization can monitor, escalate, remediate, and improve, then AI oversight has substance. If it cannot, then the company has not really governed AI at all. It has only approved it.

In the next and final blog post in this series, I will turn to the fifth governance challenge: culture, speak-up, and human judgment, because in many organizations, the first people to see an AI problem will not be the board, the CCO, or the governance committee. It will be the employee closest to the work.

Categories
Blog

Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
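One way to picture that traceability is a simple lineage record kept for each data source feeding a critical AI system. The sketch below is hypothetical and the fields are illustrative, not a required schema; the point is that someone can answer the provenance questions without reconstructing them after the fact.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DataLineageRecord:
    # Hypothetical fields a company might keep for each data source feeding an AI system
    source: str                 # where the data originated
    owner: str                  # who holds rights and accountability for it
    classification: str         # e.g., "personal data", "confidential", "public"
    collected_for: str          # the purpose for which it was originally gathered
    approved_uses: List[str]    # AI use cases this data has been cleared for
    transformations: List[str]  # how it was cleaned, curated, or changed
    last_validated: date        # when accuracy and fitness were last confirmed

# Illustrative entry only; names, purposes, and dates are assumptions.
record = DataLineageRecord(
    source="CRM export, EU customer accounts",
    owner="Head of Commercial Operations",
    classification="personal data",
    collected_for="order fulfillment and support",
    approved_uses=["churn-risk scoring"],
    transformations=["pseudonymized customer IDs", "removed free-text notes"],
    last_validated=date(2025, 11, 1),
)
```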

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.