Categories
AI Today in 5

AI Today in 5: May 12, 2026, The RegTech as Infrastructure Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance: Data Defensibility: The Compliance Foundation for AI Governance with George Tziahanas

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom visits with George Tziahanas, VP of Compliance and Associate General Counsel at Archive360.

Tom interviews George Tziahanas on why organizations must move beyond merely storing data to ensuring data integrity, lineage, and accountability as a foundation for AI readiness. George defines “data defensibility” as the ability to defend how AI systems were trained and how they operate, even when AI decisions are not as easily explainable as rules-based automation, emphasizing upstream data provenance, monitoring, and audit trails. They discuss increasing regulator and stakeholder focus on authority and accountability and how litigation can shape compliance, citing early e-discovery practices influenced by the Zubulake v. UBS Warburg decision and enforcement context involving former New York AG Eliot Spitzer. George uses the Mercor breach to show supply-chain and confidentiality risks in AI training data and notes that regulators and plaintiffs may rely on existing laws. He highlights risks from weak data governance, dark data, and legacy archives, and recommends asset and data inventories, migrating data off insecure legacy systems, risk-tiering AI use cases, extending ISO/NIST frameworks, and building observability to enable faster, responsible AI adoption.

Key highlights:

  • What Data Defensibility Means
  • Litigation Shapes Compliance
  • Weak Data Governance Risks
  • Managing Legacy Archive Data
  • Governance Accelerates AI
  • Dark Data Explained
  • What Success Looks Like

Resources:

George Tziahanas on LinkedIn

Archive360

Articles by George Tziahanas

Beyond Retention: Why AI Governance in 2026 is a Defensibility Problem

Keeping Data in Check: The Importance of Data Defensibility

Categories
FCPA Compliance Report

FCPA Compliance Report: Report from Compliance Week 2026 on AI Sessions

In this episode, Tom Fox takes a solo turn behind the mic to report on the AI tracks from the recently concluded Compliance Week 2026 conference.

He highlights two AI tracks. The first covered practical “creative” uses, including live demonstrations by Hemma Lomax creating PowerPoint content and Roxanne Petraeus creating video content. The second took a more critical compliance focus on AI governance, oversight, and accountability amid limited federal direction and a growing patchwork of state laws, with the EU AI Act positioned as a global benchmark. Tom emphasizes applying standard compliance risk management to AI (identify, manage, train, implement, monitor, improve), addressing shadow AI, internal/external/vendor risks, and building AI “in” rather than bolting it on. He notes scaling challenges, ROI questions, auditor expectations, risk registers, fraudsters’ use of AI, and ongoing discussions with Matt Kelly.

Key highlights:

  • AI Everywhere at CW
  • Creative AI Demos
  • AI Risk Framework
  • Shadow AI and Risks
  • ROI and Use Cases
  • Scaling and Oversight
  • Governance Takeaways

Resources:

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com: https://a.co/d/00XNoelh.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com: https://a.co/d/05NTW4zz.

Categories
Blog

Compliance Week 2026: AI Governance Highlights

The 21st Annual Compliance Week Conference made one point unmistakably clear: AI is no longer a technology issue sitting outside the compliance function. It is now a governance, risk, controls, culture, and accountability issue. Across the conference, AI appeared in nearly every discussion, from practical tools for compliance teams to regulatory uncertainty, shadow AI, third-party risk, and board oversight. The central message for compliance professionals was clear: AI must be governed with the same discipline, documentation, monitoring, and continuous improvement as any other enterprise risk.

That should not surprise any Chief Compliance Officer. The DOJ’s Evaluation of Corporate Compliance Programs (2024 ECCP) has long asked whether a compliance program is well-designed, adequately resourced, empowered to function effectively, and working in practice. Those same questions now apply to AI. The issue is not whether an organization is using AI. It almost certainly is. The issue is whether the company knows where AI is being used, who approved it, the risks it creates, the controls that apply, and whether those controls are being monitored.

AI Is Now a Compliance Governance Issue

The first major theme from Compliance Week 2026 was governance. AI may be exciting, efficient, and creative, but without governance, it can quickly become a source of unmanaged enterprise risk. That governance challenge begins with oversight. Who owns AI risk? Who approves AI use cases? Who determines whether a tool is appropriate for use with company data? Who has the authority to stop an AI project that is not meeting its stated purpose? These are not theoretical questions. They are the basic operating questions of an effective compliance program.

A company should not treat AI as a series of disconnected experiments. It should treat AI as part of the enterprise control environment. That means clear governance structures, documented approvals, defined risk owners, escalation protocols, monitoring, testing, and board reporting. The board does not need to become a group of AI engineers. But directors do need to understand whether management has created a defensible AI governance framework. They should ask how AI risks are identified, how high-risk use cases are reviewed, how third-party AI vendors are assessed, and how the company detects unauthorized AI use.

Shadow AI Is the Risk Hiding in Plain Sight

One of the strongest compliance lessons from the conference was the danger of shadow AI. Employees are already using AI tools, often because they are efficient, accessible, and easy to deploy. The problem is that ease of use can defeat governance. If employees are using ChatGPT, Claude, Gemini, Copilot, or other tools without authorization, training, or data restrictions, the company has a control gap. Confidential business information, financial data, personal information, customer information, or regulated data can move into systems the company does not control. That creates legal, privacy, cybersecurity, contractual, and reputational risk.

The answer is not simply to prohibit AI. That approach is unlikely to work. The better answer is to identify the tools being used, classify them by risk, authorize appropriate use cases, train employees, monitor usage, and make clear what data can and cannot be entered into an AI system. A strong AI governance program should include an AI use register. It should identify approved tools, owners, business purposes, data categories, risk ratings, controls, monitoring obligations, and renewal or reassessment dates. Without that inventory, a company cannot credibly claim to govern AI risk.
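
To make that register concrete, here is a minimal sketch in Python of what a single register entry might look like. The class name, field names, example values, and review check are illustrative assumptions, not a prescribed schema; a real register would be calibrated to the company’s own inventory and GRC tooling.

  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class AIUseRegisterEntry:
      """One row in an AI use register, mirroring the elements named above."""
      tool: str                   # the approved tool
      owner: str                  # named owner accountable for the use case
      business_purpose: str       # the approved use case, stated narrowly
      data_categories: list[str]  # classes of data the tool may touch
      risk_rating: str            # e.g. "low", "medium", "high"
      controls: list[str]         # applicable controls (human review, logging)
      monitoring: str             # how usage is monitored, and by whom
      reassessment_date: date     # renewal or reassessment deadline

      def is_due_for_review(self, today: date) -> bool:
          """Flag entries whose reassessment date has arrived or passed."""
          return today >= self.reassessment_date

  # A purely illustrative entry; every value here is hypothetical.
  entry = AIUseRegisterEntry(
      tool="Enterprise chat assistant",
      owner="Compliance - J. Doe",
      business_purpose="Summarize public adverse-media results for analysts",
      data_categories=["public data"],
      risk_rating="medium",
      controls=["human review of every summary", "prompt/output logging"],
      monitoring="quarterly sample testing by compliance",
      reassessment_date=date(2026, 12, 31),
  )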

The Compliance Risk Management Model Already Works

One of the most important insights from the conference was that compliance professionals already have the right risk management framework. AI risk does not require abandoning the compliance discipline. It requires applying it.

The framework is familiar. Identify the risk. Develop a risk management strategy. Train employees. Implement the strategy. Monitor performance. Use data to improve your strategy continuously. That is the compliance operating model. It is also the right model for AI governance.

The 2024 ECCP emphasized risk-based compliance, data access, continuous improvement, and the effectiveness of controls in practice. Those expectations fit naturally into AI governance. A company should ask whether its AI controls are designed around actual risks, whether compliance has access to AI-related data, whether employees understand acceptable use, and whether the company can prove that its controls operate effectively. The lesson is straightforward. Do not build AI governance as a technology policy alone. Build it as a compliance program.

AI Risk Has Three Core Dimensions

The conference also highlighted the need to separate AI risk into practical categories. For compliance officers, three risk areas deserve immediate attention.

First, internal risk. This includes employee use of AI, shadow AI, unauthorized tools, misuse of confidential information, lack of training, and gaps in approval processes.

Second, external risk. This involves AI systems that affect customers, patients, consumers, investors, or other external stakeholders. These tools may raise issues involving fairness, privacy, transparency, discrimination, consumer protection, and regulatory obligations.

Third, third-party risk. Vendors, consultants, service providers, and sales agents may introduce AI into the company’s operations. A third-party vendor using AI in screening, analytics, customer service, data processing, or decision support can pose a risk to the company, even when the company did not build the tool.

This is where compliance must bring discipline. Third-party AI risk should be part of due diligence, contracting, audit rights, monitoring, and renewal. Companies should ask vendors what AI tools they use, what data those tools process, whether subcontractors are involved, how outputs are validated, and whether the company has audit rights over AI-related controls.

ROI Must Begin With the Business Purpose

AI projects should begin with a simple question: what problem are we trying to solve? Too many AI initiatives begin with pressure to “use AI” rather than a clear business case. That is not governance. That is technology enthusiasm without control or discipline. A compliance-minded AI review should ask whether the proposed tool has a defined use case, measurable business value, appropriate controls, and a clear owner. It should also ask whether the project is drifting from its original purpose. Mission creep is a real AI risk. A tool approved for one purpose can quickly be used for another. That creates new risks and may invalidate the original approval.

The more regulated the use case, the more important this analysis becomes. AI used in healthcare, employment, finance, consumer decisions, investigations, sanctions screening, or third-party risk management demands heightened scrutiny. ROI may not always appear as a direct financial return. Sometimes the business value is avoiding regulatory exposure, improving consistency, strengthening documentation, or reducing unmanaged risk.

Training Is No Longer Optional

AI training must move beyond general awareness. Employees need practical, role-based instruction. They need to know which tools are approved. They need to know what data is prohibited. They need to understand when human review is required. They need to know how to report AI concerns, errors, bias, hallucinations, or misuse. They also need to understand that AI output is not a substitute for professional judgment.

For compliance teams, training should include investigators, auditors, third-party managers, procurement, legal, finance, HR, IT, and business leaders. The message should be clear: AI can support the work, but it does not remove accountability.

Build AI In, Do Not Bolt It On

One of the most practical insights from the conference was that AI should be built into business processes, not bolted on afterward. That distinction matters. Bolted-on AI becomes a tool without governance. Built-in AI becomes part of the control environment.

For example, in third-party risk management, AI can help analyze due diligence responses, identify red flags, monitor adverse media, track contract obligations, and support ongoing risk scoring. But it must be embedded into a process with human oversight, escalation protocols, audit trails, and testing. The same applies to investigations, hotline analytics, policy management, training, and monitoring. AI should strengthen compliance processes, not bypass them.

The CCO Must Have a Seat at the AI Table

The compliance function should not wait to be invited into AI governance. It should claim its role. The CCO brings the language of risk, controls, accountability, documentation, monitoring, and culture. Those are precisely the disciplines AI governance requires. Compliance should help design AI approval workflows, risk assessments, training, third-party reviews, monitoring plans, and board reporting.

This does not mean compliance owns every AI decision. It means compliance must be part of the governance architecture. AI governance should be cross-functional, with legal, compliance, IT, privacy, cybersecurity, internal audit, procurement, HR, and the business working together. But compliance must ensure that the program is not simply innovative. It must be defensible.

Practical Takeaways for Compliance Professionals

  1. Create an AI inventory. Know what tools are being used, by whom, for what purpose, and with what data.
  2. Establish an AI governance committee. Include compliance, legal, IT, privacy, cybersecurity, internal audit, procurement, and business leadership.
  3. Build a risk-based approval process. High-risk AI use cases should require enhanced review, documentation, testing, and escalation.
  4. Address shadow AI directly. Do not assume employees are waiting for policy guidance. Identify actual use and bring it into governance.
  5. Train by role and risk. General AI awareness is not enough. Employees need practical rules for approved tools, prohibited data, human review, and reporting.
  6. Extend third-party risk management to AI. Vendor diligence, contracts, audit rights, monitoring, and renewal reviews should include AI-specific questions.
  7. Monitor and improve. AI governance is not a one-time policy exercise. It requires testing, metrics, incident review, and continuous improvement.

Board Questions

  1. Do we have an inventory of AI tools currently used across the enterprise?
  2. Who approves AI use cases, and how are high-risk uses escalated?
  3. How do we detect and manage shadow AI?
  4. What data is prohibited from being entered into AI tools?
  5. How are third-party AI vendors reviewed, contracted, monitored, and audited?
  6. What AI metrics does management provide to the board?
  7. Who has the authority to pause or terminate an AI project that creates unacceptable risk?

CCO Questions

  1. Is compliance involved before AI tools are deployed?
  2. Do our policies distinguish between approved, restricted, and prohibited uses of AI?
  3. Can we prove employees have been trained on AI risks?
  4. Do we have a documented AI risk assessment process?
  5. Are AI controls tested by internal audit or another independent function?
  6. Are AI incidents, errors, and misuse captured through speak-up and escalation systems?
  7. Can we show regulators that our AI governance works in practice?

Conclusion

Compliance Week 2026 confirmed that AI has crossed the threshold from emerging technology to core compliance risk. The companies that succeed will not be those that chase every new tool. They will be the companies that govern AI with discipline. For the modern CCO, this is the moment to step forward. AI governance belongs squarely within the compliance conversation because it involves risk, accountability, culture, controls, third parties, monitoring, and board oversight. Those are the foundations of effective compliance.

AI may change the tools. It does not change the obligation. Governance still matters. Controls still matter. Culture still matters. Accountability still matters. And compliance must help lead the way.

Categories
Sunday Book Review

Sunday Book Review: May 10, 2026, The Top Books on AI Governance Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. It could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In this episode, we look at 4 top books on AI governance.

  • AI Governance: Secure, Privacy-preserving, Ethical Systems by Engin Bozdag & Stefano Bennati (2026) 
  • Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential by Ray Eitel-Porter, Paul Dongha, & Miriam Vogel (2025) 
  • A Short & Happy Guide to AI Governance and Regulation by Kashyap Kompella & James Cooper (2025) 
  • Mastering AI Governance: A Guide to Building Trustworthy and Transparent AI Systems by Rajendra Gangavarapu (2025)

Categories
AI Today in 5

AI Today in 5: April 29, 2026, The (AI) Trial of the Century Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Musk v. Altman: AI Trial of the Century. (WSJ)
  2. A RegTech solution vs. an internal bespoke solution. (FinTech Global)
  3. AI governance in practice. (BankInfoSecurity)
  4. AI in a skilled nursing facility. (McKnights)
  5. US v. states—the battle for AI governance. (Vorys)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its statements about AI, the controls it has in place, and the protections it believes it has in place if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. It appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
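As one concrete illustration of proportionate tiering, here is a minimal Python sketch. The function name, the three risk factors, and the tier cut-offs are assumptions for illustration only; a real program would calibrate its factors and thresholds against its own risk assessment and the frameworks discussed below.

  def tier_ai_use_case(affects_external_parties: bool,
                       informs_regulated_decisions: bool,
                       uses_sensitive_data: bool) -> str:
      """Assign an oversight tier to an AI use case.

      The factors and cut-offs here are illustrative, not a standard;
      calibrate them to the company's own risk assessment.
      """
      if informs_regulated_decisions or (affects_external_parties
                                         and uses_sensitive_data):
          return "high"    # documented governance, human review, testing
      if affects_external_parties or uses_sensitive_data:
          return "medium"  # approved-use policy, monitoring, periodic review
      return "low"         # inventory entry and acceptable-use training

  # A productivity tool on public data tiers low; a consumer-facing
  # screening model tiers high.
  assert tier_ai_use_case(False, False, False) == "low"
  assert tier_ai_use_case(True, True, True) == "high"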

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.

Categories
Blog

When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The article describing the decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or take control from the seller management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control; as a remedy, it ordered the seller’s CEO reinstated and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, it creates a record. The court noted that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged. (A minimal logging sketch follows this list.)
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. A final decision alone does not prove good governance; the process surrounding it does. Was there dissent? Was there analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.
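
To show what treating prompts and outputs as governed business records could mean in practice, here is a minimal Python sketch of a logging step that might wrap high-risk AI interactions. The function name, record fields, hash step, and JSON-lines store are illustrative assumptions, not a reference to any particular product or to the court’s requirements.

  import hashlib
  import json
  from datetime import datetime, timezone

  def log_governed_interaction(user: str, matter: str, prompt: str,
                               output: str, log_path: str) -> None:
      """Append one prompt/output pair to an append-only audit log.

      High-risk AI interactions become business records: timestamped,
      attributed to a user, tied to a matter, and hashed so later
      alteration is detectable. The JSON-lines file stands in for
      whatever records system a company actually uses.
      """
      record = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "user": user,
          "matter": matter,  # e.g. a deal, dispute, or HR matter identifier
          "prompt": prompt,
          "output": output,
      }
      record["sha256"] = hashlib.sha256(
          json.dumps(record, sort_keys=True).encode()
      ).hexdigest()
      with open(log_path, "a", encoding="utf-8") as f:
          f.write(json.dumps(record) + "\n")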

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk. It can become the evidence.

Categories
Blog

Culture, Speak-Up, and Human Judgment: The Human Side of AI Governance

Artificial intelligence may be built on data, models, and code, but governance ultimately rests on people. For boards and Chief Compliance Officers, one of the most important questions is not only whether the organization has responsibly approved AI tools, but also whether employees are prepared to challenge them, report concerns, and apply human judgment when something does not look right. In many organizations, the earliest warning system for AI failure is not a dashboard. It is the workforce.

Over the course of this series, I have explored four critical governance challenges in AI: board oversight and accountability, strategy outrunning governance, data governance and privacy, and ongoing monitoring. This final blog post turns to the fifth and most underappreciated challenge of all: culture, speak-up, and human judgment.

It is underappreciated because organizations often begin AI governance with structure in mind. They build committees, draft policies, classify risks, and establish approval gates. All of that is necessary. But structure alone is not sufficient. If the human beings closest to the work do not understand their role in AI governance, do not feel empowered to raise concerns, or begin to defer too readily to machine-generated outputs, the governance framework will be weaker than it appears on paper.

This is the point many companies miss. AI governance is not only about the technology. It is about whether the organization’s culture supports the responsible use of technology.

Employees Will See AI Failures First

In many companies, the first person to notice an AI problem will not be a board member, a Chief Executive Officer, or even a member of the governance committee. It will be an employee interacting with the tool in daily operations. It may be the customer service representative who sees the system generating inaccurate responses. It may be the HR professional who notices troubling patterns from an AI-supported screening tool. It may be the sales employee who sees a generative tool overstating product claims. It may be the finance professional who questions an automated summary that does not match underlying records. It may be the compliance analyst who sees a tool being used for an unapproved purpose.

That matters because early visibility is one of the most valuable protections a company can have. But visibility only becomes a control if employees know what to do with what they see. That is why culture is a governance issue. A workforce may spot the problem, but if employees do not understand that AI-related concerns are reportable, are unsure where to raise them, or believe management will ignore them, the warning system fails.

For boards and CCOs, that means AI governance cannot stop at policy creation. It must extend into behavior, reporting norms, and organizational trust.

Speak-Up Culture Is an AI Governance Control

Compliance professionals have long known that a speak-up culture is a control. It is often the first way a company learns of misconduct, process breakdowns, weak supervision, retaliation, harassment, fraud, or control evasion. The same principle now applies with equal force to AI.

Employees may observe biased outputs, inaccurate recommendations, privacy concerns, unexplained model behavior, misuse of tools, inappropriate reliance on machine-generated content, or efforts to bypass required human review. If they do not report those concerns, management may have no timely way to know what is happening.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) remains highly instructive. The ECCP places substantial emphasis on whether employees are comfortable raising concerns, whether the company investigates them appropriately, and whether retaliation is prohibited in practice. Those same questions should now be asked in the context of AI. Does the company’s reporting framework explicitly include AI-related concerns? Are managers trained to recognize and escalate those concerns? Are reports investigated with the same seriousness as other compliance issues? Are employees protected if they raise uncomfortable questions about a tool the business wants to use?

If the answer is no, the company may have AI procedures, but it has not yet embedded AI governance in its culture.

Human Judgment Cannot Be Optional

One of the most significant risks in AI governance is not simply that a model will be wrong. It is that people will stop questioning it. AI systems can produce outputs quickly, fluently, and with apparent confidence. That creates a powerful temptation for users to over-trust the tool. When a system sounds polished, appears efficient, and reduces workload, people may assume that its conclusions deserve deference. This is precisely where governance needs the corrective force of human judgment.

Human judgment cannot be treated as a ceremonial step or a paper requirement. It must be meaningful. That means the people reviewing AI outputs must have the authority, time, training, and confidence to challenge those outputs when needed. A human review requirement that exists only on paper is not much of a safeguard. If reviewers are overloaded, insufficiently trained, or culturally discouraged from slowing the process, the control may be largely illusory.

Boards should care about this because one of the easiest mistakes management can make is to describe human oversight in governance documents without testing whether it is functioning in practice. CCOs should care because this is a classic compliance problem. A control may be designed elegantly but fail in daily operations because the supporting culture is too weak to sustain it.

Training Must Change with AI

A company cannot expect good judgment around AI if it has not trained people on what good judgment looks like. That means AI training should go beyond technical usage instructions. Employees need to understand what risks may arise, what concerns are reportable, what approved use looks like, what prohibited use looks like, and why human challenge matters. Managers need additional training because they are often the first informal escalation point when an employee raises a concern. If managers dismiss AI concerns as overreactions, inconveniences, or resistance to innovation, the speak-up system will quickly lose credibility.

Training should also be role-based. The risks faced by a customer-facing team may differ from those faced by teams in HR, legal, procurement, marketing, finance, or internal audit. A generic AI training module may create awareness, but it will not create the operational judgment needed in high-risk areas.

This is where the NIST AI Risk Management Framework provides practical value. NIST’s emphasis on governance is not limited to formal structures. It contemplates culture, accountability, and the need for organizations to support informed decision-making across the enterprise. ISO/IEC 42001 similarly reinforces the importance of organizational competence, awareness, and defined responsibilities. Both frameworks point to a critical truth: responsible AI use depends not only on controls over the technology, but also on the capabilities of the people who use and oversee it.

Managers Matter More Than Companies Often Realize

If culture is the operating environment of governance, managers are often its most important local translators. An employee may not begin by filing a formal report. More often, an employee may raise a concern informally with a supervisor or colleague. “This output does not seem right.” “I do not think we should be using it this way.” “This seems to be pulling in sensitive information.” “This recommendation may be biased.” “The human review is not really happening anymore.”

The manager’s response in that moment matters enormously. Does the manager take the concern seriously? Does the manager know it should be escalated? Does the manager see it as a governance issue or as resistance to efficiency? Does the manager understand the difference between a minor usability complaint and a potentially significant compliance concern?

This is why boards and CCOs should not think about speak-up solely in hotline terms. AI governance depends on the broader management culture. If supervisors are not equipped to receive and escalate AI concerns appropriately, many issues will die in the middle of the organization before they ever reach a formal channel.

Anti-Retaliation Must Be Real in the AI Context

There is another dimension that cannot be overlooked: the risk of retaliation. In some organizations, employees may hesitate to raise AI concerns because they fear being labeled anti-innovation, obstructionist, or not commercially minded. That creates a subtle but serious governance risk. If the corporate atmosphere celebrates rapid AI adoption without equally celebrating responsible challenge, then employees may conclude that silence is safer than candor.

This is why anti-retaliation messaging must be explicit in the AI context. The company should make clear that raising concerns about inaccurate outputs, misuse, privacy risks, unfairness, or control breakdowns is part of responsible business conduct. It is not a failure to embrace innovation. It is a contribution to the effective governance of innovation.

The CCO should ensure that AI-related concerns are incorporated into existing anti-retaliation frameworks, investigations protocols, and communications. Boards should ask whether employee sentiment data, hotline trends, and internal investigations provide any signal that people are reluctant to question AI initiatives. If the organization is moving aggressively on AI, it should be equally serious about protecting those who raise governance concerns about it.

Documentation and Escalation Still Matter

As with every other aspect of AI governance, culture and judgment must be integrated into the process. A company should document how AI-related concerns can be reported, how they are triaged, who reviews them, what escalation triggers apply, and how resolutions are tracked. Concerns about AI should not be dismissed as vague general complaints. They should be reviewable and analyzable over time.
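
As one way to make AI concerns reviewable and analyzable over time, here is a minimal Python sketch of a structured concern record with an escalation check. The class, fields, categories, and trigger list are hypothetical illustrations; a real program would define its own triggers and route reports through its existing speak-up systems.

  from dataclasses import dataclass

  # Illustrative escalation triggers; a real program defines its own.
  ESCALATION_TRIGGERS = {"bias", "privacy", "unapproved use",
                         "bypassed human review"}

  @dataclass
  class AIConcern:
      """An AI-related speak-up report routed through existing channels."""
      reporter_role: str  # role only; reports may be anonymous
      tool: str           # which AI system the concern involves
      category: str       # e.g. "bias", "inaccuracy", "privacy"
      summary: str

      def requires_escalation(self) -> bool:
          """Escalate categories matching the triggers defined above."""
          return self.category in ESCALATION_TRIGGERS

  concern = AIConcern("HR analyst", "resume screening tool", "bias",
                      "Flagged candidates appear to skew by postal code")
  assert concern.requires_escalation()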

This is essential not only for accountability but for learning. Patterns in employee concerns may reveal weaknesses in training, design, vendor management, access controls, or oversight. A single report may be an isolated event. Repeated concerns within a single function may point to a systemic governance problem. That is why speak-up is not just about receiving reports. It is about turning those reports into organizational intelligence.

The ECCP again offers a useful framework. It asks whether investigations are timely, whether root causes are examined, and whether lessons learned are fed back into the compliance program. AI governance should work the same way. A reported concern should not end with a narrow answer to the immediate complaint. It should prompt management to ask what the issue reveals about the broader governance environment.

Boards Must Model the Right Tone

This final point may be the most important. Culture is shaped by what leadership rewards, tolerates, and asks about. If the board only asks about AI efficiency, adoption, and speed, management will take the signal. If the board asks whether employees are raising concerns, whether human oversight is meaningful, whether managers are trained, and whether retaliation protections are working, management will take that signal as well.

For CCOs, this is a vital opportunity. The compliance function can help boards understand that governance is not only about structure and controls, but also about whether the organization has preserved the human capacity to question, escalate, and correct. In the AI context, that may be the most important governance capability of all.

Because in the end, even the most advanced system will not govern itself. An enterprise must govern it. That requires culture. It requires trust. It requires the courage to speak up. And it requires strong human judgment to look at an impressive output and still ask, “Is this right?”

The Human Side of Governance Is the Decisive Side

This final article brings the series back to a simple truth. AI governance is not only about what the company builds. It is about how the company behaves.

Boards may establish oversight. Management may create structures. Compliance may build controls. But if employees are not prepared to report concerns or exercise judgment, the organization will remain vulnerable. A strong AI governance program does not merely control the system. It empowers the people around the system to challenge it responsibly.

That is the human side of governance, and in many ways it is the decisive side. 

Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo!Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)

For more information on the use of AI in compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.