Categories
Blog

Compliance Week 2026: AI Governance Highlights

The 21st Annual Compliance Week Conference made one point unmistakably clear: AI is no longer a technology issue sitting outside the compliance function. It is now a governance, risk, controls, culture, and accountability issue. Across the conference, AI appeared in nearly every discussion, from practical tools for compliance teams to regulatory uncertainty, shadow AI, third-party risk, and board oversight. The central message for compliance professionals: AI must be governed with the same discipline, documentation, monitoring, and continuous improvement as any other enterprise risk.

That should not surprise any Chief Compliance Officer. The DOJ’s Evaluation of Corporate Compliance Programs (2024 ECCP) has long asked whether a compliance program is well-designed, adequately resourced, empowered to function effectively, and working in practice. Those same questions now apply to AI. The issue is not whether an organization is using AI. It almost certainly is. The issue is whether the company knows where AI is being used, who approved it, the risks it creates, the controls that apply, and whether those controls are being monitored.

AI Is Now a Compliance Governance Issue

The first major theme from Compliance Week 2026 was governance. AI may be exciting, efficient, and creative, but without governance, it can quickly become a source of unmanaged enterprise risk. That governance challenge begins with oversight. Who owns AI risk? Who approves AI use cases? Who determines whether a tool is appropriate for use with company data? Who has the authority to stop an AI project that is not meeting its stated purpose? These are not theoretical questions. They are the basic operating questions of an effective compliance program.

A company should not treat AI as a series of disconnected experiments. It should treat AI as part of the enterprise control environment. That means clear governance structures, documented approvals, defined risk owners, escalation protocols, monitoring, testing, and board reporting. The board does not need to become a group of AI engineers. But directors do need to understand whether management has created a defensible AI governance framework. They should ask how AI risks are identified, how high-risk use cases are reviewed, how third-party AI vendors are assessed, and how the company detects unauthorized AI use.

Shadow AI Is the Risk Hiding in Plain Sight

One of the strongest compliance lessons from the conference was the danger of shadow AI. Employees are already using AI tools, often because they are efficient, accessible, and easy to deploy. The problem is that ease of use can defeat governance. If employees are using ChatGPT, Claude, Gemini, Copilot, or other tools without authorization, training, or data restrictions, the company has a control gap. Confidential business information, financial data, personal information, customer information, or regulated data can move into systems the company does not control. That creates legal, privacy, cybersecurity, contractual, and reputational risk.

The answer is not simply to prohibit AI. That approach is unlikely to work. The better answer is to identify the tools being used, classify them by risk, authorize appropriate use cases, train employees, monitor usage, and make clear what data can and cannot be entered into an AI system. A strong AI governance program should include an AI use register. It should identify approved tools, owners, business purposes, data categories, risk ratings, controls, monitoring obligations, and renewal or reassessment dates. Without that inventory, a company cannot credibly claim to govern AI risk.
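The AI use register described above is, at bottom, a structured record. A minimal sketch of what one entry might look like follows; the field names and risk-rating values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of an AI use register entry. Field names and
# risk-rating values are assumptions, not a standard schema.
@dataclass
class AIUseRegisterEntry:
    tool: str                   # the approved tool
    owner: str                  # accountable risk owner
    business_purpose: str       # the approved use case
    data_categories: list[str]  # e.g. ["public", "internal"]
    risk_rating: str            # e.g. "low", "medium", "high"
    controls: list[str]         # controls that apply to this use
    monitoring: str             # monitoring obligation
    reassessment_date: date     # renewal or reassessment date

register: list[AIUseRegisterEntry] = []

def high_risk_entries(entries: list[AIUseRegisterEntry]) -> list[AIUseRegisterEntry]:
    """Return the entries that should receive enhanced review."""
    return [e for e in entries if e.risk_rating == "high"]
```

Even a simple structure like this forces the governance questions the conference emphasized: every tool must have an owner, a purpose, a risk rating, and a reassessment date before it enters the register.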

The Compliance Risk Management Model Already Works

One of the most important insights from the conference was that compliance professionals already have the right risk management framework. AI risk does not require abandoning the compliance discipline. It requires applying it.

The framework is familiar. Identify the risk. Develop a risk management strategy. Train employees. Implement the strategy. Monitor performance. Use data to improve your strategy continuously. That is the compliance operating model. It is also the right model for AI governance.

The 2024 ECCP emphasized risk-based compliance, data access, continuous improvement, and the effectiveness of controls in practice. Those expectations fit naturally into AI governance. A company should ask whether its AI controls are designed around actual risks, whether compliance has access to AI-related data, whether employees understand acceptable use, and whether the company can prove that its controls operate effectively. The lesson is straightforward. Do not build AI governance as a technology policy alone. Build it as a compliance program.

AI Risk Has Three Core Dimensions

The conference also highlighted the need to separate AI risk into practical categories. For compliance officers, three risk areas deserve immediate attention.

First, internal risk. This includes employee use of AI, shadow AI, unauthorized tools, misuse of confidential information, lack of training, and gaps in approval processes.

Second, external risk. This involves AI systems that affect customers, patients, consumers, investors, or other external stakeholders. These tools may raise issues involving fairness, privacy, transparency, discrimination, consumer protection, and regulatory obligations.

Third, third-party risk. Vendors, consultants, service providers, and sales agents may introduce AI into the company’s operations. A third-party vendor using AI in screening, analytics, customer service, data processing, or decision support can pose a risk to the company, even when the company did not build the tool.

This is where compliance must bring discipline. Third-party AI risk should be part of due diligence, contracting, audit rights, monitoring, and renewal. Companies should ask vendors what AI tools they use, what data those tools process, whether subcontractors are involved, how outputs are validated, and whether the company has audit rights over AI-related controls.

ROI Must Begin With the Business Purpose

AI projects should begin with a simple question: what problem are we trying to solve? Too many AI initiatives begin with pressure to “use AI” rather than a clear business case. That is not governance. That is technology enthusiasm without control or discipline. A compliance-minded AI review should ask whether the proposed tool has a defined use case, measurable business value, appropriate controls, and a clear owner. It should also ask whether the project is drifting from its original purpose. Mission creep is a real AI risk. A tool approved for one purpose can quickly be used for another. That creates new risks and may invalidate the original approval.

The more regulated the use case, the more important this analysis becomes. AI used in healthcare, employment, finance, consumer decisions, investigations, sanctions screening, or third-party risk management demands heightened scrutiny. ROI may not always appear as a direct financial return. Sometimes the business value is avoiding regulatory exposure, improving consistency, strengthening documentation, or reducing unmanaged risk.

Training Is No Longer Optional

AI training must move beyond general awareness. Employees need practical, role-based instruction. They need to know which tools are approved. They need to know what data is prohibited. They need to understand when human review is required. They need to know how to report AI concerns, errors, bias, hallucinations, or misuse. They also need to understand that AI output is not a substitute for professional judgment.

For compliance teams, training should include investigators, auditors, third-party managers, procurement, legal, finance, HR, IT, and business leaders. The message should be clear: AI can support the work, but it does not remove accountability.

Build AI In, Do Not Bolt It On

One of the most practical insights from the conference was that AI should be built into business processes, not bolted on afterward. That distinction matters. Bolted-on AI becomes a tool without governance. Built-in AI becomes part of the control environment.

For example, in third-party risk management, AI can help analyze due diligence responses, identify red flags, monitor adverse media, track contract obligations, and support ongoing risk scoring. But it must be embedded into a process with human oversight, escalation protocols, audit trails, and testing. The same applies to investigations, hotline analytics, policy management, training, and monitoring. AI should strengthen compliance processes, not bypass them.

The CCO Must Have a Seat at the AI Table

The compliance function should not wait to be invited into AI governance. It should claim its role. The CCO brings the language of risk, controls, accountability, documentation, monitoring, and culture. Those are precisely the disciplines AI governance requires. Compliance should help design AI approval workflows, risk assessments, training, third-party reviews, monitoring plans, and board reporting.

This does not mean compliance owns every AI decision. It means compliance must be part of the governance architecture. AI governance should be cross-functional, with legal, compliance, IT, privacy, cybersecurity, internal audit, procurement, HR, and the business working together. But compliance must ensure that the program is not simply innovative. It must be defensible.

Practical Takeaways for Compliance Professionals

  1. Create an AI inventory. Know what tools are being used, by whom, for what purpose, and with what data.
  2. Establish an AI governance committee. Include compliance, legal, IT, privacy, cybersecurity, internal audit, procurement, and business leadership.
  3. Build a risk-based approval process. High-risk AI use cases should require enhanced review, documentation, testing, and escalation.
  4. Address shadow AI directly. Do not assume employees are waiting for policy guidance. Identify actual use and bring it into governance.
  5. Train by role and risk. General AI awareness is not enough. Employees need practical rules for approved tools, prohibited data, human review, and reporting.
  6. Extend third-party risk management to AI. Vendor diligence, contracts, audit rights, monitoring, and renewal reviews should include AI-specific questions.
  7. Monitor and improve. AI governance is not a one-time policy exercise. It requires testing, metrics, incident review, and continuous improvement.

Board Questions

  1. Do we have an inventory of AI tools currently used across the enterprise?
  2. Who approves AI use cases, and how are high-risk uses escalated?
  3. How do we detect and manage shadow AI?
  4. What data is prohibited from being entered into AI tools?
  5. How are third-party AI vendors reviewed, contracted, monitored, and audited?
  6. What AI metrics does management provide to the board?
  7. Who has the authority to pause or terminate an AI project that creates unacceptable risk?

CCO Questions

  1. Is compliance involved before AI tools are deployed?
  2. Do our policies distinguish between approved, restricted, and prohibited uses of AI?
  3. Can we prove employees have been trained on AI risks?
  4. Do we have a documented AI risk assessment process?
  5. Are AI controls tested by internal audit or another independent function?
  6. Are AI incidents, errors, and misuse captured through speak-up and escalation systems?
  7. Can we show regulators that our AI governance works in practice?

Conclusion

Compliance Week 2026 confirmed that AI has crossed the threshold from emerging technology to core compliance risk. The companies that succeed will not be those that chase every new tool. They will be the companies that govern AI with discipline. For the modern CCO, this is the moment to step forward. AI governance belongs squarely within the compliance conversation because it involves risk, accountability, culture, controls, third parties, monitoring, and board oversight. Those are the foundations of effective compliance.

AI may change the tools. It does not change the obligation. Governance still matters. Controls still matter. Culture still matters. Accountability still matters. And compliance must help lead the way.


The Warner Bros. Bidding War: Part 3 – The CCO Playbook for Transactions Under Pressure

The Warner Bros. Discovery (WBD) bidding war is not simply a Board story. It is a compliance operating model test. When a superior proposal emerges, the Chief Compliance Officer (CCO) must move from program design to execution discipline. Today, we conclude our short review of the Warner Bros./Netflix/Paramount dance and sale by considering lessons for the compliance professional.

In Part 1, we focused on the deal mechanics that led Warner Bros. Discovery to move from an agreed transaction with Netflix to a superior proposal from Paramount Skydance. In Part 2, the focus shifted to Board governance and fiduciary duty. This final post answers the operational question: what must the Chief Compliance Officer do when the process accelerates and governance must be proven in real time?

The answer is grounded in the DOJ’s Evaluation of Corporate Compliance Programs (ECCP). The core question remains constant. Is the program working in practice? A live transaction provides the answer.

Move Compliance Into the Transaction Control Room

Too many compliance functions treat M&A as a legal and financial activity. That approach fails when the transaction becomes contested. Once a superior proposal is identified, the compliance function must:

  • Participate in transaction governance meetings
  • Map control risks across disclosure, communications, and decision-making
  • Establish escalation pathways for new information

This is consistent with the expectations embedded in the DOJ’s Corporate Enforcement Policy, which rewards companies that demonstrate real-time awareness, escalation, and action. A compliance function that is not present during the decision-making process cannot later demonstrate that controls were effective.

Build and Execute an Evidence Protocol

The most significant compliance failure point in transactions is not misconduct. It is the absence of a reliable evidentiary record. In the WBD process, multiple streams of information were created simultaneously:

  • Board materials
  • Banker communications
  • Draft proposals and revisions
  • Internal analyses and emails

The CCO must ensure that the company has an evidence-based protocol that includes:

  • Centralized collection of transaction-related materials
  • Defined custodians for document integrity
  • Time-stamped records of key decisions and communications
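The evidence protocol above can be sketched as an append-only, time-stamped decision log. The hash-chaining approach and field names here are illustrative assumptions, not a mandated design; the point is simply that each record is dated, attributed to a custodian, and tamper-evident.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of a time-stamped, tamper-evident decision log.
# Field names and the hash-chaining design are illustrative assumptions.
class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, decision: str, custodian: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "custodian": custodian,
            "prev_hash": prev_hash,
        }
        # Chain each entry to the prior one so later edits are detectable.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash to confirm the record has not been altered."""
        for i, e in enumerate(self.entries):
            expected_prev = self.entries[i - 1]["hash"] if i else ""
            if e["prev_hash"] != expected_prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
        return True
```

A log like this answers the reconstruction question directly: if the record verifies, the company can show when each decision was made, by whom, and that nothing was changed after the fact.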

Under the DOJ’s framework, this directly ties to the question of whether the company can demonstrate effectiveness through data and documentation. If the company cannot reconstruct its decision-making process, it cannot defend it.

Treat Disclosure Controls as a Real-Time Compliance System

Part 2 emphasized that disclosure is a governance issue. For the CCO, it is a control system. The compliance function should validate that:

  • The disclosure committee is activated and functioning continuously
  • There is a clear trigger matrix for Form 8-K filings and proxy updates
  • All external communications are coordinated and controlled
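The trigger matrix mentioned above can be sketched as a simple lookup from transaction events to disclosure actions. The item numbers for entry into (1.01) and termination of (1.02) a material definitive agreement reflect Form 8-K items; the event names, deadlines, and the mapping itself are illustrative assumptions, not legal guidance.

```python
# Hypothetical disclosure trigger matrix. Event names and deadlines are
# illustrative assumptions; Form 8-K items 1.01 and 1.02 are real items,
# but a company's actual matrix should be built with counsel.
TRIGGER_MATRIX = {
    "material_agreement_signed": {
        "filing": "Form 8-K Item 1.01",
        "deadline_business_days": 4,
    },
    "material_agreement_terminated": {
        "filing": "Form 8-K Item 1.02",
        "deadline_business_days": 4,
    },
    "proxy_material_change": {
        "filing": "Proxy supplement",
        "deadline_business_days": None,  # timing is fact-specific
    },
}

def required_filing(event: str):
    """Look up the disclosure action, if any, for a transaction event."""
    return TRIGGER_MATRIX.get(event)
```

The value of writing the matrix down, even this simply, is that it makes the disclosure control testable: for any event the deal team can name, there is either a defined filing path or a documented decision that none is required.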

This is not theoretical. In a contested transaction, the volume and speed of information create a risk of selective disclosure, inconsistent messaging, or delayed filings. The CCO must ensure that disclosure controls meet the same standard as financial controls. They must be tested, documented, and operational.

Control Third-Party and Advisor Risk

Transactions introduce intense third-party engagement. Investment banks, legal advisors, consultants, and communications firms all operate at speed. In the WBD scenario, third-party actions included:

  • Structuring revised proposals
  • Communicating deal terms
  • Interacting with market participants

The CCO must ensure:

  • Clear protocols for third-party communications
  • Defined boundaries on who can speak on behalf of the company
  • Documentation of all material third-party interactions

This aligns with long-standing expectations under the Foreign Corrupt Practices Act (FCPA) and the broader third-party risk principles embedded in compliance programs. Even in a domestic transaction, third-party risk remains a control issue.

Align Governance With Internal Controls Frameworks

The events described in Parts 1 and 2 map directly onto internal control frameworks such as the COSO Internal Controls Framework. For the CCO, this means:

  • Control Environment: Tone at the top regarding disciplined decision-making
  • Risk Assessment: Identification of disclosure, litigation, and regulatory risks
  • Control Activities: Implementation of approval processes and documentation protocols
  • Information and Communication: Real-time disclosure and coordination
  • Monitoring: Ongoing review of transaction-related controls

This mapping is not academic. It is how the company demonstrates that governance is structured, repeatable, and effective.

Prepare for Day Two Risk

The transaction does not end with signing or closing. It creates a new risk profile. The CCO must plan for:

  • Integration of compliance programs across entities
  • Review of legacy decisions made during the transaction process
  • Preservation of records for litigation or regulatory review

This is where the DOJ’s focus on continuous improvement becomes critical. The company must show that it learns from the transaction and strengthens its program.

Connecting the Lessons Across the Series

Part 1 showed that deal terms, including termination fees and superior proposal mechanics, can change outcomes. Part 2 demonstrated that the Board must govern those changes through documented, disciplined processes. In Part 3, we demonstrated the connections between the two. The compliance function is the mechanism that allows the company to prove that governance worked. Without compliance execution, governance is an assertion. With compliance execution, governance becomes evidence.

Practical Action Steps for CCOs

  1. Embed compliance into the transaction governance structure at the outset of any deal.
  2. Implement an evidence protocol that captures all material transaction activity in real time.
  3. Test disclosure controls under accelerated conditions, including mock 8-K scenarios.
  4. Define and enforce third-party communication protocols.
  5. Map transaction governance to COSO and DOJ ECCP requirements before a contested situation arises.

Questions for the CCO

  1. If a regulator requested the full decision record tomorrow, could the company produce it?
  2. Are disclosure controls capable of operating continuously under transaction pressure?
  3. Is there a single source of truth for transaction-related documentation?
  4. Are third-party interactions fully documented and controlled?
  5. Has the compliance program been stress-tested in a high-speed governance scenario?

Final Thoughts

The Warner Bros. Discovery bidding war is not unique. What is unique is how clearly it illustrates the modern role of the Chief Compliance Officer. Compliance is no longer limited to preventing misconduct. It is responsible for enabling the company to act, decide, and disclose with integrity under pressure and then prove it. That is the standard set by the DOJ. That is the expectation of Boards. And that is the future of the compliance profession.



The Warner Bros. Bidding War: Part 1 – What Happened and Why Compliance Professionals Should Care

A fast-moving corporate auction shows how deal terms, fiduciary duties, disclosure controls, regulatory risk, and evidence discipline can determine the outcome of a major transaction. Over the rest of this week, I will be exploring the Warner Bros./Netflix/Paramount bidding war.

The Deal That Changed Direction

The Warner Bros./Netflix/Paramount bidding war is one of those corporate stories that looks like Hollywood drama on the surface but is really a governance story underneath. At first, Warner Bros. Discovery (WBD) had an agreed transaction with Netflix. That deal carried a $2.8 billion company termination fee payable by WBD under specified circumstances, including termination to enter into a superior proposal. The proxy materials also disclosed a $5.8 billion regulatory termination fee payable by Netflix if the deal failed for certain regulatory reasons. (SEC)

Then Paramount Skydance (Paramount) came back with a revised proposal. It raised the bid to $31 per WBD share in cash, added a ticking fee, offered a $7 billion regulatory termination fee, and agreed to fund the $2.8 billion termination fee owed to Netflix. (SEC) Reuters reported that WBD said the revised Paramount proposal could be considered superior, which set the process in motion. (Reuters)

By February 27, 2026, WBD terminated the Netflix agreement and entered into a merger agreement with Paramount Skydance. WBD later disclosed that Paramount Skydance paid the $2.8 billion Netflix termination fee on WBD’s behalf. (SEC)

That is the transaction story. The compliance story is deeper.

This Was Not Merely a Higher Price

In M&A, price matters. But price is rarely the only issue. Boards also look at certainty of closing, regulatory risk, financing, timing, shareholder value, legal exposure, and execution risk. Paramount did not merely increase the cash price. It addressed several deal objections at once. It offered to cover the Netflix break fee. It added a ticking fee if closing was delayed. It increased regulatory risk protection. It positioned its offer as cleaner, faster, and more certain than the existing transaction. (SEC)

That matters because boards do not evaluate superior proposals in a vacuum. They evaluate the entire package. The better governance question is not simply, “Which offer is higher?” It is, “Which offer delivers the best risk-adjusted value to shareholders, and can the Board prove how it reached that conclusion?”

The Termination Fee Became a Governance Issue

The $2.8 billion termination fee is an important part of the story. In ordinary conversation, that number sounds like a barrier. In this transaction, it became part of the competitive bidding structure. Paramount agreed to fund the termination fee, which changed the economics for WBD shareholders. WBD’s own annual report language later stated that, after the Board determined it had received a Company Superior Proposal and Netflix waived its right to propose revisions, WBD terminated the Netflix agreement and Paramount paid Netflix the $2.8 billion fee on WBD’s behalf. (SEC)

For compliance and governance professionals, this is the control point: when a large termination fee can be assumed, reimbursed, funded, or otherwise neutralized by a rival bidder, the company needs clear documentation showing who approved that structure, how it was analyzed, how it was disclosed, and how conflicts were managed.

Disclosure Was Not a Back-Office Exercise

In a contested transaction, disclosure is part of the control environment. The company must update shareholders, respond to rival communications, track proxy statements, preserve drafts, document board deliberations, and avoid selective disclosure. The Netflix proxy materials laid out the termination fee structure and the circumstances under which the fee could become payable. (SEC) Paramount’s revised proposal was also publicly communicated through SEC filings, including the increased $31-per-share cash price and the regulatory termination fee. (SEC)

This is where compliance should pay attention. A transaction can move faster than the company’s document discipline. Emails, banker calls, board materials, draft press releases, proxy supplements, and negotiation notes can become evidence. If the company does not have a real-time evidence protocol, the record will build itself, and a record built by accident rarely tells the story the company needs to tell.

Why Compliance Professionals Should Care

Some believe this is a board-and-banker story. That is too narrow. It is also a compliance story because compliance is about governance, controls, documentation, accountability, escalation, and evidence. A high-stakes transaction tests whether the company’s control environment holds up under the highest pressure. It tests whether the Board receives complete information. It tests whether management understands escalation obligations. It tests whether legal, finance, communications, investor relations, and compliance can coordinate without losing the record.

This is exactly the kind of moment when the DOJ’s Evaluation of Corporate Compliance Programs is relevant, even outside an enforcement action. The central question is familiar: is the program well-designed, adequately resourced, empowered to function, and working in practice? In M&A, that means the compliance function should understand how deal governance intersects with disclosure controls, third-party risk, regulatory commitments, document preservation, and post-closing integration.

The Larger Lesson

The WBD bidding war shows that corporate governance is not theoretical. It is operational. A superior proposal clause is not just legal drafting. A termination fee is not just a financial number. A proxy supplement is not just a filing. Each is a control point. The companies that manage these moments well do three things. They make decisions through disciplined processes. They document the basis for those decisions in real time. They align governance, legal, finance, disclosure, and compliance before the crisis point arrives.

Practical Takeaways for Compliance Professionals

  1. Major transactions require evidence discipline from day one.
  2. Disclosure controls must be ready before a rival bidder appears.
  3. Termination fees and regulatory commitments should be treated as governance issues, not simply deal terms.
  4. Board minutes and waiver records must tell the fiduciary story.
  5. Compliance should have a seat at the broader transaction control table, especially when regulatory, third-party, data access, communications, and post-closing integration risks are implicated.

That is the lesson for every CCO. You may not be running the auction, but your program should help the company prove that it made decisions with integrity, evidence, and accountability.


Daily Compliance News: April 29, 2026, The Trial of the Century Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • PR exec tried to get rid of documents. (FT)
  • Why did First Brands hire BDO? (FT)
  • Altman v. Musk. Trial of the Century. (FT)
  • Should your Board appoint a Bot? (FT)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.


Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is data that is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
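The lineage questions above translate naturally into a record kept alongside each data input. A minimal sketch follows; the field names and the fitness check are illustrative assumptions, not a lineage standard.

```python
from dataclasses import dataclass

# Illustrative lineage record for one data input to an AI system.
# Field names are assumptions, not a standard.
@dataclass(frozen=True)
class LineageRecord:
    source: str               # where the data originated
    curated_by: str           # who prepared or cleaned it
    transformations: tuple    # what was changed along the way
    approved_use: str         # the purpose it was approved for

def fit_for_purpose(record: LineageRecord, intended_use: str) -> bool:
    """A use case should only consume data approved for that use."""
    return record.approved_use == intended_use
```

Even this thin a record lets management answer the board's question: for any output, the company can point to where the input came from, who curated it, how it was changed, and whether the intended use matches the approved one.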

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.


Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out the five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when there is no structure. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
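A documented trigger list can be as simple as a rule table that any reviewer can apply consistently. The sketch below is a minimal illustration in Python; the trigger names and the decision logic are assumptions for the example, not a prescribed standard.

```python
# Illustrative escalation-trigger check. The trigger set and the decision
# logic are assumptions for this sketch, not a regulatory requirement.
ESCALATION_TRIGGERS = {
    "affects_employment_decisions",
    "affects_consumer_rights",
    "regulated_communications",
    "financial_reporting",
    "sensitive_personal_data",
    "model_drift_detected",
    "credible_employee_concern",
}

def requires_escalation(use_case_flags: set[str]) -> tuple[bool, list[str]]:
    """Return whether a use case must be escalated, and which triggers fired."""
    fired = sorted(use_case_flags & ESCALATION_TRIGGERS)
    return (bool(fired), fired)
```

Writing the triggers down this way forces two governance decisions the text describes: what the triggers actually are, and the fact that any single trigger is sufficient to escalate.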

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, where we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.


Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must include what I call the “Day Two” problem. What happens after launch? This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist that temptation. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, where we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail. 


GSK In China: 13 Years Later – Where Was the Board? Director Oversight and Doing Business in China

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode examines why major bribery scandals occur “under the board’s nose,” using GSK as a launching point to explain directors’ legal and practical compliance responsibilities.

It traces oversight duties under Delaware law, highlighting Caremark’s good-faith duty to ensure information and reporting systems, Stone v. Ritter’s standard for liability for sustained or systematic oversight failure, and the business judgment rule. It contrasts “check-the-box” programs with risk-based oversight via the Piat case, where formal compliance masked illegal conduct embedded in business plans. The discussion ties board expectations to FCPA guidance hallmarks, emphasizing tone at the top, empowered compliance functions with direct board access, DOJ/SEC scrutiny, SEC Reg. S-K Item 407 risk-oversight disclosures, and potential disgorgement. It then focuses on China as a high-risk environment, third-party intermediary exposure, and M&A “deal-breaker” dilemmas requiring rigorous pre- and post-acquisition diligence, concluding with the paradox that boards may be incentivized toward plausible deniability. Our hosts are Timothy and Fiona.

Key highlights:

  • Compliance Starts at the Top
  • Caremark Duty Explained
  • FCPA Hallmarks for Boards
  • Passive Board Era Ends
  • Plausible Deniability Paradox

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: Notebook LM created the voices of the hosts, Timothy and Fiona, based on text written by Tom Fox


AI Risk Appetite: The Conversation Boards Are Not Having

There is a quiet but serious problem developing in boardrooms around AI. Directors are hearing about innovation. They are hearing about productivity gains. They are hearing about competitive pressure, transformation, and speed. What they are not hearing enough about is risk appetite. That is the missing conversation.

Most companies are already using AI in one form or another. Some are deploying enterprise tools. Some are approving vendor solutions with embedded AI. Some are allowing business units to experiment in a controlled fashion. Some, of course, are doing all of the above and pretending it is a strategy. Yet for all the discussion about adoption, there has been far less focus on a basic governance question: what level of AI-driven decision risk is acceptable for this company? That is not a technical question. It is a board question.

The Risk Appetite Gap in AI Governance

AI is not simply another software purchase. It can influence recommendations, rankings, forecasts, summaries, classifications, and decisions. It can operate upstream from business judgments or directly within them. It can affect customer communications, hiring decisions, compliance monitoring, internal investigations, financial analysis, and reporting workflows. So the central governance challenge is not whether AI exists in the enterprise. It is how much authority the company is willing to give it, in what contexts, with what controls, and with what margin for error. If you do not define that, you do not have AI governance. You have AI optimism.

What Is AI Risk Appetite?

At its core, AI risk appetite is the level and type of AI-related risk an organization is willing to accept in pursuit of business value. That includes a series of questions boards ought to be asking. How much error is acceptable in AI-generated output before a human must intervene? Which uses are low-risk productivity enhancements, and which are sensitive, consequential, or reputation-threatening? In what contexts can AI make recommendations only, and in what contexts can it influence or automate action? How much dependence on opaque third-party models is acceptable? What degree of explainability does the company require for different use cases? When does speed stop being a benefit and start becoming exposure?

Many boards are currently discussing AI deployment without ever discussing AI tolerance. That is like approving a global third-party strategy without deciding what level of distributor risk, sanctions exposure, or bribery risk the company is prepared to accept. No compliance professional would recommend that. Yet in AI, organizations do versions of it every day.

Why Boards Avoid the Conversation

There are several reasons boards have been slow to engage on AI risk appetite.

First, the technology moves fast, and the terminology can become a fog machine. Directors do not want to look uninformed, so discussions often stay broad and strategic. Second, management may not yet have the internal inventory or classification framework needed to make a risk-appetite conversation concrete. Third, many companies are still in an experimentation phase, which creates the illusion that formal governance can come later. Fourth, there is a natural tendency to believe AI risk belongs to IT, legal, or security, rather than to enterprise oversight.

AI risk appetite cannot be delegated away because it intersects with business judgment, ethics, records, privacy, data governance, resilience, and culture. It cuts across functions. It also cuts across reputational boundaries. If a company uses AI in a way that produces unfair results, faulty decisions, poor disclosures, or customer harm, nobody is going to say, “Well, that was a technical issue, so the board need not have been involved.” Boards do not get a hall pass when the governance system is missing.

The Conversations Boards Need to Be Having

Risk Map. The first conversation is about where AI sits on the company’s risk map. Is AI a productivity tool, a strategic platform, a decision-support capability, or some combination of all three? The answer matters because it affects the level of oversight. A company using AI for internal drafting support faces one type of exposure. A company using AI in customer-facing interactions, underwriting, hiring, fraud detection, or compliance monitoring faces another challenge.

Decision Significance. Boards need to ask where AI is being used in decisions that affect legal rights, financial outcomes, customer treatment, employment status, compliance judgments, or public disclosures. Not all uses are equal. A board that treats AI use in marketing copy the same as AI use in employee discipline is not governing. It is lumping.

Acceptable Error and Human Review. Boards should ask: what level of inaccuracy can the company tolerate in a given use case, and who is accountable for checking the output before action is taken? Human oversight has become one of those phrases everybody likes, and few define. Directors need something more disciplined. When is review mandatory? What does a meaningful review look like? What evidence shows that the reviewer is not simply rubber-stamping machine output?

Data and Model Dependency. What data is being used? Who owns it? Who has the right to it? How current is it? Are third-party vendors changing capabilities under existing contracts? Is the company becoming dependent on systems it does not fully understand or cannot easily audit? Boards should not need to know how the engine works, but they absolutely need to know whether the company is driving a car with uncertain brakes.

Incident Tolerance and Escalation. What types of AI failures must be reported to senior leadership or the board? A hallucinated internal memo may be embarrassing. A flawed AI-assisted hiring screen or customer communication may be far more serious. The board should ensure management has defined materiality thresholds before an incident occurs, not after the headlines begin.
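
Thresholds of that kind can be written down before the first incident occurs. Below is a minimal sketch, in Python, of a severity-to-escalation mapping; the severity labels, incident classifications, and routing rules are illustrative assumptions, not a standard or a prescription.

```python
# Map AI incident severity to a pre-defined escalation path.
# Severity tiers and routing below are illustrative assumptions.
ESCALATION = {
    "low":      "log and review in monthly compliance metrics",
    "medium":   "notify CCO within 48 hours",
    "high":     "notify senior leadership within 24 hours",
    "critical": "immediate board notification and incident response",
}

def escalate(incident_type: str) -> str:
    # Hypothetical classification: customer- or employment-affecting
    # failures are treated as high severity by default.
    severity = {
        "hallucinated internal memo": "low",
        "flawed hiring screen": "high",
        "erroneous customer communication": "high",
    }.get(incident_type, "medium")  # unknown incident types default to medium
    return ESCALATION[severity]
```

The design point is the default: an incident nobody anticipated still routes somewhere, rather than waiting for a debate about materiality after the fact.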

The CCO’s Role in Shaping the Conversation

This is where compliance officers can be enormously helpful.

The CCO is often the person in the enterprise most experienced at turning abstract risk into operating discipline. Compliance knows how to frame risk-based governance. It knows how to create escalation structures, policy frameworks, investigations protocols, and oversight dashboards. It knows that culture and control design matter just as much as rules. Here are four ways to do so.

  1. A CCO can help management develop a tiered inventory of AI use cases. This is essential. Boards cannot discuss appetite in the abstract. They need to see the map. Which uses are low risk? Which are medium? Which are high? Which are prohibited absent specific approval?
  2. Compliance can help translate legal, ethical, and operational concerns into board-level language. Directors do not need a seminar on neural networks. They need clear framing around consequences, control points, accountabilities, and thresholds.
  3. A CCO can help build governance around human review, documentation, and escalation. If the company says a human is responsible, compliance can help test whether that responsibility is real, documented, and operational.
  4. Compliance can keep the conversation grounded in how people actually behave. Employees will choose convenience. Business teams will move quickly. Vendors will market aggressively. Managers may trust the generated output more than they should. A good compliance officer knows that policy must be built for actual human behavior, not ideal behavior.
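
The tiered inventory in point 1 can live as structured data rather than a slide. Here is a minimal sketch in Python of a use-case register with risk tiers; the field names, tier labels, visibility rule, and example entries are all illustrative assumptions, not an industry schema.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers a board might review; not a standard taxonomy.
TIERS = ("low", "medium", "high", "prohibited")

@dataclass
class AIUseCase:
    name: str
    owner: str                      # accountable risk owner
    tier: str                       # one of TIERS
    approved: bool = False
    controls: list = field(default_factory=list)

    def needs_board_visibility(self) -> bool:
        # Hypothetical rule: high-risk and prohibited uses surface to the board.
        return self.tier in ("high", "prohibited")

# Example register entries (illustrative only).
register = [
    AIUseCase("internal drafting copilot", "COO", "low", approved=True),
    AIUseCase("AI-assisted hiring screen", "CHRO", "high",
              controls=["mandatory human review", "quarterly bias testing"]),
]

board_items = [u.name for u in register if u.needs_board_visibility()]
```

Even a register this simple forces the questions the text raises: every entry must name an owner, a tier, an approval status, and the controls that apply.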

Compliance as Risk Mitigation and Business Enablement

One of the enduring frustrations in compliance is that governance is often viewed as a speed bump until something goes wrong. AI gives us another chance to make the larger point. Governance does not slow innovation. Bad governance slows innovation by causing rework, distrust, remediation, and public embarrassment.

A well-defined AI risk appetite does the opposite. It gives the business clarity. It tells innovation teams where they can move quickly and where they must slow down. It helps procurement negotiate the right terms. It helps managers know when to escalate. It helps employees understand when they may rely on AI and when they must verify it. Most importantly, it gives the board a strategic rather than reactive basis for oversight.

That is compliance at its best. Not Dr. No from the Land of No, but the function that makes responsible growth possible.

Final Thoughts

Boards need not fear AI. But they do need to govern it. And governance begins with clarity about appetite. If your board has discussed an AI opportunity but not AI tolerance, it has only had half the conversation. If your company has adopted tools but has not defined acceptable levels of error, autonomy, dependency, and oversight, it is operating on hope. Hope, as every compliance professional knows, is not a strategy and certainly not a control.

Here are the questions I would leave you with. Has your board defined what level of AI-driven decision risk it is willing to accept? Can management explain how that appetite changes across low-risk and high-risk use cases? And can your compliance function show, with evidence, whether the company is operating inside those lines? If the answer is no, then the conversation boards are not having may be the most important AI conversation of all.


When AI Strategy Outruns Governance: What the Board Should Do Before Innovation Becomes Exposure

A scene is playing out in companies across the globe right now. Innovation teams are moving fast. Procurement is signing contracts. Business units are experimenting with copilots, workflow agents, and internal knowledge tools. Marketing is testing generative content. HR is evaluating AI for talent processes. Finance wants forecasting help. Security is watching from the corner. Legal is asking pointed questions. Compliance is handed the bill for governance after the train has already left the station. The reality, however, is that this is a board governance issue.

The problem is not that companies are moving too slowly on AI. In many organizations, the opposite is true. AI strategy is moving faster than the governance structure designed to oversee it. When that happens, the gap creates risk in ways boards understand very well: unmanaged decision-making, unclear accountability, inconsistent controls, fragmented reporting, and blind spots around operational resilience, ethics, and trust.

If you are a Chief Compliance Officer (CCO), this is your moment. Not to say no to AI. Not to become the Department of Technological Misery. But to help the board and senior leadership understand that AI governance is about capturing upside without swallowing avoidable downside. That is the central lesson. Strategy without governance is aspiration. Strategy with governance is a business discipline.

Why This Is a Board Issue

Boards are not expected to code models, evaluate vector databases, or decide which prompt library a business unit should use. They are expected to oversee risk, culture, controls, and management accountability. AI now sits squarely in that lane.

Once AI touches business processes, it can affect decision rights, data usage, customer interactions, employee treatment, financial reporting inputs, records management, and reputation. That means the board does not need to manage the machinery, but it must ensure a management system is in place for it.

This is where compliance can bring real value. Ethisphere’s latest work on the Ethics Premium makes a useful point for governance professionals: leading programs improve board reporting practices, including more frequent meetings with directors to ensure they receive the information needed for effective oversight, and they are also pushing documentation to be ready for AI-driven assistance so employees can find answers when they need them. In other words, mature governance is not static. It evolves as technology evolves.

That same report also reminds us that strong ethics and compliance systems are associated with higher returns, less downside, and faster recoveries, which is exactly the language boards understand when evaluating strategic risk and resilience.

So let us translate that lesson into the AI context. The board’s task is not to bless every shiny new tool. Its task is to ensure management has built an operating system for responsible AI use.

What a Board Should Do

The first thing a board should do is insist on a clear AI governance architecture. That means management should be able to answer basic questions cleanly and quickly. Who owns the enterprise AI strategy? Who approves high-risk use cases? Who validates controls before deployment? Who monitors incidents, exceptions, and drift? Who reports to the board? If five executives give five different answers, you do not have governance. You have theater.

Second, the board should require a risk-based inventory of AI use cases. I am continually amazed at how many organizations start with policy language before they know where AI is actually being used. That is backwards. Boards should ask for a current inventory of internal, customer-facing, employee-facing, and vendor-enabled AI use cases. The inventory should distinguish between low-risk productivity tools and higher-risk uses involving sensitive data, regulated processes, legal judgments, employment decisions, or customer outcomes. If management cannot map the use cases, it cannot credibly manage the risk.

Third, the board should demand decision-use discipline. Not every AI output deserves the same level of trust. Some uses are advisory. Some are operational. Some may influence consequential business judgments. Boards should ask management where AI outputs are being relied upon, who reviews them, and what level of human oversight is required before action is taken. The issue is not whether humans are “in the loop” as a slogan. The issue is whether human review is meaningful, documented, and tied to the use case’s risk.

Fourth, the board should require reporting that is intelligible, not merely technical. Board oversight fails when management delivers either fluff or jargon. Directors need reporting that answers practical questions: What are our top AI use cases? Which ones are classified as high risk? What incidents or near misses have occurred? What controls were tested? What third parties are material to our AI stack? What changed this quarter? What needs escalation? Good board reporting turns AI from mystique into management.
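
Reporting of that kind can be standardized rather than reinvented each quarter. Here is a minimal Python sketch of a board summary built from a use-case inventory and an incident log; the field names, schema, and example data are assumptions for illustration, not a defined reporting standard.

```python
def board_summary(use_cases, incidents):
    """Condense AI oversight data into the questions directors actually ask.

    use_cases: list of dicts with 'name' and 'tier' keys.
    incidents: list of dicts with a 'severity' key.
    Field names are illustrative assumptions, not a standard schema.
    """
    high_risk = [u["name"] for u in use_cases if u["tier"] == "high"]
    return {
        "total_use_cases": len(use_cases),
        "high_risk_use_cases": high_risk,
        "incidents_this_quarter": len(incidents),
        "needs_escalation": any(i["severity"] in ("high", "critical")
                                for i in incidents),
    }

# Example quarter (illustrative data only).
summary = board_summary(
    [{"name": "drafting copilot", "tier": "low"},
     {"name": "fraud detection model", "tier": "high"}],
    [{"severity": "low"}],
)
```

The value is the discipline, not the code: the same fields arrive every quarter, so directors can see trend lines instead of one-off reassurance.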

That point is entirely consistent with what Ethisphere identifies in leading ethics and compliance programs: improved board reporting practices that provide directors with the information they need for effective oversight.

Where Compliance Officers Can Help the Board Most

This is where the CCO earns their seat at the table.

First, the compliance function can help management create the classification framework. Compliance professionals know how to tier risk, define escalation paths, and build governance around business reality. You have been doing it for years with third parties, gifts and entertainment, investigations, and training. AI is a new technology, but the governance muscle memory is familiar.

Second, compliance can help build the policy-to-practice bridge. A glossy AI principles statement is not governance. Governance is what happens when procurement uses approved clauses, HR knows what tools it can use, managers understand escalation triggers, training is tailored to real workflows, and documentation supports decision-making. Ethisphere’s report notes that best-in-class programs are investing in clear, compelling documentation and training approaches designed for actual employee use, not simply for formal compliance completion. That is precisely the model AI governance needs.

Third, compliance can help the board by translating operational signals into governance signals. A rejected deployment, a data-permission problem, a hallucinated output in a sensitive workflow, a vendor change notice, a policy exception, or a spike in employee questions may each seem isolated. They are not. They are governance indicators. The CCO can aggregate them into trend lines that the board can actually use.

Fourth, compliance can help define the cadence and content of board reporting. Directors do not need every technical detail. They do need a disciplined dashboard and escalation protocol. Compliance is often the right function to help standardize that process, because it lives at the intersection of risk, policy, training, speak-up culture, investigations, and controls.

The Operational Reality Boards Must Understand

One reason AI governance lags strategy is that AI adoption is not happening in one place. It is happening everywhere. That decentralization is what makes governance hard. The legal team may be reviewing one contract while a business leader is piloting another tool within budget. An employee may paste sensitive information into a system that was never intended to accept it. A vendor may quietly add AI functionality to an existing platform. A manager may begin relying on generated summaries as if they are verified facts. None of this requires malicious intent. It only requires speed, convenience, and a little ambiguity. Corporate history teaches that those ingredients are often enough.

Boards, therefore, need to understand a simple truth: AI risk is not only model risk. It is workflow risk. It is data risk. It is governance risk. It is cultural risk. And culture matters here. Ethisphere found that nearly every honoree equips managers with toolkits and talk tracks to discuss ethical dilemmas with their teams, and 51% require managers to do so. That should be a flashing neon sign for AI governance. If managers are not talking with employees about responsible use, escalation expectations, and when not to trust the machine, the company is relying on hope as a control. Hope is not a control. It is a prayer.

Final Thoughts

When AI strategy outruns governance, the problem is not innovation. The problem is unmanaged innovation. Boards should not respond by slamming on the brakes. They should respond by insisting on lanes, guardrails, dashboards, and accountability.

For compliance officers, the opportunity is enormous. You can help the board ask better questions. You can help management build a governance operating system. You can help the business adopt AI faster, smarter, and more defensibly.

That is the larger point. Compliance is not there to suffocate strategy. Compliance is there to make the strategy sustainable.

Here are the questions I would leave you with:

  • Does your board receive meaningful AI oversight reporting, or only periodic reassurance?
  • Can your company identify its highest-risk AI use cases today, not next quarter?
  • If a director asked tomorrow who owns AI governance end-to-end, would the answer be immediate and credible?
If not, your AI strategy may already be outrunning your governance.