Compliance Week 2026: AI Governance Highlights

The 21st Annual Compliance Week Conference made one point unmistakably clear: AI is no longer a technology issue sitting outside the compliance function. It is now a governance, risk, controls, culture, and accountability issue. Across the conference, AI appeared in nearly every discussion, from practical tools for compliance teams to regulatory uncertainty, shadow AI, third-party risk, and board oversight. The central message for compliance professionals was clear: AI must be governed with the same discipline, documentation, monitoring, and continuous improvement as any other enterprise risk.

That should not surprise any Chief Compliance Officer. The DOJ’s Evaluation of Corporate Compliance Programs (2024 ECCP) has long asked whether a compliance program is well-designed, adequately resourced, empowered to function effectively, and working in practice. Those same questions now apply to AI. The issue is not whether an organization is using AI. It almost certainly is. The issue is whether the company knows where AI is being used, who approved it, the risks it creates, the controls that apply, and whether those controls are being monitored.

AI Is Now a Compliance Governance Issue

The first major theme from Compliance Week 2026 was governance. AI may be exciting, efficient, and creative, but without governance, it can quickly become a source of unmanaged enterprise risk. That governance challenge begins with oversight. Who owns AI risk? Who approves AI use cases? Who determines whether a tool is appropriate for use with company data? Who has the authority to stop an AI project that is not meeting its stated purpose? These are not theoretical questions. They are the basic operating questions of an effective compliance program.

A company should not treat AI as a series of disconnected experiments. It should treat AI as part of the enterprise control environment. That means clear governance structures, documented approvals, defined risk owners, escalation protocols, monitoring, testing, and board reporting. The board does not need to become a group of AI engineers. But directors do need to understand whether management has created a defensible AI governance framework. They should ask how AI risks are identified, how high-risk use cases are reviewed, how third-party AI vendors are assessed, and how the company detects unauthorized AI use.

Shadow AI Is the Risk Hiding in Plain Sight

One of the strongest compliance lessons from the conference was the danger of shadow AI. Employees are already using AI tools, often because they are efficient, accessible, and easy to deploy. The problem is that ease of use can defeat governance. If employees are using ChatGPT, Claude, Gemini, Copilot, or other tools without authorization, training, or data restrictions, the company has a control gap. Confidential business information, financial data, personal information, customer information, or regulated data can move into systems the company does not control. That creates legal, privacy, cybersecurity, contractual, and reputational risk.

The answer is not simply to prohibit AI. That approach is unlikely to work. The better answer is to identify the tools being used, classify them by risk, authorize appropriate use cases, train employees, monitor usage, and make clear what data can and cannot be entered into an AI system. A strong AI governance program should include an AI use register. It should identify approved tools, owners, business purposes, data categories, risk ratings, controls, monitoring obligations, and renewal or reassessment dates. Without that inventory, a company cannot credibly claim to govern AI risk.
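
To make that concrete, here is a minimal sketch of what a single entry in an AI use register might look like, expressed as a Python data structure. The field names and example values are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRegisterEntry:
    """One illustrative entry in an AI use register (all field names are hypothetical)."""
    tool_name: str                # the approved tool
    owner: str                    # accountable business or risk owner
    business_purpose: str         # the approved use case
    data_categories: list[str]    # what data the tool may touch
    risk_rating: str              # e.g., "low", "medium", "high"
    controls: list[str]           # controls that apply to this use case
    monitoring_obligations: str   # how usage is reviewed, and by whom
    reassessment_date: date       # when the approval must be renewed

# Example entry with illustrative values only
entry = AIUseRegisterEntry(
    tool_name="Approved GenAI Assistant",
    owner="VP, Third-Party Risk",
    business_purpose="Summarize adverse media hits for analyst review",
    data_categories=["public adverse media"],
    risk_rating="medium",
    controls=["human review of output", "no confidential data input"],
    monitoring_obligations="Quarterly usage-log review by compliance",
    reassessment_date=date(2026, 12, 31),
)
```

Even a lightweight structure like this forces the organization to record an owner, a purpose, the data involved, and a reassessment date before a tool is approved.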

The Compliance Risk Management Model Already Works

One of the most important insights from the conference was that compliance professionals already have the right risk management framework. AI risk does not require abandoning the compliance discipline. It requires applying it.

The framework is familiar. Identify the risk. Develop a risk management strategy. Train employees. Implement the strategy. Monitor performance. Use data to improve the strategy continuously. That is the compliance operating model. It is also the right model for AI governance.

The 2024 ECCP emphasized risk-based compliance, data access, continuous improvement, and the effectiveness of controls in practice. Those expectations fit naturally into AI governance. A company should ask whether its AI controls are designed around actual risks, whether compliance has access to AI-related data, whether employees understand acceptable use, and whether the company can prove that its controls operate effectively. The lesson is straightforward. Do not build AI governance as a technology policy alone. Build it as a compliance program.

AI Risk Has Three Core Dimensions

The conference also highlighted the need to separate AI risk into practical categories. For compliance officers, three risk areas deserve immediate attention.

First, internal risk. This includes employee use of AI, shadow AI, unauthorized tools, misuse of confidential information, lack of training, and gaps in approval processes.

Second, external risk. This involves AI systems that affect customers, patients, consumers, investors, or other external stakeholders. These tools may raise issues involving fairness, privacy, transparency, discrimination, consumer protection, and regulatory obligations.

Third, third-party risk. Vendors, consultants, service providers, and sales agents may introduce AI into the company’s operations. A third-party vendor using AI in screening, analytics, customer service, data processing, or decision support can pose a risk to the company, even when the company did not build the tool.

This is where compliance must bring discipline. Third-party AI risk should be part of due diligence, contracting, audit rights, monitoring, and renewal. Companies should ask vendors what AI tools they use, what data those tools process, whether subcontractors are involved, how outputs are validated, and whether the company has audit rights over AI-related controls.
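
One way to operationalize those vendor questions is a standing questionnaire that travels with the diligence and renewal file. A minimal sketch follows; the questions mirror this paragraph, while the structure and function names are illustrative assumptions:

```python
# Illustrative vendor AI questionnaire mirroring the diligence questions above.
# The structure and function names are assumptions, not a prescribed standard.
VENDOR_AI_QUESTIONNAIRE = [
    "What AI tools do you use in delivering services to us?",
    "What of our data do those tools process, and where is it stored?",
    "Are subcontractors or third-party models involved? Identify them.",
    "How are AI outputs validated before they affect our account?",
    "Do we have audit rights over your AI-related controls?",
]

def open_items(responses: dict[str, str]) -> list[str]:
    """Questions left unanswered should block diligence sign-off."""
    return [q for q in VENDOR_AI_QUESTIONNAIRE if not responses.get(q)]

# Example: a partially completed diligence file
responses = {VENDOR_AI_QUESTIONNAIRE[0]: "Embedded screening model in our platform"}
print(f"{len(open_items(responses))} diligence questions remain open.")
```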

ROI Must Begin With the Business Purpose

AI projects should begin with a simple question: what problem are we trying to solve? Too many AI initiatives begin with pressure to “use AI” rather than a clear business case. That is not governance. That is technology enthusiasm without control or discipline. A compliance-minded AI review should ask whether the proposed tool has a defined use case, measurable business value, appropriate controls, and a clear owner. It should also ask whether the project is drifting from its original purpose. Mission creep is a real AI risk. A tool approved for one purpose can quickly be used for another. That creates new risks and may invalidate the original approval.

The more regulated the use case, the more important this analysis becomes. AI used in healthcare, employment, finance, consumer decisions, investigations, sanctions screening, or third-party risk management demands heightened scrutiny. ROI may not always appear as a direct financial return. Sometimes the business value is avoiding regulatory exposure, improving consistency, strengthening documentation, or reducing unmanaged risk.

Training Is No Longer Optional

AI training must move beyond general awareness. Employees need practical, role-based instruction. They need to know which tools are approved. They need to know what data is prohibited. They need to understand when human review is required. They need to know how to report AI concerns, errors, bias, hallucinations, or misuse. They also need to understand that AI output is not a substitute for professional judgment.

For compliance teams, training should include investigators, auditors, third-party managers, procurement, legal, finance, HR, IT, and business leaders. The message should be clear: AI can support the work, but it does not remove accountability.

Build AI In, Do Not Bolt It On

One of the most practical insights from the conference was that AI should be built into business processes, not bolted on afterward. That distinction matters. Bolted-on AI becomes a tool without governance. Built-in AI becomes part of the control environment.

For example, in third-party risk management, AI can help analyze due diligence responses, identify red flags, monitor adverse media, track contract obligations, and support ongoing risk scoring. But it must be embedded into a process with human oversight, escalation protocols, audit trails, and testing. The same applies to investigations, hotline analytics, policy management, training, and monitoring. AI should strengthen compliance processes, not bypass them.

The CCO Must Have a Seat at the AI Table

The compliance function should not wait to be invited into AI governance. It should claim its role. The CCO brings the language of risk, controls, accountability, documentation, monitoring, and culture. Those are precisely the disciplines AI governance requires. Compliance should help design AI approval workflows, risk assessments, training, third-party reviews, monitoring plans, and board reporting.

This does not mean compliance owns every AI decision. It means compliance must be part of the governance architecture. AI governance should be cross-functional, with legal, compliance, IT, privacy, cybersecurity, internal audit, procurement, HR, and the business working together. But compliance must ensure that the program is not simply innovative. It must be defensible.

Practical Takeaways for Compliance Professionals

  1. Create an AI inventory. Know what tools are being used, by whom, for what purpose, and with what data.
  2. Establish an AI governance committee. Include compliance, legal, IT, privacy, cybersecurity, internal audit, procurement, and business leadership.
  3. Build a risk-based approval process. High-risk AI use cases should require enhanced review, documentation, testing, and escalation.
  4. Address shadow AI directly. Do not assume employees are waiting for policy guidance. Identify actual use and bring it into governance.
  5. Train by role and risk. General AI awareness is not enough. Employees need practical rules for approved tools, prohibited data, human review, and reporting.
  6. Extend third-party risk management to AI. Vendor diligence, contracts, audit rights, monitoring, and renewal reviews should include AI-specific questions.
  7. Monitor and improve. AI governance is not a one-time policy exercise. It requires testing, metrics, incident review, and continuous improvement.

Board Questions

  1. Do we have an inventory of AI tools currently used across the enterprise?
  2. Who approves AI use cases, and how are high-risk uses escalated?
  3. How do we detect and manage shadow AI?
  4. What data is prohibited from being entered into AI tools?
  5. How are third-party AI vendors reviewed, contracted, monitored, and audited?
  6. What AI metrics does management provide to the board?
  7. Who has the authority to pause or terminate an AI project that creates unacceptable risk?

CCO Questions

  1. Is compliance involved before AI tools are deployed?
  2. Do our policies distinguish between approved, restricted, and prohibited uses of AI?
  3. Can we prove employees have been trained on AI risks?
  4. Do we have a documented AI risk assessment process?
  5. Are AI controls tested by internal audit or another independent function?
  6. Are AI incidents, errors, and misuse captured through speak-up and escalation systems?
  7. Can we show regulators that our AI governance works in practice?

Conclusion

Compliance Week 2026 confirmed that AI has crossed the threshold from emerging technology to core compliance risk. The companies that succeed will not be those that chase every new tool. They will be the companies that govern AI with discipline. For the modern CCO, this is the moment to step forward. AI governance belongs squarely within the compliance conversation because it involves risk, accountability, culture, controls, third parties, monitoring, and board oversight. Those are the foundations of effective compliance.

AI may change the tools. It does not change the obligation. Governance still matters. Controls still matter. Culture still matters. Accountability still matters. And compliance must help lead the way.


The Warner Bros. Bidding War: Part 2 – Board Governance Under Pressure

When a superior proposal emerges, the Board is no longer evaluating strategy. It is proving governance. The Warner Bros. transaction shows how fiduciary duty, disclosure discipline, and control execution must function in real time. We are exploring the Warner Bros./Netflix/Paramount bidding and purchase processes to draw lessons for the compliance professional. In Part 1, we focused on what happened. This post focuses on how the Board must respond when events accelerate.

The process moved from a negotiated transaction with Netflix to a contested situation with a rival bidder, Paramount. At that moment, the Board’s role shifted from approving a deal to managing an auction under fiduciary duty. This is the precise moment contemplated by Delaware fiduciary law and the Board oversight obligations often framed through the lens of Caremark duties. The question is no longer whether the Board can approve a transaction. The question becomes whether the Board can demonstrate that it acted on an informed basis, in good faith, and in the best interests of shareholders. That is not a conclusion. It is a record.

Waiver Discipline and the Fiduciary Record

In a live bidding environment, the Board will be asked to consider waiving contractual provisions, including standstill agreements, exclusivity clauses, and information-sharing restrictions. The governance risk is not the waiver itself. The governance risk is undocumented decision-making. A Board must ensure that every waiver is:

  • Reduced to writing with a defined scope and duration
  • Reviewed by counsel with a clear statement of fiduciary rationale
  • Reflected in contemporaneous Board minutes that explain why the waiver was necessary

Under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) framework, the question is whether the company can demonstrate that its processes work in practice. A waiver without documentation is indistinguishable from a control failure.

Termination Fees as Board-Level Risk

The WBD transaction turned the $2.8 billion termination fee into a live issue. When Paramount agreed to fund the fee, the Board had to evaluate more than price. It had to evaluate:

  • Who ultimately bears the economic and legal risk
  • Whether the funding mechanism introduces new contingencies
  • How the arrangement should be disclosed to shareholders

Termination fees are often treated as deal protections. In a contested process, they serve as mechanisms for risk allocation. That places them squarely within Board oversight. A Board that does not interrogate the assumptions behind a termination fee, including third-party assumptions, is not exercising informed judgment.

Real-Time Disclosure Controls

Disclosure obligations in a transaction are not periodic. They are continuous. Once a superior proposal is identified, the company must:

  • Update proxy materials where required
  • Ensure that all material information is disclosed without selective leakage
  • Align communications across legal, investor relations, and management

The governance challenge is that information moves faster than process. Emails, banker discussions, draft proposals, and internal analyses all become part of the evidentiary record. Boards must ask whether the company has a real-time disclosure protocol. This includes:

  • A defined disclosure committee process
  • A single point of accountability for filings such as Form 8-K
  • Controls over who can communicate with external stakeholders

This is where governance intersects directly with compliance. Disclosure failures are not merely technical. They can trigger enforcement exposure.

The 8-K and Proxy Playbook

In a fast-moving transaction, the company does not have the luxury of drafting disclosures from scratch. A Board should expect management to have a predefined playbook that includes the following:

  • Trigger thresholds for filing obligations
  • Pre-approved disclosure templates for common scenarios
  • A documented approval chain involving legal, finance, and executive leadership

The absence of such a playbook creates a delay. Delay creates inconsistency. Inconsistency creates risk. From a COSO internal control perspective, this is a failure in control activities and information and communication. From a DOJ perspective, it is evidence that the program is not operationalized.

Regulatory Readiness and Remedy Planning

Both competing transactions carried regulatory risk. The difference was how that risk was allocated and mitigated. A Board must understand the following:

  • The regulatory approval pathways
  • The likelihood of a challenge
  • The remedies available if regulators object

More importantly, the Board must ensure that management has pre-developed the following:

  • Divestiture scenarios
  • Behavioral remedies
  • Escrow or holdback mechanisms tied to regulatory outcomes

This is not theoretical planning. It is part of the decision to determine which proposal is superior. A Board that does not understand regulatory risk is not fully evaluating the transaction’s value.

Post-Termination Control and Evidence Custody

When WBD terminated the agreement with Netflix, the transaction did not end. It transitioned into a new phase of risk. The company must:

  • Ensure proper handling of confidential information shared during the termination process
  • Preserve all records relevant to the decision-making process
  • Maintain audit trails for potential litigation or regulatory review

This is where evidence discipline becomes critical. The record must be complete, organized, and defensible. In the absence of such controls, the company risks being unable to demonstrate how decisions were made.

Why This Matters for Boards

The WBD process illustrates that governance is tested when conditions change rapidly. A Board cannot build governance in the middle of a transaction. It must already exist. The DOJ and SEC will not evaluate the Board based on the outcome. They will evaluate the Board based on the effectiveness of its processes, documentation, and controls. This is the essence of modern corporate governance. It is not about whether the Board chose Netflix or Paramount. It is about whether the Board can prove how and why it made that choice.

Practical Takeaways for Boards

  1. Ensure that superior proposal mechanics are understood at the Board level before a transaction is signed.
  2. Treat termination fees and regulatory protections as governance issues requiring full Board engagement.
  3. Demand real-time disclosure controls with clear ownership and escalation protocols.
  4. Require a pre-built 8-K and proxy playbook to manage disclosure risk under time pressure.
  5. Mandate regulatory scenario planning as part of transaction evaluation.

Questions for the Board

  1. Can the Board demonstrate, through contemporaneous documentation, how it evaluated a superior proposal?
  2. Does the company have a real-time disclosure control framework that supports rapid filings and updates?
  3. Are termination fee structures and third-party funding arrangements fully understood and documented?
  4. Has the Board reviewed regulatory risk scenarios and approved a default remedy strategy?
  5. Who is accountable for evidence preservation and record integrity during and after the transaction?

Please join us tomorrow for our final post, where we will focus on the Chief Compliance Officer. The question will be direct. What must a CCO do, in operational terms, to ensure that the company can execute governance under pressure and prove it after the fact?


The Warner Bros. Bidding War: Part 1 – What Happened and Why Compliance Professionals Should Care

A fast-moving corporate auction shows how deal terms, fiduciary duties, disclosure controls, regulatory risk, and evidence discipline can determine the outcome of a major transaction. Over the rest of this week, I will be exploring the Warner Bros./Netflix/Paramount bidding war and the lessons it holds for the compliance professional on each of those fronts.

The Deal That Changed Direction

The Warner Bros./Netflix/Paramount bidding war is one of those corporate stories that looks like Hollywood drama on the surface but is really a governance story underneath. At first, Warner Bros. (WBD) had an agreed transaction with Netflix. That deal carried a $2.8 billion company termination fee payable by WBD under specified circumstances, including termination to enter into a superior proposal. The proxy materials also disclosed a $5.8 billion regulatory termination fee payable by Netflix if the deal failed for certain regulatory reasons. (SEC)

Then Paramount Skydance (Paramount) came back with a revised proposal. It raised the bid to $31 per WBD share in cash, added a ticking fee, offered a $7 billion regulatory termination fee, and agreed to fund the $2.8 billion termination fee owed to Netflix. (SEC) Reuters reported that WBD said the revised Paramount proposal could be considered superior, which set the process in motion. (Reuters)

By February 27, 2026, WBD terminated the Netflix agreement and entered into a merger agreement with Paramount Skydance. WBD later disclosed that Paramount Skydance paid the $2.8 billion Netflix termination fee on WBD’s behalf. (SEC)

That is the transaction story. The compliance story is deeper.

This Was Not Merely a Higher Price

In M&A, price matters. But price is rarely the only issue. Boards also look at certainty of closing, regulatory risk, financing, timing, shareholder value, legal exposure, and execution risk. Paramount did not merely increase the cash price. It addressed several deal objections at once. It offered to cover the Netflix break fee. It added a ticking fee if closing was delayed. It increased regulatory risk protection. It positioned its offer as cleaner, faster, and more certain than the existing transaction. (SEC)

That matters because boards do not evaluate superior proposals in a vacuum. They evaluate the entire package. The better governance question is not simply, “Which offer is higher?” It is, “Which offer delivers the best risk-adjusted value to shareholders, and can the Board prove how it reached that conclusion?”

The Termination Fee Became a Governance Issue

The $2.8 billion termination fee is an important part of the story. In ordinary conversation, that number sounds like a barrier. In this transaction, it became part of the competitive bidding structure. Paramount agreed to fund the termination fee, which changed the economics for WBD shareholders. WBD’s own annual report language later stated that, after the Board determined it had received a Company Superior Proposal and Netflix waived its right to propose revisions, WBD terminated the Netflix agreement and Paramount paid Netflix the $2.8 billion fee on WBD’s behalf. (SEC)

For compliance and governance professionals, this is the control point: when a large termination fee can be assumed, reimbursed, funded, or otherwise neutralized by a rival bidder, the company needs clear documentation showing who approved that structure, how it was analyzed, how it was disclosed, and how conflicts were managed.

Disclosure Was Not a Back-Office Exercise

In a contested transaction, disclosure is part of the control environment. The company must update shareholders, respond to rival communications, track proxy statements, preserve drafts, document board deliberations, and avoid selective disclosure. The Netflix proxy materials laid out the termination fee structure and the circumstances under which the fee could become payable. (SEC) Paramount’s revised proposal was also publicly communicated through SEC filings, including the increased $31-per-share cash price and the regulatory termination fee. (SEC)

This is where compliance should pay attention. A transaction can move faster than the company’s document discipline. Emails, banker calls, board materials, draft press releases, proxy supplements, and negotiation notes can become evidence. If the company does not have a real-time evidence protocol, the record will build itself, and rarely to the company’s advantage.

Why Compliance Professionals Should Care

Some believe this is a board-and-banker story. That is too narrow. It is also a compliance story because compliance is about governance, controls, documentation, accountability, escalation, and evidence. A high-stakes transaction tests whether the company’s control environment holds up under the highest pressure. It tests whether the Board receives complete information. It tests whether management understands escalation obligations. It tests whether legal, finance, communications, investor relations, and compliance can coordinate without losing the record.

This is exactly the kind of moment when the DOJ’s Evaluation of Corporate Compliance Programs is relevant, even outside an enforcement action. The central question is familiar: is the program well-designed, adequately resourced, empowered to function, and working in practice? In M&A, that means the compliance function should understand how deal governance intersects with disclosure controls, third-party risk, regulatory commitments, document preservation, and post-closing integration.

The Larger Lesson

The WBD bidding war shows that corporate governance is not theoretical. It is operational. A superior proposal clause is not just legal drafting. A termination fee is not just a financial number. A proxy supplement is not just a filing. Each is a control point. The companies that manage these moments well do three things. They make decisions through disciplined processes. They document the basis for those decisions in real time. They align governance, legal, finance, disclosure, and compliance before the crisis point arrives.

Practical Takeaways for Compliance Professionals

  1. Major transactions require evidence discipline from day one.
  2. Disclosure controls must be ready before a rival bidder appears.
  3. Termination fees and regulatory commitments should be treated as governance issues, not simply deal terms.
  4. Board minutes and waiver records must tell the fiduciary story.
  5. Compliance should have a seat at the broader transaction control table, especially when regulatory, third-party, data access, communications, and post-closing integration risks are implicated.

That is the lesson for every CCO. You may not be running the auction, but your program should help the company prove that it made decisions with integrity, evidence, and accountability.


Daily Compliance News: April 29, 2026, The Trial of the Century Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • PR exec tried to get rid of documents. (FT)
  • Why did First Brands hire BDO? (FT)
  • Altman v. Musk. Trial of the Century. (FT)
  • Should your Board appoint a Bot? (FT)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out Tom’s latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.


Corporate Value(s), Corporate Risk, and the Board’s Oversight Challenge

There was a time when many executives could treat corporate values as a branding exercise, a recruiting line, or a paragraph on the company website. That time is over. Today, corporate values are operational. They shape customer loyalty, employee engagement, regulatory attention, shareholder expectations, and public trust. Most importantly for boards and compliance professionals, they shape risk.

That is the central lesson of Corporate Value(s) by Jill Fisch and Jeff Schwartz. Their insight is both practical and profound: managers should select the corporate values that maximize long-term economic value, and to do that, they need reliable information about what stakeholders actually care about. The paper does not argue that corporations should become moral philosophers. It argues for something more useful for the compliance function. Corporate values are part of the long-term value equation, and management ignores them at its peril.

Why This Matters to Compliance

For a corporate compliance audience, this is not an abstract governance debate. It is a board oversight issue. It is a cultural issue. It is an internal controls issue. And it is a warning that values misalignment can become a business crisis long before it shows up in a formal investigation or on a quarterly earnings call.

The paper is particularly strong in rejecting two simplistic views. First, it rejects the notion that companies can operate as if values do not matter. Second, it rejects the idea that companies should chase social legitimacy untethered from business reality. Instead, the authors land where sophisticated boards and chief compliance officers should land: values matter because they affect value, and management needs disciplined ways to understand that connection.

Culture as a Control

That is where compliance comes in. Too often, companies treat culture as a soft concept and values as a public relations topic. Yet every experienced compliance professional knows that culture is a control. It influences decision-making when policy manuals are silent, when incentives are misaligned, and when leaders face pressure. Corporate values, when operationalized correctly, help define that culture. They tell employees, managers, and third parties what the company stands for when the choice is not easy, the answer is not obvious, and money is on the line.

The paper notes that values-based concerns now influence a broad range of business decisions, from product design and sourcing to employment policies and public positioning. It also emphasizes that employees, customers, governments, and shareholders all communicate their values and preferences in different ways, and that management must stay attuned to those preferences, as misalignment can carry real economic consequences. That is precisely the language of risk management.

A Governance Issue for the Board

For boards, this means values cannot be siloed in human resources, investor relations, or communications. Values belong in governance. Boards need to ask not only what the company says its values are, but how those values are translated into operations, incentives, escalation, and response. If culture is a control, then values are part of the control environment.

This is also why corporate values should be viewed as a business risk issue. A values mismatch can trigger employee walkouts, consumer backlash, shareholder agitation, government retaliation, or a reputational spiral amplified through social media. The paper offers multiple examples showing how value-related decisions can carry material economic consequences. For the modern board, that means values are no longer a side conversation. They are part of enterprise risk management.

The paper offers another insight that compliance professionals should take seriously. Management often lacks perfect information about stakeholder values, and shareholders face structural impediments in communicating their views clearly. The authors argue that shareholder input can help management better understand public sentiment, reputational risk, and the tradeoffs between values and value. Whether one agrees with every detail of their governance analysis, the broader compliance lesson is straightforward: management needs listening mechanisms before a crisis hits.

Compliance as an Information System

That point should resonate deeply with compliance professionals. A mature compliance program is, at its core, an information system. It is supposed to tell management what it needs to know before misconduct metastasizes. The same is true for values-based risk. If the only time leadership learns that employees, customers, or investors believe the company is out of step is when a boycott begins, or a viral post explodes, the company’s information channels have already failed.

What Boards Should Do

  1. Boards should insist that management identify the company’s most material values-sensitive risk areas. These will vary by industry. For one company, it may be product safety. For another, environmental performance. For another, labor standards, DEI, or political engagement. The important point is that these issues should be mapped as risk categories, not simply discussed as messaging challenges.
  2. Boards should ask whether the company has credible mechanisms to hear from stakeholders before controversy becomes a crisis. The paper emphasizes that employees and customers often have clearer channels to express their values and preferences than shareholders do. A compliance-minded board should ask: Are we learning from all of them? Are we capturing concerns through speak-up systems, culture assessments, employee town halls, customer trends, market testing, and investor engagement? Or are we waiting for a public backlash to tell us what we should already know?
  3. Boards should evaluate whether management is treating corporate culture as a control. This means looking beyond tone at the top to the systems beneath it: incentives, middle-management behavior, escalation pathways, decision rights, and accountability. Values that live only in a code of conduct are decorative. Values that influence promotions, discipline, product choices, third-party oversight, and crisis response become operational.
  4. Boards should ensure that compliance has a seat at the table when values-laden business decisions are made. The compliance function should not decide corporate values. That is not its role. But it should help management test assumptions, identify blind spots, assess stakeholder reactions, and determine whether a proposed course is consistent with the company’s culture and risk appetite. In that sense, compliance serves as both translator and challenger.
  5. Boards should resist the temptation to turn every values issue into a political debate. The paper wisely cautions against viewing corporations as moral leaders first and economic institutions second. That is a sound warning. But there is an equal and opposite danger in pretending that values are irrelevant to business. They are not. The board’s job is not to moralize. It is to govern. And governance today requires management to understand how stakeholder values affect long-term value.

Steps for Chief Compliance Officers

For chief compliance officers, there are some clear, practical steps to take.

Begin by incorporating values-sensitive issues into risk assessment and culture reviews. Build a process to identify where stakeholder expectations may materially affect the company’s operations, reputation, and control environment. Make sure that speak-up and escalation systems can capture values-based concerns, not only legal violations. Work with management to develop an early-warning capability around stakeholder sentiment. Bring boards concrete reporting on culture trends, employee concerns, reputational flashpoints, and areas where the company may be drifting away from its stated values. Finally, pressure-test whether the company’s incentives, communications, and business decisions align with the culture it claims to have.

The Bottom Line

The bottom line is this: corporate values are not soft. They are not ornamental. They are not outside the compliance function’s field of vision. They are part of how companies create value, lose trust, and invite risk. The real challenge for boards and CCOs is not to choose values in the abstract. It is to build the governance and information systems that help management understand stakeholder values before a crisis hits. That is not politics. That is good governance.


Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.
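
As a sketch of how those answers might be captured rather than assumed, consider a simple structured intake record. The fields below track the four board questions and the CCO-level checks described above; all names and values are hypothetical:

```python
# A hypothetical AI use-case intake record covering the four board questions
# and the CCO-level checks described above. Field names are illustrative.
intake_record = {
    "use_case": "Draft first-pass responses to customer service tickets",
    "board_questions": {
        "data_used": "Historical ticket text; no payment or health data",
        "privacy_implications": "Customer names present; retention limited to 90 days",
        "model_integrity_testing": "Validated on 500 sampled tickets before launch",
        "post_deployment_controls": "Human review of all outbound responses",
    },
    "cco_checks": {
        "privacy_review_completed": True,
        "third_party_diligence_on_file": True,
        "employee_training_current": True,
        "incident_logging_enabled": True,
    },
}

def ready_for_approval(record: dict) -> bool:
    """A simple gate: no approval while any answer or check is missing."""
    return all(record["board_questions"].values()) and all(record["cco_checks"].values())

assert ready_for_approval(intake_record)
```

The point of the structure is not automation. It is that every approval leaves a record showing the questions were asked and answered.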

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.


Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out the five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when there is no structure. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
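As one hedged illustration of what documented triggers can look like once they leave someone’s head, consider the following Python sketch. The trigger names track the examples above; the routing and any thresholds are assumptions a real program would define in its own policy or risk classification matrix.

```python
# Illustrative escalation-trigger check. Trigger names mirror the examples
# in the text; categories and routing are assumptions, not a standard.
ESCALATION_TRIGGERS = {
    "employment_decision",        # affects hiring, discipline, or termination
    "consumer_rights",            # affects legal rights of consumers
    "regulated_communication",    # regulated marketing or disclosures
    "financial_reporting",        # feeds books, records, or disclosures
    "sensitive_personal_data",    # processes sensitive personal data
    "model_drift",                # monitored performance has degraded
    "material_hallucination",     # fabricated output in a material context
    "unexplained_bias",           # disparity without a validated explanation
    "control_failure",            # a required control did not operate
    "vendor_issue",               # third-party AI vendor problem
    "credible_employee_concern",  # speak-up report about the use case
}

def requires_escalation(observed: set[str]) -> list[str]:
    """Return the documented triggers present in an observed use case."""
    return sorted(observed & ESCALATION_TRIGGERS)

hits = requires_escalation({"model_drift", "vendor_issue", "low_usage"})
if hits:
    # In a real program: notify the named owners, apply interim controls,
    # consider pausing deployment, and document the decision for follow-up.
    print(f"Escalate: triggers fired -> {hits}")
```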

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, where we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.

Categories
Blog

Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?
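As a rough illustration of stop/go decision points, the sketch below routes a proposed use case down one of four approval paths. The attribute names and routing rules are assumptions for demonstration, not a prescribed standard.

```python
# Illustrative stop/go routing for a proposed AI use case.
# Attribute names and rules are assumptions, not a prescribed standard.
def route_use_case(attrs: dict) -> str:
    """Return the approval path a proposed use case must follow."""
    if attrs.get("prohibited_category"):          # banned outright by policy
        return "PROHIBITED"
    if attrs.get("affects_legal_rights") or attrs.get("board_threshold_spend"):
        return "BOARD_OR_COMMITTEE_REVIEW"        # must go up before launch
    if attrs.get("sensitive_data") or attrs.get("customer_facing"):
        return "ENHANCED_REVIEW"                  # extra controls and records
    return "STANDARD_APPROVAL"                    # normal governance workflow

print(route_use_case({"customer_facing": True}))  # -> ENHANCED_REVIEW
```

The point is not the code itself; it is that the rules of the road exist before production, in a form anyone can apply consistently.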

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO/IEC 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must confront what I call the “Day Two” problem: what happens after launch? This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist the temptation to defer to it. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, where we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail. 

Categories
GSK in China: 13 Years Later

GSK In China: 13 Years Later – Where Was the Board? Director Oversight and Doing Business in China

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode examines why major bribery scandals occur “under the board’s nose,” using GSK as a launching point to explain directors’ legal and practical compliance responsibilities.

It traces oversight duties under Delaware law, highlighting Caremark’s good-faith duty to ensure information and reporting systems, Stone v. Ritter’s standard of liability for sustained or systematic oversight failure, and the business judgment rule. It contrasts “check-the-box” programs with risk-based oversight via the Piat case, where formal compliance masked illegal conduct embedded in business plans. The discussion ties board expectations to the FCPA guidance hallmarks, emphasizing tone at the top, empowered compliance functions with direct board access, DOJ/SEC scrutiny, SEC Reg. S-K Item 407 risk-oversight disclosures, and potential disgorgement. It then turns to China as a high-risk environment, third-party intermediary exposure, and M&A “deal-breaker” dilemmas requiring rigorous pre- and post-acquisition diligence, concluding with the paradox that boards may be incentivized toward plausible deniability. Our hosts are Timothy and Fiona.

Key highlights:

  • Compliance Starts at the Top
  • Caremark Duty Explained
  • FCPA Hallmarks for Boards
  • Passive Board Era Ends
  • Plausible Deniability Paradox

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: Notebook LM created the voices of the hosts, Timothy and Fiona, based on text written by Tom Fox.

Categories
Blog

AI Risk Appetite: The Conversation Boards Are Not Having

There is a quiet but serious problem developing in boardrooms around AI. Directors are hearing about innovation. They are hearing about productivity gains. They are hearing about competitive pressure, transformation, and speed. What they are not hearing enough about is risk appetite. That is the missing conversation.

Most companies are already using AI in one form or another. Some are deploying enterprise tools. Some are approving vendor solutions with embedded AI. Some are allowing business units to experiment in a controlled fashion. Some, of course, are doing all of the above and pretending it is a strategy. Yet for all the discussion about adoption, there has been far less focus on a basic governance question: what level of AI-driven decision risk is acceptable for this company? That is not a technical question. It is a board question.

The Risk Appetite Gap in AI Governance

AI is not simply another software purchase. It can influence recommendations, rankings, forecasts, summaries, classifications, and decisions. It can operate upstream from business judgments or directly within them. It can affect customer communications, hiring decisions, compliance monitoring, internal investigations, financial analysis, and reporting workflows. So the central governance challenge is not whether AI exists in the enterprise. It is how much authority the company is willing to give it, in what contexts, with what controls, and with what margin for error. If you do not define that, you do not have AI governance. You have AI optimism.

What Is AI Risk Appetite?

At its core, AI risk appetite is the level and type of AI-related risk an organization is willing to accept in pursuit of business value. That includes a series of questions boards ought to be asking. How much error is acceptable in AI-generated output before a human must intervene? Which uses are low-risk productivity enhancements, and which are sensitive, consequential, or reputation-threatening? In what contexts can AI make recommendations only, and in what contexts can it influence or automate action? How much dependence on opaque third-party models is acceptable? What degree of explainability does the company require for different use cases? When does speed stop being a benefit and start becoming exposure?
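One way to make those questions concrete, offered purely as a sketch: write the appetite down as per-tier thresholds. Every field and number below is an assumption that a board and management team would have to set for themselves.

```python
# Illustrative AI risk appetite expressed as per-tier thresholds.
# All values are assumptions; the point is that they are written down.
RISK_APPETITE = {
    "low": {
        "max_error_rate": 0.05,        # tolerable inaccuracy before review
        "autonomy": "recommend_only",  # AI suggests; a human decides
        "human_review": "sampled",     # spot-check outputs
        "explainability_required": False,
    },
    "high": {
        "max_error_rate": 0.005,
        "autonomy": "recommend_only",
        "human_review": "mandatory",   # every output reviewed before action
        "explainability_required": True,
    },
    "prohibited": None,                # no appetite: the use case may not run
}

def within_appetite(tier: str, observed_error_rate: float) -> bool:
    """Check one dimension of appetite: observed error vs. stated tolerance."""
    limits = RISK_APPETITE.get(tier)
    return bool(limits) and observed_error_rate <= limits["max_error_rate"]

print(within_appetite("high", 0.01))  # -> False: outside stated appetite
```

A table like this is what turns AI optimism into AI governance: a line the business knows it may not cross without escalation.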

Many boards are currently discussing AI deployment without ever discussing AI tolerance. That is like approving a global third-party strategy without deciding what level of distributor risk, sanctions exposure, or bribery risk the company is prepared to accept. No compliance professional would recommend that. Yet in AI, organizations do versions of it every day.

Why Boards Avoid the Conversation

There are several reasons boards have been slow to engage on AI risk appetite.

First, the technology moves fast, and the terminology can become a fog machine. Directors do not want to look uninformed, so discussions often stay broad and strategic. Second, management may not yet have the internal inventory or classification framework needed to make a risk-appetite conversation concrete. Third, many companies are still in an experimentation phase, which creates the illusion that formal governance can come later. Fourth, there is a natural tendency to believe AI risk belongs to IT, legal, or security, rather than to enterprise oversight.

AI risk appetite cannot be delegated away because it intersects with business judgment, ethics, records, privacy, data governance, resilience, and culture. It cuts across functions. It also cuts across reputational boundaries. If a company uses AI in a way that produces unfair results, faulty decisions, poor disclosures, or customer harm, nobody is going to say, “Well, that was a technical issue, so the board need not have been involved.” Boards do not get a hall pass when the governance system is missing.

The Conversations Boards Need to Be Having

Risk Map. The first conversation is about where AI sits on the company’s risk map. Is AI a productivity tool, a strategic platform, a decision-support capability, or some combination of all three? The answer matters because it affects the level of oversight. A company using AI for internal drafting support faces one type of exposure. A company using AI in customer-facing interactions, underwriting, hiring, fraud detection, or compliance monitoring faces quite another.

Decision Significance. Boards need to ask where AI is being used in decisions that affect legal rights, financial outcomes, customer treatment, employment status, compliance judgments, or public disclosures. Not all uses are equal. A board that treats AI use in marketing copy the same as AI use in employee discipline is not governing. It is lumping.

Acceptable Error and Human Review. Boards should ask: what level of inaccuracy can the company tolerate in a given use case, and who is accountable for checking the output before action is taken? Human oversight has become one of those phrases everybody likes and few define. Directors need something more disciplined. When is review mandatory? What does a meaningful review look like? What evidence shows that the reviewer is not simply rubber-stamping machine output?
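What might evidence of meaningful review look like? One hedged heuristic, sketched below in Python: flag reviewers whose behavior is statistically indistinguishable from rubber-stamping. The thresholds are invented for illustration and would need calibration and validation in practice.

```python
# Illustrative rubber-stamp heuristic. Thresholds are assumptions only.
def flag_rubber_stamping(reviews: list[dict],
                         min_seconds: float = 20.0,
                         max_approval_rate: float = 0.98) -> bool:
    """Flag a reviewer whose reviews are too fast and too uniformly approved."""
    if not reviews:
        return False
    avg_time = sum(r["seconds_spent"] for r in reviews) / len(reviews)
    approval_rate = sum(r["approved"] for r in reviews) / len(reviews)
    return avg_time < min_seconds and approval_rate > max_approval_rate

sample = [{"seconds_spent": 4, "approved": True} for _ in range(50)]
print(flag_rubber_stamping(sample))  # -> True: review is likely ceremonial
```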

Data and Model Dependency. What data is being used? Who owns it? Who has the right to it? How current is it? Are third-party vendors changing capabilities under existing contracts? Is the company becoming dependent on systems it does not fully understand or cannot easily audit? Boards should not need to know how the engine works, but they absolutely need to know whether the company is driving a car with uncertain brakes.

Incident Tolerance and Escalation. What types of AI failures must be reported to senior leadership or the board? A hallucinated internal memo may be embarrassing. A flawed AI-assisted hiring screen or customer communication may be far more serious. The board should ensure management has defined materiality thresholds before an incident occurs, not after the headlines begin.

The CCO’s Role in Shaping the Conversation

This is where compliance officers can be enormously helpful.

The CCO is often the person in the enterprise most experienced at turning abstract risk into operating discipline. Compliance knows how to frame risk-based governance. It knows how to create escalation structures, policy frameworks, investigations protocols, and oversight dashboards. It knows that culture and control design matter just as much as rules. Here are four ways the CCO can help shape the conversation.

  1. A CCO can help management develop a tiered inventory of AI use cases. This is essential. Boards cannot discuss appetite in the abstract. They need to see the map. Which uses are low risk? Which are medium? Which are high? Which are prohibited absent specific approval? (A minimal sketch of such an inventory follows this list.)
  2. Compliance can help translate legal, ethical, and operational concerns into board-level language. Directors do not need a seminar on neural networks. They need clear framing around consequences, control points, accountabilities, and thresholds.
  3. A CCO can help build governance around human review, documentation, and escalation. If the company says a human is responsible, compliance can help test whether that responsibility is real, documented, and operational.
  4. Compliance can keep the conversation grounded in how people actually behave. Employees will choose convenience. Business teams will move quickly. Vendors will market aggressively. Managers may trust the generated output more than they should. A good compliance officer knows that policy must be built for actual human behavior, not ideal behavior.
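For illustration only, here is a minimal sketch of the tiered inventory described in point 1. The tiers, fields, and entries are assumptions; a real register would mirror the company’s own classification matrix and data taxonomy.

```python
from dataclasses import dataclass

# Illustrative tiered AI use-case register. Tiers, fields, and entries
# are assumptions for demonstration, not a prescribed taxonomy.
@dataclass
class AIUseCase:
    name: str
    owner: str            # accountable business owner
    purpose: str          # stated business purpose
    data_categories: str  # e.g., "public", "internal", "sensitive personal"
    tier: str             # "low" | "medium" | "high" | "prohibited"
    reassess_by: str      # next scheduled reassessment date

REGISTER = [
    AIUseCase("marketing-draft-assist", "CMO", "first-draft copy",
              "public", "low", "2026-12-01"),
    AIUseCase("resume-screening", "CHRO", "candidate triage",
              "sensitive personal", "high", "2026-06-01"),
]

# Boards cannot discuss appetite in the abstract; a query like this
# puts the high-risk map on one page.
high_risk = [u.name for u in REGISTER if u.tier == "high"]
print(high_risk)  # -> ['resume-screening']
```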

Compliance as Risk Mitigation and Business Enablement

One of the enduring frustrations in compliance is that governance is often viewed as a speed bump until something goes wrong. AI gives us another chance to make the larger point. Governance does not slow innovation. Bad governance slows innovation by causing rework, distrust, remediation, and public embarrassment.

A well-defined AI risk appetite does the opposite. It gives the business clarity. It tells innovation teams where they can move quickly and where they must slow down. It helps procurement negotiate the right terms. It helps managers know when to escalate. It helps employees understand when they may rely on AI and when they must verify it. Most importantly, it gives the board a strategic rather than reactive basis for oversight.

That is compliance at its best. Not Dr. No from the Land of No, but the function that makes responsible growth possible.

Final Thoughts

Boards need not fear AI. But they do need to govern it. And governance begins with clarity about appetite. If your board has discussed an AI opportunity but not AI tolerance, it has only had half the conversation. If your company has adopted tools but has not defined acceptable levels of error, autonomy, dependency, and oversight, it is operating on hope. Hope, as every compliance professional knows, is not a strategy and certainly not a control.

Here are the questions I would leave you with. Has your board defined what level of AI-driven decision risk it is willing to accept? Can management explain how that appetite changes across low-risk and high-risk use cases? And can your compliance function show, with evidence, whether the company is operating inside those lines? If the answer is no, then this missing conversation may be the most important AI conversation of all.