Categories
Blog

Corporate Value(s), Corporate Risk, and the Board’s Oversight Challenge

There was a time when many executives could treat corporate values as a branding exercise, a recruiting line, or a paragraph on the company website. That time is over. Today, corporate values are operational. They shape customer loyalty, employee engagement, regulatory attention, shareholder expectations, and public trust. Most importantly for boards and compliance professionals, they shape risk.

That is the central lesson of Corporate Value(s) by Jill Fisch and Jeff Schwartz. Their insight is both practical and profound: managers should select the corporate values that maximize long-term economic value, and to do that, they need reliable information about what stakeholders actually care about. The paper does not argue that corporations should become moral philosophers. It argues for something more useful for the compliance function. Corporate values are part of the long-term value equation, and management ignores them at its peril.

Why This Matters to Compliance

For a corporate compliance audience, this is not an abstract governance debate. It is a board oversight issue. It is a cultural issue. It is an internal controls issue. And it is a warning that values misalignment can become a business crisis long before it shows up in a formal investigation or on a quarterly earnings call.

The paper is particularly strong in rejecting two simplistic views. First, it rejects the notion that companies can operate as if values do not matter. Second, it rejects the idea that companies should chase social legitimacy untethered from business reality. Instead, the authors land where sophisticated boards and chief compliance officers should land: values matter because they affect value, and management needs disciplined ways to understand that connection.

Culture as a Control

That is where compliance comes in. Too often, companies treat culture as a soft concept and values as a public relations topic. Yet every experienced compliance professional knows that culture is a control. It influences decision-making when policy manuals are silent, when incentives are misaligned, and when leaders face pressure. Corporate values, when operationalized correctly, help define that culture. They tell employees, managers, and third parties what the company stands for when the choice is not easy, the answer is not obvious, and money is on the line.

The paper notes that values-based concerns now influence a broad range of business decisions, from product design and sourcing to employment policies and public positioning. It also emphasizes that employees, customers, governments, and shareholders all communicate their values and preferences in different ways, and that management must stay attuned to those preferences, as misalignment can carry real economic consequences. That is precisely the language of risk management.

A Governance Issue for the Board

For boards, this means values cannot be siloed in human resources, investor relations, or communications. Values belong in governance. Boards need to ask not only what the company says its values are, but how those values are translated into operations, incentives, escalation, and response. If culture is a control, then values are part of the control environment.

This is also why corporate values should be viewed as a business risk issue. A values mismatch can trigger employee walkouts, consumer backlash, shareholder agitation, government retaliation, or a reputational spiral amplified through social media. The paper offers multiple examples showing how value-related decisions can carry material economic consequences. For the modern board, that means values are no longer a side conversation. They are part of enterprise risk management.

The paper offers another insight that compliance professionals should take seriously. Management often lacks perfect information about stakeholder values, and shareholders face structural impediments in communicating their views clearly. The authors argue that shareholder input can help management better understand public sentiment, reputational risk, and the tradeoffs between values and value. Whether one agrees with every detail of their governance analysis, the broader compliance lesson is straightforward: management needs listening mechanisms before a crisis hits.

Compliance as an Information System

That point should resonate deeply with compliance professionals. A mature compliance program is, at its core, an information system. It is supposed to tell management what it needs to know before misconduct metastasizes. The same is true for values-based risk. If the only time leadership learns that employees, customers, or investors believe the company is out of step is when a boycott begins or a viral post explodes, the company’s information channels have already failed.

What Boards Should Do

  1. Boards should insist that management identify the company’s most material values-sensitive risk areas. These will vary by industry. For one company, it may be product safety. For another, environmental performance. For another, labor standards, DEI, or political engagement. The important point is that these issues should be mapped as risk categories, not simply discussed as messaging challenges.
  2. Boards should ask whether the company has credible mechanisms to hear from stakeholders before controversy becomes a crisis. The paper emphasizes that employees and customers often have clearer channels to express their values and preferences than shareholders do. A compliance-minded board should ask: Are we learning from all of them? Are we capturing concerns through speak-up systems, culture assessments, employee town halls, customer trends, market testing, and investor engagement? Or are we waiting for a public backlash to tell us what we should already know?
  3. Boards should evaluate whether management is treating corporate culture as a control. This means looking beyond tone at the top to the systems beneath it: incentives, middle-management behavior, escalation pathways, decision rights, and accountability. Values that live only in a code of conduct are decorative. Values that influence promotions, discipline, product choices, third-party oversight, and crisis response become operational.
  4. Boards should ensure that compliance has a seat at the table when values-laden business decisions are made. The compliance function should not decide corporate values. That is not its role. But it should help management test assumptions, identify blind spots, assess stakeholder reactions, and determine whether a proposed course is consistent with the company’s culture and risk appetite. In that sense, compliance serves as both translator and challenger.
  5. Boards should resist the temptation to turn every values issue into a political debate. The paper wisely cautions against viewing corporations as moral leaders first and economic institutions second. That is a sound warning. But there is an equal and opposite danger in pretending that values are irrelevant to business. They are not. The board’s job is not to moralize. It is to govern. And governance today requires management to understand how stakeholder values affect long-term value.

Steps for Chief Compliance Officers

For chief compliance officers, there are some clear, practical steps to take.

  1. Incorporate values-sensitive issues into risk assessment and culture reviews.
  2. Build a process to identify where stakeholder expectations may materially affect the company’s operations, reputation, and control environment.
  3. Make sure that speak-up and escalation systems can capture values-based concerns, not only legal violations.
  4. Work with management to develop an early-warning capability around stakeholder sentiment.
  5. Bring the board concrete reporting on culture trends, employee concerns, reputational flashpoints, and areas where the company may be drifting away from its stated values.
  6. Pressure-test whether the company’s incentives, communications, and business decisions align with the culture the company claims to have.

The Bottom Line

The bottom line is this: corporate values are not soft. They are not ornamental. They are not outside the compliance function’s field of vision. They are part of how companies create value, lose trust, and invite risk. The real challenge for boards and CCOs is not to choose values in the abstract. It is to build the governance and information systems that help management understand stakeholder values before a crisis hits. That is not politics. That is good governance.


Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is data that is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?
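To make the lineage question concrete, here is a minimal sketch in Python of how a dataset’s provenance might be recorded and checked against an intended use. The record fields, the trigger of a non-empty history, and the `approved_for_use` helper are illustrative assumptions, not a reference to any particular data governance tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class LineageRecord:
    """One hop in a dataset's history: where it came from, what was done,
    and which uses the change was approved for."""
    source: str            # upstream system, vendor, or dataset
    transformation: str    # what happened at this hop (curation, filtering, labeling)
    approved_uses: tuple   # uses this hop was signed off for, e.g. ("support_triage",)
    recorded_on: date      # when the hop was documented

def approved_for_use(history: list, intended_use: str) -> bool:
    """Data is fit for an AI use case only if its lineage is non-empty and
    every hop in that lineage was approved for the intended use."""
    return bool(history) and all(intended_use in r.approved_uses for r in history)
```

In this sketch, a dataset exported from a CRM and curated for support triage would pass a check for that use but fail for any use its lineage never covered, which is exactly the question a board should expect management to be able to answer.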

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.
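As an illustration only, the evidence trail described above can be modeled as a simple completeness check over a use case’s governance file. The `REQUIRED_ARTIFACTS` set below is a hypothetical minimum; a real program would define its own categories and tie them to risk classification.

```python
from dataclasses import dataclass, field

# Hypothetical minimum evidence set; a real program would define its own
# categories and tie them to risk classification.
REQUIRED_ARTIFACTS = {
    "data_inventory",
    "privacy_assessment",
    "model_validation_summary",
    "approved_use_case_memo",
    "vendor_due_diligence",
}

@dataclass
class GovernanceFile:
    """The documentation on file for one AI use case."""
    use_case: str
    artifacts: dict = field(default_factory=dict)  # category -> document reference

def missing_artifacts(gfile: GovernanceFile) -> set:
    """Return the evidence categories not yet documented for this use case."""
    return REQUIRED_ARTIFACTS - gfile.artifacts.keys()
```

The point of even a toy model like this is that "how do you know your program works?" becomes answerable: the gaps are enumerable, reviewable, and assignable rather than anecdotal.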

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.


Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either begins with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own the infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer is enterprise-wide, supported by a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when structure is absent. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
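One hedged sketch of how such triggers might move out of someone’s head and into a machine-checkable risk classification matrix follows. The trigger categories, field names, and the `requires_escalation` helper are illustrative assumptions drawn from the examples above, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative trigger categories only; a real matrix would come from the
# company's documented policy and risk classification.
AREA_TRIGGERS = {
    "employment_decisions",
    "consumer_rights",
    "regulated_communications",
    "financial_reporting",
    "sensitive_personal_data",
}

INCIDENT_TRIGGERS = {
    "model_drift",
    "material_hallucination",
    "unexplained_bias",
    "control_failure",
}

@dataclass
class UseCase:
    name: str
    risk_areas: set = field(default_factory=set)       # domains the use case touches
    open_incidents: set = field(default_factory=set)   # observed issues, if any

def requires_escalation(uc: UseCase) -> bool:
    """Escalate when a use case touches a trigger area or has an open trigger incident."""
    return bool(uc.risk_areas & AREA_TRIGGERS) or bool(uc.open_incidents & INCIDENT_TRIGGERS)
```

Even a simple rule like this makes the governance point: escalation is deterministic and documentable rather than dependent on whoever happens to be in the room.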

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, where we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.

Categories
Blog

Five Corporate Governance Challenges in AI: A Roadmap for CCOs and Boards

AI is not simply a technology deployment question. It is a corporate governance challenge that requires board attention, compliance discipline, and operational oversight. For Chief Compliance Officers and board members, the task is not merely to encourage innovation, but to ensure that innovation is governed, monitored, and aligned with business values and risk tolerance.

Artificial intelligence has moved from pilot projects and innovation labs into the bloodstream of the modern corporation. It now touches customer service, finance, procurement, HR, sales, third-party management, internal reporting, and strategic decision-making. That expansion is why AI can no longer be treated as a narrow IT issue. It is a governance issue. More particularly, it is a governance issue with compliance implications at every lifecycle stage.

For compliance professionals, that means AI is not simply about whether a model works. It is about whether the organization has built the structures, accountability, and culture to use AI responsibly. For boards, it means AI oversight can no longer be delegated away with a cursory quarterly update. The board must understand not only where AI is being used, but whether the company’s governance architecture is fit for purpose.

This is the first post in a series examining the five most important corporate governance issues around AI. They are not exotic or theoretical. They are the same types of governance challenges compliance professionals have seen before in other contexts: ownership, control design, data integrity, monitoring, and culture. AI raises the stakes and accelerates the timeline.

1. Board Oversight and Accountability

The first challenge is the most fundamental: who is actually in charge?

One of the great failures in governance is diffuse accountability. When everyone has some responsibility, no one has real responsibility. AI governance suffers from this problem in many organizations. Legal is concerned about liability. IT is focused on systems. Security is focused on cyber risk. Privacy is focused on data usage. Compliance is focused on controls and conduct. Business leaders are focused on speed and competitive advantage. The board hears fragments from all of them, but may not receive a coherent picture.

That is a dangerous place to be. AI governance begins with clear ownership. The board should know who is accountable for enterprise AI governance, how decisions are escalated, and how high-risk use cases are reviewed. A company does not need bureaucracy for its own sake, but it does need clarity.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs remains instructive, even if AI is not its exclusive focus. The ECCP repeatedly asks whether compliance is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those same questions apply directly to AI governance. If accountability for AI is vague, if compliance is not in the room, or if oversight is not documented, governance will be performative rather than operational.

2. Strategy Outrunning Governance

The second challenge is one many companies know all too well: innovation is sprinting ahead while governance is still tying its shoes.

Business teams are under enormous pressure to deploy AI quickly. Senior leadership hears daily that AI can deliver efficiency, productivity, growth, and competitive advantage. Vendors promise transformation. Employees experiment informally. In that environment, governance can be cast as friction.

But good governance is not the enemy of innovation. It is what keeps innovation from becoming unmanaged exposure.

The central question here is simple: has the company defined the rules of the road before putting AI into production? In practical terms, has it determined which use cases are permissible, which require enhanced review, which are prohibited, and which must go to the board or a designated committee? Has it established approval criteria, documentation standards, and stop/go decision points?

The NIST AI Risk Management Framework is especially helpful on this point because it treats AI governance as an ongoing management discipline rather than a one-time sign-off. Its emphasis on Govern, Map, Measure, and Manage is a powerful reminder that strategy and governance must move together. ISO/IEC 42001 brings similar discipline by framing AI management systems around structure, accountability, controls, and continual improvement.

The lesson for compliance professionals is clear: if the business has a faster process for buying or launching AI than for reviewing risks and governance, it has already fallen behind.

3. Data Governance, Privacy, and Model Integrity

The third challenge is the quality and integrity of what goes into, and comes out of, AI systems.

AI does not operate in a vacuum. It depends on data, assumptions, training inputs, prompts, workflows, and human interaction. That means weaknesses in data governance are not side issues. They are central governance risks. Poor data lineage, unvalidated data sources, confidentiality breaches, inadequate access controls, and bias in training data can all create downstream failures that become legal, reputational, regulatory, and operational events.

For boards, the temptation is to hear “AI” and think about futuristic questions. But the more immediate concern is often much more familiar. Does management know where the data came from? Does the company understand whether sensitive or proprietary information is being exposed? Are outputs accurate enough for the intended use? Are the controls around data usage consistent with privacy obligations and internal policy?

This is where AI governance intersects with traditional compliance disciplines in a very real way. Privacy, information governance, records management, cybersecurity, and internal controls all converge here. A system that produces impressive outputs but relies on flawed or unauthorized data is not a governance success. It is a governance failure waiting to be discovered.

ISO 42001 is particularly useful because it forces organizations to think in systems terms. It is not merely about the model itself; it is about the management environment surrounding it. That is exactly how boards and CCOs should think about model integrity.

4. Ongoing Monitoring and the “Day Two” Problem

The fourth challenge is the one that too many organizations underestimate: governance after deployment. A great many companies put substantial effort into approving an AI use case, but far less into monitoring it once it is live. Yet this is where some of the greatest risks emerge. Models drift. Employees use tools for new purposes. Controls that looked solid on paper weaken in practice. Reviewers become overloaded. Risk profiles change. Regulators evolve their expectations. The use case expands far beyond its original design.

That is why AI governance must confront what I call the “Day Two” problem. What happens after launch? This is once again a place where the ECCP offers a useful lens. The DOJ does not ask merely whether a policy exists. It asks whether it works in practice, whether it is tested, and whether lessons learned are incorporated back into the program. AI governance should be held to the same standard. If the company has no way to monitor performance, investigate anomalies, log incidents, revalidate assumptions, or update controls, then it lacks effective AI governance. It has an approval memo.

The board should be asking for reporting that goes beyond usage metrics or efficiency gains. It should want to know about incidents, exception trends, control failures, validation results, and remediation efforts. In other words, governance must be dynamic because AI risk is dynamic.

5. Culture, Speak-Up, and Human Judgment

The fifth challenge may be the most overlooked, yet it is often the earliest warning system a company has: culture. Employees will usually see AI failures before leadership does. They will spot the odd output, the customer complaint, the biased result, the misuse of a tool, the shortcut around a control, or the inaccurate summary that could trigger a bad decision. The question is whether they will say something.

This is why AI governance is not solely about structure and policy. It is also about whether the organization has a culture that encourages people to raise concerns. Do employees understand that AI-related problems are reportable? Do they know where to raise them? Are managers trained to respond properly? Are anti-retaliation protections reinforced in this context?

Human judgment also matters because AI does not eliminate accountability. If anything, it heightens the need for judgment. A machine-generated output can create a false sense of confidence, especially when it arrives quickly and sounds authoritative. Boards and CCOs must resist that temptation. Human oversight is not a ceremonial step. It is an essential governance control.

The strongest AI governance programs will be the ones that connect structure with culture. They will not merely create committees and frameworks. They will create an environment where people trust the system enough to challenge it.

The Governance Road Ahead

For CCOs and boards, the governance challenge around AI is not mysterious. It is demanding, but it is not mysterious. The questions are recognizable. Who owns it? What are the rules? Can we trust the data? Are we monitoring the system over time? Will people speak up when something goes wrong?

These five issues form the roadmap for the series ahead. In the coming posts, I will take up each one in turn and explore what it means in practice for modern compliance programs and board oversight. Because if there is one lesson here, it is this: AI governance is not about admiring the technology. It is about governing the enterprise that uses it.

Join us tomorrow, where we review board oversight and accountability, because that is where every effective AI governance program either starts strong or starts to fail. 

Categories
GSK in China: 13 Years Later

GSK In China: 13 Years Later – Where Was the Board? Director Oversight and Doing Business in China

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode examines why major bribery scandals occur “under the board’s nose,” using GSK as a launching point to explain directors’ legal and practical compliance responsibilities.

It traces oversight duties under Delaware law, highlighting Caremark’s good-faith duty to ensure information and reporting systems, Stone v. Ritter’s standard for liability for sustained or systematic oversight failure, and the business judgment rule. It contrasts “check-the-box” programs with risk-based oversight via the Piat case, where formal compliance masked illegal conduct embedded in business plans. The discussion ties board expectations to the FCPA guidance hallmarks, emphasizing tone at the top, empowered compliance functions with direct board access, DOJ/SEC scrutiny, SEC Reg. S-K Item 407 risk-oversight disclosures, and potential disgorgement. It then focuses on China as a high-risk environment, third-party intermediary exposure, and M&A “deal-breaker” dilemmas requiring rigorous pre- and post-acquisition diligence, concluding with the paradox that boards may be incentivized toward plausible deniability. Our hosts are Timothy and Fiona.

Key highlights:

  • Compliance Starts at the Top
  • Caremark Duty Explained
  • FCPA Hallmarks for Boards
  • Passive Board Era Ends
  • Plausible Deniability Paradox

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Ed. Note: Notebook LM created the voices of the hosts, Timothy and Fiona, based on text written by Tom Fox

Categories
Blog

AI Risk Appetite: The Conversation Boards Are Not Having

There is a quiet but serious problem developing in boardrooms around AI. Directors are hearing about innovation. They are hearing about productivity gains. They are hearing about competitive pressure, transformation, and speed. What they are not hearing enough about is risk appetite. That is the missing conversation.

Most companies are already using AI in one form or another. Some are deploying enterprise tools. Some are approving vendor solutions with embedded AI. Some are allowing business units to experiment in a controlled fashion. Some, of course, are doing all of the above and pretending it is a strategy. Yet for all the discussion about adoption, there has been far less focus on a basic governance question: what level of AI-driven decision risk is acceptable for this company? That is not a technical question. It is a board question.

The Risk Appetite Gap in AI Governance

AI is not simply another software purchase. It can influence recommendations, rankings, forecasts, summaries, classifications, and decisions. It can operate upstream from business judgments or directly within them. It can affect customer communications, hiring decisions, compliance monitoring, internal investigations, financial analysis, and reporting workflows. So the central governance challenge is not whether AI exists in the enterprise. It is how much authority the company is willing to give it, in what contexts, with what controls, and with what margin for error. If you do not define that, you do not have AI governance. You have AI optimism.

What Is AI Risk Appetite?

At its core, AI risk appetite is the level and type of AI-related risk an organization is willing to accept in pursuit of business value. That includes a series of questions boards ought to be asking. How much error is acceptable in AI-generated output before a human must intervene? Which uses are low-risk productivity enhancements, and which are sensitive, consequential, or reputation-threatening? In what contexts can AI make recommendations only, and in what contexts can it influence or automate action? How much dependence on opaque third-party models is acceptable? What degree of explainability does the company require for different use cases? When does speed stop being a benefit and start becoming exposure?

Many boards are currently discussing AI deployment without ever discussing AI tolerance. That is like approving a global third-party strategy without deciding what level of distributor risk, sanctions exposure, or bribery risk the company is prepared to accept. No compliance professional would recommend that. Yet in AI, organizations do versions of it every day.

Why Boards Avoid the Conversation

There are several reasons boards have been slow to engage on AI risk appetite.

First, the technology moves fast, and the terminology can become a fog machine. Directors do not want to look uninformed, so discussions often stay broad and strategic. Second, management may not yet have the internal inventory or classification framework needed to make a risk-appetite conversation concrete. Third, many companies are still in an experimentation phase, which creates the illusion that formal governance can come later. Fourth, there is a natural tendency to believe AI risk belongs to IT, legal, or security, rather than to enterprise oversight.

AI risk appetite cannot be delegated away because it intersects with business judgment, ethics, records, privacy, data governance, resilience, and culture. It cuts across functions. It also cuts across reputational boundaries. If a company uses AI in a way that produces unfair results, faulty decisions, poor disclosures, or customer harm, nobody is going to say, “Well, that was a technical issue, so the board need not have been involved.” Boards do not get a hall pass when the governance system is missing.

The Conversations Boards Need to Be Having

Risk Map. The first conversation is about where AI sits on the company’s risk map. Is AI a productivity tool, a strategic platform, a decision-support capability, or some combination of all three? The answer matters because it affects the level of oversight. A company using AI for internal drafting support faces one type of exposure. A company using AI in customer-facing interactions, underwriting, hiring, fraud detection, or compliance monitoring faces quite another.

Decision Significance. Boards need to ask where AI is being used in decisions that affect legal rights, financial outcomes, customer treatment, employment status, compliance judgments, or public disclosures. Not all uses are equal. A board that treats AI use in marketing copy the same as AI use in employee discipline is not governing. It is lumping.

Acceptable Error and Human Review. Boards should ask: what level of inaccuracy can the company tolerate in a given use case, and who is accountable for checking the output before action is taken? Human oversight has become one of those phrases everybody likes and few define. Directors need something more disciplined. When is review mandatory? What does a meaningful review look like? What evidence shows that the reviewer is not simply rubber-stamping machine output?

Data and Model Dependency. What data is being used? Who owns it? Who has the right to it? How current is it? Are third-party vendors changing capabilities under existing contracts? Is the company becoming dependent on systems it does not fully understand or cannot easily audit? Boards should not need to know how the engine works, but they absolutely need to know whether the company is driving a car with uncertain brakes.

Incident Tolerance and Escalation. What types of AI failures must be reported to senior leadership or the board? A hallucinated internal memo may be embarrassing. A flawed AI-assisted hiring screen or customer communication may be far more serious. The board should ensure management has defined materiality thresholds before an incident occurs, not after the headlines begin.
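Defining materiality thresholds in advance can be as simple as a decision table agreed on before any tool goes live. A minimal sketch, where the severity labels and escalation levels are purely hypothetical:

```python
# Illustrative materiality thresholds, set before an incident occurs.
# Severity labels and escalation destinations are assumptions for the sketch.
def escalation_level(severity: str, customer_facing: bool) -> str:
    """Decide who must hear about an AI incident, based on pre-agreed rules."""
    if severity == "high" or (severity == "medium" and customer_facing):
        return "board"
    if severity == "medium":
        return "senior leadership"
    return "function owner"

print(escalation_level("medium", customer_facing=True))   # board
print(escalation_level("low", customer_facing=False))     # function owner
```

The value is not in the code; it is that the thresholds exist, are written down, and were not invented in the middle of a crisis.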

The CCO’s Role in Shaping the Conversation

This is where compliance officers can be enormously helpful.

The CCO is often the person in the enterprise most experienced at turning abstract risk into operating discipline. Compliance knows how to frame risk-based governance. It knows how to create escalation structures, policy frameworks, investigations protocols, and oversight dashboards. It knows that culture and control design matter just as much as rules. Here are four ways to do so.

  1. A CCO can help management develop a tiered inventory of AI use cases. This is essential. Boards cannot discuss appetite in the abstract. They need to see the map. Which uses are low risk? Which are medium? Which are high? Which are prohibited absent specific approval?
  2. Compliance can help translate legal, ethical, and operational concerns into board-level language. Directors do not need a seminar on neural networks. They need clear framing around consequences, control points, accountabilities, and thresholds.
  3. A CCO can help build governance around human review, documentation, and escalation. If the company says a human is responsible, compliance can help test whether that responsibility is real, documented, and operational.
  4. Compliance can keep the conversation grounded in how people actually behave. Employees will choose convenience. Business teams will move quickly. Vendors will market aggressively. Managers may trust the generated output more than they should. A good compliance officer knows that policy must be built for actual human behavior, not ideal behavior.
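The tiered inventory in point 1 lends itself to a simple classification structure. A minimal sketch, in which the tiers, example use cases, and review routes are all assumptions for illustration:

```python
# Hypothetical risk tiers mapped to review routes; none of these labels
# come from a prescribed taxonomy.
REVIEW_ROUTE = {
    "low": "manager approval",
    "medium": "compliance review",
    "high": "AI governance committee",
    "prohibited": "not permitted absent board-level exception",
}

# A toy inventory of use cases, each assigned a tier.
inventory = {
    "meeting-notes summarizer": "low",
    "vendor contract triage": "medium",
    "resume screening": "high",
    "fully automated employee discipline": "prohibited",
}

def review_route(use_case: str) -> str:
    """Return the escalation path for a use case. Unclassified cases get
    the most conservative treatment until someone classifies them."""
    tier = inventory.get(use_case, "high")
    return REVIEW_ROUTE[tier]

print(review_route("resume screening"))        # AI governance committee
print(review_route("unlisted chatbot pilot"))  # AI governance committee
```

Note the default: anything not yet on the map is treated as high risk. That single design choice encodes the board-level principle that ignorance is not a pass.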

Compliance as Risk Mitigation and Business Enablement

One of the enduring frustrations in compliance is that governance is often viewed as a speed bump until something goes wrong. AI gives us another chance to make the larger point. Governance does not slow innovation. Bad governance slows innovation by causing rework, distrust, remediation, and public embarrassment.

A well-defined AI risk appetite does the opposite. It gives the business clarity. It tells innovation teams where they can move quickly and where they must slow down. It helps procurement negotiate the right terms. It helps managers know when to escalate. It helps employees understand when they may rely on AI and when they must verify it. Most importantly, it gives the board a strategic rather than reactive basis for oversight.

That is compliance at its best. Not Dr. No from the Land of No, but the function that makes responsible growth possible.

Final Thoughts

Boards need not fear AI. But they do need to govern it. And governance begins with clarity about appetite. If your board has discussed an AI opportunity but not AI tolerance, it has only had half the conversation. If your company has adopted tools but has not defined acceptable levels of error, autonomy, dependency, and oversight, it is operating on hope. Hope, as every compliance professional knows, is not a strategy and certainly not a control.

Here are the questions I would leave you with. Has your board defined what level of AI-driven decision risk it is willing to accept? Can management explain how that appetite changes across low-risk and high-risk use cases? And can your compliance function show, with evidence, whether the company is operating inside those lines? If the answer is no, then the conversation your board is not having may be the most important AI conversation of all.

Categories
Blog

When AI Strategy Outruns Governance: What the Board Should Do Before Innovation Becomes Exposure

A scene is playing out in companies across the globe right now. Innovation teams are moving fast. Procurement is signing contracts. Business units are experimenting with copilots, workflow agents, and internal knowledge tools. Marketing is testing generative content. HR is evaluating AI for talent processes. Finance wants forecasting help. Security is watching from the corner. Legal is asking pointed questions. Compliance is handed the bill for governance after the train has already left the station. But the reality is that AI is a board governance issue.

The problem is not that companies are moving too slowly on AI. In many organizations, the opposite is true. AI strategy is moving faster than the governance structure designed to oversee it. When that happens, the gap creates risk in ways boards understand very well: unmanaged decision-making, unclear accountability, inconsistent controls, fragmented reporting, and blind spots around operational resilience, ethics, and trust.

If you are a Chief Compliance Officer (CCO), this is your moment. Not to say no to AI. Not to become the Department of Technological Misery. But to help the board and senior leadership understand that AI governance is about capturing upside without swallowing avoidable downside. That is the central lesson. Strategy without governance is aspiration. Strategy with governance is a business discipline.

Why This Is a Board Issue

Boards are not expected to code models, evaluate vector databases, or decide which prompt library a business unit should use. They are expected to oversee risk, culture, controls, and management accountability. AI now sits squarely in that lane.

Once AI touches business processes, it can affect decision rights, data usage, customer interactions, employee treatment, financial reporting inputs, records management, and reputation. That means the board does not need to manage the machinery, but it must ensure a management system is in place for it.

This is where compliance can bring real value. Ethisphere’s latest work on the Ethics Premium makes a useful point for governance professionals: leading programs improve board reporting practices, including more frequent meetings with directors to ensure they receive the information needed for effective oversight. They are also preparing their documentation for AI-driven assistance so employees can find answers when they need them. In other words, mature governance is not static. It evolves as technology evolves.

That same report also reminds us that strong ethics and compliance systems are associated with higher returns, less downside, and faster recoveries, which is exactly the language boards understand when evaluating strategic risk and resilience.

So let us translate that lesson into the AI context. The board’s task is not to bless every shiny new tool. Its task is to ensure management has built an operating system for responsible AI use.

What a Board Should Do

The first thing a board should do is insist on a clear AI governance architecture. That means management should be able to answer basic questions cleanly and quickly. Who owns the enterprise AI strategy? Who approves high-risk use cases? Who validates controls before deployment? Who monitors incidents, exceptions, and drift? Who reports to the board? If five executives give five different answers, you do not have governance. You have theater.

Second, the board should require a risk-based inventory of AI use cases. I am continually amazed at how many organizations start with policy language before they know where AI is actually being used. That is backwards. Boards should ask for a current inventory of internal, customer-facing, employee-facing, and vendor-enabled AI use cases. The inventory should distinguish between low-risk productivity tools and higher-risk uses involving sensitive data, regulated processes, legal judgments, employment decisions, or customer outcomes. If management cannot map the use cases, it cannot credibly manage the risk.

Third, the board should demand decision-use discipline. Not every AI output deserves the same level of trust. Some uses are advisory. Some are operational. Some may influence consequential business judgments. Boards should ask management where AI outputs are being relied upon, who reviews them, and what level of human oversight is required before action is taken. The issue is not whether humans are “in the loop” as a slogan. The issue is whether human review is meaningful, documented, and tied to the use case’s risk.

Fourth, the board should require reporting that is intelligible, not merely technical. Board oversight fails when management delivers either fluff or jargon. Directors need reporting that answers practical questions: What are our top AI use cases? Which ones are classified as high risk? What incidents or near misses have occurred? What controls were tested? What third parties are material to our AI stack? What changed this quarter? What needs escalation? Good board reporting turns AI from mystique into management.
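A dashboard of this kind can be assembled from simple counts over an incident log. The sketch below is illustrative, and its field names and severity labels are assumptions, but it shows how operational signals condense into answers to the directors’ questions:

```python
from collections import Counter

# Illustrative incident log entries: (use_case, severity, remediated)
incidents = [
    ("customer chatbot", "high", False),
    ("customer chatbot", "low", True),
    ("forecasting model", "medium", True),
]

def board_report(incidents, high_risk_use_cases):
    """Condense operational signals into what directors actually ask:
    what happened, how bad was it, what is still open, what is high risk."""
    by_severity = Counter(sev for _, sev, _ in incidents)
    open_items = [(uc, sev) for uc, sev, fixed in incidents if not fixed]
    return {
        "incidents_by_severity": dict(by_severity),
        "open_remediations": open_items,
        "high_risk_use_cases": sorted(high_risk_use_cases),
        "needs_escalation": any(sev == "high" for _, sev in open_items),
    }

report = board_report(incidents, {"resume screening", "customer chatbot"})
print(report["needs_escalation"])  # True: an unremediated high-severity incident
```

Nothing here requires sophisticated tooling. What it requires is that incidents, severities, and remediation status are captured consistently enough to aggregate, which is itself a governance test.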

That point is entirely consistent with what Ethisphere identifies in leading ethics and compliance programs: improved board reporting practices that provide directors with the information they need for effective oversight.

Where Compliance Officers Can Help the Board Most

This is where the CCO earns their seat at the table.

First, the compliance function can help management create the classification framework. Compliance professionals know how to tier risk, define escalation paths, and build governance around business reality. You have been doing it for years with third parties, gifts and entertainment, investigations, and training. AI is a new technology, but the governance muscle memory is familiar.

Second, compliance can help build the policy-to-practice bridge. A glossy AI principles statement is not governance. Governance is what happens when procurement uses approved clauses, HR knows what tools it can use, managers understand escalation triggers, training is tailored to real workflows, and documentation supports decision-making. Ethisphere’s report notes that best-in-class programs are investing in clear, compelling documentation and training approaches designed for actual employee use, not simply for formal compliance completion. That is precisely the model AI governance needs.

Third, compliance can help the board by translating operational signals into governance signals. A rejected deployment, a data-permission problem, a hallucinated output in a sensitive workflow, a vendor change notice, a policy exception, or a spike in employee questions may each seem isolated. They are not. They are governance indicators. The CCO can aggregate them into trend lines that the board can actually use.

Fourth, compliance can help define the cadence and content of board reporting. Directors do not need every technical detail. They do need a disciplined dashboard and escalation protocol. Compliance is often the right function to help standardize that process, because it lives at the intersection of risk, policy, training, speak-up culture, investigations, and controls.

The Operational Reality Boards Must Understand

One reason AI governance lags strategy is that AI adoption is not happening in one place. It is happening everywhere. That decentralization is what makes governance hard. The legal team may be reviewing one contract while a business leader is piloting another tool within budget. An employee may paste sensitive information into a system that was never intended to accept it. A vendor may quietly add AI functionality to an existing platform. A manager may begin relying on generated summaries as if they are verified facts. None of this requires malicious intent. It only requires speed, convenience, and a little ambiguity. Corporate history teaches that those ingredients are often enough.

Boards, therefore, need to understand a simple truth: AI risk is not only model risk. It is workflow risk. It is data risk. It is governance risk. It is culture risk. And culture matters here. Ethisphere found that nearly every honoree equips managers with toolkits and talk tracks to discuss ethical dilemmas with their teams, and 51% require managers to do so. That should be a flashing neon sign for AI governance. If managers are not talking with employees about responsible use, escalation expectations, and when not to trust the machine, the company is relying on hope as a control. Hope is not a control. It is a prayer.

Final Thoughts

When AI strategy outruns governance, the problem is not innovation. The problem is unmanaged innovation. Boards should not respond by slamming on the brakes. They should respond by insisting on lanes, guardrails, dashboards, and accountability.

For compliance officers, the opportunity is enormous. You can help the board ask better questions. You can help management build a governance operating system. You can help the business adopt AI faster, smarter, and more defensibly.

That is the larger point. Compliance is not there to suffocate strategy. Compliance is there to make the strategy sustainable.

Here are the questions I would leave you with:

  • Does your board receive meaningful AI oversight reporting, or only periodic reassurance?
  • Can your company identify its highest-risk AI use cases today, not next quarter?
  • If a director asked tomorrow who owns AI governance end-to-end, would the answer be immediate and credible?

If not, your AI strategy may already be outrunning your governance.

AI Governance and Fiduciary Duty: Board Oversight of AI As Core Governance

There was a time when boards could treat AI as a management-side innovation issue, something for the technology team, the innovation committee, or perhaps an occasional strategy offsite. That time is ending. For compliance professionals, AI is no longer just a technology story; it is a governance story. And once it becomes a governance story, boards need to pay attention through the lens they know best: fiduciary duty.

The issue is not whether every director needs to become an engineer. They do not. The issue is whether the board is exercising appropriate oversight over a capability that can materially affect legal exposure, operational resilience, internal controls, reputation, and enterprise value. Under that lens, ignoring AI oversight begins to look less like prudence and more like a governance gap.

The Board Question Is No Longer “Do We Use AI?”

Too many board discussions still start in the wrong place. A director asks, “Are we using AI?” Management says yes, in a handful of pilots. Another director asks whether there is a policy. Legal says yes, one is being drafted. Everyone nods, reassured that the matter is under control. That is not oversight. That is atmospherics.

The real board questions are different. Where is AI being used? What decisions does it influence? What data does it rely on? Who owns it? How is risk assessed? What controls are in place? What gets reported upward when something changes or goes wrong?

COSO’s GenAI guidance is quite direct on this point. It states that the board of directors must have visibility into GenAI use and associated risks, including regular reporting on adoption, key risk indicators, incidents, and material changes to high-impact use cases. It also says oversight bodies should have the capacity to challenge assumptions, request independent validation, and direct corrective action.

Fiduciary Duty Means Oversight, Not Technical Mastery

The fiduciary duty standard is more practical and more familiar. Directors are expected to exercise informed oversight over material risk. If AI is shaping material processes, material decisions, or material exposures, then the board should ask how management governs it and what evidence supports that confidence.

This is where compliance can be a true translator. We understand how to connect abstract governance expectations to operational proof. We know the difference between having a policy and having a control. We know that a dashboard without escalation is theater. We know that a pilot without documentation is an anecdote. And we know that “the business owns it” is not enough unless ownership is defined, trained, monitored, and accountable.

COSO again gives a helpful framework. It emphasizes clear ownership of each GenAI tool, platform, or capability, with defined authority, escalation paths, and documented scope of use. It further stresses that assigning ownership without the capability to deliver invites failure, and that accountability should be tied not only to adoption but also to accuracy, safety, compliance, and adherence to controls. Boards do not need to run AI. But they do need assurance that someone competent owns it and that the ownership model is real.

Why AI Oversight Is Different from Ordinary IT Oversight

Some directors may be tempted to ask whether this is simply cybersecurity oversight or digital-transformation oversight under a new name. There is overlap, certainly, but AI presents a different governance profile. COSO notes several characteristics that distinguish GenAI. It is dynamic: models, prompts, and retrieval data can change frequently, requiring continuous risk assessment, change control, and monitoring. It is easily scalable, meaning it can amplify errors and bias as readily as it can amplify efficiency. It has a low barrier to entry, which increases the risk of shadow AI and ungoverned adoption. And critically, it can be confidently wrong.

That last point is especially important for boards. A broken machine usually signals that it is broken. AI often does the opposite. It produces polished, persuasive, and highly plausible output even when it is materially mistaken. That means traditional management confidence can be a weak proxy for actual reliability. Boards, therefore, need a different kind of assurance model, one that asks not only whether the system is in place, but whether the organization can validate outputs, explain limitations, monitor drift, and intervene when use cases expand beyond what was originally approved.

The Governance Gap Boards Must Avoid

Here is where the fiduciary-duty lens becomes especially useful. The governance failure in the AI era is unlikely to be that a board has never heard the term “AI.” Every board in America has heard it. The failure is more likely to be subtler and therefore more dangerous: the board heard about AI in broad strategic terms but never built a repeatable oversight mechanism around it.

That is the governance gap.

It shows up when management reports adoption but not risk classification.

It shows up when directors hear about productivity gains but not control failures.

It shows up when there is an AI policy but no inventory of use cases.

It shows up when there is enthusiasm about innovation but no discussion of third-party dependencies, data quality, escalation paths, or human review.

It shows up when incidents are handled ad hoc rather than through a defined reporting structure.

COSO warns that rapid iteration can outpace existing processes, and that prompts, thresholds, and retrieval connectors are critical configuration elements that require the same rigor as other controlled system settings. It also highlights third-party and vendor risk, noting that outsourced GenAI capabilities can limit visibility into training data, model updates, data handling, and underlying controls.

In other words, the board should not assume AI risk is contained simply because a vendor is involved or because the tool sits inside a familiar enterprise platform. That should sharpen the oversight question.

What Good Board Oversight Looks Like

The good news is that effective AI oversight is not mystical. It looks a great deal like good oversight in other high-risk areas. It is structured, periodic, evidence-based, and tied to accountability. At a minimum, boards should expect management to provide five things.

  1. An inventory of material AI use cases, categorized by risk and business impact.
  2. A governance structure that identifies owners, review forums, escalation paths, and the role of compliance, legal, risk, audit, and technology.
  3. Clear policies and boundaries around acceptable use, prohibited data, high-impact decisions, and when human review is mandatory.
  4. Meaningful reporting. Not just adoption statistics, but risk indicators, incidents, model or vendor changes, validation results, and material control exceptions.
  5. A remediation and monitoring process that reflects the dynamic nature of AI.

That is consistent with COSO’s broader framework, which stresses alignment with organizational goals and risk appetite, the use of relevant information, internal communication, ongoing evaluations, and the communication of deficiencies. This is where I would encourage boards to think less in terms of “AI briefings” and more in terms of “AI oversight cadence.” A one-time presentation is not governance. A recurring structure is.

The Board Does Not Need More Hype. It Needs Evidence.

One risk in the current market is that AI discussions are still drenched in promotional language. Faster. Smarter. More innovative. Transformational. Useful words, but not enough for a board discharging fiduciary obligations.

Boards need evidence. This is where the compliance function can shine. Compliance professionals know how to convert aspiration into evidence. We know how to build a record showing that oversight is not merely claimed, but exercised.

And make no mistake, documentation matters. Structured communication and clear records are essential for reconstructing decisions, demonstrating accountability, and supporting regulatory or audit review. That principle runs through effective compliance practice generally and becomes even more important in AI governance, where organizations must often explain not only what decision was made, but how the process was overseen.

Five Questions Every Board Should Ask Now

If I were advising a board chair or audit committee chair, I would start with five questions.

  1. What are our highest-risk AI use cases, and who owns each one?
  2. What information does the board receive regularly about AI adoption, incidents, and material changes?
  3. How do we know that management is validating AI outputs rather than simply trusting them?
  4. Where are third-party AI tools embedded in our environment, and what visibility do we have into the risks they pose?
  5. What evidence would we produce tomorrow if a regulator, auditor, or shareholder asked how this board oversees AI?

Those questions do not require the board to become technical. They require the board to become disciplined.

The Bottom Line

AI governance is moving quickly from optional good practice to expected governance hygiene. That is the real message boards need to hear. Under a fiduciary-duty lens, the challenge is straightforward. Directors do not need to be AI developers. But they do need to ensure that management has built a credible system for identifying, governing, monitoring, and escalating AI risk. When AI touches material business processes, board silence is not neutrality. It is exposure.

The companies that get this right will not be the ones that talk most loudly about innovation. They will be the ones whose boards insist on visibility, accountability, evidence, and follow-through. That is not anti-innovation. That is governance doing its job.


5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider 5 strategic playbooks designed for a Board of Directors and a CCO operating in an industry-agnostic environment, building AI in-house, without a model registry yet, and with a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a Board of Directors boot camp on directors’ responsibilities to oversee AI in their organizations.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? Items such as new use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI risk dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations, including three headline risks with actions, owners, and dates.
  4. Assurance and testing, including internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required, including policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In this design, Compliance owns AI governance and AI use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if it is explicit about responsibilities.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” Your model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

If there is no model registry yet, that is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.
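
To make this concrete, here is a minimal sketch in Python of what a starter registry entry and its board-level coverage metric might look like. This is purely illustrative: the class name, field names, and sample figures are hypothetical, not a standard schema, and simply mirror the minimum viable fields listed above.

```python
from dataclasses import dataclass, field

# Illustrative only: field names mirror the minimum viable registry
# fields described in the text; they are a hypothetical schema.
@dataclass
class AIUseCase:
    name: str
    business_owner: str
    decision_impact: str            # "advisory" or "automated"
    data_sensitivity: str           # e.g. "public", "internal", "restricted"
    model_version: str
    key_risks: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)
    status: str = "pilot"           # "pilot", "production", or "retired"
    open_issues: int = 0

def coverage_rate(registered: int, estimated_footprint: int) -> float:
    """Board-level coverage metric: registered use cases vs. estimated total."""
    return round(100 * registered / estimated_footprint, 1)

# Hypothetical entries for illustration.
registry = [
    AIUseCase("Contract triage", "Legal Ops", "advisory", "internal", "v1.2",
              key_risks=["explainability"], controls=["human review"]),
    AIUseCase("KYC screening", "Compliance", "automated", "restricted", "v2.0",
              key_risks=["bias", "privacy"], controls=["testing", "monitoring"],
              status="production"),
]

print(coverage_rate(len(registry), estimated_footprint=8))  # → 25.0
```

Even this simple structure yields the one number a board needs each quarter: how much of the estimated AI footprint is actually inside governance.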

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.
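
One way to picture “one control architecture, many frameworks” is a simple mapping from each internal control to the regimes it satisfies, so a single test produces evidence for every lens at once. The sketch below is hypothetical: the control names and regime labels are illustrative shorthand, not statutory citations.

```python
# Hypothetical mapping: each internal control lists the regulatory
# lenses it generates evidence for (labels are illustrative shorthand).
CONTROL_MAP = {
    "pre-deployment testing": ["ECCP: effectiveness", "EU AI Act: quality management"],
    "human review of high-impact output": ["EU AI Act: human oversight", "State: discrimination prohibitions"],
    "use-case risk classification": ["EU AI Act: risk classification", "ECCP: risk assessment"],
    "drift monitoring": ["EU AI Act: post-market monitoring", "ECCP: continuous improvement"],
}

def evidence_for(regime_prefix: str) -> list[str]:
    """List the internal controls that generate evidence for one regime."""
    return [control for control, regimes in CONTROL_MAP.items()
            if any(r.startswith(regime_prefix) for r in regimes)]

print(evidence_for("EU AI Act"))
print(evidence_for("ECCP"))
```

The design point is that the binder count stays at one: when a regulator asks about a specific regime, the answer is a filtered view of the same controls, not a separate program.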

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

A combined dashboard and narrative built around 8–10 KPIs works well here. Here is a board-level set designed for AI in governance, risk, and compliance functions, with an in-house build, internal audit, and red teaming for assurance.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
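
Two of these KPIs reduce to simple, auditable arithmetic, which is exactly what makes them board-friendly. The sketch below is hypothetical (the quarterly figures are invented for illustration) and shows how KPI 7 and KPI 9 might be computed.

```python
import statistics

def closure_velocity(days_to_close: list[int]) -> float:
    """Median days to close AI governance issues (KPI 7)."""
    return statistics.median(days_to_close)

def remediation_rate(remediated_on_time: int, total_findings: int) -> float:
    """Percentage of red-team findings remediated within target (KPI 9)."""
    return round(100 * remediated_on_time / total_findings, 1)

# Invented quarterly figures, for illustration only.
print(closure_velocity([4, 12, 7, 30, 9]))  # → 9
print(remediation_rate(11, 12))             # → 91.7
```

Using the median rather than the mean for closure velocity keeps one long-running issue from masking otherwise healthy throughput, which is the kind of design choice worth explaining in the board narrative.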

AI Director Boot Camp

A board with a medium level of AI literacy needs a boot camp. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended boot camp design is a half day, keeping it highly practical. It should include the following.

  1. AI in the company’s operating model. This means where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy, such as bias, privacy, security, explainability, reliability, third-party, and more.
  3. Regulatory landscape overview, covering the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and key state law themes.
  4. Governance model walkthrough to ensure the BOD understands the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercises, such as an AI incident in a GRC context with false negatives in monitoring or biased triage.
  6. Board oversight duties. Teach the BOD how they can meet their obligations, including which questions to ask quarterly, which thresholds trigger escalation, and similar insights.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.


Key Board Issues for 2026: What Compliance and Governance Leaders Must See Coming

Boards entering 2026 are doing so in an environment defined not by stability, but by volatility. Regulatory priorities are shifting rapidly, geopolitical risk is reshaping markets, technology is accelerating faster than governance frameworks can keep pace, and long-standing assumptions about shareholder engagement and corporate oversight are being tested. In this environment, the role of compliance is no longer reactive or advisory at the margins. It is structural.

The Thoughts for Boards: Key Issues for 2026 memorandum from the law firm of Wachtell, Lipton, Rosen & Katz, which appeared in the Harvard Law School Forum on Corporate Governance, provides a valuable roadmap for boards navigating this uncertainty. For compliance professionals, however, the document does something more important: it reveals where governance risk is quietly migrating. The challenge for compliance leaders is not simply to track these developments, but to translate them into oversight, controls, and strategic guidance that boards can use going forward.

A More Permissive SEC Does Not Mean Less Risk

One of the most striking developments outlined in the memorandum is the SEC’s recalibration of its role. From easing reporting burdens to stepping back from adjudication of shareholder proposals under Rule 14a-8, the Commission is signaling greater deference to companies in deciding how and when to engage with shareholders. At first glance, this appears to reduce regulatory pressure. In reality, it shifts risk inward.

When regulators retreat, discretion moves to boards and management. Decisions about disclosure cadence, shareholder engagement, and proposal exclusion are no longer mediated by predictable SEC processes. They become governance judgments that will be evaluated ex post by investors, courts, activists, and the media. For compliance professionals, this means fewer bright lines and more gray zones.

The potential move toward semi-annual reporting is a prime example. While it may reduce short-termism, it also alters internal disclosure controls, forecasting discipline, and market expectations. Compliance must ensure that reduced frequency does not translate into reduced rigor. Less reporting does not mean less accountability.

DEI and ESG: From Public Messaging to Quiet Risk Management

The memorandum describes sustained political and regulatory pushback against DEI and ESG initiatives, including executive orders, revised SEC guidance, and heightened scrutiny of shareholder proposals. Yet it also notes an important countervailing force: institutional investors have not abandoned interest in these areas. They have become quieter. This creates a compliance paradox.

On one hand, public signaling around DEI and ESG may expose companies to political and regulatory risk. On the other hand, abandoning these initiatives entirely risks alienating long-term shareholders, employees, and business partners. The compliance function sits at the center of this tension. In 2026, DEI and ESG will increasingly be treated less as branding exercises and more as internal governance risks. Compliance leaders should focus on process integrity, consistency, and documentation rather than rhetoric. The question is no longer whether a company “supports” DEI or ESG, but whether its practices align with its stated values and risk disclosures.

Tone at the top matters here more than ever. Boards must understand that silence does not equal neutrality. How a company governs these issues internally will determine its exposure externally.

Government as Shareholder: A New Governance Reality

Perhaps the most underappreciated development highlighted in the memorandum is the Trump Administration’s growing role as an equity holder in public companies deemed critical to national security. These investments vary widely in form, from passive economic stakes to golden shares with veto rights over strategic decisions. For compliance and governance professionals, this raises novel questions.

Government ownership blurs traditional distinctions between regulator and shareholder. It introduces new stakeholders with potentially divergent objectives, including national security, industrial policy, and geopolitical strategy. Even when governance rights are limited, the mere presence of the government on the cap table can alter decision-making dynamics and investor perceptions.

Compliance must be prepared to advise boards on conflicts of interest, disclosure obligations, and fiduciary duties in this new context. The risk is not simply regulatory; it is structural. Companies operating in sensitive sectors must assume that government involvement is no longer exceptional but potentially recurring.

AI Oversight Moves from Optional to Mandatory

Artificial intelligence dominated board agendas in 2025, and there is no indication that attention will diminish in 2026. The memorandum correctly emphasizes that AI is no longer confined to technology companies. It is embedded in products, operations, compliance monitoring, and decision-making across industries. For boards, the oversight challenge is acute. AI introduces opacity, speed, and scale that traditional governance frameworks were not designed to manage. For compliance officers, this creates both opportunity and risk.

AI is increasingly used within compliance itself, from transaction monitoring to proxy voting analytics. But the use of AI does not eliminate accountability. Boards will still be expected to understand how AI systems function, what risks they create, and how those risks are mitigated.

This is why board-level AI literacy is becoming a governance imperative. Compliance leaders should be proactive in helping boards understand AI not as a technical novelty, but as a risk multiplier. Data governance, model bias, explainability, and third-party reliance must all be incorporated into enterprise risk management frameworks.

Crypto and Digital Assets: Strategy First, Compliance Always

The memorandum highlights a friendlier regulatory environment for crypto-assets, alongside growing corporate interest in crypto treasury strategies and asset tokenization. This combination is dangerous if misunderstood. Regulatory friendliness is not regulatory clarity. Crypto engagement introduces risks related to custody, valuation, sanctions, AML, cybersecurity, and financial reporting. Boards that view crypto as a strategic opportunity without fully appreciating these risks are exposing the company to significant downside.

Compliance must insist on strategic discipline. Why is the company engaging with crypto? What problem is it solving? How does it align with the business model? Without clear answers, crypto becomes speculation rather than strategy. In 2026, compliance officers should expect to spend more time explaining why not to move quickly than how to move fast.

Shareholder Engagement Is Becoming More Fragmented, Not Less Important

The memorandum’s discussion of shareholder engagement reflects a fundamental shift. Institutional investors are splintering their stewardship approaches. Retail investors are more organized and more volatile. Proxy advisors are under regulatory and political attack. The result is unpredictability.

Boards can no longer rely on a small set of proxy advisor recommendations or institutional voting norms. Engagement must become more targeted, more frequent, and more informed. Compliance plays a critical role here by ensuring that engagement practices remain consistent with disclosure rules, insider trading controls, and governance policies.

The rise of retail activism and meme-stock dynamics also creates reputational risk that traditional governance tools were not designed to address. Social media is now a governance arena. Compliance must help boards understand that investor relations, communications, and risk management are increasingly inseparable.

Delaware Still Matters, Even as Alternatives Emerge

Finally, the memorandum addresses trends toward reincorporation in Texas and Nevada, as well as Delaware’s legislative response. While high-profile moves grab headlines, the underlying message is continuity rather than disruption. For most public companies, Delaware remains the default for a reason: predictability. Reincorporation carries costs, risks, and uncertainty that often outweigh perceived benefits. Compliance professionals should ensure that boards approach these decisions with discipline rather than reaction to political or cultural trends. Governance arbitrage is rarely a substitute for governance quality.

Conclusion: Compliance as Governance Infrastructure

The overarching lesson from the Key Issues for 2026 memorandum is that governance risk is becoming more diffuse, not less. Regulatory pullbacks, technological acceleration, geopolitical intervention, and fragmented shareholder bases all point to one conclusion: boards will be expected to exercise more judgment with fewer guardrails. As with so much under this Trump Administration, volatility itself is the other defining theme. That places compliance at the center of corporate governance.

In 2026, effective compliance will not be measured solely by the absence of enforcement actions. It will be measured by whether boards can navigate volatility and ambiguity without losing coherence, integrity, or trust. Compliance professionals who understand this shift will be indispensable partners in long-term value creation.