Board Oversight and Accountability in AI: Where Governance Begins

For boards and Chief Compliance Officers, AI governance does not begin with the model. It begins with oversight, accountability, and the discipline to define who owns risk, who makes decisions, and who answers when something goes wrong. If AI is changing how companies operate, then board governance and compliance leadership must change as well.

In the first article in this series, I laid out five significant corporate governance challenges around artificial intelligence: board oversight and accountability, strategy outrunning governance, data governance and model integrity, ongoing monitoring, and culture and speak-up. In Part 2, I turn to the first and most foundational issue: board oversight and accountability.

This is where every AI governance program either starts with rigor or begins with ambiguity. And ambiguity, in governance, is rarely neutral. It is usually the breeding ground for failure.

There is a tendency in some organizations to treat AI oversight as a natural extension of technology oversight. That is too narrow. AI touches legal exposure, regulatory risk, data governance, privacy, discrimination concerns, intellectual property, operational resilience, internal controls, and corporate culture. That makes AI a board-level and CCO-level issue, not just a CIO issue.

The central governance question is straightforward: who is responsible for AI risk, and how is that responsibility exercised in practice? If the board cannot answer that question, if management cannot explain it, and if the compliance function is not part of the answer, then the company does not yet have credible AI governance.

Why Board Oversight Matters Now

Boards have always been expected to oversee enterprise risk. What has changed with AI is the speed, scale, and opacity of the risks involved. A business process can be altered quickly by a generative AI tool. A model can influence customer interactions, internal decisions, and external communications at scale. Employees can adopt AI capabilities before governance structures are fully formed. Vendors can embed AI inside products and services without management fully understanding the downstream implications. That is why AI cannot be governed informally. It requires deliberate oversight.

The board does not need to manage models line by line. That is not its role. But the board must ensure that management has established a governance structure capable of identifying AI use cases, classifying risk, escalating significant issues, testing controls, and reporting failures. Just as important, the board must know who inside management is accountable for making that system work.

This is where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a very practical lens. The ECCP asks whether a compliance program is well designed, adequately resourced, empowered to function effectively, and tested in practice. Those four questions are equally powerful in evaluating AI governance. Is the governance structure well designed? Is it resourced? Is the compliance function empowered in AI decision-making? Is the program working in practice? If the answer to any of those questions is uncertain, the board should treat that uncertainty as a governance gap.

Accountability Begins with Ownership

One of the oldest problems in corporate governance is fragmented responsibility. AI only intensifies that risk. Consider the typical organizational landscape. IT may own its own infrastructure. Legal may review contracts and liability. Privacy may address data use. Security may focus on cyber threats. Risk may handle enterprise frameworks. Compliance may address policy, controls, investigations, and reporting. Business leaders may champion the use case. Internal audit may come in later for assurance. The board, meanwhile, receives updates from multiple directions.

Without a clearly defined operating model, this becomes a classic accountability fog. Everyone has a slice of the issue, but no one owns the whole risk. A more disciplined approach requires naming an accountable executive owner for enterprise AI governance; in some companies, that may be the Chief Risk Officer. In others, it may be a Chief Legal Officer, Chief Compliance Officer, or a designated senior executive with cross-functional authority. The title matters less than the clarity. The organization must know who convenes the process, who resolves conflicts, who signs off on high-risk use cases, and who reports upward to the board.

For the CCO, this does not mean taking sole ownership of AI. That would be unrealistic and unwise. But it does mean insisting that compliance has a defined role in the governance architecture. AI raises issues of policy adherence, training, escalation, investigations, third-party risk, disciplinary consistency, and remediation. Those are core compliance issues. A governance model that sidelines the CCO is not merely incomplete; it is unstable.

The Right Committee Structure

Once ownership is established, the next question is structural: where does AI governance live? The answer should be enterprise-wide, but with a defined committee architecture. Companies need at least two governance layers.

The first is a management-level AI governance committee or council. This should be a cross-functional working body with representation from compliance, legal, privacy, security, technology, risk, internal audit, and relevant business units, as appropriate. Its purpose is operational governance. It reviews proposed use cases, classifies risk levels, evaluates controls, addresses issues, and determines escalation.

The second is a board-level oversight mechanism. This does not always require a new standing AI committee. In some organizations, oversight may sit with the audit committee, risk committee, technology committee, or full board, depending on the company’s structure and maturity. What matters is not the name of the committee. What matters is that there is an identified board body with responsibility for overseeing AI governance and receiving regular reporting.

This is consistent with the NIST AI Risk Management Framework, which begins with the “Govern” function. NIST recognizes that governance is not an afterthought; it is the foundation that enables the rest of the risk management lifecycle. ISO/IEC 42001 similarly reinforces that AI governance must be embedded in a management system with defined roles, controls, review mechanisms, and continuous improvement. Both frameworks point in the same direction: AI governance requires structure, not aspiration.

Reporting Lines That Actually Work

Good governance lives or dies by reporting lines. If information cannot move efficiently upward, then oversight will be stale, filtered, or incomplete. Boards should require periodic reporting on several core areas: the current AI inventory, high-risk use cases, incident trends, control exceptions, third-party AI dependencies, regulatory developments, and remediation status. The board does not need a data dump. It needs decision-useful reporting.

That means management should create a formal reporting cadence. Quarterly reporting is sufficient for many organizations, but high-risk environments require more frequent updates. The reporting should identify not only what has been approved, but what has changed. That includes scope changes, incidents, near misses, new vendors, policy exceptions, and any material concerns raised by employees, customers, or regulators.

The CCO should be part of the reporting chain, not a bystander. A balanced governance model allows compliance to elevate concerns independently if necessary, particularly when a business leader is pushing to move faster than controls will support. That is not an obstruction. That is governance doing its job.

Escalation Protocols: The Missing Middle

Many companies have approval procedures, but far fewer have robust escalation protocols. That is a mistake. Governance does not fail only when structure is missing. It also fails when there is no clear path for handling edge cases, incidents, or disagreements.

An effective AI governance program should specify escalation triggers. For example, a use case should be escalated when it affects employment decisions, consumer rights, regulated communications, financial reporting, sensitive personal data, or legally significant outcomes. Escalation should also occur when there is evidence of model drift, hallucinations in a material context, unexplained bias, control failure, a third-party vendor issue, or a credible employee concern.

These triggers should not live in someone’s head. They should be documented in policy, operating procedures, or a risk classification matrix. There should also be a defined process for who gets notified, what interim controls are applied, whether deployment pauses are available, and how issues are documented for follow-up.
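
To make that concrete, here is a minimal sketch of what a documented trigger matrix might look like in machine-readable form. Everything in it, including the trigger names, the two-tier classification, and the notification roles, is a hypothetical illustration, not a standard taxonomy:

```python
# A hypothetical two-tier escalation matrix. Trigger names, tiers, and
# notification roles below are illustrative assumptions only.
ESCALATION_TRIGGERS = {
    "employment_decisions": "high",
    "consumer_rights": "high",
    "regulated_communications": "high",
    "financial_reporting": "high",
    "sensitive_personal_data": "high",
    "model_drift": "medium",
    "material_hallucination": "high",
    "vendor_issue": "medium",
    "credible_employee_concern": "medium",
}

NOTIFY = {
    "high": ["CCO", "General Counsel", "AI governance committee"],
    "medium": ["AI governance committee"],
}

def escalate(use_case: str, observed_flags: set[str]) -> dict:
    """Map observed flags to a tier and a notification list, for the record."""
    tiers = {ESCALATION_TRIGGERS[f] for f in observed_flags if f in ESCALATION_TRIGGERS}
    tier = "high" if "high" in tiers else ("medium" if "medium" in tiers else None)
    return {
        "use_case": use_case,
        "tier": tier,
        "notify": NOTIFY.get(tier, []),
        "flags": sorted(observed_flags),  # documented for follow-up
    }

print(escalate("resume-screening-pilot", {"employment_decisions", "vendor_issue"}))
```

The point is not the code. The point is that a trigger either exists in a reviewable artifact or it lives in someone's head.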

This is another place where the ECCP remains highly relevant. DOJ prosecutors routinely ask whether issues are escalated appropriately, whether investigations are timely, and whether lessons learned are incorporated into the program. AI governance should be built with the same operational seriousness. If an issue arises, the company should not be improvising its governance response in real time.

Documentation Is Evidence of Governance

One of the great compliance truths is that governance without documentation is hard to prove and harder to sustain. For AI governance, documentation should include at least these categories: use case inventories, risk classifications, approval memos, committee minutes, control requirements, incident logs, training records, validation summaries, escalation decisions, and remediation actions. This is not paperwork for its own sake. It is the evidentiary trail that shows the organization is governing AI thoughtfully and consistently.

Boards should care about this because documentation is what allows oversight to be more than anecdotal. It is also what allows internal audit, regulators, and investigators to assess whether the governance program is functioning.

For the CCO, documentation is particularly important because it connects AI oversight to the larger compliance architecture. It helps align AI governance with policy management, training, investigations, speak-up systems, third-party due diligence, and corrective action tracking. In other words, it turns AI governance from a loose collection of meetings into a defensible management process.

Board Practice and CCO Practice Must Meet in the Middle

The best AI governance models do not pit the board and the compliance function against innovation. They create a structure that allows innovation to move, but only within defined guardrails. Boards should ask sharper questions. Who owns AI governance? What committee reviews high-risk use cases? What issues must be escalated? What reporting do we receive? How are incidents tracked and remediated? What role does compliance play?

CCOs should be equally direct. Where does compliance sit in the approval process? How do employees report AI concerns? What documentation is required? When can compliance elevate an issue on its own? How are lessons learned being fed back into policy and training?

This is the practical heart of the matter. Oversight is not a slogan. Accountability is not a press release. Both must be built into reporting lines, committee design, escalation protocols, and documentation discipline.

AI governance begins here because every other issue in this series depends on it. If oversight is weak and accountability is blurred, strategy will outrun governance, data issues will go unnoticed, monitoring will become inconsistent, and culture will not carry the load. But if the board and CCO get this first issue right, they create the governance spine that the rest of the program can rely on.

Join us tomorrow, when we review the role of data governance in AI governance, because that is where every effective AI governance program either starts strong or starts to fail.

The “Day Two” Problem of AI Governance: What CCOs Must Monitor After the Launch

A scene is playing out in companies across the globe right now. Innovation teams are moving fast. Procurement is signing contracts. Business units are experimenting with copilots, workflow agents, and internal knowledge tools. Marketing is testing generative content. HR is evaluating AI for talent processes. Finance wants forecasting help. Security is watching from the corner. Legal is asking pointed questions. Compliance is handed the bill for governance after the train has already left the station. But the reality is that this is not merely a compliance problem. It is a board governance issue.

The problem is not that companies are moving too slowly on AI. In many organizations, the opposite is true. AI strategy is moving faster than the governance structure designed to oversee it. When that happens, the gap creates risk in ways boards understand very well: unmanaged decision-making, unclear accountability, inconsistent controls, fragmented reporting, and blind spots around operational resilience, ethics, and trust.

If you are a Chief Compliance Officer (CCO), this is your moment. Not to say no to AI. Not to become the Department of Technological Misery. But to help the board and senior leadership understand that AI governance is about capturing upside without swallowing avoidable downside. That is the central lesson. Strategy without governance is aspiration. Strategy with governance is a business discipline.

Why This Is a Board Issue

Boards are not expected to code models, evaluate vector databases, or decide which prompt library a business unit should use. They are expected to oversee risk, culture, controls, and management accountability. AI now sits squarely in that lane.

Once AI touches business processes, it can affect decision rights, data usage, customer interactions, employee treatment, financial reporting inputs, records management, and reputation. That means the board does not need to manage the machinery, but it must ensure a management system is in place for it.

This is where compliance can bring real value. Ethisphere’s latest work on the Ethics Premium makes a useful point for governance professionals: leading programs are improving board reporting practices, including more frequent meetings with directors to ensure they receive the information needed for effective oversight, and they are making documentation AI-ready so employees can find answers when they need them. In other words, mature governance is not static. It evolves as technology evolves.

That same report also reminds us that strong ethics and compliance systems are associated with higher returns, less downside, and faster recoveries, which is exactly the language boards understand when evaluating strategic risk and resilience.

So let us translate that lesson into the AI context. The board’s task is not to bless every shiny new tool. Its task is to ensure management has built an operating system for responsible AI use.

What a Board Should Do

The first thing a board should do is insist on a clear AI governance architecture. That means management should be able to answer basic questions cleanly and quickly. Who owns the enterprise AI strategy? Who approves high-risk use cases? Who validates controls before deployment? Who monitors incidents, exceptions, and drift? Who reports to the board? If five executives give five different answers, you do not have governance. You have theater.

Second, the board should require a risk-based inventory of AI use cases. I am continually amazed at how many organizations start with policy language before they know where AI is actually being used. That is backwards. Boards should ask for a current inventory of internal, customer-facing, employee-facing, and vendor-enabled AI use cases. The inventory should distinguish between low-risk productivity tools and higher-risk uses involving sensitive data, regulated processes, legal judgments, employment decisions, or customer outcomes. If management cannot map the use cases, it cannot credibly manage the risk.
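
As a sketch of what such an inventory could look like in practice, even a simple structure forces the distinctions the board should see. The field names and tiering rules below are my own illustrative assumptions, not a prescribed schema:

```python
# Illustrative sketch of a risk-based AI use-case inventory entry.
# Field names and tier rules are hypothetical, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str                  # accountable business owner
    vendor: str | None          # third party, if vendor-enabled
    audience: str               # "internal", "customer-facing", "employee-facing"
    sensitive_data: bool        # touches regulated or sensitive personal data
    consequential: bool         # influences employment, legal, or customer outcomes

    @property
    def risk_tier(self) -> str:
        if self.sensitive_data or self.consequential:
            return "high"
        return "low" if self.audience == "internal" else "medium"

inventory = [
    AIUseCase("meeting-summarizer", "IT", None, "internal", False, False),
    AIUseCase("resume-screening", "HR", "VendorX", "employee-facing", True, True),
]

high_risk = [u.name for u in inventory if u.risk_tier == "high"]
print(high_risk)  # ['resume-screening']
```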

Third, the board should demand decision-use discipline. Not every AI output deserves the same level of trust. Some uses are advisory. Some are operational. Some may influence consequential business judgments. Boards should ask management where AI outputs are being relied upon, who reviews them, and what level of human oversight is required before action is taken. The issue is not whether humans are “in the loop” as a slogan. The issue is whether human review is meaningful, documented, and tied to the use case’s risk.

Fourth, the board should require reporting that is intelligible, not merely technical. Board oversight fails when management delivers either fluff or jargon. Directors need reporting that answers practical questions: What are our top AI use cases? Which ones are classified as high risk? What incidents or near misses have occurred? What controls were tested? What third parties are material to our AI stack? What changed this quarter? What needs escalation? Good board reporting turns AI from mystique into management.

That point is entirely consistent with what Ethisphere identifies in leading ethics and compliance programs: improved board reporting practices that provide directors with the information they need for effective oversight.

Where Compliance Officers Can Help the Board Most

This is where the CCO earns their seat at the table.

First, the compliance function can help management create the classification framework. Compliance professionals know how to tier risk, define escalation paths, and build governance around business reality. You have been doing it for years with third parties, gifts and entertainment, investigations, and training. AI is a new technology, but the governance muscle memory is familiar.

Second, compliance can help build the policy-to-practice bridge. A glossy AI principles statement is not governance. Governance is what happens when procurement uses approved clauses, HR knows what tools it can use, managers understand escalation triggers, training is tailored to real workflows, and documentation supports decision-making. Ethisphere’s report notes that best-in-class programs are investing in clear, compelling documentation and training approaches designed for actual employee use, not simply for formal compliance completion. That is precisely the model AI governance needs.

Third, compliance can help the board by translating operational signals into governance signals. A rejected deployment, a data-permission problem, a hallucinated output in a sensitive workflow, a vendor change notice, a policy exception, or a spike in employee questions may each seem isolated. They are not. They are governance indicators. The CCO can aggregate them into trend lines that the board can actually use.
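
A sketch of what that aggregation can look like, assuming the signals are logged somewhere queryable (the signal types and periods below are hypothetical examples):

```python
# Illustrative sketch: aggregating scattered operational signals into
# board-level trend lines. Signal types and periods are hypothetical.
from collections import Counter

signals = [  # (quarter, signal_type) pairs, e.g. pulled from ticketing systems
    ("2025-Q1", "policy_exception"),
    ("2025-Q1", "data_permission_issue"),
    ("2025-Q2", "policy_exception"),
    ("2025-Q2", "policy_exception"),
    ("2025-Q2", "hallucination_in_sensitive_workflow"),
]

trend = Counter(signals)
for (quarter, signal_type), count in sorted(trend.items()):
    print(f"{quarter}  {signal_type}: {count}")
# A quarter-over-quarter rise in any one line is a governance indicator,
# even if each underlying event looked isolated.
```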

Fourth, compliance can help define the cadence and content of board reporting. Directors do not need every technical detail. They do need a disciplined dashboard and escalation protocol. Compliance is often the right function to help standardize that process, because it lives at the intersection of risk, policy, training, speak-up culture, investigations, and controls.

The Operational Reality Boards Must Understand

One reason AI governance lags strategy is that AI adoption is not happening in one place. It is happening everywhere. That decentralization is what makes governance hard. The legal team may be reviewing one contract while a business leader is piloting another tool within their own budget. An employee may paste sensitive information into a system that was never intended to accept it. A vendor may quietly add AI functionality to an existing platform. A manager may begin relying on generated summaries as if they are verified facts. None of this requires malicious intent. It only requires speed, convenience, and a little ambiguity. Corporate history teaches that those ingredients are often enough.

Boards, therefore, need to understand a simple truth: AI risk is not only model risk. It is workflow risk, data risk, governance risk, and cultural risk. Culture matters especially here. Ethisphere found that nearly every honoree equips managers with toolkits and talk tracks to discuss ethical dilemmas with their teams, and 51% require managers to do so. That should be a flashing neon sign for AI governance. If managers are not talking with employees about responsible use, escalation expectations, and when not to trust the machine, the company is relying on hope as a control. Hope is not a control. It is a prayer.

Final Thoughts

When AI strategy outruns governance, the problem is not innovation. The problem is unmanaged innovation. Boards should not respond by slamming on the brakes. They should respond by insisting on lanes, guardrails, dashboards, and accountability.

For compliance officers, the opportunity is enormous. You can help the board ask better questions. You can help management build a governance operating system. You can help the business adopt AI faster, smarter, and more defensibly.

That is the larger point. Compliance is not there to suffocate strategy. Compliance is there to make the strategy sustainable.

Here are the questions I would leave you with:

  • Does your board receive meaningful AI oversight reporting, or only periodic reassurance?
  • Can your company identify its highest-risk AI use cases today, not next quarter?
  • If a director asked tomorrow who owns AI governance end-to-end, would the answer be immediate and credible?

If not, your AI strategy may already be outrunning your governance.

ATI In-House Insights: Challenges and Tips for Navigating a Changing Risk Landscape with Sarah Iles

In this episode of the ATI: In-House Insights Podcast, Mike DeBernardis speaks with seasoned in-house compliance leader Sarah Iles about navigating an ever-changing risk landscape shaped by political, geopolitical, regulatory, and technological shifts.

Sarah shares her background across manufacturing sectors and discusses how multinational compliance risks evolve as jurisdictional priorities shift, including sanctions, export controls, tariffs, sustainability, labor rights, data protection, and AI. They identify internal challenges, including a lack of infrastructure to address new risks, siloed ownership, and weak change management, and emphasize clear governance and accountability. Sarah advises “back to basics,” using DOJ’s Evaluation of Corporate Compliance Programs, focusing on real risk mitigation over form-heavy questionnaires, keeping communication channels open through formal committees and informal connections, scaling risk assessments appropriately, targeting communications to relevant audiences, escalating thoughtfully, and building resilient programs by expecting and embracing constant change.

Key highlights:

  • Geopolitics Drives Risk
  • Internal Adaptation Hurdles
  • Silos and Ownership
  • Culture and Change
  • Proactive Compliance Basics
  • Partnering With Business
  • Right-Sized Risk Assessments
  • Communicating Emerging Risks

Resources:

Sarah Iles LinkedIn

Mike DeBernardis LinkedIn

ATI: In-House Insights Podcast

Hughes Hubbard & Reed Website

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge cited by respondents was siloed data and difficulty integrating sources, at 56%. After that came the lack of a clear data strategy at 44% and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point: 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.
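
As a sketch, assuming a simple record format (the field names and the 180-day freshness threshold are illustrative assumptions), context can literally be enforced as a control:

```python
# Illustrative sketch: treating context as a control. A third-party risk
# flag is rejected unless required context fields are present and fresh.
# Field names and the 180-day staleness threshold are assumptions.
from datetime import date, timedelta

REQUIRED = ("trigger", "jurisdiction", "source", "refreshed_on", "validated_by")

def usable(flag: dict, max_age_days: int = 180) -> bool:
    """A flag without context is not information; refuse to rely on it."""
    if any(flag.get(k) in (None, "") for k in REQUIRED):
        return False
    return date.today() - flag["refreshed_on"] <= timedelta(days=max_age_days)

flag = {
    "trigger": "sanctions-list match",
    "jurisdiction": "BR",
    "source": "screening-vendor",
    "refreshed_on": date.today() - timedelta(days=30),
    "validated_by": "analyst-17",
}
print(usable(flag))  # True; drop any field and it becomes False
```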

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches, such as zero-copy access and containerized applications, that can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that is controlled, or one that is merely improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management. 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next 2 years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.

Aly McDevitt Week: Part 5 – Ransomware, Crisis Response, and the Compliance Imperative to Move Fast

This week, I want to pay tribute to my former Compliance Week colleague, Aly McDevitt, who announced on LinkedIn that she was retiring from CW to become a full-time mother. I wrote a tribute to Aly, which appeared in CW last week. To prepare to write that piece, I re-read her long-form case studies, which she wrote over the years for CW. They are as compelling today as when she wrote them. This week, I will be paying tribute to Aly by reviewing five of her pieces. The schedule for this week is:

Monday: A Tale of Two Storms

Tuesday: Coming Clean

Wednesday: Inside a Dark Pact

Thursday: Reaching Into the Value Chain

Friday: Ransomware Attack: An immersive case study of a cyber event based on real-life scenarios

McDevitt took a different but highly effective approach in this case study. Rather than centering the story on a single historical corporate scandal, she crafted an immersive fictional scenario grounded in real-life attacks, expert interviews, and public guidance. Compliance Week made clear that, while the company and its characters are imagined, the legal, operational, and compliance issues are very real. That makes this piece especially valuable for compliance professionals because it is less a postmortem of one company and more a practical field manual for the next crisis.

McDevitt’s story begins where many cyber incidents begin: with a person, not a machine.

A longtime employee, Betsy, receives an “urgent” email that appears to be from her boss. She clicks a malicious link, lands on a phony, internal-looking site, realizes too late that something is wrong, and then makes the mistake that turns a bad moment into a corporate crisis: she does not report it. Her silence gives the attacker time. Within days, the company, Vulnerable Electric (VE), a private utility serving 1.4 million customers with about 600 employees and $250 million in annual revenue, is facing a full-blown ransomware attack.

That is the first lesson, and McDevitt drives it home with precision. Ransomware is often described as a technology problem, but the first failure is frequently human, organizational, and cultural. Betsy clicked. But more importantly, she hesitated, feared blame, and kept quiet. As McDevitt explains through the expert commentary, her biggest mistake was not simply opening the link. It was actively deciding not to report the incident to the proper internal authority.

For compliance officers, that point should sound very familiar. Whether the issue is corruption, harassment, sanctions, safety, or cyber, organizations do not fail only because something bad happens. They fail because people do not feel safe reporting it quickly.

McDevitt also lays out why this issue matters so much now. She notes that ransomware payments in 2020 reached roughly $350 million, a more than 300 percent increase from the prior year, and that proactive prevention is no longer optional. She further situates the case study in the context of critical infrastructure, noting that entities such as utilities are subject to heightened scrutiny and are encouraged to align with the NIST cybersecurity framework. In other words, ransomware is not just an IT nuisance. It is an enterprise risk, a regulatory risk, and in some sectors a national security risk.

Once the attack is recognized, McDevitt shows the company doing something right: it moves into a structured response. The CEO activates the full cyber incident response team, or CIRT, and the war room includes not only technical leaders and legal counsel, but also the chief compliance officer, the head of communications, external incident response professionals, and other essential decision-makers. This is exactly what a mature response should look like. Cyber incidents do not fall under a single function. They are enterprise events.

I particularly appreciated how McDevitt uses the case study to underline the role of compliance. The CCO is not there as decoration. The article makes clear that if employee data has been exfiltrated, the incident constitutes a personal data disclosure with potentially local, state, and international notification consequences, and that compliance and legal personnel should be in the room from the start. That is a crucial point for corporate compliance professionals. Cyber risk management is not separate from compliance. It is now one of compliance’s core operating terrains.

McDevitt also captures the psychology of the first 36 hours. Anthony Ferrante says those hours are extremely stressful for a CEO, who is simultaneously thinking about operations, data, reputation, and people. That observation matters because it explains why preparation before an attack is so important. You do not want your executives inventing a process under duress. McDevitt reports that VE had already created an incident playbook with roles, escalation steps, and a five-part response framework: facts, business impact, root cause, corrective actions, and lessons learned. That is the kind of disciplined structure compliance leaders should insist upon.

Another strength of McDevitt’s reporting is her treatment of communications. Too many organizations still believe communications should be brought in late, after the lawyers and technologists finish their work. McDevitt, through multiple expert voices, makes the opposite case. Communications should have a seat at the table, not at the back wall. The reason is straightforward: stakeholders will forgive many things, but they will not forgive caginess. VE’s communications lead rightly argues that employees and customers should hear from the company first, not from the media or the attacker.

This point becomes even sharper when McDevitt contrasts VE’s approach with the real-life story of “Melvin,” an employee at another firm that remained offline for 10 days with no formal communication and did not disclose the sensitive data breach to employees in a timely or transparent way. That section may be the most important communications lesson in the entire piece. Employees are not bystanders. They are among the primary victims of a data breach, and they know when something is wrong. Silence destroys trust.

Then comes the hard question at the center of nearly every ransomware story: Do you pay?

McDevitt wisely resists easy moralizing. She notes the FBI’s official position is not to pay, because payment fuels the criminal business model and does not guarantee restoration. Yet she also reports the practical view of experienced practitioners: payment is not illegal per se, and companies often face a grim choice among bad options. The anonymous chief compliance officer quoted in the case study says it best: there are no good options, only the least bad option.

McDevitt’s two parallel paths, pay and do not pay, are particularly useful because they show that neither choice is clean. In Path A, VE pays $5 million, gets imperfect decryption support, recovers faster, but then faces scrutiny over whether it should have consulted OFAC before payment and whether it may have paid a sanctioned party. In Path B, VE does not pay, endures a longer recovery, suffers a data breach, and still faces reputational and legal fallout. McDevitt’s point is not that one route is right and one is wrong. Her point is that ransomware decision-making is governance under pressure.

That is why the postmortem matters so much. McDevitt closes the case study by emphasizing that the long-term impacts fall into three risk buckets: reputational, legal, and regulatory. She then turns to practical lessons: train the workforce, strengthen spam filters, run tabletop exercises, isolate infected devices immediately, secure backups offline, contact law enforcement quickly, do not rush engagement with the attacker, and communicate with each stakeholder group in a timely and tailored way. She also adds smart recommendations on canary files, forensic retainers, access reviews, logging, threat intelligence monitoring, and industry information sharing.
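
One of those recommendations, canary files, is worth a concrete illustration. A canary is a decoy file that no legitimate process should touch; if it changes or disappears, mass encryption may be underway. A minimal sketch follows, with a hypothetical path and the alerting hook left abstract:

```python
# Illustrative sketch of a canary-file check: a decoy file no legitimate
# process should modify. The path and alerting step are hypothetical.
import hashlib
from pathlib import Path

CANARY = Path("/shared/finance/DO_NOT_OPEN_budget.xlsx")  # hypothetical decoy

def fingerprint(path: Path) -> str:
    """Hash the canary so any modification (e.g., encryption) is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def canary_intact(baseline: str) -> bool:
    """Run on a schedule; a changed or missing canary suggests ransomware activity."""
    if not CANARY.exists():
        return False
    return fingerprint(CANARY) == baseline

# At deployment: baseline = fingerprint(CANARY)
# On a schedule: if not canary_intact(baseline), alert the CIRT and isolate the host.
```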

Finally, McDevitt ends on a note that compliance professionals should not miss. Betsy is not scapegoated. She is thanked for telling the truth and invited to participate in a phishing-resilience campaign for other employees. That is not sentimentality. That is culture. If your response to human error is humiliation, people will hide problems. If your response is accountability plus learning, people will surface them.

That may be the most important compliance lesson of all. Ransomware is a cyber crisis, but surviving it depends on culture, governance, and trust just as much as on technology.

I hope you have enjoyed reading about Aly’s case studies for CW, where I am a columnist.

The Fall of the Alamo and Empowerment of the Compliance Professional

Today is the anniversary of one of the most historic days in the history of the great state of Texas: the fall of the Alamo. March 2, Texas Independence Day, is when Texas declared its independence from Mexico, and April 21, San Jacinto Day, is when Texas won it; both probably have more long-lasting significance. But if there is one word that Texas is known for worldwide, it is the Alamo. The Alamo was a crumbling Catholic mission in San Antonio where 189 men held out for 13 days against the Mexican Army of General Santa Anna, which numbered approximately 5,000. On March 6, 1836, Santa Anna unleashed his forces, which overran the mission and killed all the fighting men. Those who did not die in the attack were executed, and the bodies were unceremoniously burned. Proving he was not without chivalry, Santa Anna spared the lives of the Alamo’s women, children, and slaves. But for Texans across the globe, this is our day.

While Thermopylae will always go down as the greatest ‘Last Stand’ battle in history, the Alamo is in contention for Number 2. Like all such battles, sometimes the myth becomes the legend, and the legend becomes the reality. In Thermopylae, the myth is that 300 Spartans stood alone against the entire Persian Army. However, there was also a force of 700 Thespians (not actors, but citizens of the city-state of Thespiae) and a contingent of 400 Thebans fighting alongside the 300 Spartans. Somehow, their sacrifices have been lost to history.

Likewise, the legend that elevates the Alamo battle to myth is the line in the sand. The story goes that William Barret Travis, on March 5, the day before the final attack, when it was clear that no reinforcements would arrive in time and everyone who stayed would perish, called all his men into the plaza of the compound. He then pulled out his saber and drew a line in the ground. He said that they were surrounded and would all likely die if they stayed. Any man who wanted to stay and die for Texas should cross the line and stand with him. Only one man, Moses Rose, declined to cross the line. The immediate survivors of the battle did not relate this story after they were rescued, and the line-in-the-sand tale did not appear until the 1880s.

But the thing about ‘last stand’ battles is that they generally turn out badly for the losers. Very badly. I thought about this when Chuck Duross, back when he was head of the Department of Justice’s (DOJ) Foreign Corrupt Practices Act (FCPA) unit, said at a conference that he viewed anti-corruption compliance practitioners as “The Alamo” in terms of the last line of defense in the prevention of compliance violations. I gingerly raised my hand and acknowledged his tribute to the great state of Texas, but pointed out that all the defenders were slaughtered, so perhaps another analogy was appropriate. Everyone had a good laugh at the conference back then. But in reflecting on the history of my state and what the Alamo means to us all, I have wondered if my initial response was too facile.

What happens to a Chief Compliance Officer (CCO) or compliance practitioner when they have to make a stand? Do they make the ultimate corporate sacrifice? Will they receive the equivalent of the corporate execution the defenders of the Alamo received? This worry persists even if the person formally “resigned to pursue other opportunities.” Michael Scher has been a leading voice in protecting compliance officers. In a post entitled Michael Scher Talks to the Feds, he quoted a compliance officer (CO) working in Asia who asked for recognition and protection: “A CO will not stand up against the huge pressure to maintain compliance standards if he does not get sufficient protection under the law. Most COs working in the overseas operations of U.S. companies are not U.S. citizens, but they are usually the first to identify violations. Since the FCPA deals with foreign corruption, how could the DOJ and SEC not protect these COs?”

The DOJ is now looking at the quality of your CCO and compliance function and how they are perceived, treated, and received in the corporate setting. In the 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the DOJ expanded its inquiry to evaluate the “sufficiency of the personnel and resources within the compliance function, in particular, whether those responsible for compliance have: (1) sufficient seniority within the organization; (2) sufficient resources, namely, staff to effectively undertake the requisite auditing, documentation, and analysis; and (3) sufficient autonomy from management, such as direct access to the board of directors or the board’s audit committee.”

Further, there were four specific areas of inquiry and evaluation: (1) Structure, (2) Experience and Qualifications, (3) Funding and Resources, and (4) Autonomy.

In the section entitled “Structure,” the evaluation made the following inquiries:

  • How does the compliance function compare with other strategic functions in the company in terms of stature, compensation levels, rank/title, reporting line, resources, and access to key decision-makers?
  • What has been the turnover rate for compliance and relevant control function personnel?
  • What role has compliance played in the company’s strategic and operational decisions? How has the company responded to specific instances where compliance raised concerns?
  • Have any transactions or deals been stopped, modified, or further scrutinized due to compliance concerns?

In the section entitled “Experience and Qualifications,” the 2024 ECCP made the following inquiries:

  • Do compliance and control personnel have the appropriate experience and qualifications for their roles and responsibilities?
  • Has the level of experience and qualifications in these roles changed over time?
  • Who reviews the compliance function’s performance, and what is the review process?

In the area of “Funding and Resources,” the 2024 ECCP asked:

  • Has there been sufficient staffing for compliance personnel to effectively audit, document, analyze, and act on the results of the compliance efforts?
  • Has the company allocated sufficient funds for this?
  • Have there been times when requests for resources by compliance and control functions have been denied, and if so, on what grounds?

Finally, in the area of “Autonomy,” the 2024 ECCP asked:

  • Do the compliance and relevant control functions have direct reporting lines to any member of the board of directors and/or the audit committee?
  • How often do they meet with directors?
  • Are members of the senior management present for these meetings?
  • How does the company ensure the independence of the compliance and control personnel?

These inquiries are deeper and more robust than before, focusing squarely on the CCO and the compliance team. If your compliance team is run on a shoestring, you will likely be downgraded for your overall commitment to FCPA compliance. The same is true for promotions and other advancement opportunities within an organization. Not many organizations have a compliance function so mature that a CCO is appointed to another senior-level position.

Upon further reflection, Duross was correct, and the Alamo reference was appropriate for compliance officers. Sometimes we must draw a line in the sand with management. And when we do, we have to cross that line to get on the right side of the issue, and the consequences be damned. The DOJ has clarified that it expects CCOs and compliance professionals to draw that line when necessary, and that when they do, companies must heed their warnings.

The Starliner, Culture and Compliance: Leadership Lessons from a NASA Investigation Report

Corporate compliance professionals spend a lot of time talking about controls, training, third parties, and investigations. Yet the hard truth is that the most important control environment sits above all of that: leadership behavior and the culture it creates. That is why this NASA investigation report on the Boeing CST-100 Starliner Crewed Flight Test (CFT) is such a useful case study. It is a technical report, to be sure. But it is also a cultural, leadership, and governance report. NASA’s bottom line is unambiguous: technical excellence and safety require transparent communication and clear roles and responsibilities, not as slogans, but as operating requirements that must be institutionalized so safety is never compromised in pursuit of schedule or cost.

If you are a Chief Compliance Officer, General Counsel, or business leader, you should read this report the way you read an enforcement action. Not to gawk. Not to assign blame. But to harvest lessons for your own organization before you have your own high-visibility close call.

The incident(s) that led to the report

The CFT mission launched June 5, 2024, as a pivotal step toward certifying Starliner to transport astronauts to the International Space Station. It was planned as an 8-to-14-day mission but was extended to 93 days after significant propulsion system anomalies emerged. Ultimately, the Starliner capsule returned uncrewed, while astronauts Barry “Butch” Wilmore and Sunita “Suni” Williams returned aboard SpaceX’s Crew-9 Dragon in March 2025. In February 2025, NASA chartered a Program Investigation Team (PIT) to examine the technical, organizational, and cultural factors contributing to the anomalies.

The report describes four major hardware anomaly areas, including Service Module RCS thruster fail-offs that temporarily caused a loss of six-degrees-of-freedom (6DOF) control during ISS rendezvous and required in-situ troubleshooting to recover enough capability to dock, a Crew Module thruster failure during descent that reduced fault tolerance, and helium manifold leaks in which seven of eight Service Module helium manifolds leaked during the mission. The PIT further determined that the 6DOF loss during rendezvous met the criteria for a Type A mishap (or at least a high-visibility close call), underscoring how close the program came to a very different ending.

That is the “what.” For compliance professionals, the “so what” is that NASA did not treat this as a purely engineering problem. It treated it as an integrated system failure, in which culture and leadership either reduce risk or magnify it.

Lesson 1: Decision authority is culture, not paperwork

One of the report’s clearest threads is that fragmented roles and responsibilities delayed decision-making and eroded confidence. In the compliance world, unclear decision rights become the breeding ground for “informal governance”: private conversations, end-runs around committees, and decisions that are never fully documented. Over time, that becomes a shadow-control environment that your policies cannot touch.

Compliance action steps

  • Define decision rights for the riskiest calls (high-risk third parties, market entry, major remediation, critical incidents).
  • Require a short, written record of: facts reviewed, options considered, dissent captured, decision made, and owner accountable.
  • Separate “recommendation authority” from “approval authority” so everyone knows where they sit.

Lesson 2: Transparency is a control, and selective data sharing destroys trust

The report explicitly flags that the lack of data access fueled concerns about selective information sharing. Interviewees described frustration that information could be filtered, selectively chosen, or sanitized, which eroded confidence in the process and people. It also notes reports of questions being labeled “too detailed” or “out of scope” without mechanisms to ensure concerns were addressed. That is the compliance danger zone. When teams believe the narrative matters more than the data, they stop escalating early. They start documenting defensively. They seek safety in silence.

Compliance action steps

  • Build “open data” expectations into your incident response and investigative protocols.
  • Create a defined pathway for technical or subject-matter dissent to be logged, reviewed, and dispositioned.
  • Treat meeting notes and decisions as governed records, not optional artifacts.

Lesson 3: Risk acceptance without rigor becomes “unexplained anomaly tolerance”

NASA calls out “anomaly resolution discipline” and warns that repeated acceptance of unexplained anomalies without root cause can lead to recurrence. That single lesson belongs on a poster in every compliance office. In corporate terms, “unexplained anomalies” are recurring control exceptions, repeat hotline themes, repeated third-party red flags, and audit findings that are “managed” rather than fixed. If leadership normalizes that pattern, it teaches the organization that closure is more important than correction.

Compliance action steps

  • Require root cause analysis for repeat issues, not just incident closure.
  • Set escalation thresholds for “repeat with no root cause” findings.
  • Audit remediation quality, not only remediation completion.
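
The second of those action steps is straightforward to operationalize. As a sketch (the two-repeat threshold and the record format are assumptions, not a standard), a program could flag any control that keeps generating findings without a documented root cause:

```python
# Illustrative sketch: flagging "repeat issue, no root cause" patterns.
# The threshold of 2 repeats and the record fields are assumptions.
from collections import defaultdict

findings = [  # (control_id, root_cause_documented) from closed issues
    ("GIFTS-07", False),
    ("GIFTS-07", False),
    ("TPRM-12", True),
    ("GIFTS-07", False),
]

repeats = defaultdict(int)
for control_id, has_root_cause in findings:
    if not has_root_cause:
        repeats[control_id] += 1

escalate = [c for c, n in repeats.items() if n >= 2]
print(escalate)  # ['GIFTS-07']: closed three times, explained zero times
```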

Lesson 4: Partnerships fail when “shared accountability” is not operationalized

The report emphasizes that shared accountability in the commercial model was inconsistently understood and applied. It also notes that historical relationships and private conversations outside formal forums created perceptions of blurred boundaries, favoritism, and lack of objectivity, whether or not those perceptions were accurate. Compliance teams have seen this movie. Think distributors, joint ventures, outsourced compliance support, and major technology partners. If accountability is shared in theory but siloed in practice, something will fall through the cracks. Usually, it falls right into your lap when regulators arrive.

Compliance action steps

  • Define “shared accountability” in contracts, governance charters, and escalation protocols.
  • Ensure independence and objectivity are protected by design, not by personality.
  • Create joint forums where data is shared broadly, dissent is recorded, and decisions are made openly.

Lesson 5: Burnout is a risk factor, and meeting chaos is a governance failure

The report’s recommendations recognize the operational reality: high-pressure environments can degrade decision quality. It calls for “pulse checks,” rotation of high-pressure responsibilities, contingency staffing, and time protection for deep work to proactively address burnout and improve decision-making under mission conditions. Compliance professionals should take that to heart. Crisis cadence is sometimes unavoidable. Permanent crisis cadence is a leadership choice. And it carries predictable consequences: shortcuts, missed details, weakened documentation, and poor judgment.

Compliance action steps

  • Build surge staffing plans for investigations and incident response.
  • Rotate incident commander roles when events extend beyond days.
  • Protect time for analysis, not just meetings and status updates.

Lesson 6: Accountability must be visible, not performative

NASA does not bury the human dimension. The report contains leadership recommendations to speak openly with the joint team about leadership accountability, including concurrence with the report and reclassification as a mishap, and to hold a leadership-led stand-down day focused on reflection, accountability concerns, and rebuilding trust. For corporate leaders, this is where trust is won or lost after a crisis. Employees can tolerate a hard outcome. They struggle to tolerate spin. If your organization communicates externally with confidence but internally with vagueness, your culture learns the wrong lesson: optics first, truth second.

Compliance action steps

  • After a major incident, publish an internal accountability and remediation plan with owners and timelines.
  • Provide regular updates on what has been completed, what is delayed, and why.
  • Make it safe for the workforce to ask questions in interactive forums, as NASA recommends.

Lesson 7: Trust repair requires a plan, not a pep talk

One of the most useful artifacts in the report is a sample Organizational Trust Plan. It sets a goal to rebuild trust by establishing clear expectations, open accountability, and shared commitment to safety and mission success. It includes objectives around transparent communication, acknowledging past challenges, reinforcing shared values, and structured engagement. It then lays out action steps: leadership engagement, facilitated sessions, outward expressions of accountability, teamwide rollout, training and coaching, and communication through a written plan and regular updates.

That is exactly the kind of operational discipline compliance leaders should bring to culture work. Culture does not change because someone gives a speech. Culture changes when the organization changes how it makes decisions, treats dissent, and follows through.

Five key takeaways for the compliance professional

  1. Clarify decision rights before the crisis. Ambiguity becomes politics under pressure.
  2. Make transparency non-negotiable. Perceived filtering of data destroys credibility.
  3. Do not normalize unexplained anomalies. Repeat issues without a root cause are future failures.
  4. Operationalize shared accountability with partners. Otherwise, it is a slogan.
  5. Rebuild trust with a written plan and visible accountability. Trust repair is a managed process.

In the end, the Starliner lesson for compliance is simple: controls matter, but culture decides whether controls work when it counts. If leadership cannot run disagreements well, cannot share data broadly, and cannot demonstrate accountability after the fact, the best-written compliance program in the world will fail the moment the pressure rises.

Categories
Blog

5 Strategic Board Playbooks for AI Risk (and a Bootcamp)

Artificial intelligence is no longer a future-state technology risk. It is a current-state governance issue. If AI is being deployed inside governance, risk, and compliance functions, then it is already shaping how your company detects misconduct, prioritizes investigations, manages regulatory obligations, and measures program effectiveness. That makes AI risk a board agenda item, not a management footnote.

In an innovation-forward organization, the goal is not to slow AI adoption. The goal is to professionalize it. Boards of Directors and Chief Compliance Officers (CCOs) should approach AI the way they approached cybersecurity a decade ago: move it from “interesting updates” to a structured reporting cadence with measurable controls, clear accountability, and director education that raises the collective literacy of the room.

Today, we consider five strategic playbooks designed for a Board of Directors and a CCO in an industry-agnostic organization that is building AI in-house, does not yet have a model registry, and runs a cross-functional AI governance committee chaired and owned by Compliance. The program must also work across multiple regulatory regimes, including the DOJ Evaluation of Corporate Compliance Programs (ECCP), the EU AI Act, and a growing patchwork of state laws. We end with a proposal for a board boot camp on directors’ responsibilities to oversee AI in their organization.

Playbook 1: Put AI Risk on the Calendar, Not on the Wish List

If AI risk is always “important,” it becomes perpetually postponed. The first play is procedural: create a standing quarterly agenda item with a consistent structure.

Quarterly board agenda structure (20–30 minutes):

  1. What changed since last quarter? Items such as new use cases, material model changes, new regulations, and major control exceptions.
  2. Full AI risk dashboard, with 8–10 board KPIs, trends, and thresholds.
  3. Top risks and mitigations, including three headline risks with actions, owners, and dates.
  4. Assurance and testing, including internal audit coverage, red-teaming results, and remediation progress.
  5. Decisions required, including policy approvals, risk appetite adjustments, and resourcing.

This cadence does two things. First, it forces repeatability. Second, it creates institutional memory. Boards govern better when they can compare quarter-over-quarter progress, not when they receive one-off deep dives that cannot be benchmarked.

Playbook 2: Build the AI Governance Operating Model Around Compliance Ownership

In your design, Compliance owns AI governance and its use throughout the organization, supported by a cross-functional AI governance committee. That is a strong model, but only if it is explicit about responsibilities.

Three lines of accountability:

  • Compliance (Owner): policy, risk framework, controls, training, and board reporting.
  • AI Governance Committee (Integrator): cross-functional prioritization, approvals, escalation, and issue resolution.
  • Build Teams (Operators): documentation, testing, change control, and implementation evidence.

Boards should ask one simple question each quarter: Who is accountable for AI governance, and how do we know it is working? If the answer is “everyone,” then the real answer is “no one.” Your model makes the answer clear: Compliance owns it, and the committee operationalizes it.

Playbook 3: Create the AI Registry Before You Argue About Controls

You have no model registry yet. That is the first operational gap to close, because you cannot govern what you cannot inventory. In a GRC context, this is not a “nice to have.” Without an inventory, you cannot prove coverage, you cannot scope an audit, you cannot define reporting, and you cannot explain to regulators how you know where AI is influencing decisions.

Minimum viable AI registry fields (start simple):

  • Use case name and business owner;
  • Purpose and decision impact (advisory vs. automated);
  • Data sources and data sensitivity classification;
  • Model type and version, with change log;
  • Key risks (bias, privacy, explainability, security, reliability);
  • Controls mapped to the risk (testing, monitoring, approvals);
  • Deployment status (pilot, production, retired); and
  • Incident history and open issues.
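To make those fields concrete, here is a minimal sketch (assuming Python 3.10+) of what one registry record might look like. The field names, types, and comments are illustrative assumptions, not a prescribed schema; adapt them to your GRC platform.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    """One record in a minimum viable AI registry (illustrative schema)."""
    use_case: str                    # use case name
    business_owner: str              # accountable business owner
    purpose: str                     # what decision the AI influences
    decision_impact: str             # "advisory" or "automated"
    data_sources: list[str]          # systems or datasets feeding the model
    data_sensitivity: str            # e.g., "public", "internal", "restricted"
    model_type: str                  # e.g., "classifier", "LLM"
    model_version: str               # current version; change log kept separately
    key_risks: list[str]             # bias, privacy, explainability, security, reliability
    controls: dict[str, str]         # risk -> mapped control (testing, monitoring, approvals)
    status: str                      # "pilot", "production", or "retired"
    risk_tier: str | None = None     # EU AI Act style tier or internal tier
    incidents: list[str] = field(default_factory=list)  # incident history and open issues
    last_reviewed: date | None = None
```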

Boards do not need the registry details. They need the coverage metric and the assurance that the registry is complete enough to support governance.

Playbook 4: Align to the ECCP, EU AI Act, and State Laws Without Creating a Paper Program

Many organizations make a predictable mistake: they respond to multiple frameworks by producing multiple binders. That creates activity, not effectiveness. A better approach is to use a single control architecture to map to multiple requirements. The board should see one integrated story:

  • DOJ ECCP lens: effectiveness, testing, continuous improvement, accountability, and resourcing;
  • EU AI Act lens: risk classification, transparency, human oversight, quality management, and post-market monitoring; and
  • State law lens: privacy, consumer protection concepts, discrimination prohibitions, and notice requirements where applicable.

This mapping becomes powerful when it ties back to the board dashboard. The board is not there to read statutes. The board is there to govern outcomes.

Playbook 5: Use a Board Dashboard That Measures Coverage, Control Health, and Outcomes

The scenario calls for a combined dashboard and narrative with 8–10 KPIs. Here is a board-level set designed for AI in governance, risk, and compliance functions, with an in-house build, internal audit, and red teaming for assurance.

Board AI Governance KPIs (8–10)

1. AI Inventory Coverage Rate

Percentage of AI use cases captured in the registry versus estimated footprint.

2. Risk Classification Completion Rate

Percentage of registered use cases risk-classified (EU AI Act style tiers or internal tiers).

3. Pre-Deployment Review Pass Rate

Percentage of deployments that cleared required testing and approvals on first submission.

4. Model Change Control Compliance

Percentage of model changes executed with documented approvals, testing evidence, and rollback plans.

5. Explainability and Documentation Score

Percentage of in-scope use cases with complete documentation, rationale, and user guidance.

6. Monitoring Coverage

Percentage of production use cases with active monitoring for drift, anomalies, and performance degradation.

7. Issue Closure Velocity

Median days to close AI governance issues, by severity.

8. Internal Audit Coverage and Findings Trend

Number of audits completed, rating distribution, repeat findings, and remediation status.

9. Red Team Findings and Remediation Rate

Number of material vulnerabilities identified and percentage remediated within the target time.

10. Escalations and Incident Rate

Number of AI-related incidents or escalations (including near-misses), with severity and lessons learned.

These KPIs do not require vendor controls and align with an in-house build model. They also support both board oversight and compliance management.
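To show how such KPIs can be computed rather than merely asserted, here is a minimal sketch of the first two. The record shape and the estimated footprint figure are assumptions; in practice, the footprint would come from a use-case discovery exercise.

```python
def inventory_coverage_rate(registered_count: int, estimated_footprint: int) -> float:
    """KPI 1: percentage of the estimated AI footprint captured in the registry."""
    return 100.0 * registered_count / estimated_footprint

def risk_classification_completion(entries: list[dict]) -> float:
    """KPI 2: percentage of registered use cases that carry a risk tier."""
    if not entries:
        return 0.0
    classified = sum(1 for e in entries if e.get("risk_tier") is not None)
    return 100.0 * classified / len(entries)

if __name__ == "__main__":
    # Illustrative numbers only: three registered use cases against an estimated five.
    entries = [{"risk_tier": "high"}, {"risk_tier": None}, {"risk_tier": "limited"}]
    print(f"Inventory coverage: {inventory_coverage_rate(len(entries), 5):.0f}%")   # 60%
    print(f"Risk classification: {risk_classification_completion(entries):.0f}%")   # 67%
```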

AI Director Boot Camp

In the scenario above, the board has a medium level of AI literacy and needs a boot camp. Directors do not need to become engineers. They need a common vocabulary and a governance frame. The recommended design is a practical half-day session that includes the following.

  1. AI in the company’s operating model, meaning where it touches decisions, risk, and compliance outcomes.
  2. AI risk taxonomy, including bias, privacy, security, explainability, reliability, and third-party risk.
  3. Regulatory landscape overview, including the DOJ ECCP approach to effectiveness, the EU AI Act risk framing, and recurring state law themes.
  4. Governance model walkthrough to ensure directors understand the registry, risk classification, controls, monitoring, and escalation.
  5. Tabletop exercises, such as an AI incident in a GRC context with false negatives in monitoring or biased triage.
  6. Board oversight duties, teaching directors how to meet their obligations, including which questions to ask quarterly and which thresholds trigger escalation.

The deliverable from the boot camp should be a one-page “Director AI Oversight Guide” with the KPIs, escalation triggers, and the quarterly agenda structure.

The Bottom Line for Boards and CCOs

This is the moment to treat AI risk like a board-governed discipline. The organizations that get it right will not be the ones with the longest AI policy. They will be the ones with the clearest operating model, the most reliable reporting cadence, and the strongest evidence of control effectiveness.

If Compliance owns AI governance, then Compliance must also own the proof. That proof is delivered through a registry, a quarterly board agenda item, a balanced KPI dashboard, and assurance through internal audit and red teaming. Add a director boot camp to create shared understanding, and you have the beginnings of a program that is innovation-forward and regulator-ready.

That is the strategic playbook: not fear, not hype, but governance.

Categories
Blog

When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff from the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance now must catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and managing evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands an operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render that information discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.

Categories
Blog

AI and Work Intensification – The Compliance Response

There is a comforting myth circulating in corporate hallways and boardrooms: if we deploy AI across governance, risk, and compliance, the work will shrink. Investigations will move faster. Monitoring will get smarter. Policies will draft themselves. Third-party diligence will become push-button. The compliance function will finally “do more with less.” That myth was challenged in a recent Harvard Business Review article, “AI Doesn’t Reduce Work—It Intensifies It,” by Aruna Ranganathan and Xingqi Maggie Ye.

The authors believe that what happens is work intensification. AI expands throughput, increases expectations, and generates more outputs that still require human judgment, verification, and accountability. Instead of fewer tasks, you get more tasks. Instead of simpler work, you get faster cycles, more iterations, and new forms of quality risk. For the Chief Compliance Officer (CCO) leading AI governance, this is not a side effect. It is a core operating model issue.

If compliance owns AI governance across the enterprise, compliance must also own the discipline of how humans and AI work together. I call that discipline an AI practice standard, management guidance that sets expectations for pace, quality, verification, escalation, and sustainable workload.

Today, we consider this issue as a compliance operating model challenge across all GRC workflows: policy management, investigations, hotline intake, monitoring and surveillance, third-party due diligence, regulatory change management, audit planning, training, and reporting. The tone is cautionary because the risk is real: a compliance function that mistakes AI output volume for compliance effectiveness.

The Compliance Operating Model Problem: More Output, More Review, More Risk

Compliance work is not manufacturing. It is judgment work. It requires discretion, context, and defensible decisions. AI can accelerate inputs and draft outputs, but it does not accept responsibility. The CCO does. The business does. The board does. When AI enters GRC workflows, it tends to create four pressure points:

1. Compression of timelines. If a draft can be produced in five minutes, someone will ask why it cannot be finalized in five more.

2. Explosion of options. AI generates multiple versions, scenarios, and recommendations, which expands decision load and review cycles.

3. Higher volume of “signals.” AI-enabled monitoring produces more alerts, more pattern matches, and more anomalies. Much of it will be noise. All of it requires triage.

4. Illusion of completion. Teams begin to treat a plausible AI answer as a finished work product. That is how quality defects are born.

The result is a compliance function that looks “faster” while becoming more fragile. Burnout rises. Rework increases. Errors creep into documentation. Controls become less reliable because the humans operating them are overwhelmed by the sheer volume AI makes possible.

All this means the question for the CCO is not, “How do we roll out AI?” The question is, “How do we govern the human work that AI intensifies?”

Five KPIs for Work Intensification Risk

Next, we consider five KPIs specifically designed to measure work intensification. These are board-credible, compliance-owned, and operationally measurable.

1. After-Hours Compliance Work Index

Percentage of compliance work activity occurring outside standard business hours (for example, 6 p.m. to 7 a.m.), measured across key systems (case management, GRC platform activity logs, email metadata, collaboration tool usage). This matters because AI compresses timelines and pushes work into nights and weekends. This index serves as an early warning for burnout and quality failures.

2. AI Rework Rate

Percentage of AI-assisted work products requiring material revision after human review (policies, investigation summaries, risk narratives, diligence reports). This matters because if AI increases speed but doubles rework, you are not gaining productivity. You are shifting effort downstream.

3. Cycle Time Compression vs. Quality Defect Ratio

Track cycle time reductions alongside quality defects (corrections, escalations, documentation gaps, audit findings). You can express this KPI as Cycle Time Improvement / Defect Increase.

This matters because faster is not better if defects rise. This ratio keeps leadership honest.
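As an illustration (with assumed numbers): if AI-assisted drafting cuts median investigation cycle time by 30 percent while material defects rise by 10 percent, the ratio is 3.0 and the trade-off is favorable. If defects instead rise by 40 percent against the same 30 percent improvement, the ratio falls to 0.75, a signal that speed is being purchased with quality.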

4. Alert-to-Action Conversion Rate

Percentage of AI-generated alerts that result in a confirmed issue, investigation, remediation, or control enhancement. This matters because AI intensifies monitoring. This KPI exposes whether you are drowning in noise or generating actionable intelligence.

5. Burnout Signal Composite

A quarterly composite score built from pulse survey measures (fatigue, workload, autonomy), attrition in compliance roles, sick leave usage trends, and employee assistance program utilization patterns. This matters because compliance effectiveness depends on people. Burnout is a control failure risk.

These five metrics give the CCO and board a shared view of whether AI is improving the compliance function or simply accelerating it toward exhaustion.

How to Measure the Leading Indicators

Measuring after-hours work, cycle time, quality defects, and burnout indicators requires practical instrumentation. Here is a measurement approach that is realistic and defensible.

After-Hours Work

  • Use system log data from the case management, GRC, and document management platforms to track timestamped activity.
  • Supplement with email and collaboration metadata to measure volume outside standard hours.
  • Report trends by team and workflow, not individuals. This is about operating model health, not surveillance.
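Here is a minimal sketch of how the index might be computed from timestamped activity logs, assuming the 6 p.m. to 7 a.m. window from the KPI definition above and a simple (team, timestamp) record shape; both are illustrative assumptions.

```python
from datetime import datetime

AFTER_HOURS_START = 18  # 6 p.m.
AFTER_HOURS_END = 7     # 7 a.m.

def is_after_hours(ts: datetime) -> bool:
    """True if the activity falls outside standard hours or on a weekend."""
    return ts.hour >= AFTER_HOURS_START or ts.hour < AFTER_HOURS_END or ts.weekday() >= 5

def after_hours_index(events: list[tuple[str, datetime]]) -> dict[str, float]:
    """Percentage of activity occurring after hours, reported by team, never by individual."""
    totals: dict[str, int] = {}
    after: dict[str, int] = {}
    for team, ts in events:
        totals[team] = totals.get(team, 0) + 1
        if is_after_hours(ts):
            after[team] = after.get(team, 0) + 1
    return {team: 100.0 * after.get(team, 0) / count for team, count in totals.items()}
```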

Cycle Time

  • Establish “start” and “stop” definitions for each workflow:
    • Investigations: intake date to closure date
    • Due diligence: request date to clearance date
    • Policy updates: drafting start to published version
    • Regulatory change: trigger identification to implementation
  • Track AI-assisted versus non-AI-assisted cycle times to isolate the impact.
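One hedged way to isolate that impact: compute median cycle time separately for AI-assisted and manual items. The record shape below (opened date, closed date, AI-assisted flag) is an assumption about what your case management system can export.

```python
from datetime import date
from statistics import median

def cycle_times_by_cohort(items: list[tuple[date, date, bool]]) -> dict[str, float]:
    """Median days from start to stop, split into AI-assisted and manual cohorts."""
    cohorts: dict[str, list[int]] = {"ai_assisted": [], "manual": []}
    for opened, closed, ai_assisted in items:
        cohorts["ai_assisted" if ai_assisted else "manual"].append((closed - opened).days)
    return {name: float(median(days)) for name, days in cohorts.items() if days}
```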

Quality Defects

  • Define defects as “items requiring material correction after initial completion,” including:
    • Incomplete documentation
    • Wrong risk rating or missing rationale
    • Incorrect regulatory mapping
    • Reopened cases due to insufficient analysis
    • Audit findings tied to workflow execution
  • Capture defects through QA sampling, supervisor review logs, audit results, and post-incident reviews.

Burnout Indicators

  • Run a quarterly pulse survey with 5–7 questions on workload, pace, clarity, and ability to disconnect.
  • Track voluntary attrition and vacancy duration for compliance roles.
  • Include aggregate HR indicators such as overtime trends or sick leave usage, where available.
  • Use a composite score and trend it. The trend line is what matters.
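One illustrative way to build that composite: normalize each indicator to a 0–100 scale where higher means more burnout risk, then take a weighted average and trend it quarterly. The indicator names and weights below are assumptions, not a validated instrument.

```python
# Weights are local policy choices, not empirical constants; they sum to 1.0.
WEIGHTS = {
    "pulse_survey": 0.40,      # fatigue, workload, ability to disconnect
    "attrition": 0.25,         # voluntary attrition vs. baseline
    "sick_leave": 0.20,        # sick leave trend vs. baseline
    "eap_utilization": 0.15,   # employee assistance program usage trend
}

def burnout_composite(indicators: dict[str, float]) -> float:
    """Weighted 0-100 composite; expects all four keys, each normalized to 0-100.
    The quarterly trend line matters more than any single level."""
    return sum(weight * indicators[name] for name, weight in WEIGHTS.items())
```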

The key is to build instrumentation without creating a culture of monitoring employees. Your goal is not to watch people. Your goal is to protect the control environment.

Adopt an Enterprise AI Practice Standard Now

For an innovation-forward company, the right move is not to slow down. The right move is to govern how you speed up. The call to action is simple and strong: adopt an enterprise AI practice standard as management guidance, owned by Compliance, implemented across all GRC workflows, measured by the five work-intensification KPIs, and tested by internal audit and red teaming.

If you do that, you gain three things immediately:

1. A sustainable operating model

2. Defensible governance for regulators and boards

3. A compliance function that remains credible under pressure

AI can make compliance better. But only if the humans who run compliance can still breathe.