Sunday Book Review

Sunday Book Review: March 29, 2026, The Top Books for COs Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. They could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In this episode, we look at four top books that every compliance professional should read and have in their library.

  1. The Complete Works of Sherlock Holmes by Arthur Conan Doyle
  2. Higher Ground by Alison Taylor
  3. The Honest Truth About Dishonesty by Dan Ariely
  4. The Power of Habit by Charles Duhigg
AI Today in 5

AI Today in 5: March 19, 2026, The Elasticity Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. The stories may come from the worlds of business, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. Elasticity as a compliance standard in the age of AI. (UCToday)
  2. Context-first AI for Co-Pilot. (FinTechGlobal)
  3. AI agents to reduce discovery costs. (BusinessWire)
  4. GSA AI clause. (Holland & Knight)
  5. How the military is using AI. (CBS)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Blog

Vendor AI Risk Is the New Third-Party Risk Frontier: From Contracts to Compliance Evidence

For years, compliance professionals have understood a basic truth about third-party risk: your company can outsource a function, but it cannot outsource accountability. That principle has long applied to distributors, agents, resellers, consultants, customs brokers, and supply-chain partners. In the age of artificial intelligence, it now applies equally to AI vendors.

And here is the key issue. Most companies are not building AI entirely in-house. They are licensing models, embedding third-party copilots, procuring AI-enabled platforms, connecting external APIs, and relying on vendors for everything from data enrichment to automated decision support. In other words, the AI stack is increasingly a third-party stack.

That means AI governance is rapidly becoming a third-party risk management problem. For compliance officers, this is a critical shift. The question is no longer simply whether your organization is using AI. The question is whether you have sufficient contractual leverage, operational visibility, and documentary evidence to demonstrate that third-party AI risk is managed in a credible, defensible, and scalable manner. If the answer is no, then your AI program may be far less mature than it looks on the PowerPoint slide.

AI Is Rarely a Standalone Tool

One of the most dangerous myths in the current AI conversation is that “the AI” is a single product that can be evaluated once and approved once. That is not how most enterprise deployments work. A single AI-enabled workflow may involve a foundation model provider, a cloud host, a retrieval layer, one or more data processors, a business application vendor, and internal configuration choices that change over time. Add subcontractors, model updates, and cross-border data flows, and you begin to see the real picture. The risk does not sit neatly with any single vendor. It sits across an ecosystem.

That matters because when something goes wrong, regulators, plaintiffs, auditors, and boards will not care that the problem sat in a vendor dependency chain. They will ask what your company knew, what it required, what it monitored, and what evidence it retained. The bottom line is that vendor AI risk has to move out of the procurement annex and into the core compliance framework.

Start with a More Realistic Definition of Third-Party AI Risk

When many companies think about vendor AI risk, they default to privacy and cybersecurity. Those issues are absolutely important, but they are only the beginning.

Third-party AI risk can also include opaque training data, weak model governance, unexplained output variability, inaccurate summarization, hidden subcontractors, unauthorized data retention, insufficient segregation of customer data, model changes without notice, untested bias, poor incident response, weak record retention, and limited auditability. If the tool affects regulated processes, the stakes rise even higher.

Think about the real-world use cases now being deployed. AI tools support customer communications, onboarding, HR screening, contract review, due diligence triage, transaction monitoring, investigations, and report drafting. In each of those settings, the company may be relying on output it did not fully generate, cannot fully inspect, and may not be able to reproduce later without the right controls in place.

That is where compliance must lean in. The core question is not whether the vendor claims to use responsible AI. The core question is whether your company can obtain sufficient evidence that the system is well-controlled for its intended use.

Contracts Are the First Line of Governance

If AI risk is outsourced to vendors, contracts become the first line of governance. Yet too many AI agreements still read like standard software contracts with a few privacy words sprinkled on top. That is not good enough. A sound AI vendor agreement should, at a minimum, address permitted use, data rights, confidentiality, security, model-change notification, subcontractor transparency, performance expectations, audit rights, incident reporting, regulatory cooperation, and termination support.

Most importantly, the contract should define the use case. That sounds basic, but it is essential. A vendor tool approved for low-risk drafting support is not automatically appropriate for high-impact decision-making. If the intended use is not defined, the actual use will drift. And drift is where governance begins to fail. The agreement should also make clear what data the vendor can use, for what purpose, and for how long. Can the vendor use your inputs to train its models? Can it retain prompts or outputs? Can it use metadata to improve service? Can affiliates or subprocessors access the data? If those questions are not answered with precision, you lack clarity. You have hope. Hope is not a control.

SLAs Need to Measure More Than Uptime

Service level agreements are another area where companies need to upgrade their thinking. Traditional SLAs focus on uptime, availability, and support response times. Those are still necessary, but with AI, they are not sufficient. For an AI-enabled service, the SLA discussion should expand to include quality, reliability, explainability support, incident escalation, and change transparency. A system can be available 99.9% of the time and still produce garbage. That is not a service success. That is a control failure delivered efficiently.

I am not suggesting that every company can negotiate custom model-accuracy guarantees from every AI vendor. In many cases, that will not be realistic. But companies can require practical commitments around things like response logging, traceability, notification of material model or system changes, error-handling workflows, and support for validation testing. They can define turnaround times for incidents involving hallucinations, security breaches, inappropriate outputs, or data leakage. They can require that the vendor cooperate with investigations and remediation.

That is where the compliance function should partner closely with legal, procurement, information security, and the business owner. The goal is not to demand impossible warranties. The goal is to create enough visibility so that the company is not flying blind.
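To make the idea of an AI-aware SLA concrete, here is a minimal sketch in Python. The metric names, thresholds, and the `SlaReport` fields are illustrative assumptions, not terms from any actual vendor agreement; the point is simply that the periodic check covers more than availability.

```python
from dataclasses import dataclass

@dataclass
class SlaReport:
    # Hypothetical monthly figures a vendor might be asked to report.
    uptime_pct: float              # the traditional availability metric
    logged_response_pct: float     # share of outputs with retained traceability logs
    change_notice_on_time: bool    # material model/system changes notified in advance
    incident_ack_hours: float      # time to acknowledge hallucination/leakage incidents

def evaluate_sla(report: SlaReport) -> list[str]:
    """Return the SLA commitments breached this period (illustrative thresholds)."""
    breaches = []
    if report.uptime_pct < 99.9:
        breaches.append("availability below 99.9%")
    if report.logged_response_pct < 100.0:
        breaches.append("responses missing traceability logs")
    if not report.change_notice_on_time:
        breaches.append("material change deployed without advance notice")
    if report.incident_ack_hours > 24:
        breaches.append("incident acknowledgement slower than 24 hours")
    return breaches

# A system can hit 99.9% uptime and still breach the commitments that matter.
print(evaluate_sla(SlaReport(99.95, 92.0, False, 30.0)))
```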

Audit Rights Must Be Usable, Not Decorative

Many vendor contracts include broad-sounding audit clauses that are so restricted, delayed, or indirect that they provide little real assurance. In the AI context, that problem is magnified. If you cannot meaningfully assess controls over data handling, model governance, subprocessors, logging, incident response, and change management, then your audit right is little more than legal wallpaper.

A usable audit-right framework does not always mean sending a team on-site with clipboards. It can include layered assurance mechanisms: independent third-party assessments, SOC reports, model governance summaries, penetration-test results, bias testing documentation, incident logs, certifications, tabletop exercise results, and the right to ask targeted follow-up questions. In higher-risk arrangements, it may also include deeper review rights, validation support, or the ability to commission an independent assessment.

From Due Diligence to Ongoing Monitoring

Once a contract is signed, the real work begins. Models change. Vendors add subprocessors. Features evolve. Use cases expand. Business users discover new workflows that procurement never contemplated. A vendor that began as a low-risk drafting tool can quietly become embedded in a regulated process six months later. That is why monitoring matters.

Companies should inventory AI vendors and classify them by risk. They should map which business processes depend on them, what data they touch, what decisions they inform, and what regulatory exposure they create. They should require periodic attestations, monitor control changes, review incidents, reassess data use, and revisit whether the tool is being used in line with approved purposes.
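As a sketch of what that inventory and classification could look like when reduced to code, consider the following. The tier labels and scoring rules are assumptions for illustration; a real framework would be calibrated to the company's own risk taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIVendor:
    name: str
    data_sensitivity: str    # "public", "internal", "personal", or "regulated"
    informs_decisions: bool  # does output feed decisions about people or money?
    regulated_process: bool  # does the tool touch a regulated process?

def risk_tier(v: AIVendor) -> str:
    """Illustrative tiering: escalate on sensitive data and decision impact."""
    if v.regulated_process or v.data_sensitivity == "regulated":
        return "high"
    if v.informs_decisions or v.data_sensitivity == "personal":
        return "medium"
    return "low"

# Hypothetical inventory entries, named for illustration only.
inventory = [
    AIVendor("meeting-summarizer", "internal", False, False),
    AIVendor("hr-screening-copilot", "personal", True, False),
    AIVendor("sanctions-triage-tool", "regulated", True, True),
]

for v in inventory:
    print(f"{v.name}: {risk_tier(v)}")  # low / medium / high
```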

This is also where shadow AI becomes a third-party problem. Employees often access AI functionality through existing vendors before compliance even realizes it is enabled. Suddenly, a platform you bought for workflow management has rolled out AI summarization, drafting, or analytics features. If no one is watching vendor change notices and product updates, the company can slide into AI use without ever consciously approving it. That is a governance gap.

Build a Compliance Evidence File

If there is one practical takeaway, it is this: for significant AI vendors, build a compliance evidence file.

By that, I mean a documented record showing the rationale for approval, the use case, the risk classification, the key contractual controls, the diligence performed, the evidence reviewed, the approvals obtained, and the monitoring steps required going forward. If the vendor supports a high-risk process, the file should also include validation results, escalation pathways, and a record of any incidents or material changes.
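A minimal sketch of what such a file could capture as structured data follows; the field names mirror the elements listed above, but the schema itself is an illustrative assumption, not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceFile:
    vendor: str
    use_case: str                    # the approved, defined use case
    risk_tier: str                   # classification from the vendor inventory
    approval_rationale: str          # why the company trusted this tool
    contractual_controls: list[str]  # e.g., data rights, audit rights, change notice
    diligence_evidence: list[str]    # e.g., SOC report, bias-testing documentation
    approvers: list[str]
    monitoring_steps: list[str]      # required ongoing oversight
    incidents: list[str] = field(default_factory=list)  # material changes/incidents

record = EvidenceFile(
    vendor="hr-screening-copilot",  # hypothetical vendor name
    use_case="first-pass resume triage; no automated rejections",
    risk_tier="medium",
    approval_rationale="validated against human reviewer decisions",
    contractual_controls=["no training on our data", "30-day change notice"],
    diligence_evidence=["SOC 2 Type II report", "vendor bias-testing summary"],
    approvers=["CCO", "CISO", "HR business owner"],
    monitoring_steps=["quarterly attestation", "review vendor change notices"],
)
```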

Why does this matter? Because when the board asks why the company trusted a third-party AI tool, you need a better answer than “the business wanted it.” When internal audit asks how control assurance was established, you need something more concrete than “a legal review of the contract.” And when a regulator asks how the company oversees outsourced AI risk, you need documentation that demonstrates a repeatable, risk-based process.

Five Questions Every CCO Should Ask

Every Chief Compliance Officer should be asking five simple questions right now.

  1. Do we know which vendors in our ecosystem are using or enabling AI?
  2. Have we classified those vendors based on data sensitivity and the business impact of the use case?
  3. Do our contracts clearly address data rights, change notification, incident response, and usable audit rights?
  4. Do our SLAs measure what matters for AI-enabled services, not just uptime?
  5. Can we produce evidence showing why a vendor was approved, what controls we relied on, and how the relationship is being monitored?

If the answer to any of those questions is no, the work is not done.

The Bottom Line

Third-party risk has always been about visibility, leverage, and evidence. AI does not change that. It intensifies it. The organizations that manage vendor AI risk well will not be the ones with the flashiest AI procurement strategy. They will be the ones that define use cases carefully, contract for transparency, demand usable assurance, monitor continuously, and retain evidence that their oversight is real.

That is where compliance comes in. Not as the department that slows innovation down, but as the function that makes outsourced innovation governable. Because in the end, if AI is rarely in-house, then AI governance cannot be either.

All Things Investigations

ATI In-House Insights: Challenges and Tips for Navigating a Changing Risk Landscape with Sarah Iles

In this episode of the ATI: In-House Insights Podcast, Mike DeBernardis speaks with seasoned in-house compliance leader Sarah Iles about navigating an ever-changing risk landscape shaped by political, geopolitical, regulatory, and technological shifts.

Sarah shares her background across manufacturing sectors and discusses how multinational compliance risks evolve as jurisdictional priorities shift, including sanctions, export controls, tariffs, sustainability, labor rights, data protection, and AI. They identify internal challenges, including a lack of infrastructure to address new risks, siloed ownership, and weak change management, and emphasize clear governance and accountability. Sarah advises going “back to basics” with the DOJ’s Evaluation of Corporate Compliance Programs: focus on real risk mitigation over form-heavy questionnaires, keep communication channels open through formal committees and informal connections, scale risk assessments appropriately, target communications to relevant audiences, escalate thoughtfully, and build resilient programs by expecting and embracing constant change.

Key highlights:

  • Geopolitics Drives Risk
  • Internal Adaptation Hurdles
  • Silos and Ownership
  • Culture and Change
  • Proactive Compliance Basics
  • Partnering With Business
  • Right-Sized Risk Assessments
  • Communicating Emerging Risks

Resources:

Sarah Iles LinkedIn

Mike DeBernardis LinkedIn

ATI: In-House Insights Podcast

Hughes Hubbard & Reed Website

Blog

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge cited by respondents was siloed data and difficulty integrating sources, at 56%. After that came the lack of a clear data strategy at 44% and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point. 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.
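To see why context is a control, compare a bare risk flag with one that carries its own metadata. A minimal sketch in Python; the field names and the staleness rule are illustrative assumptions, not drawn from the report.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskFlag:
    value: str                # e.g., "adverse media hit"
    trigger: str              # what produced the flag
    jurisdiction: str         # where it applies
    source: str               # which screening provider generated it
    as_of: date               # when it was last refreshed
    validated_by: str | None  # who, if anyone, confirmed it

flag = RiskFlag(
    value="adverse media hit",
    trigger="name match on bribery allegation",
    jurisdiction="BR",
    source="screening-vendor-A",  # hypothetical provider name
    as_of=date(2025, 11, 3),
    validated_by=None,  # unvalidated: the flag informs, it does not decide
)

# Without this context, the same flag can mislead as easily as it informs.
stale = (date.today() - flag.as_of).days > 180  # illustrative freshness rule
print(f"needs refresh: {stale}, validated: {flag.validated_by is not None}")
```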

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches such as zero-copy access and containerized applications, which can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that is genuinely controlled, or one that is improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management. 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next two years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.

Blog

COSO Meets GenAI: The Internal Controls Playbook for Compliance

If you are a compliance professional looking at your company’s GenAI rollout and wondering when the grown-ups will finally arrive, I have good news. They just did.

COSO has now stepped directly into the GenAI conversation with its new paper, Achieving Effective Internal Control Over Generative AI, and that matters a great deal. For those of us in compliance, internal audit, risk, and governance, COSO is not a shiny new acronym trying to catch the latest tech train. COSO is the train schedule. It is the framework that boards, auditors, controllers, and compliance professionals already understand. And with this publication, COSO has done something very important: it has translated GenAI risk into the language of internal control. That is exactly what the market needed.

Because up until now, too much of the GenAI discussion has lived in one of two places. Either it sat in the innovation lab with people talking breathlessly about transformation, or it sat in the legal department where everyone worried, quite correctly, about hallucinations, privacy, and bias. What has often been missing is the operational bridge between aspiration and assurance. COSO gives us that bridge. It says, in effect, GenAI is not outside your control environment. It is now part of it. And if it is part of it, then it must be governed, tested, monitored, and documented like any other significant business capability.

GenAI Does Not Change the Need for Control. It Changes the Terrain

One of the most important points in the COSO paper is that GenAI does not upend the COSO Internal Control-Integrated Framework. Rather, it changes the environment in which those controls operate. The five familiar COSO components remain the same: control environment, risk assessment, control activities, information and communication, and monitoring activities. What changes is the nature of the underlying risk. GenAI introduces probabilistic outputs, model drift, prompt injection, opaque reasoning, rapid configuration changes, and the adoption of shadow AI outside normal approval channels. That is a very useful framing for compliance officers.

It means we should stop treating AI governance as some exotic side project. If GenAI is used in operations, legal, finance, HR, procurement, investigations, or reporting, it belongs within your existing governance architecture. You do not need to invent a new religion. You need to apply the old disciplines to a new set of facts.

This is where compliance can and should lead. We understand what it means to build controls around fast-moving risk. We understand escalation, role clarity, training, monitoring, and accountability. COSO is effectively telling compliance professionals, “You already know more about governing GenAI than you think. Now apply that muscle memory with precision.”

A Capability-First Approach Is a Game Changer

The most practically useful innovation in the COSO guidance is its capability-first taxonomy. Rather than organizing AI controls by vendor, product name, or technical buzzwords, COSO focuses on what the GenAI system actually does. It identifies eight capability types: data extraction and ingestion; data transformation and integration; automated transaction processing and reconciliation; workflow orchestration; judgment, forecasting, and insight generation; AI-powered monitoring and continuous review; knowledge retrieval and summarization; and human-AI collaboration. That is enormously helpful because it is how compliance people actually work.

We do not manage risk by admiring the label on the software box. We manage risk by understanding what a tool does in a process, what can go wrong, how fast it can go wrong, and how the error propagates downstream. A GenAI tool that summarizes policies creates one set of risks. A GenAI agent that routes approvals, posts transactions, or helps shape regulatory disclosures creates another. COSO provides organizations with a common language for distinguishing among use cases and calibrating controls accordingly. That is not just elegant. It is actionable.
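A sketch of how that capability-first taxonomy might be encoded follows. The eight capability types come from the COSO paper; the mapping to review intensity is my own illustrative assumption, not part of the guidance.

```python
from enum import Enum

class Capability(Enum):
    # COSO's eight GenAI capability types
    DATA_EXTRACTION = "data extraction and ingestion"
    DATA_TRANSFORMATION = "data transformation and integration"
    TRANSACTION_PROCESSING = "automated transaction processing and reconciliation"
    WORKFLOW_ORCHESTRATION = "workflow orchestration"
    JUDGMENT_AND_FORECASTING = "judgment, forecasting, and insight generation"
    AI_MONITORING = "AI-powered monitoring and continuous review"
    KNOWLEDGE_RETRIEVAL = "knowledge retrieval and summarization"
    HUMAN_AI_COLLABORATION = "human-AI collaboration"

# Illustrative calibration: which capabilities warrant the heaviest human review.
HIGH_SCRUTINY = {
    Capability.TRANSACTION_PROCESSING,    # errors post directly to the books
    Capability.WORKFLOW_ORCHESTRATION,    # errors route approvals wrongly
    Capability.JUDGMENT_AND_FORECASTING,  # errors shape consequential decisions
}

def review_intensity(cap: Capability) -> str:
    if cap in HIGH_SCRUTINY:
        return "independent human corroboration"
    return "risk-based sampling review"

print(review_intensity(Capability.KNOWLEDGE_RETRIEVAL))
```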

The Five Foundational Truths Every CCO Should Memorize

COSO also offers five foundational characteristics for GenAI internal control, and each should be printed and posted on the wall of every compliance office.

First, GenAI is probabilistic, not deterministic. In plain English, it can sound authoritative and still be wrong. Therefore, outputs must be treated as claims requiring validation, not facts to be accepted by default. Second, GenAI is dynamic. Models, prompts, and retrieval data evolve quickly, so controls and monitoring must keep pace. Third, GenAI is easily scalable, meaning it can scale both productivity and error rates. Fourth, it has a low barrier to entry, which is why shadow AI is such a real problem. Fifth, and perhaps most interestingly, GenAI can help govern GenAI through pattern detection, validation, and monitoring.

There is a lot packed into those five points. For compliance, the biggest takeaway is this: static governance will fail in a dynamic AI environment. Annual reviews will not cut it. A once-a-year policy refresh will not cut it. A single training session on acceptable use will not cut it. GenAI governance has to be living governance.

What COSO Says About the Control Environment

COSO starts where it should: tone, structure, and accountability. The paper says organizations need a GenAI acceptable use policy, clear ethical boundaries, oversight and accountability responsibilities, named owners for each AI tool or platform, role-based training, and accountability mechanisms tied not only to adoption but also to safety, compliance, and performance. Boards and cross-functional oversight groups need visibility into adoption, incidents, changes, and risk indicators.

That is a direct message to compliance leaders. If nobody owns the prompts, the retrieval connectors, the model configurations, the escalation path, or the approval structure, then nobody owns the risk. And in a regulatory environment moving steadily toward AI accountability, “nobody owned it” is not a defense. It is an indictment.

I particularly liked COSO’s emphasis that prompts, system prompts, and retrieval connectors should be treated as governed configurations. That is exactly right. Too many companies still treat prompting like an informal user habit rather than a control-relevant configuration choice. In a high-impact context, the prompt is not casual. It is part of the system.
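What treating a prompt as a governed configuration could look like in practice is sketched below. The version fields and approval flow are assumptions for illustration; the underlying idea, that a prompt change is a controlled change, is COSO's.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: any change requires a new, approved version
class PromptConfig:
    prompt_id: str
    version: int
    system_prompt: str
    retrieval_sources: tuple[str, ...]  # governed connectors, not ad hoc ones
    owner: str
    approved_by: str
    approved_on: date

current = PromptConfig(
    prompt_id="policy-summarizer",  # hypothetical tool name
    version=4,
    system_prompt="Summarize the cited policy section; cite the section number.",
    retrieval_sources=("policy-repo",),
    owner="compliance-ops",
    approved_by="CCO delegate",
    approved_on=date(2026, 2, 10),
)
# Editing the live prompt in place is impossible here; a change means version 5,
# a named approver, and an audit trail, like any other controlled configuration.
```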

Risk Assessment Must Get More Dynamic

COSO’s discussion of risk assessment is equally strong. It calls for use cases to have clearly defined objectives, acceptable and unacceptable boundaries, and success criteria. It also warns that organizations must first ask whether GenAI is even the right tool for the task. In some cases, traditional automation or deterministic systems may be safer and more reliable. The risk assessment should account for hallucinations, drift, provenance gaps, prompt injection, bias, third-party dependencies, and significant changes such as vendor updates, connector changes, or evolving regulations.

This is where compliance earns its keep. We are the ones who should be asking: What if the model changes quietly? What if the source data becomes stale? What if the retrieval layer excludes a critical policy update? What if the system routes something to the wrong approver? What if the tool is used in a context where a simpler and safer solution would do the job better?

COSO is right to emphasize scenario analysis and living risk registers. In the GenAI era, risk registers that only update annually are museum pieces.

Human-in-the-Loop Is Not Optional

When COSO turns to control activities, it gets very practical. It says GenAI outputs should be subject to human corroboration proportionate to risk, and in high-impact business, legal, or regulatory contexts, AI assistance should be segregated from authoritative decision-making. The paper also calls for version control, audit trails, access restrictions, change management, source citation requirements, segregation of duties, confidence thresholds, and documented approvals for configuration changes. That is the heart of responsible AI governance.
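Here is a minimal sketch of a confidence-threshold gate of the kind COSO describes, in which AI assistance stays segregated from authoritative decision-making in high-impact contexts. The threshold value and routing labels are my own assumptions.

```python
def route_output(risk: str, model_confidence: float) -> str:
    """Route a GenAI output based on use-case risk and reported confidence.

    Illustrative rules: high-impact contexts always get independent human
    decision-making; lower-risk contexts get review proportionate to risk.
    """
    if risk == "high":
        # AI may assist, but the authoritative decision stays human.
        return "human decides; AI output attached as reference only"
    if model_confidence < 0.8:  # assumed threshold
        return "human review required before use"
    return "use permitted; subject to sampling review"

print(route_output("high", 0.99))  # confidence never overrides segregation
print(route_output("low", 0.65))   # low confidence still triggers review
```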

I was also struck by COSO’s discussion of reliance in an ICFR setting. The paper draws an important distinction between situations in which management relies on AI output as evidence of control effectiveness and situations in which a human independently re-performs the work. When true reliance exists, the evidentiary expectations rise: documented prompts, model versions, sampling rationale, exception resolution, and retained evidence.

Even beyond financial reporting, that concept is vital for compliance. The moment your team starts relying on GenAI output for sanctions reviews, due diligence summaries, monitoring alerts, investigative chronology, or policy interpretation, you have to ask a simple question: What is our evidence that this output was reliable enough to trust?

Monitoring Is Where the Real Work Begins

COSO’s final major lesson is that monitoring GenAI is not a one-and-done exercise. Organizations need continuous metrics and periodic deep reviews. They need to track precision, recall, exception volumes, latency, fairness, drift, and outcome quality. They need retraining triggers, rollback protocols, remediation logs, and playbooks for common AI control failures. COSO also makes the excellent point that in probabilistic systems, control failure may no longer be a simple pass-fail matter. Organizations may need multi-metric tolerance ranges across dimensions such as accuracy, bias, leakage, explainability, and change velocity.

That is a sophisticated and realistic view. Compliance teams should take it seriously because it reflects the world we are moving into. AI control effectiveness will not be judged solely by whether a control exists on paper. It will be judged by whether the organization can show that it monitors performance, investigates deviations, remediates failures, and adapts as the technology changes.
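Here is a hedged sketch of what multi-metric tolerance monitoring could look like in code. The metrics track the dimensions COSO names, but the specific ranges and values are invented for illustration, since the paper does not prescribe numbers.

```python
# Illustrative tolerance ranges across several control dimensions.
TOLERANCES = {
    "precision":   (0.90, 1.00),  # share of flagged items that were true issues
    "recall":      (0.85, 1.00),  # share of true issues the system caught
    "drift_score": (0.00, 0.10),  # distance from the validated baseline
    "bias_gap":    (0.00, 0.05),  # max outcome disparity across groups
}

def out_of_tolerance(metrics: dict[str, float]) -> list[str]:
    """Control effectiveness is multi-metric: any excursion needs investigation."""
    findings = []
    for name, value in metrics.items():
        low, high = TOLERANCES[name]
        if not (low <= value <= high):
            findings.append(f"{name}={value} outside [{low}, {high}]")
    return findings

monthly = {"precision": 0.93, "recall": 0.81, "drift_score": 0.04, "bias_gap": 0.02}
print(out_of_tolerance(monthly))  # recall excursion -> investigate and remediate
```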

The Bottom Line

The real genius of the COSO GenAI framework is that it takes AI out of the abstract and puts it where it belongs: inside the machinery of governance. It turns the conversation from “Do we have an AI policy?” to “Do we have effective internal control over AI use?” That is a far better question.

For compliance officers, the action items are clear. Inventory your GenAI use cases. Classify them by capability. Identify owners. Assess risk dynamically. Put human review where the stakes justify it. Govern prompts and configurations as controlled assets. Monitor continuously. And do not let your AI strategy outrun your control environment.

Because in the end, the organizations that benefit most from GenAI will not be the ones that moved fastest with the fewest guardrails. They will be the ones who figured out how to innovate with discipline. That is not bureaucracy. That is a competitive advantage.

AI Today in 5

AI Today in 5: March 16, 2026, The Who Owns the Decision Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. The stories may come from the worlds of business, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. AI boosts brainstorming. (Earth.com)
  2. The AI Imperative. (Wolters Kluwer)
  3. Who owns compliance decisions? (FinTech Global)
  4. AI opens a new front in the hospitals v. insurers battle. (Reuters)
  5. Embodied AI for manufacturing. (FinanceMagnates)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Blog

The GenAI Playbook for Compliance

There is a question I continue to hear from compliance professionals, boards, and senior executives alike: “When will generative AI finally be good enough for us to trust it?” As Bharat Anand and Andy Wu argue in their recent Harvard Business Review article, The GenAI Playbook for Organizations, this is the wrong question.

The better question, and the one every Chief Compliance Officer should be asking right now, is this: “Where can we use GenAI effectively today, with the right controls, to make our compliance program more efficient, more resilient, and more business relevant?” This is their core insight, and they argue that leaders should stop obsessing over whether GenAI is perfect and instead focus on where it can create value now and how strategy, not speed alone, wins.

For the compliance profession, that insight lands with particular force. We are not in the business of chasing shiny objects. We are in the business of managing risk, enabling growth, and preserving trust. GenAI is not a parlor trick. It is becoming an operating reality. The question is no longer whether compliance should engage. The question is whether compliance will lead with discipline or lag while the business adopts AI without it.

Stop Asking Whether AI Is Smart. Start Asking Where Errors Matter.

One of the most useful contributions of the article is its simple yet powerful framework: evaluating GenAI use cases through two lenses. First, what is the cost of error? Second, does the task rely primarily on explicit data or on tacit human judgment? That is gold for compliance.

Too many organizations still evaluate AI in sweeping, binary terms. Either they think it is magical or too dangerous to touch. Neither position is helpful. Compliance officers need a more operational lens. We need to break work into tasks and then ask where automation is appropriate, where human oversight is essential, and where human judgment must remain firmly in control. That is exactly how mature compliance programs should approach GenAI. Not with ideology. With risk assessment.

The “No Regrets” Zone for Compliance

The article identifies a “no regrets” zone: low cost of error, explicit knowledge, and high potential for immediate deployment. Examples include summarizing documents, screening resumes, or handling routine inquiries. In compliance, many early wins live here.

Think about policy summarization, training-content adaptation, meeting-note extraction, initial hotline trend coding, third-party questionnaire triage, basic control documentation, and first-draft responses to routine business questions. None of these tasks should be delegated blindly. But many can be accelerated responsibly.

For instance, a compliance team buried under requests from procurement, HR, sales, and legal can use GenAI to produce first-pass summaries of policies, draft FAQs, organize issue logs, and identify recurring themes from employee questions. That does not replace the compliance professional. It frees that professional to focus on what matters most: judgment, influence, escalation, and strategic problem-solving.

This is where many compliance teams have been and continue to be too timid. They have waited for perfection in a space where perfection was never the benchmark. The benchmark should be whether the tool improves speed, lowers administrative friction, and allows compliance personnel to move up the value chain.

The “Quality Control” Zone Is the Compliance Sweet Spot

The article also identifies a “quality control” zone, where the knowledge is explicit but the cost of error is high. In those cases, GenAI can do substantial work, but humans must verify, review, and retain accountability. The authors cite legal drafting, software development, and financial due diligence as examples. That is the very heartland of compliance.

Consider sanctions screening narratives, third-party due diligence memos, internal investigation chronologies, risk assessment documentation, compliance testing workpapers, and board reporting drafts. These are exactly the kinds of tasks where GenAI can accelerate the heavy lifting, but should never be the final word.

This is also where compliance can bring discipline to the rest of the enterprise. The business may want speed. Compliance must insist on verified speed. A practical model is straightforward:

GenAI drafts → humans review → controls document → leaders own.

That is not anti-innovation. That is responsible innovation. It is also consistent with what regulators increasingly expect: not the absence of AI, but governance around its use. Whether one looks to the DOJ’s emphasis on effective controls and continuous improvement in the Evaluation of Corporate Compliance Programs, the NIST AI Risk Management Framework, or the growing global focus on AI governance, the message is the same. If your company uses AI in a consequential process, you had better know where it is being used, who is checking it, what data feeds it, and how errors are caught.

The “Human-First” Zone Must Stay Human

The article is particularly strong in its warning about tasks that require tacit knowledge and carry a high cost of error: strategy, sensitive personnel decisions, crisis leadership, and other matters where judgment, ethics, and context are central. In those cases, GenAI may support, but it should not decide. Compliance professionals should print that out and tape it to the wall.

Some activities must remain human-led. Decisions about discipline, executive accountability, remediation after a serious investigation, disclosure strategy, culture assessment, or whether a business relationship “feels wrong” despite facially acceptable paperwork are not suitable for AI-driven decision-making. They require experience, intuition, moral clarity, and often courage.

That does not mean AI has no role. It can assemble facts, surface patterns, propose draft communications, and model possible outcomes. But it cannot own the judgment. In a compliance function, the more consequential the decision, the more important it is that a human being stands behind it. That is not nostalgia. That is governance.
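Taken together, the article's two lenses reduce to a simple triage that a compliance team can operationalize. A minimal sketch in Python; the zone labels follow the article, while the encoding and the label for the fourth quadrant are my own illustrative assumptions.

```python
def genai_zone(cost_of_error: str, knowledge: str) -> str:
    """Map a task to a deployment zone.

    cost_of_error: "low" or "high"; knowledge: "explicit" or "tacit".
    """
    if cost_of_error == "low" and knowledge == "explicit":
        return "no regrets: deploy now with basic oversight"
    if cost_of_error == "high" and knowledge == "explicit":
        return "quality control: GenAI drafts, humans verify and own"
    if cost_of_error == "high" and knowledge == "tacit":
        return "human-first: AI may support, humans decide"
    # The article does not name this quadrant; label assumed for completeness.
    return "low-stakes judgment work: pilot carefully"

print(genai_zone("low", "explicit"))   # e.g., policy summarization
print(genai_zone("high", "explicit"))  # e.g., due diligence memo drafting
print(genai_zone("high", "tacit"))     # e.g., discipline and remediation decisions
```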

Broad Access Without Chaos

One of the article’s more provocative arguments is that organizations should mandate broad access to GenAI tools because value creation begins when employees can experiment and discover useful applications. At the same time, the authors warn of bottlenecks that trap innovation in slow approval processes. I agree with the spirit of that point, but from a compliance perspective, there must be an important qualifier: broad access does not mean unmanaged access. This is where the compliance function can truly be a business enabler. Compliance should not be the department of “no AI.” It should be the department of “safe AI at scale.” That means several things.

  1. Build a risk-based use policy for GenAI. Employees need clear guidance on prohibited uses, approved tools, escalation triggers, and data-handling requirements.
  2. Classify use cases. Not every AI use case deserves the same scrutiny. A tool for drafting a training outline is not the same as a tool for assessing third-party bribery risk.
  3. Establish review protocols. High-risk outputs require human validation, documented sign-off, and, in some cases, legal or compliance approval.
  4. Train broadly and repeatedly. AI governance cannot live in a PDF on an intranet site. It has to be operationalized through real examples and practical scenarios.
  5. Monitor and improve. If GenAI is being used across the enterprise, compliance should have visibility into where, how, and with what effect.

That is what a mature AI governance program looks like. It is also the same risk management protocol that every compliance professional uses daily.

Data Is the Real Compliance Story

Another important insight from the article is that competitive advantage will come not merely from adopting GenAI but from pairing it with proprietary data, redesigned workflows, and complementary organizational assets. The authors emphasize centralizing data, identifying what data is not yet being collected, and redesigning the organization around AI-enabled learning loops. For compliance, this should be a wake-up call.

Most compliance functions are sitting on a treasure trove of underused data: hotline reports, training metrics, policy attestations, third-party files, gifts and entertainment data, investigation outcomes, audit findings, HR trends, distributor analytics, and culture survey results. Yet in many companies, that information remains fragmented across systems and functions.

If compliance wants to be strategic in the AI era, it has to take data architecture seriously, not simply for reporting, but for insight. The future compliance advantage will go to organizations that can connect signals across functions and convert them into earlier detection, smarter resource allocation, and more tailored interventions. In other words, the future of compliance is not just controls. It is controls plus intelligence.

Three Questions Every CCO Should Ask This Week

So, where does this leave the compliance officer trying to lead in real time? I suggest three immediate questions. First, which compliance tasks are in the “no regrets” zone and should be piloted now? Second, which tasks sit in the “quality control” zone and require a formal human-in-the-loop process? Third, which decisions are so consequential, contextual, or values-laden that they must remain unmistakably human-first?

If you cannot answer those questions, your company does not yet have a GenAI compliance strategy. It has experimentation without governance or caution without direction. Neither is sustainable.

The GenAI era will not reward the fastest organization. It will reward the organization that best aligns technology, governance, data, and human judgment. That is the compliance challenge. It is also a compliance opportunity. Compliance has always been about more than preventing misconduct. At its best, it helps a company make better decisions, allocate trust wisely, and compete with integrity. GenAI does not change that mission. It sharpens it. The playbook is here. The real question is whether compliance will run it.

Blog

Aly McDevitt Week: Part 5 – Ransomware, Crisis Response, and the Compliance Imperative to Move Fast

This week, I want to pay tribute to my former Compliance Week colleague, Aly McDevitt, who announced on LinkedIn that she was retiring from CW to become a full-time mother. I wrote a tribute to Aly, which appeared in CW last week. To prepare that piece, I re-read the long-form case studies she wrote for CW over the years. They are as compelling today as when she wrote them. This week, I will pay tribute to Aly by reviewing five of her pieces. The schedule for this week is:

Monday: A Tale of Two Storms

Tuesday: Coming Clean

Wednesday: Inside a Dark Pact

Thursday: Reaching Into the Value Chain

Friday: Ransomware Attack: An immersive case study of a cyber event based on real-life scenarios

McDevitt took a different but highly effective approach in this case study. Rather than centering the story on a single historical corporate scandal, she crafted an immersive fictional scenario grounded in real-life attacks, expert interviews, and public guidance. Compliance Week made clear that, while the company and its characters are imagined, the legal, operational, and compliance issues are very real. That makes this piece especially valuable for compliance professionals because it is less a postmortem of one company and more a practical field manual for the next crisis.

McDevitt’s story begins where many cyber incidents begin: with a person, not a machine.

A longtime employee, Betsy, receives an “urgent” email that appears to be from her boss. She clicks a malicious link, lands on a phony, internal-looking site, realizes too late that something is wrong, and then makes the mistake that turns a bad moment into a corporate crisis: she does not report it. Her silence gives the attacker time. Within days, the company, Vulnerable Electric (VE), a private utility serving 1.4 million customers with about 600 employees and $250 million in annual revenue, is facing a full-blown ransomware attack.

That is the first lesson, and McDevitt drives it home with precision. Ransomware is often described as a technology problem, but the first failure is frequently human, organizational, and cultural. Betsy clicked. But more importantly, she hesitated, feared blame, and kept quiet. As McDevitt explains through the expert commentary, her biggest mistake was not simply opening the link. It was actively deciding not to report the incident to the proper internal authority.

For compliance officers, that point should sound very familiar. Whether the issue is corruption, harassment, sanctions, safety, or cyber, organizations do not fail only because something bad happens. They fail because people do not feel safe reporting it quickly.

McDevitt also lays out why this issue matters so much now. She notes that ransomware payments in 2020 reached roughly $350 million, a more than 300 percent increase from the prior year, and that proactive prevention is no longer optional. She further situates the case study in the context of critical infrastructure, noting that entities such as utilities are subject to heightened scrutiny and are encouraged to align with the NIST cybersecurity framework. In other words, ransomware is not just an IT nuisance. It is an enterprise risk, a regulatory risk, and in some sectors a national security risk.

Once the attack is recognized, McDevitt shows the company doing something right: it moves into a structured response. The CEO activates the full cyber incident response team, or CIRT, and the war room includes not only technical leaders and legal counsel, but also the chief compliance officer, the head of communications, external incident response professionals, and other essential decision-makers. This is exactly what a mature response should look like. Cyber incidents do not fall under a single function. They are enterprise events.

I particularly appreciated how McDevitt uses the case study to underline the role of compliance. The CCO is not there as decoration. The article makes clear that if employee data has been exfiltrated, the incident constitutes a personal data disclosure with potentially local, state, and international notification consequences, and that compliance and legal personnel should be in the room from the start. That is a crucial point for corporate compliance professionals. Cyber risk management is not separate from compliance. It is now one of compliance’s core operating terrains.

McDevitt also captures the psychology of the first 36 hours. Anthony Ferrante says those hours are extremely stressful for a CEO, who is simultaneously thinking about operations, data, reputation, and people. That observation matters because it explains why preparation before an attack is so important. You do not want your executives inventing a process under duress. McDevitt reports that VE had already created an incident playbook with roles, escalation steps, and a five-part response framework: facts, business impact, root cause, corrective actions, and lessons learned. That is the kind of disciplined structure compliance leaders should insist upon.

Another strength of McDevitt’s reporting is her treatment of communications. Too many organizations still believe communications should be brought in late, after the lawyers and technologists finish their work. McDevitt, through multiple expert voices, makes the opposite case. Communications should have a seat at the table, not at the back wall. The reason is straightforward: stakeholders will forgive many things, but they will not forgive caginess. VE’s communications lead rightly argues that employees and customers should hear from the company first, not from the media or the attacker.

This point becomes even sharper when McDevitt contrasts VE’s approach with the real-life story of “Melvin,” an employee at another firm that remained offline for 10 days with no formal communication and did not disclose the sensitive data breach to employees in a timely or transparent way. That section may be the most important communications lesson in the entire piece. Employees are not bystanders. They are among the primary victims of a data breach, and they know when something is wrong. Silence destroys trust.

Then comes the hard question at the center of nearly every ransomware story: Do you pay?

McDevitt wisely resists easy moralizing. She notes the FBI’s official position is not to pay, because payment fuels the criminal business model and does not guarantee restoration. Yet she also reports the practical view of experienced practitioners: payment is not illegal per se, and companies often face a grim choice among bad options. The anonymous chief compliance officer quoted in the case study says it best: there are no good options, only the least bad option.

McDevitt’s two parallel paths, pay and do not pay, are particularly useful because they show that neither choice is clean. In Path A, VE pays $5 million, gets imperfect decryption support, recovers faster, but then faces scrutiny over whether it should have consulted OFAC before payment and whether it may have paid a sanctioned party. In Path B, VE does not pay, endures a longer recovery, suffers a data breach, and still faces reputational and legal fallout. McDevitt’s point is not that one route is right and one is wrong. Her point is that ransomware decision-making is governance under pressure.

That is why the postmortem matters so much. McDevitt closes the case study by emphasizing that the long-term impacts fall into three risk buckets: reputational, legal, and regulatory. She then turns to practical lessons: train the workforce, strengthen spam filters, run tabletop exercises, isolate infected devices immediately, secure backups offline, contact law enforcement quickly, do not rush engagement with the attacker, and communicate with each stakeholder group in a timely and tailored way. She also adds smart recommendations on canary files, forensic retainers, access reviews, logging, threat intelligence monitoring, and industry information sharing.

Finally, McDevitt ends on a note that compliance professionals should not miss. Betsy is not scapegoated. She is thanked for telling the truth and invited to participate in a phishing-resilience campaign for other employees. That is not sentimentality. That is culture. If your response to human error is humiliation, people will hide problems. If your response is accountability plus learning, people will surface them.

That may be the most important compliance lesson of all. Ransomware is a cyber crisis, but surviving it depends on culture, governance, and trust just as much as on technology.

I hope you have enjoyed reading about Aly’s case studies for CW. I am a columnist for Compliance Week.

AI Today in 5

AI Today in 5: March 12, 2026, The Attorneys and AI Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. The stories may come from the worlds of business, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. How AI forensics is helping compliance gridlock. (PYMNTS)
  2. Creating responsible AI governance standards. (mycarrollcountynews)
  3. AI agents cannot open bank accounts. (FinTechWeekly)
  4. The court castigated an attorney using AI to write briefs. (The News & Observer)
  5. 3 key principles for AI use in businesses. (BusinessInsider)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.